Sneha is a software development engineer at Amazon, where she works on developing personalized recommendation experiences for Amazon customers. She has previously worked for Audible, Pluribus Networks, VMware, and Avaya. She holds a Master’s degree in CS from the University of Pennsylvania and a Bachelor’s degree in CS. She’s passionate about AI and working with data; her research focus at Penn was in the fields of Machine Learning, Natural Language Processing, and Deep Learning. She has presented her work at notable conferences, including the Annual Meeting of the Association for Computational Linguistics, the Mid-Atlantic Student Colloquium on Speech, Language and Learning, the Grace Hopper Celebration for Women in Computing, GDG DevFest, and AlterConf.
Upcoming Abstract Summary
Learning Antonyms with Paraphrases and a Morphology-Aware Neural Network
Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems. Identifying antonymy and expressions with contrasting meanings is valuable for NLP systems that go beyond recognizing semantic relatedness and must identify specific semantic relations. In this paper, we present a novel method for deriving antonym pairs using paraphrase pairs containing negation markers. We further propose a neural network model, AntNET, that integrates morphological features indicative of antonymy into a path-based relation detection algorithm. We demonstrate that our model outperforms state-of-the-art models in distinguishing antonyms from other semantic relations and is capable of efficiently handling multi-word expressions.
The main contributions of this paper are:

• We present a novel technique of using paraphrases for antonym detection and successfully derive antonym pairs from paraphrases in the PPDB, the largest paraphrase resource currently available.

• We demonstrate improvements to an integrated path-based and distributional model, showing that our morphology-aware neural network model, AntNET, performs better than state-of-the-art methods for antonym detection.
While antonymy applies to expressions that represent contrasting meanings, paraphrases are phrases expressing the same meaning, which usually occur in similar textual contexts or have common translations in other languages. Specifically, if two words or phrases are paraphrases, they are unlikely to be antonyms of each other. Our first approach to antonym detection exploits this fact and uses paraphrases for detecting and generating antonyms: given the paraphrase pair “The dementors caught Sirius Black” / “Black could not escape the dementors”, the negation marker signals that caught and escape express contrasting meanings.
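The idea above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the negation-marker list, the stopword list, and the word-alignment heuristic (pairing the content words left over on each side after removing shared words) are all assumptions made for the example.

```python
# Illustrative sketch: extracting antonym candidates from a paraphrase
# pair in which exactly one side contains a negation marker.
# Marker and stopword lists are hypothetical simplifications.

NEGATION_MARKERS = {"not", "no", "never", "n't", "cannot"}
STOPWORDS = {"the", "a", "an", "could", "would", "to", "be"}

def antonym_candidates(phrase_a, phrase_b):
    """Return candidate antonym pairs from a paraphrase pair.

    If one side is negated and the other is not, the content words
    unique to each side are paired as antonym candidates: a word
    paraphrased by the negation of another tends to contrast with it.
    """
    tokens_a = phrase_a.lower().split()
    tokens_b = phrase_b.lower().split()
    negated_a = any(t in NEGATION_MARKERS for t in tokens_a)
    negated_b = any(t in NEGATION_MARKERS for t in tokens_b)
    if negated_a == negated_b:  # need exactly one negated side
        return []
    keep = lambda t: t not in NEGATION_MARKERS and t not in STOPWORDS
    content_a = [t for t in tokens_a if keep(t)]
    content_b = [t for t in tokens_b if keep(t)]
    shared = set(content_a) & set(content_b)
    left = [t for t in content_a if t not in shared]
    right = [t for t in content_b if t not in shared]
    return [(a, b) for a in left for b in right]

print(antonym_candidates("the dementors caught black",
                         "black could not escape the dementors"))
# prints [('caught', 'escape')]
```

In practice a real system would rely on syntactic alignment rather than bag-of-words overlap, but the core signal is the same: one negated side of an otherwise meaning-preserving pair.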
Our second method is inspired by the recent success of deep learning methods for relation detection. While many path-based and distributional models perform well for hypernym detection, the path-based component is weak at recognizing synonyms, which do not tend to co-occur, and the distributional information causes confusion between synonyms and antonyms, since both tend to occur in the same contexts. We propose AntNET, a novel extension of LexNET that integrates information about negating prefixes as a new morphological pattern feature and is able to distinguish antonyms from other semantic relations. In addition, we optimize the vector representations of dependency paths between the given term pair, encoded using a neural network, by replacing the embeddings of words with negating prefixes with the embeddings of their base, non-negated forms.
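The embedding-replacement step might look like the following sketch. The prefix list, the minimum base-length check, and the toy vectors are all illustrative assumptions, not AntNET's actual preprocessing.

```python
# Hypothetical sketch of the negated-word embedding replacement:
# when a word carries a negating prefix and its base form exists in the
# vocabulary, use the base form's vector and expose the negation as a
# separate morphological feature.

NEGATING_PREFIXES = ("un", "in", "im", "dis", "non", "ir", "il")

def embed_with_morphology(word, vectors):
    """Return (embedding, is_negated).

    If the word starts with a negating prefix and its base form is in
    the vocabulary, return the base form's vector with a negation flag,
    so the path encoder sees the base embedding plus an explicit
    negation-marker feature. The prefix check is deliberately naive
    (e.g. it would mishandle 'inside' if 'side' were in the vocabulary).
    """
    for prefix in NEGATING_PREFIXES:
        base = word[len(prefix):]
        if word.startswith(prefix) and len(base) >= 3 and base in vectors:
            return vectors[base], True
    return vectors.get(word), False

toy_vectors = {"happy": [0.9, 0.1], "unhappy": [0.2, 0.8]}
vec, negated = embed_with_morphology("unhappy", toy_vectors)
# vec is now the 'happy' vector and negated is True
```

Feeding the base-form vector plus an explicit negation flag lets the network share distributional evidence between a word and its negated form instead of learning two unrelated embeddings.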