Browsing by Subject "Vectorial Semantics"
Item
Vectorial representations of meaning for a computational model of language comprehension. (2010-06)
Wu, Stephen Tze-Inn

This thesis aims to define and extend a line of computational models of text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that: models. To the degree that they miss information that humans would tap into, they may be improved by considering the human comprehension process in a linguistic, psychological, and cognitive light.

Approaches to constructing vectorial semantic spaces often begin with the distributional hypothesis, i.e., that words can be judged "by the company they keep." Typically, words that occur in the same documents are considered similar and receive similar vectorial meaning representations. However, this does not in itself provide a way to compose two distinct meanings, and it ignores syntactic context. Both of these problems are solved in Structured Vectorial Semantics (SVS), a new framework that fully unifies vectorial semantics with syntactic parsing. Most approaches that try to combine syntactic and semantic information lack either a cohesive semantic component or a full-fledged parser; SVS integrates both. Thus, in the SVS framework, interpretation is interactive, considering syntax and semantics simultaneously.

Cognitively plausible language models should also be incremental, support linear-time inference, and operate within a bounded store of short-term memory. Each of these characteristics is supported by right-corner Hierarchical Hidden Markov Model (HHMM) parsing; therefore, SVS will be transformed into right-corner form and mapped to an HHMM parser. The resulting representation will then encode a psycholinguistically plausible, incremental SVS language model.
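
The distributional construction mentioned in the abstract can be illustrated in a few lines of Python. This is a minimal sketch under simple assumptions (a toy corpus and word-by-document counts with cosine similarity); the corpus and helper names are illustrative and are not drawn from the thesis itself.

```python
# Sketch of the distributional hypothesis: represent each word by the
# documents it occurs in, so words with similar document distributions
# receive similar vectors. Toy data only; not the thesis's SVS model.
from collections import Counter
import math

docs = [
    "the judge entered the court",
    "the judge read the verdict in court",
    "the chef seasoned the soup",
    "the chef tasted the soup in the kitchen",
]

# word -> {doc_index: count}: each word's vector over documents
vectors = {}
for i, doc in enumerate(docs):
    for word in doc.split():
        vectors.setdefault(word, Counter())[i] += 1

def cosine(u, v):
    """Cosine similarity between two sparse document-count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words that share documents ("judge"/"verdict") score higher than
# words from unrelated documents ("judge"/"soup").
print(cosine(vectors["judge"], vectors["verdict"]))
print(cosine(vectors["judge"], vectors["soup"]))
```

As the abstract points out, a space built this way offers no operation for composing two word meanings and ignores syntactic context; that is the gap the SVS framework is designed to close by unifying the vectorial semantics with syntactic parsing.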