Author: Wu, Stephen Tze-Inn
Date accessioned: 2010-09-30
Date available: 2010-09-30
Date issued: 2010-06
URI: https://hdl.handle.net/11299/94285
Description: University of Minnesota Ph.D. dissertation. June 2010. Major: Computer Science. Advisor: William Edward Schuler. 1 computer file (PDF); ix, 136 pages, appendices A.

Abstract: This thesis aims to define and extend a line of computational models of text comprehension that are humanly plausible. Since natural language is human by nature, computational models of human language will always be just that -- models. To the degree that they miss information humans would tap into, they can be improved by considering human language processing in a linguistic, psychological, and cognitive light.

Approaches to constructing vectorial semantic spaces often begin with the distributional hypothesis, i.e., that words can be judged "by the company they keep." Typically, words that occur in the same documents are treated as similar and receive similar vectorial meaning representations. However, this does not in itself provide a way to compose two distinct meanings, and it ignores syntactic context. Both of these problems are solved in Structured Vectorial Semantics (SVS), a new framework that fully unifies vectorial semantics with syntactic parsing. Most approaches that combine syntactic and semantic information lack either a cohesive semantic component or a full-fledged parser; SVS integrates both. In the SVS framework, interpretation is therefore interactive, considering syntax and semantics simultaneously.

Cognitively plausible language models should also be incremental, support linear-time inference, and operate within a bounded store of short-term memory. Each of these characteristics is supported by right-corner Hierarchical Hidden Markov Model (HHMM) parsing; therefore, SVS will be transformed into right-corner form and mapped to an HHMM parser. The resulting representation will then encode a psycholinguistically plausible, incremental SVS language model.

Language: en-US
Subjects: Parsing; Psycholinguistics; Vectorial Semantics; Computer Science
Title: Vectorial representations of meaning for a computational model of language comprehension
Type: Thesis or Dissertation
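
To make the distributional starting point in the abstract concrete, the following minimal Python sketch builds word vectors from document co-occurrence counts and compares them with cosine similarity. This illustrates the general distributional idea, not the SVS model itself; the toy corpus, variable names, and the choice of cosine similarity are assumptions for illustration only.

    # Minimal sketch of the distributional hypothesis: a word is represented
    # by the documents it occurs in, and words are compared by cosine
    # similarity. The toy corpus and all names here are illustrative
    # assumptions, not the thesis's SVS model.
    import math
    from collections import Counter

    docs = [
        "the judge heard the case in court",
        "the court ruled on the case",
        "the chef cooked rice in the kitchen",
    ]

    # word -> vector of per-document counts (one dimension per document)
    vectors = {}
    for j, doc in enumerate(docs):
        for word, count in Counter(doc.split()).items():
            vectors.setdefault(word, [0] * len(docs))[j] = count

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    # Words that share documents ("judge", "court") come out more similar
    # than words that do not ("judge", "kitchen"), regardless of syntax.
    print(cosine(vectors["judge"], vectors["court"]))    # about 0.71
    print(cosine(vectors["judge"], vectors["kitchen"]))  # 0.0

The sketch also exposes the limitation noted in the abstract: similarity here is driven purely by shared documents, with no mechanism for composing meanings and no use of syntactic context.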
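
The abstract's requirements of incrementality, linear-time inference, and a bounded memory store can be illustrated with a plain Hidden Markov Model forward update, shown below as a hedged stand-in for the more elaborate right-corner HHMM parser; the states, probabilities, and toy input are invented for illustration and are not taken from the thesis.

    # Minimal sketch of incremental, linear-time inference with a bounded
    # store: a plain HMM forward update, one step per word. This is a
    # stand-in illustration, not the right-corner HHMM parser itself; all
    # states, probabilities, and the toy input are assumptions.
    import math

    states = ["S1", "S2"]                      # bounded store: fixed set of states
    init   = {"S1": 0.6, "S2": 0.4}            # belief before the first word
    trans  = {"S1": {"S1": 0.7, "S2": 0.3},
              "S2": {"S1": 0.4, "S2": 0.6}}
    emit   = {"S1": {"a": 0.5, "b": 0.5},
              "S2": {"a": 0.1, "b": 0.9}}

    def forward_step(belief, word):
        """One update per word: constant work, independent of input length."""
        new = {s: emit[s].get(word, 0.0) *
                  sum(belief[r] * trans[r][s] for r in states)
               for s in states}
        z = sum(new.values())
        return {s: p / z for s, p in new.items()}, z

    belief, logprob = dict(init), 0.0
    for word in ["a", "b", "b"]:               # processed strictly left to right
        belief, z = forward_step(belief, word) # incremental update per word
        logprob += math.log(z)                 # running prefix log-probability

    print(belief)    # posterior over states after the last word
    print(logprob)   # log-probability of the observed words under this toy model

Because each word triggers one fixed-cost update over a fixed-size belief state, total work grows linearly with sentence length and memory use stays constant, which are the properties the abstract attributes to HHMM parsing.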