Author: Pedersen, Ted
Title: Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods
Type: Article
Language: en-US
Issued: 2010-10
Available: 2013-06-23
Citation: University of Minnesota Supercomputing Institute Research Report UMSI 2010/118, October 2010
URI: https://hdl.handle.net/11299/151596
Keywords: distributional similarity; short contexts; contextual similarity; natural language processing

Abstract: Measuring the similarity of short written contexts is a fundamental problem in Natural Language Processing. This article provides a unifying framework by which short context problems can be categorized both by their intended application and proposed solution. The goal is to show that various problems and methodologies that appear quite different on the surface are in fact very closely related. The axes by which these categorizations are made include the format of the contexts (headed versus headless), the way in which the contexts are to be measured (first-order versus second-order similarity), and the information used to represent the features in the contexts (micro versus macro views). The unifying thread that binds together many short context applications and methods is the fact that similarity decisions must be made between contexts that share few (if any) words in common.
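
The following is a minimal illustrative sketch, not the article's implementation, of the first-order versus second-order distinction named in the abstract: first-order similarity compares the words two contexts actually share, while second-order similarity compares averaged co-occurrence vectors of those words, so contexts with no words in common can still be judged similar. The example contexts and the co-occurrence counts are invented for illustration.

```python
# Sketch of first-order vs. second-order similarity for short contexts.
# The co-occurrence dictionary and sentences below are hypothetical.
from collections import Counter
from math import sqrt


def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


def first_order_similarity(context_a, context_b):
    """First-order: compare the surface words the two contexts share."""
    return cosine(Counter(context_a.lower().split()),
                  Counter(context_b.lower().split()))


def second_order_similarity(context_a, context_b, cooc):
    """Second-order: average the co-occurrence vector of each known word in a
    context, then compare the averaged vectors of the two contexts."""
    def centroid(context):
        total = Counter()
        words = [w for w in context.lower().split() if w in cooc]
        for w in words:
            total.update(cooc[w])
        n = len(words)
        return {k: v / n for k, v in total.items()} if n else {}
    return cosine(centroid(context_a), centroid(context_b))


if __name__ == "__main__":
    # Hypothetical co-occurrence counts, e.g. harvested from some corpus.
    cooc = {
        "physician": {"hospital": 4, "patient": 6, "medicine": 3},
        "nurse":     {"hospital": 5, "patient": 7, "ward": 2},
        "surgeon":   {"hospital": 3, "patient": 4, "operation": 5},
    }
    a = "physician treats patients"
    b = "nurse assists surgeon"
    print("first-order :", first_order_similarity(a, b))        # 0.0 - no shared words
    print("second-order:", second_order_similarity(a, b, cooc))  # ~0.86 - similar co-occurrences
```

In this sketch the two contexts share no words, so their first-order similarity is zero, yet their second-order similarity is high because "physician", "nurse", and "surgeon" co-occur with similar words; this is the situation the abstract highlights, where similarity must be judged between contexts that share few (if any) words in common.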