Question Answering (QA) is the task of generating or extracting an answer to a user query from a corpus of documents. Factoid QA is the most popular and most studied form of QA. As the name suggests, the information requested by a factoid question is a bare fact, most often a named entity. In the majority of cases, this information is found in a single document and requires neither sentence extraction nor sentence reordering. However, most interesting questions are not factoid questions. A user might request a summary of a recent event from a news article, or want to know about a recent remedy for some observed symptoms, which may require extracting text from five different medical documents. Such queries require sentence extraction from one or (often) multiple documents, followed by sentence reordering to produce a readable answer. This task is non-trivial, and hence there is more to non-factoid QA than meets the eye. Non-factoid QA has recently drawn attention from both the Information Retrieval (IR) and Natural Language Processing (NLP) communities, but most of this research has focused on learning models for re-ranking answers from a set of question-answer pairs.
This thesis explores the use of different natural language (NL) structures to complement the traditional bag-of-words model in generating answers to non-factoid questions. We find that complex linguistic features such as semantic role labels outperform the traditional bag-of-words model; in fact, combining different NL structures with the bag-of-words model performs best in our experiments. We also apply feature engineering to extract different sets of features from a given corpus, and find that similarity features, translation features and occurrence features rank correct answers higher than the bag-of-words model alone, and may help bridge the semantic gap between non-factoid questions and answers.
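For reference, the bag-of-words baseline discussed above can be sketched as a cosine-similarity ranker over term-count vectors. This is a minimal illustrative sketch, not the thesis's actual system; the function names and toy data below are assumptions made for the example.

```python
# Minimal bag-of-words answer ranker: score each candidate answer by
# cosine similarity of its term-count vector to the question's vector.
from collections import Counter
import math

def bow(text):
    """Tokenize into lowercase word counts (a crude bag-of-words vector)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_answers(question, candidates):
    """Order candidate answers by similarity to the question, best first."""
    q = bow(question)
    return sorted(candidates, key=lambda c: cosine(q, bow(c)), reverse=True)

question = "what are remedies for a persistent cough"
candidates = [
    "the stock market closed higher today",
    "honey and warm fluids are common remedies for a persistent cough",
]
ranked = rank_answers(question, candidates)
```

A purely lexical ranker like this fails whenever a correct answer uses different vocabulary from the question, which is exactly the semantic gap that the richer NL features above (semantic role labels, translation features) are meant to address.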