Browsing by Author "Rajan, Ajitha"
Now showing 1 - 9 of 9
Item Assessing Requirements Quality Through Requirements Coverage (2008) Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

Item Coverage Metrics for Requirements-Based Testing (ACM, 2006) Whalen, Michael; Rajan, Ajitha; Heimdahl, Mats
In black-box testing, one is interested in creating a suite of tests from requirements that adequately exercise the behavior of a software system without regard to the internal structure of the implementation. In current practice, the adequacy of black-box test suites is inferred by examining coverage on an executable artifact, either source code or a software model. In this paper, we define structural coverage metrics directly on high-level formal software requirements. These metrics provide objective, implementation-independent measures of how well a black-box test suite exercises a set of requirements. We focus on structural coverage criteria on requirements formalized as LTL properties and discuss how they can be adapted to measure finite test cases. These criteria can also be used to automatically generate a requirements-based test suite. Unlike model- or code-derived test cases, these tests are immediately traceable to high-level requirements. To assess the practicality of our approach, we apply it on a realistic example from the avionics domain.

Item Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness (NASA, 2010) Staats, Matt; Whalen, Michael; Rajan, Ajitha; Heimdahl, Mats
In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties.
In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size? and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous one? Our results indicate (1) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous coverage metrics, and (2) that only one of the proposed metrics, Unique First Cause (UFC) coverage, is sufficiently rigorous to ensure that test suites satisfying it outperform randomly generated test suites of similar size.

Item Interaction Testing in Model-Based Development: Effect on Model-Coverage (2006) Bryce, Renee; Rajan, Ajitha; Heimdahl, Mats
Model-based software development is gaining interest in domains such as avionics, space, and automotive systems. The model serves as the central artifact for development efforts such as code generation; it is therefore crucial that the model be extensively validated. Automatic generation of interaction test suites is a candidate for partial automation of this model validation task. Interaction testing is a combinatorial approach that systematically tests all t-way combinations of inputs for a system. In this paper, we report how well interaction test suites (2-way through 5-way) structurally cover a model of the mode logic of a flight guidance system. We conducted experiments to (1) compare the coverage achieved with interaction test suites to that of randomly generated tests and (2) determine whether interaction test suites improve the coverage of black-box test suites derived from system requirements.
The experiments show that the interaction test suites provide little benefit over the randomly generated tests and do not improve coverage of the requirements-based tests. These findings raise questions about the application of interaction testing in this domain.

Item Model Validation using Automatically Generated Requirements-Based Tests (2007) Rajan, Ajitha; Whalen, Michael; Heimdahl, Mats
In current model-based development practice, validation that we are building a correct model is achieved by manually deriving requirements-based test cases for model testing. Model validation performed this way is time consuming and expensive, particularly in the safety-critical systems domain, where high confidence in model correctness is required. In an effort to reduce the validation effort, we propose an approach that automates the generation of requirements-based tests for model validation purposes. Our approach uses requirements formalized as LTL properties as a basis for test generation. Test cases are generated to provide rigorous coverage over these formal properties. We use an abstract model, called the Requirements Model, generated from requirements and environmental constraints for automated test case generation. We illustrate and evaluate our approach using three realistic or production examples from the avionics domain. The proposed approach was effective on two of the three examples, owing to their extensive and well-defined sets of requirements.

Item On MC/DC and Implementation Structure: An Empirical Study (IEEE, 2008) Whalen, Michael; Heimdahl, Mats; Staats, Matt; Rajan, Ajitha
In civil avionics, obtaining DO-178B certification for highly critical airborne software requires that the adequacy of the code testing effort be measured using a structural coverage criterion known as Modified Condition and Decision Coverage (MC/DC).
We hypothesized that the effectiveness of the MC/DC metric is highly sensitive to the structure of the implementation and can therefore be problematic as a test adequacy criterion. We tested this hypothesis by evaluating the fault-finding ability of MC/DC-adequate test suites on five industrial systems (flight guidance and display management). For each system, we created two versions of the implementation: with and without expression folding (i.e., inlining). We found that for all five examples, the effectiveness of the test suites was highly sensitive to the structure of the implementation they were designed to cover. MC/DC test suites adequate on an inlined implementation have greater fault-finding ability than test suites generated to be MC/DC adequate on the non-inlined version of the same implementation, at the 5% significance level. (The inlined test suites outperformed the non-inlined test suites by 10% to 5940%.) This observation confirms our suspicion that MC/DC used as a test adequacy metric is highly sensitive to structural changes in the implementation, and that test suite adequacy measurement using the MC/DC metric is better served when done over the inlined implementation.

Item ReqsCov: A Tool for Measuring Test-Adequacy over Requirements (2008) Staats, Matt; Deng, Weijia; Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
When creating test cases for software, a common approach is to create tests that exercise requirements. Determining the adequacy of test cases, however, is generally done through inspection or indirectly by measuring structural coverage of an executable artifact (such as source code or a software model). We present ReqsCov, a tool to directly measure the requirements coverage provided by test cases. ReqsCov allows users to measure coverage of Linear Temporal Logic requirements using three increasingly rigorous metrics: naive coverage, antecedent coverage, and Unique First Cause coverage.
By measuring requirements coverage, users gain insight into the quality of test suites beyond what is available when solely using structural coverage metrics over an implementation.

Item Requirements Coverage as an Adequacy Measure for Conformance Testing (Springer-Verlag, 2008) Rajan, Ajitha; Whalen, Michael; Staats, Matt; Heimdahl, Mats
Conformance testing in model-based development refers to the testing activity that verifies whether the code generated (manually or automatically) from the model is behaviorally equivalent to the model. Presently, the adequacy of conformance testing is inferred by measuring structural coverage achieved over the model. We hypothesize that adequacy metrics for conformance testing should consider structural coverage over the requirements, either in place of or in addition to structural coverage over the model. Measuring structural coverage over the requirements gives a notion of how well the conformance tests exercise the required behavior of the system. We conducted an experiment to investigate the hypothesis that structural coverage over formal requirements is more effective than structural coverage over the model as an adequacy measure for conformance testing. We found that this hypothesis was rejected at 5% statistical significance on three of the four case examples in our experiment. Nevertheless, we found that the tests providing requirements coverage found several faults that remained undetected by tests providing model coverage. We thus formed a second hypothesis: complementing model coverage with requirements coverage will prove more effective as an adequacy measure than using model coverage alone for conformance testing. In our experiment, we found test suites providing both requirements coverage and model coverage to be more effective at finding faults than test suites providing model coverage alone, at 5% statistical significance.
Based on our results, we believe existing adequacy measures for conformance testing that consider only model coverage can be strengthened by combining them with rigorous requirements coverage metrics.

Item The Effect of Program and Model Structure on MC/DC Test Adequacy Coverage (ACM, 2008) Rajan, Ajitha; Whalen, Michael; Heimdahl, Mats
In avionics and other critical-systems domains, the adequacy of test suites is currently measured using the MC/DC metric on source code (or on a model in model-based development). We believe that the rigor of the MC/DC metric is highly sensitive to the structure of the implementation and can therefore be misleading as a test adequacy criterion. We investigate this hypothesis by empirically studying the effect of program structure on MC/DC coverage. To perform this investigation, we use six realistic systems from the civil avionics domain and two toy examples. For each of these systems, we use two versions of the implementation: with and without expression folding (i.e., inlining). To assess the sensitivity of MC/DC to program structure, we first generate test suites that satisfy MC/DC over a non-inlined implementation. We then run the generated test suites over the inlined implementation and measure the MC/DC achieved. For our realistic examples, the test suites yield an average reduction of 29.5% in MC/DC achieved over the inlined implementations, at the 5% statistical significance level.
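Several of the items above turn on how expression folding (inlining) changes the MC/DC obligations a decision imposes: folding an intermediate variable into its use exposes more conditions inside one decision. The following is a minimal illustrative sketch, not code from any of the papers above; `mcdc_pairs` is a hypothetical helper that brute-forces unique-cause independence pairs for each condition of a small boolean decision.

```python
# Illustrative sketch: unique-cause MC/DC requires, for each condition,
# a pair of tests differing only in that condition with different outcomes.
from itertools import product

def mcdc_pairs(decision, n_conditions):
    """Map each condition index to the input-vector pairs that show it
    independently affecting the decision's outcome."""
    pairs = {i: [] for i in range(n_conditions)}
    for v in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            w = list(v)
            w[i] = not w[i]
            if decision(*v) != decision(*tuple(w)):
                pairs[i].append((v, tuple(w)))
    return pairs

# Non-inlined style: `a = x and y` feeds a separate decision `a or z`,
# which exposes only two conditions (a, z) to MC/DC measurement.
outer = lambda a, z: a or z
# Inlined (folded) form exposes three conditions in a single decision.
inlined = lambda x, y, z: (x and y) or z

print(sum(1 for p in mcdc_pairs(outer, 2).values() if p))    # prints 2
print(sum(1 for p in mcdc_pairs(inlined, 3).values() if p))  # prints 3
```

The folded decision carries independence obligations for three conditions rather than two, which is one way to see why test suites that are MC/DC-adequate on a non-inlined implementation can fall short on the inlined version.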