Comparing the effectiveness of automated test generation tools "EVOSUITE" and "Tpalus"

Abstract

Automated testing has evolved over the years, and the main motivation behind the growth of these tools is to reduce the manual effort required to check the correctness of software. Writing test cases by hand is time consuming and requires a great deal of patience, and the time and effort spent on manual test cases could instead be devoted to improving the performance of the application. Statistics show that 50% of the total cost of software development is devoted to software testing, and even more in the case of critical software. The growth of automated test generation tools leads us to a big question: how effective are these tools at checking the correctness of an application? There are several challenges associated with developing automated test generation tools, and currently there is no single tool or metric for checking their effectiveness. In my thesis, I aim to measure the effectiveness of two automated test generation tools, Tpalus and EVOSUITE, both of which are specifically designed to generate test cases for programs written in Java. Several metrics have to be considered when measuring the effectiveness of a tool. I evaluate both tools using the results they produce on several open source subjects. The metrics chosen for the comparison are code coverage, mutation score, and test suite size. Code coverage tells us how well the source code is covered by the test cases; a good test suite aims to cover most of the source code so that every statement is exercised during testing. A mutation score indicates how well the test suite detects and kills mutants, where a mutant is a new version of a program created by making a small syntactic change to the original program. The higher the mutation score, the more mutants are detected and killed. The results obtained during the experiment include branch coverage, line coverage, raw kill scores, and normalized kill scores. These results help us decide how effective the tools are when testing critical software.
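
To make the mutation-score metric concrete, the sketch below shows what a mutant and a "killing" test look like in Java. This is a hypothetical illustration, not code from the thesis or from either tool; the class and method names are invented, and JUnit 4 is assumed as the test framework.

```java
// Hypothetical illustration: an original method, a mutant produced by one
// small syntactic change, and a JUnit 4 test that can kill that mutant.
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class MutantExample {

    // Original implementation under test.
    static boolean isPositive(int x) {
        return x > 0;
    }

    // Mutant: the relational operator ">" is replaced with ">=".
    // During mutation analysis this version would be substituted for the
    // original and the test suite re-run against it.
    static boolean isPositiveMutant(int x) {
        return x >= 0;
    }

    // A suite containing only assertTrue(isPositive(1)) would let the mutant
    // survive, since both versions agree on that input. The assertion on 0
    // fails when the mutant is in place, so this test kills the mutant.
    @Test
    public void distinguishesOriginalFromMutant() {
        assertTrue(isPositive(1));
        assertFalse(isPositive(0)); // the mutant would return true here
    }
}
```

A test suite's mutation (kill) score is then the fraction of generated mutants that at least one test detects in this way, which is why the thesis reports both raw and normalized kill scores alongside branch and line coverage.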

Description

University of Minnesota M.S. thesis. July 2015. Major: Computer Science. Advisor: Andrew Brooks. 1 computer file (PDF); vii, 71 pages.

Suggested citation

Chitirala, Sai Charan Raj. (2015). Comparing the effectiveness of automated test generation tools "EVOSUITE" and "Tpalus". Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/174750.
