Title: Manifold-based Testing of Machine Learning Systems
Author: Byun, Tae Joon
Type: Thesis or Dissertation
Issued: 2022-08
Available: 2022-11-14
URI: https://hdl.handle.net/11299/243072
Description: University of Minnesota Ph.D. dissertation. August 2022. Major: Computer Science. Advisors: Mats Heimdahl, Sanjai Rayadurgam. 1 computer file (PDF); viii, 141 pages.
Language: en
Keywords: Black-box testing; Machine learning testing; Software testing; Test coverage criteria; Variational autoencoder

Abstract: With the remarkable advancement of deep learning in domains such as computer vision, learning-enabled systems are rapidly being adopted in safety-critical settings where rigorous verification and validation are essential. However, because learning-enabled components differ fundamentally from traditional software, existing verification techniques often do not apply, which calls for new approaches. In the literature, we identified a lack of practical and scalable testing techniques for computer-vision deep neural networks (DNNs), which process high-dimensional, unstructured input data. Moreover, most existing approaches are white-box solutions that depend on the DNN under test, a dependency that fits poorly with the highly iterative model development workflow. To address this problem, we propose systematic testing techniques for DNNs that remove the dependency on the model under test, since that dependency comes with several critical shortcomings. Concretely, we investigated three ideas. First, we propose a test prioritization technique that identifies failure-revealing test inputs, reducing the cost of test construction. Second, we propose a DNN-independent test adequacy measurement technique that quantifies the adequacy of testing and helps construct a representative test suite. Third, we propose a DNN-independent test case generation technique that synthesizes realistic test cases that are effective at finding failures in the DNN under test. The last two approaches are black-box solutions in that the test adequacy measurement and the test case generation are performed independently of the DNN under test, a direction unique among existing approaches. Our experiments showed that (1) test prioritization effectively ranks failure-revealing test cases first, (2) the black-box coverage criterion helps construct representative test suites whose failure-finding effectiveness is comparable to suites constructed with white-box criteria, at much lower measurement cost, and (3) black-box test generation synthesizes realistic test cases that are also effective at finding failures in the model under test. We believe the black-box approaches bring complementary benefits to white-box approaches and deserve further investigation.
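
The abstract does not specify the prioritization metric. The following is a minimal sketch assuming softmax confidence of the model's outputs as the ranking signal, which is one plausible choice for surfacing likely misclassifications early; the function and variable names are illustrative, not the dissertation's.

    import numpy as np

    def prioritize_by_confidence(softmax_outputs):
        """Order test inputs from least to most confident.

        Low-confidence inputs are more likely to be misclassified,
        so executing them first tends to reveal failures sooner.
        """
        confidence = softmax_outputs.max(axis=1)  # top-1 probability per input
        return np.argsort(confidence)             # ascending: least confident first

    # Usage with hypothetical model outputs (5 test inputs, 3 classes):
    probs = np.array([
        [0.98, 0.01, 0.01],   # confident -> run late
        [0.40, 0.35, 0.25],   # uncertain -> run early
        [0.70, 0.20, 0.10],
        [0.34, 0.33, 0.33],   # most uncertain -> run first
        [0.90, 0.05, 0.05],
    ])
    print(prioritize_by_confidence(probs))  # [3 1 2 4 0]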
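
For the DNN-independent adequacy measurement, the keywords suggest a variational autoencoder whose latent space approximates the data manifold. Below is a minimal sketch of one way such a coverage criterion could be computed: latent codes of the test inputs (here, random stand-ins for a pretrained VAE encoder's output) are bucketed into an equal-width grid, and coverage is the fraction of grid cells occupied. The binning scheme and all names are assumptions, not the dissertation's definition.

    import numpy as np

    def manifold_coverage(latent_codes, bins_per_dim=10, lo=-3.0, hi=3.0):
        """Fraction of latent-space grid cells occupied by the test suite."""
        z = np.clip(latent_codes, lo, hi - 1e-9)
        # Map each latent code to an integer cell index per dimension.
        cell_ids = np.floor((z - lo) / (hi - lo) * bins_per_dim).astype(int)
        covered = {tuple(c) for c in cell_ids}
        total = bins_per_dim ** latent_codes.shape[1]
        return len(covered) / total

    rng = np.random.default_rng(0)
    z_test = rng.standard_normal((500, 2))  # stand-in for encoder(test_images)
    print(f"manifold coverage: {manifold_coverage(z_test):.2%}")

A low coverage value would indicate that the test suite exercises only a small region of the learned input manifold, suggesting where additional test cases are needed; this is what makes the criterion independent of the DNN under test.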
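
For the black-box test generation, a manifold-based generator would plausibly sample from the VAE prior and decode the samples into synthetic but realistic test inputs. The PyTorch sketch below only illustrates the mechanics under that assumption; the architecture, dimensions, and names are invented, and in practice the decoder weights would come from training a VAE on data from the test domain.

    import torch
    import torch.nn as nn

    # Hypothetical decoder mirroring a VAE trained on 28x28 grayscale images
    # (untrained here; shown only to illustrate shapes and the sampling API).
    latent_dim = 8
    decoder = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, 28 * 28), nn.Sigmoid(),  # pixel intensities in [0, 1]
    )

    def generate_test_inputs(n):
        """Sample from the VAE prior and decode into image-like test inputs."""
        z = torch.randn(n, latent_dim)  # prior: standard normal
        with torch.no_grad():
            return decoder(z).view(n, 28, 28)

    batch = generate_test_inputs(16)
    print(batch.shape)  # torch.Size([16, 28, 28])

Because only the decoder is involved, generation never queries the DNN under test, which is what makes this direction black-box in the sense the abstract describes.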