Automated Test Generation for High MC/DC Using Guided Concolic Testing
Software has replaced humans in many fields and tasks to improve efficiency and safety. However, implementation bugs and design defects in safety-critical software, such as airplane navigation and autopilot systems, can cause devastating accidents with human casualties. On December 20th, 1995, American Airlines Flight 965, en route from Miami, Florida to Cali, Colombia, crashed into a 9,800-foot mountain due to a navigation software error, killing 159 people. To prevent such tragedies from happening again, the Federal Aviation Administration requires that the most safety-critical systems be adequately tested using test cases with high MC/DC (modified condition/decision coverage), as defined in DO-178B/C. Although many studies have shown that such tests are effective in detecting bugs, generating tests that achieve high MC/DC can be expensive and time-consuming, so it is imperative to develop techniques that accomplish this goal cost-effectively. Some test generation techniques based on concolic testing with code transformation have been proposed, but they suffer from a fundamental flaw: the code transformation may change the program's output, which greatly diminishes their practicality. To address this issue, we propose an approach that combines concolic testing, MC/DC measurement, and control flow graph (CFG) analysis to automatically generate tests that achieve high MC/DC. The overall process consists of three major steps. 1) We use an MC/DC measurement tool (developed by ourselves) to instrument the system under test (SUT) and analyze its decisions (e.g., if, for, and while). For each decision, we collect information such as how much MC/DC improvement can be achieved by each combination of conditions (the Boolean assignment for the decision). We also use a static analyzer to collect CFG information for subsequent steps.
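To make the MC/DC analysis in Step 1 concrete, the following minimal Python sketch enumerates, for a single decision, the "independence pairs" that MC/DC requires: for each condition, two Boolean assignments that differ only in that condition and flip the decision's outcome. The function name `mcdc_pairs` and the example decision are illustrative, not part of the authors' tool.

```python
from itertools import product

def mcdc_pairs(decision, n_conditions):
    """For each condition index, list the pairs of condition assignments
    that demonstrate its independent effect on the decision outcome
    (the independence pairs required by MC/DC)."""
    # Evaluate the decision for every combination of condition values.
    rows = [(bits, decision(*bits))
            for bits in product([False, True], repeat=n_conditions)]
    pairs = {i: [] for i in range(n_conditions)}
    for i in range(n_conditions):
        for (a, ra) in rows:
            for (b, rb) in rows:
                # a and b differ only in condition i and flip the outcome.
                if ra != rb and a[i] != b[i] and all(
                        a[j] == b[j] for j in range(n_conditions) if j != i):
                    pairs[i].append((a, b))
    return pairs

# Hypothetical decision with three conditions: if ((a and b) or c)
pairs = mcdc_pairs(lambda a, b, c: (a and b) or c, 3)
```

In a measurement tool, knowing which assignments remain uncovered for each decision is what yields the "MC/DC improvement" estimate for a candidate test described in Step 1.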
2) Concolic testing is performed on the SUT. A concolic executor runs the SUT on an existing test case (a randomly constructed test case is used for the first iteration) and collects the corresponding path constraint (PC). By modifying the collected PC, several new PCs are constructed, each of which can lead the concolic executor down a new execution path in the SUT. 3) Based on the CFG of the SUT and the MC/DC improvement information collected in Step 1, we choose the constructed PC that can achieve the highest MC/DC improvement, and a constraint solver generates a test case satisfying the chosen PC. After Step 3, the process returns to Step 1 to update the MC/DC improvement for each decision. Steps 1 through 3 are repeated until a stopping criterion is met, for example, 100% MC/DC is achieved or no new PCs can be constructed. Case studies on real-life programs with complex control flows will be conducted to evaluate the effectiveness and efficiency of the proposed approach.
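The three-step loop above can be sketched as a worklist algorithm in which candidate path constraints are prioritized by their estimated MC/DC improvement rather than explored in depth-first order. The sketch below is a simplified illustration; all helper names (`execute`, `derive_pcs`, `mcdc_gain`, `solve`) are hypothetical stand-ins for the concolic executor, PC construction, measurement tool, and constraint solver described in the abstract.

```python
import heapq

def generate_tests(execute, derive_pcs, mcdc_gain, solve, seed, budget=100):
    """Guided concolic-testing loop: run the SUT on a test case, derive
    new path constraints (PCs) from the observed PC, and always expand
    the candidate PC that promises the largest MC/DC improvement."""
    tests, frontier, seen = [seed], [], set()
    test = seed
    for _ in range(budget):
        pc = execute(test)              # Step 2: collect the path constraint
        for new_pc in derive_pcs(pc):   # Step 2: construct candidate PCs
            if new_pc not in seen:
                seen.add(new_pc)
                # Step 3: rank candidates by estimated MC/DC improvement
                # (negated gain turns heapq's min-heap into a max-heap).
                heapq.heappush(frontier, (-mcdc_gain(new_pc), new_pc))
        if not frontier:
            break                       # no new PCs can be constructed: stop
        _, best = heapq.heappop(frontier)
        test = solve(best)              # constraint solver yields a test case
        tests.append(test)
    return tests
```

The design choice the abstract emphasizes is the selection policy: because the frontier is ordered by MC/DC gain (recomputed as coverage accumulates), the loop spends its solving budget on paths that close coverage gaps instead of merely exploring new paths.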
W. Eric Wong (Primary Presenter), University of Texas at Dallas, firstname.lastname@example.org;
W. Eric Wong received his M.S. and Ph.D. in Computer Science from Purdue University, West Lafayette, Indiana. He is a Full Professor and the Founding Director of the Advanced Research Center for Software Testing and Quality Assurance (http://paris.utdallas.edu/stqa) in Computer Science at the University of Texas at Dallas (UTD). Prior to joining UTD, he was with Telcordia Technologies (formerly Bellcore) as a senior research scientist and the project manager in charge of Dependable Telecom Software Development. Dr. Wong is the recipient of the 2014 IEEE Reliability Society Engineer of the Year award. He is also the Editor-in-Chief of the IEEE Transactions on Reliability. His research focuses on helping practitioners improve software quality while reducing production cost. In particular, he is working on software testing, program debugging, risk analysis, safety, and reliability. Dr. Wong has published more than 180 papers and edited 2 books. He is also the Founding Steering Committee Chair of the IEEE International Conference on Software Quality, Reliability, and Security (QRS) and the IEEE International Workshop on Program Debugging (IWPD).
Ruizhi Gao (Co-Author), University of Texas at Dallas, email@example.com;
Ruizhi Gao received his Bachelor's degree in Software Engineering from Nanjing University, China. He is currently a Ph.D. candidate under the supervision of Professor W. Eric Wong at the University of Texas at Dallas. His research focuses on software testing, fault localization, and program debugging.
Linghuan Hu (Co-Author), University of Texas at Dallas, firstname.lastname@example.org;
Linghuan Hu received his B.S. degree in Information Security from Chongqing University of Posts and Telecommunications and his M.S. degree in Software Engineering from the University of Texas at Dallas. He is currently a Ph.D. student under the supervision of Professor W. Eric Wong in the Computer Science Department at the University of Texas at Dallas. His current research interests include quality assurance, code instrumentation, combinatorial testing, and model-based test generation.
Richard Kuhn (Co-Author), NIST, email@example.com;
Rick Kuhn is a computer scientist in the Computer Security Division of the National Institute of Standards and Technology. He has authored two books and more than 100 conference or journal publications on information security, empirical studies of software failure, and software assurance, and is a senior member of the Institute of Electrical and Electronics Engineers (IEEE). He co-developed the role-based access control (RBAC) model used throughout industry and led the effort that established RBAC as an ANSI standard. Previously he served as Program Manager for the Committee on Applications and Technology of the President's Information Infrastructure Task Force and as manager of the Software Quality Group at NIST. Before joining NIST in 1984, he worked as a systems analyst with NCR Corporation and the Johns Hopkins University Applied Physics Laboratory. He received an M.S. in computer science from the University of Maryland, College Park, and an MBA from William & Mary.
Raghu Kacker (Co-Author), NIST, firstname.lastname@example.org;
Raghu Kacker is a mathematical statistician in the Mathematical and Computational Sciences Division of the Information Technology Laboratory of NIST. He received his Ph.D. in statistics from Iowa State University in 1979. After one year on the faculty of Virginia Tech in Blacksburg, he worked for seven years in the former AT&T Bell Laboratories in New Jersey. He joined NIST in 1987. His current interests include evaluation of uncertainty in physical and virtual measurements, quantification of uncertainty from bias, combining information from interlaboratory evaluations and multiple methods of measurement, meta-analysis of clinical trials, measurement equations, Bayesian uncertainty, linear models and variance components, industrial statistics, quality engineering, and Taguchi methods. He is a Fellow of the American Statistical Association, a Fellow of the American Society for Quality, and an elected member of the International Statistical Institute. He has received a Bronze Medal from the U.S. Department of Commerce and a Distinguished Technical Staff Award from AT&T Bell Laboratories. He was a member of an NIST team that developed software to assay large parallel processing programs, which won an R&D 100 Award. He is a member of the editorial boards of the journals Total Quality Management and Journal of Applied Statistics.