Abstract Details


9/27/2017  |   11:15 AM - 12:00 PM   |  Track 5 - Test and Verification

Covering Arrays: Evaluating coverage and diversity in the presence of disallowed combinations

Test engineers are often faced with the challenge of selecting test cases that maximize the chance of discovering faults while working with a limited budget. Combinatorial testing is an effective test case selection strategy for addressing this challenge. The basic idea is to select test cases that ensure that all possible combinations of settings from any two (or more) inputs are accounted for, regardless of which subset of two (or more) inputs is selected. Currently, combinatorial testing usually implies a covering array as the underlying mathematical construct. Among the challenges that practitioners sometimes encounter are:

a) accommodating constraints on allowed combinations of settings for some subset of inputs [1] when specifying the covering array to be used for combinatorial testing, and

b) assessing the coverage and diversity properties [2] of the resulting covering array.

In this talk we will address both of these challenges, focusing on a particular subclass of constraints, namely "disallowed combinations". We will motivate the discussion by working through a case study and, in the process, propose a new class of covering arrays, which we will refer to as "unsatisfiable constrained covering arrays", as well as extensions to the metrics proposed in [2] to accommodate this new class of covering arrays.

References

1. Cohen, M., Dwyer, M., & Shi, J., "Constructing interaction test suites for highly-configurable systems in the presence of constraints: A greedy approach," IEEE Transactions on Software Engineering, 34(5), 2008, pp. 633-650.
2. Dalal, S., & Mallows, C., "Factor-covering designs for testing software," Technometrics, 40(3), 1998, pp. 234-243.
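The pairwise-coverage idea described in the abstract can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' method or tooling: the factors, levels, and disallowed combination below are invented for the example, and the metric computed is simply the fraction of allowed two-way combinations that a candidate test suite covers.

```python
from itertools import combinations, product

# Hypothetical system under test: three inputs, each with a few settings.
factors = {
    "os": ["linux", "windows"],
    "browser": ["chrome", "firefox", "safari"],
    "db": ["postgres", "mysql"],
}

# One disallowed combination, stored with factor names in sorted order
# (invented for illustration: assume safari is unavailable on linux).
disallowed = {(("browser", "safari"), ("os", "linux"))}

def allowed_pairs(factors, disallowed):
    """All two-way (factor, level) combinations a pairwise covering
    array must cover, minus the disallowed combinations."""
    pairs = set()
    for f1, f2 in combinations(sorted(factors), 2):
        for l1, l2 in product(factors[f1], factors[f2]):
            pair = ((f1, l1), (f2, l2))
            if pair not in disallowed:
                pairs.add(pair)
    return pairs

def pairs_covered(test, factors):
    """Two-way combinations exercised by one test case (a dict)."""
    return {((f1, test[f1]), (f2, test[f2]))
            for f1, f2 in combinations(sorted(factors), 2)}

def coverage(tests, factors, disallowed):
    """Fraction of allowed two-way combinations covered by the suite."""
    target = allowed_pairs(factors, disallowed)
    hit = set().union(*(pairs_covered(t, factors) for t in tests))
    return len(hit & target) / len(target)

tests = [
    {"os": "linux",   "browser": "chrome",  "db": "postgres"},
    {"os": "windows", "browser": "safari",  "db": "mysql"},
    {"os": "linux",   "browser": "firefox", "db": "mysql"},
    {"os": "windows", "browser": "chrome",  "db": "mysql"},
]
print(round(coverage(tests, factors, disallowed), 3))
```

A covering array of strength 2 is, in these terms, a test suite whose coverage is 1.0; the constrained variants discussed in the talk restrict the target set further, as `allowed_pairs` does here by excluding disallowed combinations.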


Joseph Morgan (Primary Presenter, Author), SAS Institute Inc., joseph.morgan@sas.com;
Joseph Morgan is a research statistician/developer in the JMP division of SAS Institute Inc. Before joining SAS, he was a full-time faculty member in the School of Computer Science at DePaul University, where he taught computer science, software engineering, and data analysis classes. His research interests include combinatorial design methods, software reliability, and empirical software engineering. His publications have appeared in IEEE Transactions on Reliability, in the International Journal of Reliability, Quality and Safety Engineering, and in several conference proceedings.

Ryan Lekivetz (Co-Author), SAS Institute Inc., ryan.lekivetz@sas.com;
Ryan Lekivetz works on the design of experiments (DOE) platforms in JMP. He earned his doctorate in statistics from Simon Fraser University in Burnaby, BC, Canada, and has publications related to topics in DOE in peer-reviewed journals.

Tom Donnelly (Co-Author), SAS Institute Inc., Tom.Donnelly@jmp.com;
Tom Donnelly works as a Systems Engineer in the JMP division of SAS. A physicist by training, he has been actively using and teaching Design of Experiments (DOE) methods for the past 25 years to develop and optimize products, processes, and technologies. Donnelly joined JMP after working as an analyst for the Modeling, Simulation & Analysis Branch of the US Army's Edgewood Chemical Biological Center at Aberdeen Proving Ground, MD. There, he used DOE to develop, test, and evaluate technologies for detection, protection, and decontamination of chemical and biological agents.

2017 Sponsors: IEEE and IEEE Computer Society