In avionics and other critical-systems domains, the adequacy of test suites is currently measured using the MC/DC metric on source code (or on a model in model-based development). We believe that the rigor of the MC/DC metric is highly sensitive to the structure of the implementation and can therefore be misleading as a test adequacy criterion. We investigate…
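As a rough illustration of why MC/DC obligations shift with implementation structure, consider the sketch below; the functions, names, and values are hypothetical and not taken from the work above. Both functions compute the same decision, but MC/DC measured over the split form applies to two smaller decisions rather than one compound one, so the obligations differ and a test suite adequate for one form need not be adequate for the other.

```c
#include <assert.h>
#include <stdbool.h>

/* Two behaviorally equivalent implementations of the same decision.
 * Names are illustrative only.                                          */

/* Form 1: a single compound decision. MC/DC must show that each of
 * a, b, c independently affects the outcome of (a && b) || c.          */
static bool alarm_inlined(bool a, bool b, bool c) {
    return (a && b) || c;
}

/* Form 2: the same logic split over an intermediate variable. MC/DC now
 * applies to two smaller decisions, (a && b) and (t || c), considered
 * separately, even though the observable behavior is identical.         */
static bool alarm_split(bool a, bool b, bool c) {
    bool t = a && b;   /* first decision */
    return t || c;     /* second decision */
}

int main(void) {
    /* The two forms agree on every input, yet a suite achieving MC/DC
     * on one form does not necessarily achieve it on the other.         */
    for (int i = 0; i < 8; i++) {
        bool a = i & 1, b = i & 2, c = i & 4;
        assert(alarm_inlined(a, b, c) == alarm_split(a, b, c));
    }
    return 0;
}
```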
Black-box testing is a technique in which test cases are derived from requirements without regard to the internal structure of the implementation. In current practice, black-box test cases are derived manually from requirements, a costly and time-consuming process. In this paper, we present the notion of…
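As a rough picture of the manual workflow this abstract describes, the following sketch shows a test case hand-derived from a natural-language requirement. The requirement, the component, and all names and values are hypothetical; the tester treats the step function as a black box and only checks the externally required behavior.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical requirement (illustrative only):
 * "When the measured altitude falls below the alert threshold and the
 *  alert system is enabled, the low-altitude warning shall be active."  */
typedef struct {
    int  altitude;         /* feet */
    int  alert_threshold;  /* feet */
    bool alert_enabled;
    bool warning_active;
} AltitudeMonitor;

/* System under test; a black box from the tester's point of view. */
static void altitude_monitor_step(AltitudeMonitor *m) {
    m->warning_active = m->alert_enabled && (m->altitude < m->alert_threshold);
}

int main(void) {
    /* A test case derived manually from the requirement above. */
    AltitudeMonitor m = { .altitude = 400, .alert_threshold = 500,
                          .alert_enabled = true, .warning_active = false };
    altitude_monitor_step(&m);
    assert(m.warning_active);   /* expected outcome stated by the requirement */
    return 0;
}
```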
In black-box testing, one is interested in creating a suite of tests from requirements that adequately exercise the behavior of a software system without regard to the internal structure of the implementation. In current practice, the adequacy of black-box test suites is inferred by examining coverage on an executable artifact, either source code or…
In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a…
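To make concrete what an objective requirements-based coverage obligation can look like, here is a minimal sketch; the requirement, the trace representation, and all names are invented for illustration and are not the specific metrics proposed in that work. The check asks whether a test actually exercises a requirement's triggering condition, rather than satisfying the requirement vacuously.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical requirement (illustrative only):
 *   "Whenever the pilot presses the disengage switch, the autopilot
 *    shall be off at the next step."                                    */
typedef struct {
    bool disengage_pressed;  /* triggering condition at step i          */
    bool autopilot_on;       /* consequent observed at the next step    */
} Step;

/* Returns true if the trace exercises the requirement's triggering
 * condition at least once; a suite in which no test does so gives no
 * real evidence about this requirement, however high its structural
 * coverage of the implementation may be.                                */
static bool requirement_exercised(const Step *trace, int len) {
    for (int i = 0; i < len; i++) {
        if (trace[i].disengage_pressed) return true;
    }
    return false;
}

int main(void) {
    Step test[] = {
        { .disengage_pressed = false, .autopilot_on = true  },
        { .disengage_pressed = true,  .autopilot_on = false },
    };
    printf("requirement exercised: %s\n",
           requirement_exercised(test, 2) ? "yes" : "no");
    return 0;
}
```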
Model-based software development is gaining interest in domains such as avionics, space, and automotive systems. The model serves as the central artifact for development efforts such as code generation; it is therefore crucial that the model be extensively validated. Automatic generation of interaction test suites is a candidate for partial automation of…
In current model-based development practice, validation that we are building a correct model is achieved by manually deriving requirements-based test cases for model testing. Model validation performed this way is time-consuming and expensive, particularly in the safety-critical systems domain, where high confidence in model correctness is required. In…
Conformance testing in model-based development refers to the testing activity that verifies whether the code generated (manually or automatically) from the model is behaviorally equivalent to the model. Presently, the adequacy of conformance testing is inferred by measuring structural coverage achieved over the model. We hypothesize that adequacy metrics for…
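The conformance-testing activity described above can be pictured with a small back-to-back sketch: the same inputs are run through a reference step function for the model and through the generated code, and the outputs are compared. The step functions, their signatures, and the input values below are hypothetical placeholders, not drawn from any actual model or code generator.

```c
#include <assert.h>

/* Reference semantics of the model (e.g. as produced by a simulator). */
static int model_step(int input, int *state) {
    *state += input;
    return *state > 10;       /* output: threshold exceeded             */
}

/* Code generated (manually or automatically) from the model. Here it is
 * intentionally identical; in practice it is the artifact whose
 * behavioral equivalence to the model we are trying to establish.       */
static int generated_step(int input, int *state) {
    *state += input;
    return *state > 10;
}

int main(void) {
    int inputs[] = { 3, 4, 5, -2, 7 };
    int model_state = 0, code_state = 0;
    for (unsigned i = 0; i < sizeof inputs / sizeof inputs[0]; i++) {
        int out_model = model_step(inputs[i], &model_state);
        int out_code  = generated_step(inputs[i], &code_state);
        assert(out_model == out_code);   /* conformance on this step */
    }
    return 0;
}
```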