Does Software Have to Be Ultra Reliable in Safety Critical Systems?


It is difficult to demonstrate that safety-critical software is completely free of dangerous faults. Prior testing can show that the unsafe failure rate lies below some bound, but in practice the bound is not low enough to demonstrate the level of safety performance required of critical software-based systems such as avionics. This paper argues that higher levels of safety performance can be claimed by taking account of: 1) external mitigation to prevent an accident; 2) the fact that software is corrected once failures are detected in operation. A model based on these concepts is developed to derive an upper bound on the expected number of failures and accidents under different assumptions about fault fixing, diagnosis, repair, and accident mitigation. A numerical example illustrates the approach, and the implications and potential applications of the theory are discussed.
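The paper's model is not reproduced in this abstract, but the intuition behind the two claims can be sketched with a Monte Carlo toy (all parameters hypothetical): if each fault is removed on the first failure it causes (perfect fault fixing), the total number of failures cannot exceed the initial fault count, and an accident occurs only when external mitigation also fails.

```python
import random

def simulate(n_faults=10, mitigation=0.9, max_demands=100_000, seed=1):
    """Toy sketch, not the paper's model: each residual fault fails
    randomly per demand; under perfect fault fixing it is removed on
    its first failure, so failures <= n_faults, and an accident is
    counted only when the external mitigation also fails."""
    rng = random.Random(seed)
    # hypothetical per-demand failure probabilities for each fault
    faults = [rng.uniform(1e-4, 1e-2) for _ in range(n_faults)]
    failures = accidents = 0
    for _ in range(max_demands):
        for i, p in enumerate(faults):
            if p and rng.random() < p:
                failures += 1
                if rng.random() > mitigation:   # mitigation fails
                    accidents += 1
                faults[i] = 0.0                 # perfect fix: fault removed
    return failures, accidents
```

Under these assumptions the expected number of accidents is bounded by the fault count times the probability that mitigation fails, which is the kind of bound the abstract describes.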

DOI: 10.1007/978-3-642-40793-2_11

3 Figures and Tables

Cite this paper

@inproceedings{Bishop2013DoesSH,
  title={Does Software Have to Be Ultra Reliable in Safety Critical Systems?},
  author={Peter Bishop},
  booktitle={SAFECOMP},
  year={2013}
}