An empirical study of the reliability of UNIX utilities
@article{Miller1990AnES,
  title   = {An empirical study of the reliability of UNIX utilities},
  author  = {Barton P. Miller and Lars Fredriksen and Bryan So},
  journal = {Commun. ACM},
  year    = {1990},
  volume  = {33},
  pages   = {32--44}
}
The following section describes the tools we built to test the utilities. These tools include the fuzz (random character) generator, ptyjig (to test interactive utilities), and scripts to automate the testing process. Next, we will describe the tests we performed, giving the types of input we presented to the utilities. Results from the tests will follow along with an analysis of the results, including identification and classification of the program bugs that caused the crashes. The final…
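The abstract above mentions the fuzz (random character) generator at the heart of the study. A minimal sketch of such a generator in Python follows; the original tool was a standalone C program, so the function name and parameters here are illustrative, not the paper's actual interface:

```python
import random
import string

def fuzz_stream(length, printable_only=False, include_nul=True, seed=None):
    """Generate a stream of random bytes of the kind the fuzz tool piped
    into UNIX utilities. All parameter names are illustrative; the original
    tool was a standalone C program with command-line flags instead.
    """
    rng = random.Random(seed)
    if printable_only:
        # Restrict to printable ASCII, one of the input classes the study used.
        alphabet = string.printable.encode()
    elif include_nul:
        alphabet = bytes(range(256))      # all byte values, including NUL
    else:
        alphabet = bytes(range(1, 256))   # all byte values except NUL
    return bytes(rng.choice(alphabet) for _ in range(length))

# Example: feed 10 KB of random bytes to a utility's stdin, e.g.
#   subprocess.run(["grep", "x"], input=fuzz_stream(10_000))
data = fuzz_stream(10_000, seed=42)
```

A seed makes a crashing input reproducible, which the study relied on to re-run and classify each crash.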
1,036 Citations
An empirical study of the robustness of MacOS applications using random testing
- Computer Science, OPSR
- 2007
This study applies fuzz testing techniques to applications running on the Mac OS X operating system, and finds the GUI-based applications to be noticeably less reliable than those in either of the previous Windows (Win32) or UNIX (X-Windows) studies.
An empirical study of the robustness of MacOS applications using random testing
- Computer Science, RT '06
- 2006
This study applies fuzz testing techniques to applications running on the Mac OS X operating system, and finds the GUI-based applications to be less reliable than either of the previous Windows (Win32) or UNIX (X-Windows) studies.
Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services
- Computer Science
- 1995
This study parallels the 1990 study and tests the reliability of a large collection of basic UNIX utility programs, X-Window applications and servers, and network services, using a simple testing method of subjecting these programs to a random input stream.
An empirical study of the robustness of Windows NT applications using random testing
- Computer Science
- 2000
This study applies black-box random input testing techniques to applications running on the Windows NT operating system, and builds a tool that helps automate the testing of Windows NT applications.
An Inquiry into the Stability and Reliability of UNIX Utilities
- Computer Science
- 2001
We tested a large set of UNIX utilities under two popular UNIX variants, a GNU/Linux platform and a Solaris platform, for characteristics of stability and reliability. The testing methodology we…
Fuzz testing of web applications
- Computer Science
- 2008
A method and tool for (semi) automatic generation of pseudo random test data (fuzzing) that has been applied to several popular open source products, and shows that from the perspective of the human tester, the approach to testing is quick, easy and effective.
2.1 The Change that Causes a Failure
- Computer Science
The Delta Debugging algorithm generalizes and simplifies a failing test case to a minimal test case that still produces the failure; it also isolates the difference between a working and a failing test case.
Simplifying and Isolating Failure-Inducing Input
- Computer Science, IEEE Trans. Software Eng.
- 2002
The delta debugging algorithm generalizes and simplifies the failing test case to a minimal test case that still produces the failure, and isolates the difference between a passing and a failing test case.
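The delta debugging minimization algorithm summarized above (ddmin) can be sketched as follows. This is a simplified rendition, not Zeller and Hildebrandt's reference implementation; `fails` stands for any predicate that runs the program on a candidate input and reports whether the failure still occurs:

```python
def ddmin(failing_input, fails):
    """Reduce a failing input to a 1-minimal one: at the finest granularity,
    removing any single remaining chunk no longer triggers the failure.
    `fails(candidate)` must return True when the failure still occurs.
    """
    inp = list(failing_input)
    n = 2  # current granularity: number of chunks
    while len(inp) >= 2:
        # Split inp into n roughly equal chunks.
        subsets, start = [], 0
        for i in range(n):
            stop = start + (len(inp) - start) // (n - i)
            subsets.append(inp[start:stop])
            start = stop
        reduced = False
        # Try each complement: the input with one chunk removed.
        for i in range(n):
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if fails(complement):
                inp = complement           # smaller input still fails; keep it
                n = max(n - 1, 2)          # continue at a coarser granularity
                reduced = True
                break
        if not reduced:
            if n >= len(inp):
                break                      # finest granularity reached
            n = min(n * 2, len(inp))       # refine granularity and retry
    return inp
```

For example, `ddmin(list("aaxbbycc"), lambda s: 'x' in s and 'y' in s)` reduces the input to `['x', 'y']`, the minimal fragment that still triggers the (hypothetical) failure.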
Finding weaknesses in web applications through the means of fuzzing
- Computer Science
- 2008
This thesis describes a method and tool for (semi) automatic generation of pseudo random test data (also known as “fuzzing”), which has been applied to several popular open source products and shows that from the perspective of the human tester, the approach to testing is quick, easy and effective.
An empirical study of operating systems errors
- Computer Science, SOSP
- 2001
A study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels found that device drivers have error rates up to three to seven times higher than the rest of the kernel.
References
Verifying a multiprocessor cache controller using random test generation
- Computer Science, IEEE Design & Test of Computers
- 1990
The strategy was to develop a random tester that would generate and verify the complex interactions between multiple processors in functional simulation; the tester was easy to develop and detected over half the bugs uncovered during functional simulation.
Verifying a Multiprocessor Cache Controller Using Random Case
- Computer Science
- 1989
A random tester is developed to generate and verify the complex interactions between multiple processors in the functional simulation of SPUR, a shared memory multiprocessor designed and built at U.C. Berkeley.
Letters to the editor: go to statement considered harmful
- Computer Science, CACM
- 1968
My considerations are that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, and that his intellectual powers are rather geared to master static relations and his powers to visualize processes evolving in time are relatively poorly developed.
Go to statement considered harmful
- Computer Science
- 1979
In form and content, Dijkstra's letter is similar to his 1965 paper, and the last few paragraphs underscore once again why the subject of structured programming stayed out of the mainstream of the data processing industry for so long.
A Tour of the Worm
- Computer Science
- 1988
This paper provides a chronology for the outbreak and presents a detailed description of the internals of the worm, based on a C version produced by decompiling.
On the criteria to be used in decomposing systems into modules
- Economics, CACM
- 1972
This paper discusses modularization as a mechanism for improving the flexibility and comprehensibility of a system while allowing the shortening of its development time. The effectiveness of a…
Efficient Learning of Context-Free Grammars from Positive Structural Examples
- Computer Science, Inf. Comput.
- 1992
With microscope and tweezers: the worm from MIT's perspective
- Art, CACM
- 1989
The actions taken by a group of computer scientists at MIT during the worm invasion represent a study of human response to a crisis. The authors also relate the experiences and reactions of other…
Crisis and aftermath
- Business, CACM
- 1989
Last November the Internet was infected with a worm program that eventually spread to thousands of machines, disrupting normal activities and Internet connectivity for many days. The following…