Detection of Requirement Errors and Faults via a Human Error Taxonomy: A Feasibility Study

@article{Hu2016DetectionOR,
  title={Detection of Requirement Errors and Faults via a Human Error Taxonomy: A Feasibility Study},
  author={Wenhua Hu and Jeffrey C. Carver and Vaibhav Anu and Gursimran Singh Walia and Gary L. Bradshaw},
  journal={Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement},
  year={2016}
}
Background: Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on the detection and removal of faults (i.e., problems recorded in a document) as opposed to identifying the underlying errors that produced those faults. Accordingly, developers are likely to make the same errors in the future and fail to recognize other existing faults with the same origins. Therefore, we have created a Human Error Taxonomy (HET) to help… 
Using Human Error Abstraction Method for Detecting and Classifying Requirements Errors: A Live Study
TLDR
It is hypothesized that inspections focused on identifying human errors are better at identifying requirements problems than inspections focused on faults, and the design and evaluation of the HEA method during the live study is discussed.
Defect Prevention in Requirements Using Human Error Information: An Empirical Study
TLDR
The results of this study show that a better understanding of human errors does lead developers to insert fewer problems into their own requirements documents, and indicate that different types of Human Error information have different impacts on fault prevention.
Using human error information for error prevention
TLDR
This work evaluates whether understanding human errors contributes to the prevention of errors and concomitant faults during requirements engineering and identifies error prevention techniques used in industrial practice; the results showed that the better a requirements engineer understands human errors, the fewer errors and concomitant faults they make when developing a new requirements document.
Understanding Human Errors In Software Requirements: An Online Survey
TLDR
In this research, the findings from human error research are applied to improve the process of requirements engineering by focusing on those issues that are human errors.
Training Industry Practitioners to Investigate the Human Error Causes of Requirements Faults
TLDR
An industrial study to evaluate whether human error training procedures and instrumentation created by the authors can be used to train industry software practitioners on human errors that occur during the requirements engineering process shows that parts of the training procedures need to be improved.
How Software Developers Mitigate Their Errors When Developing Code
TLDR
It is found that developers struggle with effective mitigation strategies for their errors, reporting strategies largely based on improving their own willpower to concentrate better on coding tasks; these findings may help reduce errors during software development.
Usefulness of a Human Error Identification Tool for Requirements Inspection: An Experience Report
TLDR
This empirical study investigates the effectiveness of a newly developed Human Error Abstraction Assist (HEAA) tool in helping inspectors identify human errors to guide the fault detection during the requirements inspection.
A Bird’s Eye View of Natural Language Processing and Requirements Engineering
TLDR
It is asserted that human involvement with knowledge about the domain and the specific project is still needed in the RE process despite progress in the development of NLP systems.
...

References

SHOWING 1-10 OF 19 REFERENCES
Requirement error abstraction and classification: an empirical study
TLDR
The results show that the EAP significantly improves the productivity of subjects, that the RET is useful for improving software quality, that it provides useful insights into the requirements document, and that various context variables also impact the results.
A systematic literature review to identify and classify software requirement errors
Using error abstraction and classification to improve requirement quality: conclusions from a family of four empirical studies
Achieving high software quality is a primary concern for software development organizations. Researchers have developed many quality improvement methods that help developers detect faults early in…
Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment
TLDR
It is hypothesized that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for different, specific classes of faults, will have a significantly higher success rate than either Ad Hoc or Checklist methods.
Experimenting with error abstraction in requirements documents
  • F. Lanubile, F. Shull, V. Basili
  • Computer Science
    Proceedings Fifth International Software Metrics Symposium. Metrics (Cat. No.98TB100262)
  • 1998
TLDR
An empirical study is presented whose main purpose is to investigate whether defect detection in requirements documents can be improved by focusing on the errors in a document rather than the individual faults that they cause.
The cost of errors in software development: evidence from industry
Human error in the software generation process
TLDR
The nature of random faults and the extent to which they can be attributed to human error are discussed, and a probability model is presented to determine the susceptibility of a software generation process to the introduction of faults.
Building a requirement fault taxonomy: experiences from a NASA verification and validation research project
  • J. Hayes
  • Computer Science
    14th International Symposium on Software Reliability Engineering, 2003. ISSRE 2003.
  • 2003
TLDR
A NASA-specific requirement fault taxonomy is built, along with processes for tailoring the taxonomy to a class of software projects or to a specific project, and lessons learned are presented.
Incorporating a fault categorization and analysis process in the software build cycle
TLDR
A programming technique that requires programmers to both categorize and understand the reason for any faults at each iterative build is reviewed, and the data collected suggest that all three measured variables improve when programmers take the time to categorize and understand the reason for a fault at build time.
...