Selecting Best Practices for Effort Estimation

Abstract

Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently underconstrained problem. Hence, the learned effort models can exhibit large deviations that prevent standard statistical methods (e.g., t-tests) from distinguishing the performance of alternative effort-estimation methods. The COSEEKMO effort-modeling workbench applies a set of heuristic rejection rules to comparatively assess results from alternative models. Using these rules, and despite the presence of large deviations, COSEEKMO can rank alternative methods for generating effort models. Based on our experiments with COSEEKMO, we recommend a new view on supposed "best practices" in model-based effort estimation: 1) each such practice should be viewed as a candidate technique that may or may not be useful in a particular domain, and 2) tools like COSEEKMO should be used to help analysts explore and select the best method for their particular domain.
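The core claim is that large within-method deviations blunt a means-only test such as the t-test, while heuristic rejection rules can still impose a ranking. The sketch below illustrates this with a hypothetical rejection rule that jointly compares median error and spread over magnitude-of-relative-error (MRE) scores; the sample data, function names, and the rule itself are illustrative assumptions, not COSEEKMO's actual rules.

```python
# A minimal sketch of the comparison problem described above.
# The MRE samples and reject_worse() are illustrative assumptions,
# not COSEEKMO's actual rejection rules or data.
import statistics
from scipy import stats

# Hypothetical magnitude-of-relative-error (MRE) scores for two
# effort-estimation methods over the same eight historical projects.
method_a = [0.10, 0.90, 0.15, 1.20, 0.20, 0.85, 0.12, 1.10]
method_b = [0.30, 1.10, 0.40, 1.50, 0.45, 1.20, 0.35, 1.40]

# Large within-method deviations: a t-test cannot separate the two
# methods (p is well above 0.05 for this data).
t_stat, p_value = stats.ttest_ind(method_a, method_b)
print(f"t-test p-value: {p_value:.2f}")

def reject_worse(a, b):
    """Hypothetical rejection rule: discard a method only if its
    median error AND spread are both no better than the rival's."""
    med_a, med_b = statistics.median(a), statistics.median(b)
    sd_a, sd_b = statistics.stdev(a), statistics.stdev(b)
    if med_a <= med_b and sd_a <= sd_b:
        return "reject method B"
    if med_b <= med_a and sd_b <= sd_a:
        return "reject method A"
    return "no winner; keep both"

print(reject_worse(method_a, method_b))  # -> "reject method B"
```

The point is only that a rule comparing central tendency and spread jointly can reach a decision where a means-only significance test cannot.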

DOI: 10.1109/TSE.2006.114


Cite this paper

@article{Menzies2006SelectingBP,
  title   = {Selecting Best Practices for Effort Estimation},
  author  = {Tim Menzies and Zhihao Chen and Jairus Hihn and Karen T. Lum},
  journal = {IEEE Transactions on Software Engineering},
  year    = {2006},
  volume  = {32}
}