IMPORTANT DATES
February 1, 2017 Abstract and optional full paper submission begins
May 26, 2017 Exhibit & Supporter registration opens
June 15, 2017 Abstract and optional extended abstract submission ends
June 29, 2017 Acceptance notifications sent
July 24, 2017 Submit final abstracts and presenter biographies
August 28, 2017 Submit final presentations and optional full papers

Abstract Details

9/26/2017 | 2:00 PM - 2:45 PM | Track 3 - Metrics

How Software Cost Estimators Select Size Metrics: Preliminary Survey Results

With the many software size metrics available, it is difficult to select the most effective one for estimating a project's cost. Each size metric was developed with the intention of simplifying and improving effort estimates, because it is difficult to acceptably estimate project size in source lines of code (SLOC) until the project is nearly complete. Since size metrics use different information, they may be more useful in different development environments and at different times during the software lifecycle. Books and research on cost and effort estimation provide very little detail on how to select the correct or best software size metric for estimating a project's cost. Chemuturi suggests using the organization's standard size measurement technique, the technique one is most familiar with, trying Function Points because of their popularity, following a client's request or a boss's suggestion, or choosing the size metric for which data exist [1]. Sheetz et al. [2] defined criteria for selecting a size metric "that is most suitable for effort estimation", but these criteria are based on the "characteristics of the software sizing methods", such as ease of calculation. The guidelines and criteria offered by [1] and [2] can result in multiple options, leaving an organization or cost estimator without clear direction in selecting an appropriate size metric. It is not clear how organizations, project managers, and cost estimators actually select a size metric for estimation.

To understand how software size metrics are selected, we constructed a survey for project managers and cost estimators. Some interesting observations from the 20 survey responses are:

• 15 respondents use SLOC, 11 use Expert Judgment (not including Story Points), and 9 use Function Points; these are the three most used size metrics in the current sample.
• Of the 9 respondents who said they selected their size metric for its effectiveness, 7 also said that accurate cost estimation is a problem in their team or organization.
• Similarly, 9 respondents said they selected their size metric for its applicability to the type of software being built, yet 7 of them also said that accurate cost estimation is a problem in their team or organization.
• Size metrics were most often chosen for the following reasons:
  o Historical data: 14
  o Ease of use/calculation: 12
  o Applicable to lifecycle model: 10
  o Applicable to software: 9

These survey results offer some surprising and contradictory observations. For instance, though it is practically impossible to estimate SLOC accurately until the project is nearly complete, most project managers and cost estimators still use SLOC as their primary size metric. Because the current sample is neither random nor representative of all software development teams, the authors wish to share these preliminary findings while continuing to gather more participation.

References
[1] Chemuturi, Murali. Software Estimation Best Practices, Tools & Techniques: A Complete Guide for Software Project Estimators. J. Ross Publishing, 2009.
[2] Sheetz, Steven D., David Henderson, and Linda Wallace. "Understanding Developer and Manager Perceptions of Function Points and Source Lines of Code." Journal of Systems and Software 82.9 (2009): 1540-1549.
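As context for how a SLOC-based size estimate turns into an effort estimate, the sketch below is a minimal illustration, not part of the survey or the authors' work. It uses the published Basic COCOMO organic-mode formulas (Effort = 2.4 × KSLOC^1.05 person-months, Schedule = 2.5 × Effort^0.38 months); the 32 KSLOC input is a hypothetical value chosen only for the example.

```python
# Illustrative sketch: how a SLOC-based size estimate feeds a parametric
# effort model. Coefficients are the published Basic COCOMO "organic mode"
# values (Boehm, 1981); the 32 KSLOC size is a hypothetical input, not data
# from the survey described in this abstract.

def basic_cocomo_organic(ksloc: float) -> tuple[float, float]:
    """Return (effort in person-months, schedule in calendar months)."""
    effort_pm = 2.4 * ksloc ** 1.05            # person-months
    schedule_months = 2.5 * effort_pm ** 0.38  # calendar months
    return effort_pm, schedule_months

if __name__ == "__main__":
    size_ksloc = 32.0  # hypothetical size estimate, thousands of SLOC
    effort, schedule = basic_cocomo_organic(size_ksloc)
    print(f"{size_ksloc} KSLOC -> ~{effort:.0f} person-months, ~{schedule:.0f} months")
```

Because size enters such models superlinearly, errors in an early SLOC estimate propagate directly into the effort and schedule figures, which is one reason the choice and timing of a size metric matters.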

Anandi Hira (Primary Presenter), University of Southern California, a.hira@usc.edu;
Anandi Hira is currently a PhD student under Dr. Barry Boehm in the University of Southern California's (USC) Computer Science Department. Her research interests lie in cost estimation and cost models. She has been part of the Unified Code Count (UCC) development effort at USC's Center for Systems and Software Engineering (CSSE) for the past six years, collecting and analyzing data to improve the development processes and the product's quality. Anandi has also joined the effort within USC's CSSE to develop COCOMO® III (COnstructive COst MOdel) as an update to COCOMO® II.

Barry Boehm (Co-Presenter), USC, boehm@usc.edu;
Dr. Barry Boehm is the TRW Professor in the USC Computer Science, Industrial and Systems Engineering, and Astronautics Departments. He is also the Director of Research of the DoD-Stevens-USC Systems Engineering Research Center, and the founding Director of the USC Center for Systems and Software Engineering. He was Director of DARPA-ISTO from 1989 to 1992, and previously worked at TRW (1973-89), the Rand Corporation (1959-73), and General Dynamics (1955-59). His contributions include the COCOMO family of cost models and the Spiral family of process models. He is a Fellow of the primary professional societies in computing (ACM), aerospace (AIAA), electronics (IEEE), and systems engineering (INCOSE), and a member of the U.S. National Academy of Engineering.

2017 Sponsors: IEEE and IEEE Computer Society