Predicting defective modules in different test phases


Abstract

Defect prediction is a well-established research area in software engineering, yet prediction models in the literature do not predict defect-prone modules for different test phases. We investigate the relationships between defects and test phases in order to build a defect prediction model for each test phase. We mined the version history of a large-scale enterprise software product to extract churn and static code metrics, and we built a learning-based model for each of the three testing phases employed by our industry partner, namely function, system, and field testing. We also examined how different defect symptoms relate to the testing phases. We compared the performance of our proposed models against a benchmark model constructed for the test process as a whole. Our results show that building a separate model to predict defect-prone modules for each test phase significantly improves defect prediction performance and shortens defect detection time: a benefit analysis shows that, with the proposed models, defects are detected on average 7 months earlier than they actually were. The outcome of a prediction model should lead to an action in a software development organization. Our proposed models give a more granular outcome, predicting defect-prone modules per testing phase, so that managers can better organize testing teams and effort.
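To make the setup concrete, the sketch below trains one classifier per test phase and a single benchmark classifier for the whole test process, using module-level metrics. It is a minimal illustration only, assuming scikit-learn with a Naive Bayes learner (a common choice in the defect prediction literature, not necessarily the study's exact learner); the file name, metric columns, and per-phase label columns are hypothetical.

```python
# Minimal sketch: per-phase defect predictors vs. a single benchmark model.
# Assumptions (not from the paper): pandas/scikit-learn, a hypothetical
# modules.csv with one row per module, metric columns, and a binary defect
# label per test phase.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

PHASES = ["function", "system", "field"]
METRICS = ["v_g", "ev_g", "total_churn", "num_edits"]  # hypothetical columns

df = pd.read_csv("modules.csv")  # hypothetical per-module metrics + labels
X = df[METRICS].values

# Proposed setup: one model per test phase.
for phase in PHASES:
    y = df[f"defective_in_{phase}"].values  # hypothetical label column
    recall = cross_val_score(GaussianNB(), X, y, cv=10, scoring="recall").mean()
    print(f"{phase:8s} model, mean recall: {recall:.2f}")

# Benchmark: a single model for the entire test process.
y_all = df[[f"defective_in_{p}" for p in PHASES]].any(axis=1).astype(int)
recall = cross_val_score(GaussianNB(), X, y_all, cv=10, scoring="recall").mean()
print(f"benchmark model, mean recall: {recall:.2f}")
```

Comparing the per-phase models' performance against the benchmark's mirrors the comparison reported in the abstract.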



Acknowledgments

This research is supported in part by the Turkish State Planning Organization (DPT) under Project Number 2007K120610 and by NSERC under Project Number 402003-2012. We would like to thank the IBM Canada Lab, Toronto site, for making their development data available for research and for their strategic help during all phases of this research. The opinions expressed in this paper are those of the authors and not necessarily those of IBM Corporation.

Author information

Corresponding author

Correspondence to Bora Caglayan.

Appendix: list of software metrics used in this study

Code metrics

  • Cyclomatic complexity, v(G): number of linearly independent paths through a module
  • Cyclomatic density, vd(G): ratio of a module's cyclomatic complexity to its length
  • Essential complexity, ev(G): degree to which a module contains unstructured constructs
  • Essential density, ed(G): (ev(G) - 1) / (v(G) - 1)
  • Design density, dd(G): condition/decision ratio
  • Maintenance severity: ev(G) / v(G)
  • Normalized cyclomatic complexity, norm v(G): v(G) divided by the number of lines
  • Decision count: number of decision points
  • Condition count: number of conditionals
  • Multiple condition count: number of multiple conditions
  • Branch count: number of branches
  • Call pairs: number of calls to other functions
  • Unique operators: n1
  • Unique operands: n2
  • Total operators: N1
  • Total operands: N2
  • Vocabulary (n): n1 + n2
  • Length (N): N1 + N2
  • Volume (V): N * log2(n)
  • Level (L): (2/n1) * (n2/N2)
  • Difficulty (D): 1/L
  • Programming effort (E): D * V
  • Programming time (T): E/18
  • Executable LOC: nonempty lines of code
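As a worked example of the derived Halstead measures listed above, the sketch below computes them from the four primitive counts; the helper function and the sample counts are illustrative, not drawn from the study's dataset.

```python
import math

def halstead(n1, n2, N1, N2):
    """Derived Halstead measures from the primitive counts: unique/total
    operators (n1, N1) and unique/total operands (n2, N2)."""
    n = n1 + n2                 # vocabulary
    N = N1 + N2                 # length
    V = N * math.log2(n)        # volume
    L = (2 / n1) * (n2 / N2)    # level
    D = 1 / L                   # difficulty
    E = D * V                   # programming effort
    T = E / 18                  # programming time (Stroud number 18)
    return {"vocabulary": n, "length": N, "volume": V,
            "level": L, "difficulty": D, "effort": E, "time": T}

# Illustrative counts for a small module (not from the paper's dataset).
print(halstead(n1=12, n2=20, N1=50, N2=45))
```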

Churn metrics

  • Number of edits: number of edits made to a method
  • Number of unique committers: number of unique committers who edited a method
  • Lines added: total number of lines added
  • Lines removed: total number of lines removed
  • Total churn: total number of lines edited
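For readers who want to collect churn metrics like these, a minimal sketch follows, assuming GitPython and file-level granularity (the study extracted churn at the method level from its industrial repository, with tooling not shown here); the repository path is a placeholder.

```python
# Minimal sketch: file-level churn metrics from a Git history.
# Assumptions (not from the paper): GitPython, file granularity rather than
# the method granularity used in the study, and a local repository path.
from collections import defaultdict

import git  # GitPython: pip install GitPython

repo = git.Repo("path/to/repo")  # placeholder path
churn = defaultdict(lambda: {"edits": 0, "committers": set(),
                             "added": 0, "removed": 0})

for commit in repo.iter_commits("HEAD"):
    for path, stats in commit.stats.files.items():
        entry = churn[path]
        entry["edits"] += 1                           # number of edits
        entry["committers"].add(commit.author.email)  # unique committers
        entry["added"] += stats["insertions"]         # lines added
        entry["removed"] += stats["deletions"]        # lines removed

for path, entry in churn.items():
    total_churn = entry["added"] + entry["removed"]   # total churn
    print(path, entry["edits"], len(entry["committers"]),
          entry["added"], entry["removed"], total_churn)
```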


About this article


Cite this article

Caglayan, B., Tosun Misirli, A., Bener, A.B. et al. Predicting defective modules in different test phases. Software Qual J 23, 205–227 (2015). https://doi.org/10.1007/s11219-014-9230-x
