Article title

Preliminary evaluation of schemes for predicting user satisfaction with the ability of system to meet stated objectives

Authors
Identifiers
Title variants
Publication languages
EN
Abstracts
EN
In the software engineering literature, the two most commonly investigated prediction targets are development effort and software quality. This study follows the methodological advances of that work but focuses on predicting user satisfaction in software projects. The specific outcome variable investigated is user satisfaction with the ability of the system to meet stated objectives (MSO). A total of 288 prediction schemes have been evaluated on their ability to predict MSO. These schemes were built as different combinations of components: feature pre-selection, elimination of missing values, automated feature selection, and a classifier. The two best-performing schemes, involving the W-LMT and W-SimpleLogistic classifiers, achieved an accuracy of 0.71 on the test subset, measured as the Matthews correlation coefficient. Significant differences were observed between classifiers and between selected other components, depending on the dataset (validation or test). The discussed results may serve as guidelines for designing a scheme to predict user satisfaction.
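The accuracy reported in the abstract is the Matthews correlation coefficient (MCC). As a minimal sketch (the confusion-matrix counts below are hypothetical, not taken from the paper), MCC for a binary classifier can be computed from the four confusion-matrix counts as:

```python
import math

def matthews_corrcoef(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Common convention: MCC is defined as 0 when any marginal count is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical counts, for illustration only:
print(round(matthews_corrcoef(40, 45, 8, 7), 2))  # prints 0.7
```

MCC ranges from -1 (total disagreement) through 0 (chance-level) to +1 (perfect prediction), which makes the reported 0.71 a fairly strong result for a binary outcome.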
Year
Pages
32–50
Physical description
Bibliography: 38 items, figures, tables
Contributors
  • Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, Szczecin, Poland
Bibliography
  • [1] ISO/IEC: Software engineering - Software product Quality Requirements and Evaluation (SQuaRE) - System and software quality models, ISO/IEC 25010:2011(E), 2011.
  • [2] Raza, A., Capretz, L. F., Ahmed, F.: Improvement of Open Source Software Usability: An Empirical Evaluation from Developers’ Perspective. Advances in Software Engineering, 2010, pp. 1–12. ISSN 1687-8655.
  • [3] Ives, B., Olson, M. H., Baroudi, J. J.: The Measurement of User Information Satisfaction. Commun. ACM, 26(10), pp. 785–793, 1983. ISSN 0001-0782.
  • [4] Jones, C.: Applied Software Measurement: Global Analysis of Productivity and Quality. McGraw-Hill Education, 3rd edition, 2008. ISBN 978-0071502443.
  • [5] ISBSG Repository Data Release 11. International Software Benchmarking Standards Group, 2009.
  • [6] Fernandez-Diego, M., Gonzalez-Ladron-de Guevara, F.: Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Information and Software Technology, 56(6), pp. 527–544, 2014. ISSN 09505849.
  • [7] Mendes, E., Lokan, C.: Replicating studies on cross- vs single-company effort models using the ISBSG Database. Empirical Software Engineering, 13(1), pp. 3–37, 2008.
  • [8] Khatibi Bardsiri, V., Jawawi, D. N., Hashim, S. Z., Khatibi, E.: A PSO-based Model to Increase the Accuracy of Software Development Effort Estimation. Software Quality Journal, 21(3), pp. 501–526, 2013. ISSN 0963-9314.
  • [9] Radliński, L.: Empirical Analysis of the Impact of Requirements Engineering on Software Quality. In: Regnell, B., Damian, D. (eds.), Requirements Engineering: Foundation for Software Quality, volume 7195 of Lecture Notes in Computer Science, pp. 232–238. Springer, Berlin / Heidelberg, 2012. ISBN 978-3-642-28713-8.
  • [10] Radliński, L.: How software development factors influence user satisfaction in meeting business objectives and requirements? In: Madeyski, L., Ochodek, M. (eds.), Software Engineering from Research and Practice Perspectives, chapter 6, pp. 101–119. Nakom, Poznań-Warszawa, 2014.
  • [11] ISBSG Comparative Estimating Tool V4.0 User Guide. International Software Benchmarking Standards Group, 2005.
  • [12] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I. H.: The WEKA Data Mining Software: An Update. SIGKDD Explor. Newsl., 11(1), pp. 10–18, 2009. ISSN 1931-0145.
  • [13] RapidMiner Studio, 2015.
  • [14] Benjamini, Y., Hochberg, Y.: Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1), pp. 289–300, 1995. ISSN 00359246.
  • [15] John, G. H., Langley, P.: Estimating Continuous Distributions in Bayesian Classifiers. In: Eleventh Conference on Uncertainty in Artificial Intelligence, pp. 338–345. Morgan Kaufmann, San Mateo, 1995.
  • [16] Aha, D., Kibler, D.: Instance-based learning algorithms. Machine Learning, 6, pp. 37–66, 1991.
  • [17] Rokach, L., Maimon, O.: Data Mining with Decision Trees: Theory and Applications. World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2008. ISBN 9789812771711, 9812771719.
  • [18] Breiman, L.: Random Forests. Machine Learning, 45(1), pp. 5–32, 2001.
  • [19] Holte, R.: Very simple classification rules perform well on most commonly used datasets. Machine Learning, 11, pp. 63–91, 1993.
  • [20] Frank, E., Hall, M., Pfahringer, B.: Locally Weighted Naive Bayes. In: 19th Conference in Uncertainty in Artificial Intelligence, pp. 249–256. Morgan Kaufmann, 2003.
  • [21] Hall, M., Frank, E.: Combining Naive Bayes and Decision Tables. In: Proceedings of the 21st Florida Artificial Intelligence Society Conference (FLAIRS), pp. 318–319. AAAI press, 2008.
  • [22] Quinlan, R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
  • [23] Kohavi, R.: Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid. In: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 202–207. AAAI Press, 1996.
  • [24] Landwehr, N., Hall, M., Frank, E.: Logistic Model Trees. Machine Learning, 59(1-2), pp. 161–205, 2005.
  • [25] Holmes, G., Pfahringer, B., Kirkby, R., Frank, E., Hall, M.: Multiclass alternating decision trees. In: ECML, pp. 161–172. Springer, 2001.
  • [26] Shi, H.: Best-first decision tree learning. Master’s thesis, University of Waikato, Hamilton, NZ, 2007. COMP594.
  • [27] le Cessie, S., van Houwelingen, J.: Ridge Estimators in Logistic Regression. Applied Statistics, 41(1), pp. 191–201, 1992.
  • [28] Cooper, G., Herskovits, E.: A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9(4), pp. 309–347, 1992.
  • [29] Atkeson, C., Moore, A., Schaal, S.: Locally weighted learning. AI Review, 1996.
  • [30] Cleary, J. G., Trigg, L. E.: K*: An Instance-based Learner Using an Entropic Distance Measure. In: 12th International Conference on Machine Learning, pp. 108–114. 1995.
  • [31] R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2015.
  • [32] Cerpa, N., Bardeen, M., Astudillo, C. A., Verner, J.: Evaluating different families of prediction methods for estimating software project outcomes. Journal of Systems and Software, 112, pp. 48–64, 2016. ISSN 01641212.
  • [33] Menzies, T., Greenwald, J., Frank, A.: Data Mining Static Code Attributes to Learn Defect Predictors. IEEE Transactions on Software Engineering, 33(1), pp. 2–13, 2007.
  • [34] Song, Q., Jia, Z., Shepperd, M., Ying, S., Liu, J.: A General Software Defect-Proneness Prediction Framework. IEEE Transactions on Software Engineering, 37(3), pp. 356–370, 2011. ISSN 0098-5589.
  • [35] Subramanyam, R., Weisstein, F. L., Krishnan, M. S.: User participation in software development projects. Communications of the ACM, 53(3), pp. 137–141, 2010. ISSN 00010782.
  • [36] Tarafdar, M., Tu, Q., Ragu-Nathan, T. S.: Impact of Technostress on End-User Satisfaction and Performance. Journal of Management Information Systems, 27(3), pp. 303–334, 2010. ISSN 0742-1222.
  • [37] Fenton, N., Marsh, W., Neil, M., Cates, P., Forey, S., Tailor, M.: Making Resource Decisions for Software Projects. In: Proceedings of the 26th International Conference on Software Engineering, pp. 397–406. IEEE Computer Society, Washington, DC, 2004.
  • [38] Radliński, L.: Towards expert-based modeling of integrated software quality. Journal of Theoretical and Applied Computer Science, 6(2), pp. 13–26, 2012.
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-68d99f57-7674-4e69-8edc-43a155c58536