Article title

Examining correlations in usability data to effectivize usability testing

Authors
Publication language
EN
Abstract
EN
Based on a case study performed in industry, this work presents a statistical analysis of data collected during usability testing. The data comes from tests performed by usability testers at two companies in two different countries. One problem in the industrial setting is the scarcity of testing resources and the need to use these resources as efficiently as possible. The data from the testing is therefore analysed to see whether usability can be measured on the basis of a single metric, and whether usability problems can be judged from the distribution of use case completion times. This would allow test leaders to concentrate on situations where there are obvious problems. We find that it is not possible to measure usability through one metric, but that an analysis of the time taken to perform use cases may give indications of usability problems. This knowledge would allow the collection of usability data from distributed user groups, and a more efficient use of scarce testing resources.
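The two questions the abstract raises — whether several usability metrics correlate strongly enough to collapse into one score, and whether the spread of use case completion times can flag problems — can be illustrated with a small sketch. This is not code from the paper; the metric names, sample values, and the 0.5 spread threshold are all invented for demonstration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented per-tester scores for three ISO 9241-11 style metrics:
# effectiveness, efficiency, satisfaction.
effectiveness = [0.9, 0.7, 0.8, 0.6, 0.95, 0.5]
efficiency    = [0.8, 0.9, 0.6, 0.7, 0.55, 0.85]
satisfaction  = [0.7, 0.6, 0.9, 0.8, 0.65, 0.75]

# Weak pairwise correlations would mean a single combined score hides
# independent dimensions of usability.
print("effectiveness vs efficiency:  ", pearson(effectiveness, efficiency))
print("effectiveness vs satisfaction:", pearson(effectiveness, satisfaction))

# Flag use cases whose completion times are unusually spread out relative
# to their mean -- a cheap screen for "obvious problems". Times invented.
times = {
    "send_sms":    [12, 14, 13, 15, 12],   # seconds; low spread
    "add_contact": [20, 55, 18, 70, 25],   # high spread: likely a problem
}
for use_case, ts in times.items():
    m = sum(ts) / len(ts)
    sd = sqrt(sum((t - m) ** 2 for t in ts) / (len(ts) - 1))
    if sd / m > 0.5:   # arbitrary threshold, for illustration only
        print(f"{use_case}: flagged (mean {m:.1f}s, sd {sd:.1f}s)")
```

A test leader could apply the spread check to completion times collected remotely from distributed user groups, and spend scarce face-to-face testing time only on the flagged use cases.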
Pages
25–37
Physical description
Bibliography: 22 items
Bibliography
  • [1] ISO 9241-11 (1998): Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) – Part 11: Guidance on Usability, International Organization for Standardization Std., 1998.
  • [2] E. Frøkjaer, M. Hertzum, and K. Hornbæk, “Measuring usability: Are effectiveness, efficiency, and satisfaction really correlated?” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. The Hague, Netherlands: ACM Press, 2000, pp. 345–352.
  • [3] J. Sauro and E. Kindlund, “A method to standardize usability metrics into a single score,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2005). Portland, Oregon, USA: ACM Press, 2005, pp. 401–409.
  • [4] J. Winter, K. Rönkkö, M. Ahlberg, M. Hinely, and M. Hellman, “Developing quality through measuring usability: The UTUM test package,” in 5th Workshop on Software Quality, at ICSE 2007, 2007.
  • [5] J. Winter, K. Rönkkö, M. Ahlberg, and J. Hotchkiss, “Meeting organisational needs and quality assurance through balancing agile & formal usability testing results,” in Preprint of the Third IFIP TC2 Central and East European Conference on Software Engineering Techniques (CEE-SET 2008), Z. Huzar, J. Nawrocki, and J. Zendulka, Eds., Brno, 2008.
  • [6] J. Winter and K. Rönkkö, “Satisfying stakeholders’ needs – balancing agile and formal usability test results,” e-Informatica Software Engineering Journal, vol. 3, no. 1, p. 20, 2009.
  • [7] J. Winter, “Measuring usability – balancing agility and formality,” Licentiate Thesis, Blekinge Institute of Technology, 2009.
  • [8] K. Hornbæk, “Current practice in measuring usability: Challenges to usability studies and research,” International Journal of Human-Computer Studies, vol. 64, no. 2, pp. 79–102, 2006.
  • [9] B. Pettichord, “Testers and developers think differently,” STQE Magazine, vol. 2, no. 1, Jan/Feb 2000. [Online]. Available: http://www.io.com/~wazmo/papers/testers_and_developers.pdf
  • [10] D. Martin, J. Rooksby, M. Rouncefield, and I. Sommerville, “‘Good’ organisational reasons for ‘bad’ software testing: An ethnographic study of testing in a small software company,” in ICSE ’07. Minneapolis, MN: IEEE, 2007.
  • [11] D. Schuler and A. Namioka, Participatory Design – Principles and Practices, 1st ed. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1993.
  • [12] J. Brooke, “SUS: A quick-and-dirty usability scale,” 1986.
  • [13] C. Robson, Real World Research. Oxford, England: Blackwell Publishing, 1993, vol. 2.
  • [14] J. R. Lewis and J. Sauro, “The factor structure of the System Usability Scale,” in Proceedings of the Human Computer Interaction International Conference (HCII 2009), LNCS 5619. Springer Verlag, 2009, pp. 94–103.
  • [15] T. S. Tullis and J. N. Stetson, “A comparison of questionnaires for assessing website usability,” 2004. [Online]. Available: http://home.comcast.net/~tomtullis/publications/UPA2004TullisStetson.pdf
  • [16] BTH, “UIQ, usability test,” Aug. 2008. [Online]. Available: http://www.youtube.com/watch?v=5IjIRlVwgeo
  • [17] Y. Dittrich, K. Rönkkö, J. Erickson, C. Hansson, and O. Lindeberg, “Co-operative method development: Combining qualitative empirical research with method, technique and process improvement,” Journal of Empirical Software Engineering, vol. 13, no. 3, pp. 231–260, 2007.
  • [18] R. K. Yin and S. Robinson, Case Study Research – Design and Methods, ser. Applied Social Research Methods Series, vol. 3. Thousand Oaks, Cal.: SAGE Publications, 2003.
  • [19] B. G. Glaser and A. L. Strauss, The Discovery of Grounded Theory: Strategies for Qualitative Research. Piscataway, NJ: Aldine Transaction, 1967.
  • [20] K. Rönkkö, “Ethnography,” in Encyclopedia of Software Engineering (accepted for publication), P. Laplante, Ed. New York: Taylor and Francis Group, 2010.
  • [21] J. Dumas and J. Redish, A Practical Guide to Usability Testing. Exeter, England: Intellect, 1999.
  • [22] G. Denman, “The structured data summary (SDS),” 2008.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BPW7-0018-0058