Article title

Metody ilościowe i jakościowe w dyscyplinie interakcja człowiek-komputer. Porównanie i zastosowania

Authors
Identifiers
Title variants
EN
Quantitative and qualitative methods in the domain of human computer interaction: comparison and applicability
Languages of publication
PL
Abstracts
EN
The first quantitative usability evaluation methods were adapted from experimental psychology at the dawn of the human-computer interaction (HCI) field for two purposes: to convince system designers (mostly computer scientists aiming to optimize the system rather than to achieve a satisfactory level of user interface usability) of the importance of analyzing user needs, and to provide practitioners with a methodological package that would allow them to assess software quality effectively from the user's perspective. As a result, many of the methods in use today are suited neither to the qualitative assessment of a given technology, nor to the analysis of design concepts, nor to studying user preferences over a longer period of time. This article presents methods appropriate to the different stages of the design process and discusses their advantages and disadvantages in the context of human-computer interaction design.
Year
Volume
Pages
93-107
Physical description
Bibliography: 46 items.
Creators
author
  • Laboratorium Interaktywnych Technologii, Ośrodek Przetwarzania Informacji (OPI)
Bibliography
  • [1] Ackerman M.S., The intellectual challenge of CSCW: The gap between social requirements and technical feasibility, Human-Computer Interaction Journal, 2000, 15 (2), s. 179-203.
  • [2] Baren J., IJsselsteijn W., Romero N., Markopoulos P., Ruyter B., Affective Benefits in Communication: The development and field-testing of a new questionnaire measure, w: Proceedings of Presence Conference, ACM Press 2003, s. 48.
  • [3] Barkhuus L., Rode J., From Mice to Men - 4 years of Evaluation, w: Proceedings of CHI Conference, ACM Press 2007.
  • [4] Beyer H., Holtzblatt K., Contextual design: defining customer-centered system, Morgan Kaufmann 1998.
  • [5] Buxton W., Sketching user experiences: getting the design right and the right design, Morgan Kaufmann 2007.
  • [6] Buxton W., Sniderman R., Iteration in the design of the human-computer interface, w: Proceedings of CHI Conference, ACM Press 1980, s. 72-80.
  • [7] Cockton G., Putting Value into Evaluation, w: Maturing usability: Quality in software, interaction and value, red. E. Law, E. Hvannberg, G. Cockton, Springer 2008.
  • [8] Cockton G., From quality in use to value in the world, w: Proceedings of CHI Conference, ACM Press 2004, s. 1287-1290.
  • [9] Davis F.D., Bagozzi R.P., Warshaw P.R., User acceptance of computer technology: A comparison of two theoretical models, Management Science, 1989, 35 (8), s. 982-1003.
  • [10] Davis F.D., Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly, 1989, 13 (3), s. 319-340.
  • [11] Dix A., Finlay J., Abowd G.D., Human-Computer Interaction, Prentice Hall 2004.
  • [12] Dourish P., Implications for design, w: Proceedings of CHI Conference, ACM Press 2006, s. 541-550.
  • [13] Frøkjær E., Hertzum M., Hornbæk K., Measuring usability: are effectiveness, efficiency, and satisfaction really correlated?, w: Proceedings of CHI Conference, ACM Press 2000, s. 352-362.
  • [14] Good M., Seven experiences with contextual field research, SIGCHI Bulletin, 1989, 20 (4), s. 25-32.
  • [15] Gray W.D., Salzman M.C., Damaged merchandise? A review of experiments that compare usability evaluation methods, Human-Computer Interaction Journal, 1998, 13 (3), s. 203-261.
  • [16] Greenberg S., Buxton W., Usability evaluation considered harmful (some of the time), w: Proceedings of CHI Conference, ACM Press 2008, s. 111-120.
  • [17] Hassenzahl M., The interplay of beauty, goodness, and usability in interactive products, Human-Computer Interaction Journal, 2004, 19 (4), s. 319-349.
  • [18] Hassenzahl M., Platz A., Burmester M., Lehner K., Hedonic and ergonomic quality aspects determine a software's appeal, w: Proceedings of CHI Conference, ACM Press 2000, s. 208-218.
  • [19] Hornbæk K., Usability Evaluation as Idea Generation, w: Maturing Usability - Human Computer Interaction Series, Springer 2008, s. 267-286.
  • [20] Hsieh H.F., Shannon S.E., Three approaches to qualitative content analysis, Qualitative Health Research, 2005, 15 (9), s. 1277-1288.
  • [21] Johnson R.B., Onwuegbuzie A.J., Mixed methods research: A research paradigm whose time has come, Educational Researcher, 2004, 33 (7), s. 14-26.
  • [22] Jordan P.W., Designing pleasurable products: An introduction to the new human factors, Taylor & Francis 2002.
  • [23] Karapanos E., Martens J.B., Hassenzahl M., Accounting for diversity in subjective judgments, w: Proceedings of CHI Conference, ACM Press 2009, s. 639-648.
  • [24] Karapanos E., Zimmerman J., Forlizzi J., Martens J.B., User experience over time: an initial framework, w: Proceedings of CHI Conference, ACM Press 2009, s. 729-738.
  • [25] Kirakowski J., The software usability measurement inventory: background and usage, w: Usability evaluation in industry, Taylor & Francis, New York 1996, s. 169-178.
  • [26] Larsen K.R., Nevo D., Rich E., Exploring the Semantic Validity of Questionnaire Scales, w: International Conference on System Sciences, IEEE 2008, s. 440.
  • [27] Maslow A.H., A theory of human motivation, Psychological Review, 1943, 50 (4), s. 370-396.
  • [28] Mathison S. Why triangulate?, Educational Researcher, 1988, 17 (2), s. 13.
  • [29] Mayhew D.J., The usability engineering lifecycle: a practitioner's handbook for user interface design, Morgan Kaufmann 1999.
  • [30] Mcdonagh D., Adams-Webber J., The implication potential of personal constructs in relation to their subjective importance and order of elicitation, Social Behavior and Personality: an International Journal, 1987, 15 (1), s. 81-86.
  • [31] Nielsen J., The usability engineering life cycle, Computer, 1992, 25 (3), s. 12-22.
  • [32] Norman D.A., Emotional design: Why we love (or hate) everyday things, Basic Books, Cambridge 2004.
  • [33] Onwuegbuzie A.J., Daniel L.G., A framework for reporting and interpreting internal consistency reliability estimates, Measurement and Evaluation in Counseling and Development, 2002, 35 (2), s. 89-103.
  • [34] Onwuegbuzie A.J., Leech N.L., On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies, International Journal of Social Research Methodology, 2005, 8 (5), s. 375-387.
  • [35] Riihiaho S., Experiences with usability evaluation methods, Licentiate thesis, Helsinki University of Technology, Laboratory of Information Processing Science 2000.
  • [36] Sandelowski M., The problem of rigor in qualitative research, Advances in Nursing Science, 1986, 8 (3), s. 27-37.
  • [37] Scriven M., The methodology of evaluation, w: Perspectives of curriculum evaluation, red. R.W. Tyler, R.M. Gagné, M. Scriven, Rand McNally 1967, s. 39-83.
  • [38] Sechrest L., Sidani S., Quantitative and qualitative methods: Is There an Alternative?, Evaluation and Program Planning, 1995, 18 (1), s. 77-87.
  • [39] Shneiderman B., Designing the user interface, MIT Press, Cambridge 1989.
  • [40] Spence R., Information visualization, Addison-Wesley, Reading, MA 2001.
  • [41] Suwa M., Tversky B., External representations contribute to the dynamic construction of ideas: Diagrammatic Representation and Inference, Lecture Notes in Computer Science, 2002, Vol. 2317, s. 149-160.
  • [42] Tohidi M., Buxton W., Baecker R., Sellen A., Getting the right design and the design right, w: Proceedings of CHI Conference, ACM Press 2006, s. 1243-1252.
  • [43] Tohidi M., Buxton W., Baecker R., Sellen A., User sketches: a quick, inexpensive, and effective way to elicit more reflective user feedback, w: Proceedings of NordiCHI Conference, ACM Press 2006, s. 114.
  • [44] Tomico O., Karapanos E., Levy P., Mizutani N., Yamanaka T., The repertory grid technique as a method for the study of cultural differences, International Journal of Design, 2009, 3 (3), s. 55-63.
  • [45] Venkatesh V., Morris M.G., Davis G.B., Davis F.D., User acceptance of information technology: Toward a unified view, MIS Quarterly, 2003, 27 (3), s. 425-478.
  • [46] Wixon D., Evaluating usability methods: why the current literature fails the practitioner, Interactions, 2003, 10 (4), s. 28-34.
Document type
YADDA identifier
bwmeta1.element.baztech-article-BPP2-0014-0020