Tutorial: Measuring the subjective user experience.

Thank you all for attending the tutorial!

The next few links are intended for those who were present at the tutorial on the 25th of August in Uppsala, Sweden. These materials support the tutorial:

Abstract.
Measuring the subjective user experience is a challenging task. In this tutorial we will demonstrate how psychological constructs can be divided into separate variables, each measured by its own questionnaire items. We will show how to design and phrase items to measure the subjective user experience. The tutorial will also address the analysis of the questionnaire data to estimate its validity (do we measure what we intend to measure?) and reliability (do we measure this consistently?). Finally, we discuss quantitative techniques such as reliability analysis (Cronbach’s Alpha) and Principal Component Analysis, and show practical examples from the HCI field. Analysis will be demonstrated using SPSS.

Duration. 3 hours total.



Objectives of the tutorial



The main goal of the tutorial is to give an overview of the process of developing psychological questionnaires, from theory (the constructs you intend to measure), to items, to the analysis of those items. Participants in the tutorial will learn about and get experience with the following: translating hypothetical constructs into variables and questionnaire items; the different types of validity and reliability; the phrasing and wording of questionnaire items; and the quantitative analysis of questionnaire data using reliability analysis (Cronbach’s Alpha) and Principal Component Analysis in SPSS.

After attending the course, participants will have a solid theoretical overview of the process of developing and validating a questionnaire. During the tutorial, several examples grounded in the HCI field will be provided. The final objective is to give participants a starting point for developing their own questionnaires and sufficient knowledge to critically evaluate questionnaires and questionnaire items developed by colleagues.


Content of the tutorial


The content of the tutorial can be summarized as follows: from hypothetical constructs to variables and items; validity and reliability; writing and phrasing questionnaire items; reliability analysis (Cronbach’s Alpha); and Principal Component Analysis. Each of these topics is discussed in more detail in the tutorial description below.


Intended audience


This tutorial is intended for HCI researchers and practitioners interested in evaluating the effects of systems on users’ subjective experience. For practitioners with limited or no background in the social sciences, the tutorial will be a good starting point for setting up their own research projects and for better understanding the psychology and HCI literature.


Tutorial description


Recently you might have read a statement like the following: “The evaluated prototype led to a higher sense of social connectedness. This was especially true for the 3D interface”. At first glance this seems positive: social connectedness sounds like something you would want, and if the evaluated prototype provides it, then perhaps we should pursue developing it as a product. However, several questions remain: 1. What exactly is social connectedness? 2. How is it operationalized in this study? 3. Has it been measured reliably? 4. And validly? In short, is the initial gut feeling of a positive result actually justified?

This tutorial addresses exactly these questions. We will show how psychologists tackle the problem of measuring the subjective user experience. We will address hypothetical constructs (such as social connectedness), variables, and items, as well as the different types of validity and reliability. Furthermore, we will address the phrasing and wording of questionnaire items [4]. Finally, we will address the quantitative analysis of the results of an administered questionnaire. Common problems associated with using existing questionnaires, such as translation, scale transformation, and analysis, are also discussed.

Questionnaire development starts with identifying the hypothetical construct one intends to measure, for example social connectedness. After identifying the construct, one has to determine which underlying variables would properly reflect social connectedness: the bond with one’s social network, the sense of belonging, the feeling of being in touch, and so on. After identifying the variables, each variable has to be measured using several items: the actual questions shown to participants [2]. A small sketch of this construct-variable-item hierarchy is given below.
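As an illustration (this sketch is not part of the original tutorial materials), the route from construct to variables to items can be written down as a simple data structure. The variable names follow the examples above; the item wordings are hypothetical:

    # Hypothetical operationalization of the construct "social connectedness".
    # Each variable is measured by several items (the questions shown to participants).
    social_connectedness = {
        "bond with one's social network": [
            "I feel close to the people in my social network.",
            "The people in my network matter to me.",
        ],
        "sense of belonging": [
            "I feel part of a group of friends.",
            "There are people I can turn to when I need them.",
        ],
        "feeling of being in touch": [
            "I know what is going on in the lives of my friends.",
            "I stay up to date with the people I care about.",
        ],
    }

    for variable, items in social_connectedness.items():
        print(f"{variable}: {len(items)} items")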

After showing examples of the route from construct to individual item, the tutorial will address validity and reliability. Validity reflects the extent to which your questions measure what you intend to measure. The tutorial first addresses construct validity (do the chosen variables properly represent the hypothetical construct?) and then content validity (do the items measure the variable?). Furthermore, internal and external validity, ecological validity, temporal validity, and face validity are addressed. Reliability reflects the extent to which the measurement reflects a participant’s actual score on the hypothetical construct, and thus the extent to which measurement error is excluded. We will address test-retest reliability, split-half reliability, and inter-rater reliability [1].
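As a concrete illustration of one of these measures (added here, not part of the tutorial materials, and using Python rather than the SPSS demonstrations of the tutorial), split-half reliability can be estimated by splitting the items into two halves, correlating the half scores, and stepping the correlation up with the Spearman-Brown formula. The odd/even split and the example scores below are assumptions of this sketch:

    import numpy as np

    def split_half_reliability(items: np.ndarray) -> float:
        """Split-half reliability with the Spearman-Brown correction.

        items: an (n_respondents x n_items) matrix of item scores.
        """
        half_a = items[:, 0::2].sum(axis=1)    # sum of the odd-numbered items
        half_b = items[:, 1::2].sum(axis=1)    # sum of the even-numbered items
        r = np.corrcoef(half_a, half_b)[0, 1]  # correlation between the two halves
        return 2 * r / (1 + r)                 # Spearman-Brown step-up formula

    # Hypothetical data: 5 respondents answering 4 items on a 7-point scale.
    scores = np.array([
        [5, 6, 5, 6],
        [2, 3, 2, 2],
        [7, 6, 7, 7],
        [4, 4, 5, 4],
        [3, 3, 2, 3],
    ])
    print(f"Split-half reliability = {split_half_reliability(scores):.2f}")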

After addressing reliability and validity, the tutorial will focus on a more hands-on approach to writing questionnaire items. We will address informed consent and the layout of the questionnaire. Furthermore, question wording and participants’ inability or unwillingness to answer are addressed. Finally, we address the effects of the order of questionnaire items.

At this point in the tutorial it should be clear to participants how to start building up a questionnaire based on the psychological literature, how to evaluate the validity and reliability of previously reported questionnaires, and how to design one’s own items and questionnaire. The remainder of the tutorial will focus on two quantitative techniques to analyze the results of the administered questionnaire.

First of all, we will focus on so-called reliability analysis: the systematic analysis of inter-item correlations to determine whether the phrased items indeed reflect one variable. From the concept of item correlation we proceed to inter-item correlations and to scale analysis. Cronbach’s Alpha [3] is presented as a measure of scale reliability, and guidelines for its interpretation are given. Furthermore, examples are provided of methods to increase scale reliability. The analysis will be supported by practical examples using SPSS.
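The tutorial demonstrates this analysis in SPSS; as a language-neutral illustration (added here, with invented example data), Cronbach’s Alpha can also be computed directly from its definition, as in the Python sketch below:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's Alpha for an (n_respondents x n_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                               # number of items in the scale
        item_variances = items.var(axis=0, ddof=1)       # variance of each individual item
        total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical data: 6 respondents answering 3 belongingness items on a 5-point scale.
    scores = np.array([
        [4, 5, 4],
        [2, 2, 3],
        [5, 5, 5],
        [3, 4, 3],
        [1, 2, 1],
        [4, 4, 5],
    ])
    print(f"Cronbach's Alpha = {cronbach_alpha(scores):.2f}")

Values above roughly .70 are commonly treated as acceptable, although the tutorial discusses interpretation in more detail.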

Secondly, we will look at Principal Component Analysis [5]. The purpose of this technique (identifying latent variables in an item set) is explained. Furthermore, we give practical examples of determining the number of components (working with eigenvalues and scree plots), interpreting the component solution (working with component loadings), and rotating the component solution. Within the section on Principal Component Analysis we will also address confirmatory versus exploratory analysis and the similarities and differences between component analysis and factor analysis.
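Again, the tutorial demonstrates these steps in SPSS; the Python sketch below (an illustration added here, with invented example data) walks through the same basic steps: computing the eigenvalues of the item correlation matrix, retaining components by the eigenvalue-greater-than-one rule, and inspecting the component loadings. Rotation is omitted for brevity:

    import numpy as np

    # Hypothetical questionnaire data: 100 respondents, 6 Likert-type items
    # generated from two latent variables plus noise.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 2))
    true_structure = np.array([[1, 1, 1, 0, 0, 0],
                               [0, 0, 0, 1, 1, 1]], dtype=float)
    data = latent @ true_structure + 0.5 * rng.normal(size=(100, 6))

    # Principal Component Analysis on the correlation matrix of the items.
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]            # sort components by eigenvalue
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Kaiser criterion: retain components with an eigenvalue above 1
    # (in practice, also inspect the scree plot of the eigenvalues).
    n_components = int((eigenvalues > 1).sum())
    loadings = eigenvectors[:, :n_components] * np.sqrt(eigenvalues[:n_components])

    print("Eigenvalues:", np.round(eigenvalues, 2))
    print("Components retained:", n_components)
    print("Component loadings:\n", np.round(loadings, 2))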

At the end of the tutorial, participants will have gained a thorough theoretical overview of the process of questionnaire development. Participants will be able to start developing their own questionnaires, and they will be able to judge whether conclusions that others derive from questionnaire data are reliable and valid. Finally, participants will gain experience in analyzing questionnaire data to evaluate both its reliability and validity.

References



Background of the tutor


Drs. M. C. Kaptein PDEng has a background in Economic Psychology. He obtained his master’s degree in 2005 and during his studies had a strong focus on research methodology and statistics. Since 2004, Maurits has lectured on methodology to freshman and sophomore classes at multiple universities. He lectured introductory methodology courses at the University of Tilburg (the Netherlands) and assisted with courses in applied statistics and SPSS (a software package for statistical analysis) at the same university. During his master’s, Maurits also worked as a data analyst at the University of Nijenrode (the Netherlands). After finishing his master’s, Maurits went on to pursue a postgraduate degree in User System Interaction at the Eindhoven University of Technology (the Netherlands). During this two-year postgraduate course, Maurits lectured courses in both questionnaire design and SPSS to postgraduate students at the Eindhoven University of Technology.

After obtaining his professional doctorate in Engineering, Maurits worked for about a year as a Research Development Manager at a market research agency. He provided methodological research support to clients, created and designed questionnaires, and developed new measurement methods for commercial purposes.

Currently Maurits is pursuing his PhD in a joint project of the Eindhoven University of Technology, Philips Research, and Stanford University. His PhD focuses on persuasive technologies and the need to belong: how can systems be designed to enhance the feeling of belongingness? To tackle this question, numerous methods of measuring subjective constructs need to be developed.

For more information please contact: maurits [at] mauritskaptein [dot] com. See you at Interact 2009!




