Methods for expert evaluation
Expert evaluation methods draw on the knowledge of user experience professionals to evaluate the UX of a system. Compared to user studies, expert evaluation is often easier to arrange. Experts can also evaluate “difficult” material such as product specifications or early prototypes with many technical problems. Conducting an expert evaluation before a more expensive user study helps catch basic problems early.
The researcher observes the user in a real context, taking the role of an apprentice. The method was originally developed for understanding work practices*. When the focus is on UX, the researcher pays attention to the emotional aspects of product use: not only the behavior but also the affective responses the product evokes.
* "Contextual Inquiry: Field interviews with customers in their work places while they work, observing and inquiring into the structure of their own work practice."
The investigator herself uses the system in real contexts and evaluates it; thus the investigator is the only “participant” in the field study.
MAX is a post-use method for evaluating the overall experience through cards with an avatar and a board. It can be applied after the use of mockups, prototypes, interactive systems, or any artifact a user can interact with. It has four categories, represented on the board by questions that guide the user through the evaluation: (a) Emotion: What did you feel when using it?; (b) Ease of Use: Was it easy to use?; (c) Usefulness: Was it useful?; and (d) Intention to Use: Would you wish to use it?
A very easy-to-use technique for rank-ordering stimuli (products) with respect to some quality (e.g., enjoyment). It is also easy for children to do and goes back to early (1920s) test and scale development techniques. Paired-comparison data can be transformed into an ordering of the stimuli.
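The transformation from paired-comparison data to an ordering can be sketched in a few lines. This is a minimal illustration, not part of the original method description: it assumes each judgment is recorded as a (preferred, other) pair and ranks stimuli by the proportion of comparisons they won; the stimulus names are invented for the example.

```python
from collections import Counter

def rank_by_paired_comparisons(judgments):
    """Rank stimuli by how often each was preferred in pairwise
    comparisons. `judgments` is a list of (preferred, other) tuples,
    one per comparison trial."""
    wins = Counter()
    appearances = Counter()
    for preferred, other in judgments:
        wins[preferred] += 1
        appearances[preferred] += 1
        appearances[other] += 1
    # Using the win proportion (not the raw count) keeps the ordering
    # fair when stimuli appeared in different numbers of comparisons.
    return sorted(appearances,
                  key=lambda s: wins[s] / appearances[s],
                  reverse=True)

# Hypothetical example: three toys compared pairwise by a child.
judgments = [("robot", "ball"), ("robot", "car"), ("car", "ball")]
print(rank_by_paired_comparisons(judgments))  # ['robot', 'car', 'ball']
```

More elaborate transformations (e.g., Thurstone or Bradley–Terry scaling) build on the same win-count data; the simple proportion is enough to recover an ordering.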
A team of people with different perspectives evaluates a product.
Perspectives can include aesthetics, fun, comfort, and other aspects of user experience.
Playability heuristics evaluate the playability of games. Beyond usability problems, the heuristics can also reveal experiential aspects of gameplay.
A semantic scale that is built separately for each evaluation case through user interviews, using product semantics as the theoretical basis.
A structured way to do expert evaluation: the expert goes through a checklist of design goals for different product properties (form, colour, materials, graphics, sounds, functionality, interaction design).
RGT is a technique for eliciting and evaluating people's subjective experiences of interacting with technology, through the individual way they construe the meanings of the artifacts under investigation. It thus attempts to capture how users experience things and what the experience means for them, covering both emotionally based constructs (warm–cold) and more “rational” ones (professional–popular). Kelly proposed the Repertory Grid Technique (RGT) as a methodological extension of his Personal Construct Theory (Kelly, 1955). Kelly argued that we make sense of our world through our own ‘construing' of it: we model what we find in the world according to a number of personal constructs, which are bipolar in nature. According to Kelly, a ‘construct' is a single dimension of meaning for a person, allowing two phenomena to be seen as similar and thereby as different from a third (Bannister & Fransella, 1985).
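A repertory grid is, in practice, a small matrix of elements (the artifacts) rated on bipolar constructs. The sketch below is a hypothetical illustration, not part of RGT itself: the element names, constructs, and 1–5 ratings are invented, and the Euclidean distance is just one common way to see which artifacts a participant construes as similar.

```python
# Hypothetical repertory grid: each row is a bipolar construct,
# each column position an element (artifact). Ratings run from
# 1 (left pole) to 5 (right pole).
elements = ["phone A", "phone B", "phone C"]
grid = {
    ("warm", "cold"):            [2, 4, 5],
    ("professional", "popular"): [1, 3, 4],
    ("simple", "complex"):       [2, 2, 5],
}

def element_distance(i, j):
    """Euclidean distance between two elements across all constructs;
    a smaller distance means the participant construes them as more alike."""
    return sum((ratings[i] - ratings[j]) ** 2
               for ratings in grid.values()) ** 0.5

# Compare every pair of elements.
for i in range(len(elements)):
    for j in range(i + 1, len(elements)):
        print(elements[i], "vs", elements[j],
              round(element_distance(i, j), 2))
```

In a real study the grid would hold one participant's own elicited constructs, and analyses range from such simple distances to cluster analysis or principal components.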
UX experts use their knowledge of users and UX theories to evaluate the UX of a system.