Special Session 2

Title

Crowd-sourced and Remote User Studies for Quality of Experience and Usability Research

Organizers

Babak Naderi, Matthias Hirth, Niall Murray, and Kjell Brunnström

Date and time

Monday, September 5, 11:15 – 12:35

Motivation and objectives

Laboratory studies are an established and essential tool for Quality of Experience (QoE) and User Experience (UX) research. However, they require well-equipped test rooms built according to international standards and personnel to supervise the test participants. They are therefore often cost- and time-intensive. Further, the number of test candidates is often limited by the scarce laboratory space and the need for participants to be physically present in the test environment. In the last two years, the COVID-19 pandemic has made laboratory studies even more challenging to conduct by increasing the organizational overhead and limiting the pool of potential participants.
Two possibilities to overcome this situation are crowdsourcing and remote user studies. Microtask crowdsourcing has been successfully used for QoE and UX research in recent years. It offers a faster, cheaper, and more scalable approach than laboratory tests. It may also provide a more ecologically valid environment for the experiment, but this comes at the cost of less control compared to a laboratory test. Researchers have developed best practices to quickly collect large numbers of subjective ratings from a diverse set of participants and have applied the crowdsourcing approach in many domains of QoE research. The main challenges include ensuring the suitability of the test environment/system and the eligibility of participants, as well as controlling the reliability of responses in the absence of a test moderator.
Other possibilities that have not drawn much attention in the past years are supervised or unsupervised individual remote test procedures. They can be viewed as a hybrid of crowdsourcing and traditional laboratory procedures. While the tests are still conducted online, the participants are not anonymous but pre-registered and might even be guided via a chat or video conferencing system. Such an approach benefits from the broader reach of an online study while diminishing the challenges of a completely anonymous and unsupervised/untrusted setting.
In this context, the special session aims to foster contributions on optimizing and designing crowdsourced subjective studies for QoE and UX research. In addition, it aims to raise awareness of and promote new research directions in crowdsourcing and remote evaluation of QoE and UX. The topic collection encourages researchers to submit work on how to apply best practices from crowdsourcing studies in the context of remote user studies with non-anonymous test takers, and vice versa.

Papers

A Vital Improvement? Relating Google’s Core Web Vitals to Actual Web QoE
Nikolas Wehner, Monisha Amir, Michael Seufert, Raimund Schatz and Tobias Hoßfeld
Waiting along the Path: How Browsing Delays Impact the QoE of Music Streaming Applications
Anika Seufert, Ralf Schweifler, Fabian Poignée, Michael Seufert and Tobias Hoßfeld
Comparison of Crowdsourced and Remote Subjective User Studies: A Case Study of Investigative Child Interviews
Saeed Shafiee Sabet, Cise Midoglu, Syed Zohaib Hassan, Pegah Salehi, Gunn Astrid Baugerud, Carsten Griwodz, Miriam Johnson, Michael Alexander Riegler and Pål Halvorsen
Evaluating the Robustness of Speech Evaluation Standards for the Crowd
Edwin Ricardo Gamboa, Babak Naderi, Matthias Hirth and Sebastian Möller