Steve Whittaker is Professor of Human Computer Interaction at the University of California, Santa Cruz. He is a member of the CHI Academy and Editor of the journal Human Computer Interaction. In 2014, he received a Lifetime Research Achievement Award from SIGCHI (Special Interest Group on Computer Human Interaction), and he is a Fellow of the Association for Computing Machinery. Probably best known for his work on email overload, computer-mediated communication, and personal information management, he combines empirical analyses informed by the social sciences with novel interactive system design to address important human problems. He has worked in both industry and academia, at HP Labs, IBM, AT&T Bell Labs, and the University of Sheffield. His h-index is 65, and he holds over 50 US and worldwide patents. He is currently working on personal informatics, and his latest book, from MIT Press, is The Science of Managing Our Digital Stuff.

His program:

  • Tuesday 10 July at 2:00pm - LIMSI, conference room - Rue John von Neumann, Orsay - Access map
    • Title: The Quantified Self and Personal Informatics: Critique and Opportunities
    • Abstract: The Quantified Self (QS) vision argues for collecting and analysing rich collections of personal data. According to the vision, such personal data facilitate greater self-insight, promoting behaviour change. But although self-quantification is important, Quantified Self technologies show poor rates of adoption. This talk explores reasons for this failure. I will revisit the QS vision and identify its flaws, arguing that it is overly analytic, overly rational, and overly authoritative. I will present deployments of three new QS systems that address these problems, proposing a new design approach to personal data systems.

  • Monday 16 July at 2:30pm - ENSTA ParisTech, 1024, Boulevard des Maréchaux, 91762 Palaiseau Cedex - Access map
    • Title: When Machines Know More About Us Than We Do Ourselves: What Causes Us To Act on Algorithmic Interpretations of Highly Personal Data? Steve Whittaker, Aaron Springer & Victoria Hollis - University of California, Santa Cruz
    • Abstract: Intelligent systems powered by machine learning are pervasive in our everyday lives. These systems make decisions ranging from the mundane to the significant, from routes to work to recommendations about criminal recidivism. We, as humans, increasingly devolve more and more responsibility to these systems with little transparency or oversight. Concerns about how these systems make decisions are building, and this is only exacerbated by recent machine learning methods, such as deep learning, that are difficult to explain in human-comprehensible terms. The current paper uses empirical mixed methods to explore key processes underlying human comprehension of, and trust in, algorithms. Specifically, we evaluate when and why people are prepared to accept and act upon algorithmic outputs in the context of highly personal 'quantified self' data. Across four studies, we document instances of algorithmic authority, where users are overly accepting of algorithmic interpretations of their own emotions, overruling their own views of how they feel. On other occasions, we find the opposite effect, where users reject system interpretations, overriding quite accurate system interpretations when these disconfirm their views of themselves. We also present an intervention that aims to address these problems by making algorithms more transparent. We show that increased transparency may have paradoxical effects, and conclude with a discussion of design implications in this space.