Intervention_Strategies_For_Engagement-Segal_Gal

This note last modified April 2, 2021

#notesFromPaper Authors: Segal, Gal, Kamar, Horvitz, Bowyer, Miller

Tags: cairns, #cairnsRL citizen science - engagement

Most citizen scientists are dabblers who join briefly and never come back.

They used machine learning to predict which people were likely to disengage, so they could deliver engaging messages at the right time.

Using Mao et al.'s work, they identified the most important characteristics for predicting disengagement. They used 16 features, but said only 5 were really relevant:

  1. user's average session time over sessions so far
  2. user's average dwell time
  3. user's session count
  4. number of seconds elapsed in the current session
  5. difference between # of tasks completed in this session and the number they typically do (median of their last 10 sessions)
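The five features above are straightforward to compute from a per-user session log. A minimal sketch, assuming a hypothetical list of past-session dicts with `duration_s`, `dwell_s`, and `tasks` keys (not the authors' actual data schema):

```python
from statistics import median

def disengagement_features(sessions, current_elapsed_s, current_tasks):
    """Compute the five features the paper found most predictive.

    sessions          -- list of past sessions, each a dict with
                         'duration_s', 'dwell_s', 'tasks' (hypothetical schema)
    current_elapsed_s -- seconds elapsed in the current session
    current_tasks     -- tasks completed so far in the current session
    """
    n = len(sessions)
    return {
        "avg_session_time": sum(s["duration_s"] for s in sessions) / n,
        "avg_dwell_time": sum(s["dwell_s"] for s in sessions) / n,
        "session_count": n,
        "elapsed_in_session": current_elapsed_s,
        # tasks this session vs. the user's typical count,
        # taken as the median of their last 10 sessions
        "task_diff": current_tasks - median(s["tasks"] for s in sessions[-10:]),
    }
```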

Sending the message at moments when predicted disengagement was high worked; sending at random times did not.

Their intro section covers a couple of other works with different types of messages. Segal found that sending an email a few days later increased engagement in a MOOC.

What format do you give the message in? How long? What does the message say?

Message types

  1. You're so helpful
  2. You're part of a great community
  3. Even if you mess up, it's not that bad

Only one message per session.

Super users were removed from the study (more than 3 standard deviations above the mean).

There's a bit about the effects of choosing different thresholds on the predicted disengagement probability for triggering a message. 0.3, 0.5, and 0.7 were chosen.

They keep mentioning that random messages didn't affect people, but they never explicitly say the effect wasn't negative; they just keep implying it.

Future Work

Better understand the influences of, and interactions between, message content and timing.

Other sorts of interventions, such as changing task type or task difficulty.

Re-engagement? What if someone leaves?