Validating Mood Checkin & SOS Mode

This study attempted to validate the UX and intent of the Mood Checkin & SOS Mode through structured scenarios and controlled prototype testing. Buckle up!

Study Overview

  • Date: May 2023

  • n = 6

  • Level 2

  • Structured concept test followed by semi-structured hybrid interview

  • Testing with adolescents

First, Define the Approach

Research Purpose

To validate that the Mood Checkin/SOS Mode design and UX will efficiently and effectively serve users as intended.

Product Introduction

The feature being tested was a “crown jewel” of the app (working name: iKinnect), intended to help users indicate when they need help with an acute emotional state: times of great grief, depressive episodes leading to the darkest of thoughts, and so on. The hope was that this app, and SOS Mode in particular, could help users regulate behavior and mental health.

As with everything, there is a lot more to this project but hopefully that’s sufficient to understand the importance of this study.

Next, Write the Protocol

My goal as the moderator and designer of this study was to validate the way the Mood Checkin/SOS Mode had been designed. Much of the study was highly structured to ensure clean data collection that could be analyzed quickly in support of our team’s larger development timeline.

As a researcher, and a curious one, part of me wanted to plan this study as an ethnographic/evaluative hybrid. How does the participant, in our remote environment, use the feature naturally? And what do they have to say about it after the fact? So that’s what I ended up going with: a protocol that let me collect validation-type quantitative data AND left space and context to ask vital qualitative questions.

Objectives

  • With designs in front of them, can users effectively navigate the feature (through prototype)?

  • Is this feature, as designed, usable for users experiencing elevated emotions?

Research Questions

  • Does it work, though?

  • After engaging with the feature, what does the user/participant feel it should be called?

  • How does the user engage with the copy for distress ratings?

  • After engaging with the feature, does the name feel right to the participant?

  • Is the feature findable through iconography on main pages?

    • Can the user navigate to the feature through adjacent ones (Crisis Kit → SOS Mode)?

  • What is users’ tolerance for the structure and cadence of check-ins?

Then, Conduct Interviews

Remote Prototype Testing

During interviews, it became apparent that the timeline I referenced earlier was not going to work. The feature, as it had been designed, simply was not working for users. After the third interview I knew, but I kept the remaining sessions and restructured them slightly to allow time and context to get clearer on what users would expect from the bones of the feature. What do they expect from a feature that’s intended to offer momentary support for acute emotional intensity (regardless of the emotion being experienced)?

Prototype quality is low due to some deprecation that happened along the way. Details on that in the next section!

FINDINGS & ARTIFACTS

The following is, verbatim, how I originally wrote it, and while not particularly professional, it captures the spirit of the findings very well. I presented the following to the clinical expert and the designer…

Findings

  1. Participants did not understand the goal or “point” of the feature after both flows (negative & neutral)

    1. While they had “good” things to say, their understanding of the feature was completely off-base from the intent of it

    2. If they were in a neutral place, they would want to self-discover and not rely on a “quiz”

    3. Is it mood logging over time? Or is it a quiz to find skills?

  2. A major confusion point was the second dial in the negative flow asking about ability to cope. It needs a different scale that is specifically about coping, not the nebulousness of the emotional scale

  3. The feature as it stands does too much

    1. Evident in the users’ inability to narrow the feature name down

  4. Too many clicks to get to aid → a person in need should not have to answer so many questions when they just need some help in a difficult moment. It’s a turn-off for the feature.

Recommendations

  1. Aside from a full redesign...

    1. Remove the second dial in favor of a simpler scale: no ability to cope → some ability to cope → ability to cope

    2. Create a flow and an option for immediate access to aid that skips all the ratings; if a user needs help, they do not have the time to go through all the clicks

  2. Introduce the feature with a concise pitch at the top of usage, as well as an introduction to how to access it and how/when to use it

    1. this is generally a cop-out for poorly designed UX imo, but given we want folks to really use this one, getting them introduced to it early would be good

  3. Use iconography more aligned with getting help, learning, and being in crisis

    1. wait - do you see that? do you see how complicated it is to land on a single icon BECAUSE THE FEATURE IS DOING TOO MUCH LOL

Spoiler!

That same designer and clinical expert took all this feedback, along with a notes synthesis, and completely revamped the feature. The redesign I suggested happened. Check out the rerun of the feature in the follow-up study, which I cleverly named “Re-run Mood Checkin”.