Rethinking Mental Health Interventions: How Crowd-Powered Therapy Can Help Everyone Help Everyone


We all have dark thoughts sometimes. And if you’ve ever been a graduate student, perhaps thoughts like the following feel familiar:

The thoughts in this image are real data points collected during deployment of a prototype called Flip*Doubt, an app in which users enter negative thoughts, which are then sent to three random crowd workers to be “positively reframed” and returned to the user. (The full paper title is “Effective Strategies for Crowd-Powered Cognitive Reappraisal Systems: A Field Deployment of the Flip*Doubt Web Application for Mental Health” and you can read it here.)
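To make the workflow concrete, here is a minimal sketch of the core loop described above: one negative thought goes out to three random crowd workers, and their reframes come back to the user. All of the names below are hypothetical illustrations, not the actual Flip*Doubt implementation.

```python
import random

def send_task_to_worker(worker: str, thought: str) -> str:
    """Placeholder for posting a reframing task to a crowd platform."""
    return f"[{worker}] positive reframe of: {thought}"

def request_reframes(thought: str, worker_pool: list, n_workers: int = 3) -> list:
    """Send one negative thought to n random crowd workers and collect their reframes."""
    workers = random.sample(worker_pool, n_workers)
    return [send_task_to_worker(w, thought) for w in workers]

# Example (hypothetical):
# request_reframes("I'll never finish my dissertation.", ["w1", "w2", "w3", "w4"])
```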

Rates of mental illness continue to rise every year. Yet there are nowhere near enough trained mental health professionals available to meet the need–and Covid-19 has only worsened the state of affairs. In short, we urgently need to rethink how we design mental health interventions so that they are more scalable, accessible, affordable, effective, and safe.

So, how can technology create new ways to expand models of delivery for clinically validated therapeutic techniques? In Flip*Doubt, we focus on “cognitive reappraisal”–a well-researched technique for changing one’s thoughts about a situation in order to improve emotional wellbeing. This skill is often taught by trained therapists (e.g., in Cognitive and Dialectical Behavioral Therapy), and it has been shown to be highly effective at reducing symptoms of anxiety and depression. The problem is, it’s really hard to learn, and even harder to apply in one’s own mind on an ongoing basis. 

We envision that people could learn the skill through practice by reframing thoughts for each other–since research shows that it’s easier to learn by objectively sizing up others’ thoughts, rather than immediately trying to challenge your own entrenched ways of thinking. Thus, Flip*Doubt relies on crowd workers to create reframes, and the major driving questions of our study were: What makes reframes good or bad? And how can we design systems that effectively help people to nail the skill?

Our deployment yielded some fascinating results about how people use cognitive reappraisal systems in the wild, the types of negative thought patterns that weigh grad students down, and what types of strategies are most effective at flipping dark thoughts. For instance, the example below shows how a participant rated three reframes from Flip*Doubt:

Represented here are three different reappraisal tactics, identified through our data analysis, for transforming the original thought. “Direct negation” isn’t effective at all–it’s just invalidating and frustrating for someone to suggest the opposite of what you’re struggling with. “Agency” rings more true–yet can feel a bit simplistic. “Silver Lining” wins the gold for this thought–it provides fresh perspective by emphasizing an important positive that wouldn’t be possible without all the struggle. Our paper provides additional analyses, culminating in six hypotheses for what makes an effective reframe.

Our work suggests several important design implications. First, systems should consider prompting for structured reflection rather than prompting for negative thoughts. People aren’t always thinking negatively, and only allowing negative thoughts for input can reinforce those thoughts, or drive people away. Second, systems should consider tailoring user experiences to focus on a few core issues, since the best gains may come if meaningful progress can be made to address vicious and repetitive thoughts, rather than any old negative thought. Finally, crowd-powered systems can be safer and more effective if we design AI/ML-based mechanisms to help peers shape their responses through effective reappraisal strategies and behaviors–there’s a lot more on this in the paper, so we hope you’ll read about it there.

We’re honored that this paper won an Honorable Mention at #CSCW2021, since the world truly needs new interventions like Flip*Doubt to help us all to help each other. You can find the full paper at https://z.umn.edu/flipdoubt or watch the virtual presentation on YouTube.

Thanks for reading, and we hope to discuss this work with you at CSCW and beyond.

Social Computing Researchers Need to Pay Attention to Religion and Spirituality in Design (Especially in Matters of Life & Death)


Everything changes in a heartbeat when you or someone you love receives a life-threatening health diagnosis. 

Research from the medical and nursing fields repeatedly shows that people turn toward religion or spirituality to cope, even if they didn’t necessarily see themselves as “spiritual” before. Many people wish they could go back and apply these hard-won lessons earlier in their lives, so that they could live more fully and be better people. What if technology could help with that? To embrace the aspects of our experiences that most provide us with a sense of meaning, hope, and fulfillment–however we each individually define that?

CaringBridge.org is a nonprofit health journaling platform that offers a free service similar to a blog, but with specialized tools and privacy controls to facilitate social support during serious or life-threatening illness. Our prior research showed that prayer support is more important to CaringBridge users than any other form of support [1]. Although HCI research has largely ignored religion and spirituality for decades [2-5], our #CSCW2021 paper follows up on this finding to ask, beyond prayer, “What is Spiritual Support and How Might It Impact the Design of Online Communities?” (Full paper here.)

Through participatory design focus groups with CaringBridge stakeholders, we derived the following definition:

Spiritual support is an integral dimension that underlies and can be expressed through every category of social support, including informational, emotional, instrumental, network, esteem, and prayer support. This dimension creates a triadic relationship between a recipient, a provider, and the sacred or significant, with the purpose of helping recipients and providers experience a mutually positive presence with each other, and with the sacred or significant.

The point is, when our aim is truly to support someone who is struggling, a fundamental underlying element of love and connection needs to transcend specific beliefs. Take prayer, for example. If you’re Christian, prayer might just be the most meaningful way someone can help you. If you’re an atheist, though, prayer could be quite an offensive way of expressing support. One implication is that in sensitive health contexts, designers might consider ways to help people represent their beliefs, so that supporters can craft expressions of care and support that respect them.

Building on this concept of expression, our results also highlight that even when spiritually supportive intentions are there, it’s difficult to respond to devastating news—so, participants wanted technological assistance with writing helpful comments. A second implication–which could span many types of online communities–is that commenting interfaces could embed mechanisms such as training resources, tips, or possibly automatic text recommendations. Future research will need to investigate how to design such features without damaging the meaningfulness and authenticity of comments.

Stakeholders also envisioned future systems that could create more immersive sensory experiences–e.g., by visualizing spiritual support networks and all the specific types of support they can provide (P.S. check out this awesome viz project by Avleen Kaur on the topic!)–or that could even help people come to terms with their mortality and plan for a time beyond their final days–e.g., by designing mechanisms that help users configure advance planning directives and mindfully sculpt the digital legacies they will leave behind. Read the full paper or watch our video presentation to learn more about these fascinating implications.


I’ll close on the note that, for a topic like this, a scientific paper truly cannot convey the depth and richness of participants’ experiences. So, I worked with artist Laura Clapper (lae@puddleglum.net) to illustrate a few special quotes from our data that highlight what spiritual support means to people–both online and offline. I’ll let these stories speak for themselves, and I hope to see you at our session at #CSCW2021.

“An older woman had just been admitted, and she had a kind of a rough night and didn’t feel great. She was just near tears. An aide was in the room, and they were talking about religion. They were the same religion. And the aide got down on her knees and held the woman’s hand and she said, “Can I pray for you when I go home?” It was towards the end of the shift, and the woman, I thought she was going to cry. She just changed her whole tone. It just gave her an extra bit of hope, and I think it was a kind thing to do.”
“My husband was in the hospital, having had a massive motorcycle accident. He was one of those, “Will he make it for the first 24 hours? We’ll see…” And I had a friend start a CaringBridge site. He was in the ICU for almost two weeks. This is the description I came away with–it’s like riding the wave of love. That’s what it felt like. Both of us could feel this support, that was in the writing. Later, when people stop writing because you’ve gotten better, you can feel that diminishing. That was very, very tangible.”
“For us, receiving meals was spiritual support because the people who would come to deliver food, it wouldn’t be an expectation of sitting in our house and us entertaining them. But they would just kind of give her a hug or something. And it was quick. And it was loving. And to us, it wasn’t even about the food. It was just kind of, them doing something out of love, taking time out of their day, showing that they care.”
“When I was an oncology nurse, I had different experiences with patients, right before they’re dying. I had one moment when somebody had cancer, and she had been lying there, kind of unresponsive. But then this morning, she woke up, and I was like, “Hey, do you want to stand up? Let’s brush your teeth. Let’s get you cleaned up.” I got her back in bed, then her husband came, and I was like, awesome, he’s gonna see her awake, and she just kept smiling. And I was like, “What are you looking at? Do you see something?” And she’s like, “I see three beautiful beings.” And I said, “You look so peaceful,” and she goes, “I’m so peaceful.” I said, “You look happy, are you happy?” And she said, “I’m so happy.” And she ended up dying later that day, and I was like, the husband got to hear her say, “I’m happy, I’m at peace.”
“Even though I have a lot of experience providing spiritual support, one of the experiences that most sticks out to me was that my father had ALS and he was 84. The doctors said it will be relatively quick. I had been out about six weeks before he passed away, and had just started my chaplain internship. Somebody came and found me on oncology and pulled me out from a patient and said, I’m saddened to tell you this, but your dad died. And it was a… I just broke down and I said, “I thought I was ready.” This oncologist, who I didn’t think really knew who I was, or you know, was all business, stopped in her tracks and just put her arms around me and said, “Don’t worry about anything. Just go take care of yourself and your family, we’ll take care of everything else.” The night I got back from my dad’s funeral, I was on call, and I got a call at midnight, 96 year old woman, she had keeled over the family dinner. After a couple hours in the ER, doctor said, “We gotta call it, there’s not much we can do.” So we gathered the family together. So I went from caring for, to being cared for, to caring for–so, giving and receiving, all in a 72-hour period.”

Citation: Smith, C. Estelle, Avleen Kaur, Katie Z. Gach, Loren Terveen, Mary Jo Kreitzer, and Susan O’Conner-Von. “What is Spiritual Support and How Might It Impact the Design of Online Communities?” Proceedings of the ACM on Human-Computer Interaction 5, no. CSCW1 (2021): 1-42.

References:

[1] Smith, C. Estelle, Zachary Levonian, Haiwei Ma, Robert Giaquinto, Gemma Lein-Mcdonough, Zixuan Li, Susan O’Conner-Von, and Svetlana Yarosh. “‘I Cannot Do All of This Alone’: Exploring Instrumental and Prayer Support in Online Health Communities.” ACM Transactions on Computer-Human Interaction (TOCHI) 27, no. 5 (2020): 1-41.

[2] Wyche, Susan P., Gillian R. Hayes, Lonnie D. Harvel, and Rebecca E. Grinter. “Technology in spiritual formation: an exploratory study of computer mediated religious communications.” In Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work, pp. 199-208. 2006.

[3] Bell, Genevieve. “No more SMS from Jesus: Ubicomp, religion and techno-spiritual practices.” In International Conference on Ubiquitous Computing, pp. 141-158. Springer, Berlin, Heidelberg, 2006.

[4] Bell, Genevieve. “Messy Futures: culture, technology and research.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2012.

[5] Buie, Elizabeth, and Mark Blythe. “Spirituality: there’s an app for that! (but not a lot of research).” In CHI’13 Extended Abstracts on Human Factors in Computing Systems, pp. 2315-2324. 2013.

What does it mean to “keep community in the loop” when building algorithms for Wikipedia?


Original Artwork contributed by: Laura Clapper.

[Cross-posted from Wikimedia Foundation Technical Blog]

Imagine you’ve just created a profile on Wikipedia and spent 27 minutes working on what you earnestly thought would be a helpful edit to your favorite article. You click that bright blue “Publish changes” button for the very first time, and you see your edit go live! Weeee! But 52 seconds later, you refresh the page and discover that your edit has been wiped off the planet. How would you feel if you knew that an algorithm had contributed to this rapid reversion of all your hard work?

For the sake of illustration, let’s say you were editing a “stub” article about a woman scientist you admire. You can’t remember where you read it, but there’s this great story about how she got interested in computing. So, you spend some time writing up the story to improve her mostly empty bio. Clearly, you’re trying to be helpful. But unfortunately, you didn’t cite your source…and boom!—your work gets blown away. Without any way to understand what happened, you now feel snubbed and unwanted. Will you ever edit again?! 😱

Many edits (like yours) are damaging to Wikipedia, even if they were completed in good faith—e.g. missing citations [ ], bad grammars, mis-speled werds, and incorrect {syntax. And then there are plenty of edits that are malicious—e.g. the addition of offensive, racist, sexist, homophobic, or otherwise unacceptable content. All of these examples make it necessary for human moderators (a.k.a. “patrollers”) to review edits and revert (or fix) the bad ones. However, given the massive volume of edits to Wikipedia each day, it’s impossible for humans to review every edit, or even to identify which edits should be reviewed. 

In order to make it possible(-ish) to build and maintain Wikipedia, the community absolutely requires the help of algorithmic systems. But we need these algorithmic systems to be effective community partners (think R2-D2, cheerfully supporting the Rebel Alliance!) rather than AI overlords (think Terminator…being Terminator). How can we possibly design these systems in a way that supports all of the community’s well-intentioned stakeholders…including patrollers, newcomers, and everyone in between?

Our team of researchers from the University of Minnesota, Carnegie Mellon University, and the Wikimedia Foundation explored this question in our new open access research paper. We used a method called Value-Sensitive Algorithm Design (VSAD), which has three steps:

(1) Understand community stakeholders’ values related to algorithms.
(2) Incorporate and balance these values across the full span of the ML development pipeline.
(3) Evaluate algorithms based not only on accuracy, but also on their acceptability and broader impacts.

We argue that if you follow these three steps, you can “keep community in the loop” as you build algorithmic systems, making you more likely to avoid catastrophic and community-damaging consequences. Our paper completes the first step of Value-Sensitive Algorithm Design with respect to a prominent machine learning system on Wikipedia called ORES (Objective Revision Evaluation Service).

ORES is a collection of machine learning algorithms that look at textual changes made by humans and then produce statistical guesses of how likely each edit is to be damaging, and how likely it is to have been made in good faith. These guesses are continuously fed via an API in real time all across Wikipedia, as editors and patrollers complete their work in parallel.
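For tool developers, getting those guesses is a single HTTP request. Here is a minimal sketch of how a client could query the public ORES scoring service for one revision’s “damaging” and “goodfaith” probabilities; the endpoint URL and response-parsing below are assumptions based on ORES’s public v3 API, so double-check the exact response shape against the live documentation.

```python
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"  # English Wikipedia context

def fetch_ores_scores(rev_id: int) -> dict:
    """Sketch: fetch damaging/goodfaith probabilities for one revision.

    Assumes the response nests scores under wiki -> "scores" -> revision id -> model;
    adjust the parsing if the API returns a different shape.
    """
    params = {"revids": rev_id, "models": "damaging|goodfaith"}
    resp = requests.get(ORES_URL, params=params, timeout=10)
    resp.raise_for_status()
    scores = resp.json()["enwiki"]["scores"][str(rev_id)]
    return {
        "damaging": scores["damaging"]["score"]["probability"]["true"],
        "goodfaith": scores["goodfaith"]["score"]["probability"]["true"],
    }

# Example (any real revision id would work):
# print(fetch_ores_scores(123456789))
```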

For example, one prominent place where ORES’ guesses affect user experience is the “Recent Changes” feed, a chronological list of every new edit to the encyclopedia. Patrollers often spend time looking through the Recent Changes list, using a highlighting tool built into the interface.

If we fed an edit like yours into ORES, it might output guesses like “82% likely to be damaging” and “79% likely to be done in good faith.” The Recent Changes list could use these scores to highlight your edit in red to show that it is “moderately likely to be problematic.” Or, if the patroller wanted, it could highlight your edit in green to show that you likely meant well. 
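As a rough sketch of how a feed like Recent Changes could turn those two probabilities into a highlight, here is a hypothetical thresholding rule; the cutoffs and color choices are made up for illustration and are not Wikipedia’s actual configuration.

```python
def highlight_color(damaging: float, goodfaith: float, mode: str = "damaging"):
    """Hypothetical rule for coloring an edit in a Recent Changes-style feed.

    mode="damaging": flag probably-bad edits in red for quick triage.
    mode="goodfaith": mark probably-well-intentioned edits in green so
    patrollers can respond gently. Thresholds are illustrative only.
    """
    if mode == "damaging" and damaging >= 0.7:
        return "red"
    if mode == "goodfaith" and goodfaith >= 0.7:
        return "green"
    return None

# The edit from our example (82% damaging, 79% good faith):
# highlight_color(0.82, 0.79, mode="damaging")   # -> "red"
# highlight_color(0.82, 0.79, mode="goodfaith")  # -> "green"
```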

In either case, both the underlying algorithms of ORES and the highlights they generate majorly impact: (1) how the patroller interacts with your edit, and (2) whether or not you will continue editing in the future. That’s why, in our study, we wanted to understand what values should guide our design decisions with regard to systems like ORES, and how we can balance these values to lead to the best outcomes for the whole community.

We spoke to dozens of ORES stakeholders, including editors, patrollers, tool developers, Wikimedia Foundation employees, and even researchers, in order to systematically identify which values matter to the community. The following infographic summarizes the results.

For example, one critical value is “Human Authority.” On Wikipedia, the community believes it is vitally important to avoid giving final decision-making authority to the algorithmic system itself. In other words, please, nobody build Terminator! There should never be an algorithm that gets to call the shots and make the final decision about which edits stay, and which edits go. But we do need community partners like R2-D2 to assist with “Effort Reduction” by pointing us in the right direction.

At the same time, the example of your edit shows that along with “Effort Reduction,” we also need to build systems that foster “Positive Engagement.” In other words, ORES should reduce how much work it takes for patrollers to find bad edits, and it also needs to make sure that well-intentioned community members are having positive experiences, even when their edits aren’t up to snuff. 

So, maybe when ORES detects damaging (but good faith) edits in Recent Changes, those edits could receive special treatment. For example, rather than wiping out your red-highlighted edit without explanation, perhaps your edit could be allowed to stay online for just a few extra minutes. Recent Changes could take a hint from Snuggle and direct a patroller to first reach out to you before reverting, providing some scaffolded text like, “Hi @yourhandle! Thanks for making your first edit to Wikipedia! Unfortunately, our algorithm detected an issue… It seems like you meant well, so I wanted to see if you could fix this by adding a citation so that I don’t have to revert it?”

(Yes, this is challenging the BOLD, Revert, Discuss (B-R-D) paradigm, and suggesting that in some cases, B-D-R may be a more appropriate way to balance community values. Please discuss!)

In the full paper, we share our journey of applying VSAD to understand the Wikipedia community’s values, along with 25 concrete recommendations for developers interested in building ML-driven systems in complex socio-technical contexts. As you navigate community-based moderation, we hope our experiences may shed light on approaches to problems you may be experiencing in your community, as well.

Thanks for reading! Please share your thoughts in the comments, or get in touch with me @fauxneme on Wikipedia.

Special thanks to co-authors, colleagues, and friends who contributed feedback on this blog post, including Aaron Halfaker, Loren Terveen, Haiyi Zhu, Anjali Srivastava, Zachary Levonian, Mo Houtti, Sabirat Rubya, Charles Chuankai Zhang, and Laura Clapper.

How can we #sciencethenews?


[Cross-posted from Estelle’s Blog]

Mainstream media are most adults’ primary source of new information about science. Yet even when mainstream media outlets cover science (which they rarely do), the coverage contains errors approximately 20-30% of the time. Consequently, a hefty majority of Americans (over 70%!) lack adequate literacy to reason about scientific evidence as it relates to civic life. As scientists, how can we bridge the gap between the ivory tower and the general public? (more…)

Friends with Benefits: How GroupLens and Wikimedia are Happier Together


From left to right: Kyle Condiff, Sarah McRoberts, Aaron Halfaker, and Jacob Thebault-Spieker having lunch at GroupLens.

 

Aaron Halfaker peels back the gleaming foil of an overloaded burrito. Surrounded by doctoral candidates at the GroupLens lunch table, he chomps into his eagerly anticipated rice and beans, taking breaks to shoot the breeze or riff on research ideas. After earning his Ph.D. in 2013, Halfaker scored a full-time research job at the Wikimedia Foundation (WMF). Yet every Thursday, he returns to hang out with us on the University of Minnesota campus. This is certainly an unusual arrangement…what keeps him coming back? (more…)