The Public's Health: In Social Media We Trust

We have grown accustomed to social media reading our thoughts. One day we are using Facebook to exchange notes about an upcoming wedding, and the next day Facebook ads highlight discounts on potential wedding gifts. We post photos of a new item of clothing on Instagram, and soon we are receiving ads from the newest crop of design stars.
 
Of course our social media apps are not reading our thoughts—they are reading our photos, our shopping patterns, and the actual words we write in our emails and posts. We have come to think of this kind of targeted advertising as unremarkable, even as, in recent years, we have become aware of the havoc such micro-targeting can wreak on our democratic process.
 
But what if this micro-targeting, instead of being only a force to generate consumer interest, were also a force for good? What if our words offered a means to ameliorate an important public health problem like Americans’ growing suicide rate?

Let’s say that InstaTwitBook had an algorithm that could quite accurately judge that a user was suicidal from changes in the language she used online. Let’s say that site administrators could send this person at risk for suicide a gentle message suggesting that she is showing warning signs of depression, or maybe even nudge her by directing advertisements for local mental health counseling to her without overtly mentioning mood. Let’s say that 3,500 persons could be reliably identified and sent such messages this year, and that 1 in 100 of those messages would lead to care-seeking: 35 people reaching help, and 35 suicides averted. And it’s not just InstaTwitBook that has the power of detection. Phone companies may be able to tell from your voice whether you are depressed, and use that information to identify who might benefit from mental health help.
 
Are we okay with this company’s public health action if it is done not to make money but for the good of its users? Should InstaTwitBook pursue this public health campaign to save lives in our era of rising suicide deaths?
 
In some ways it is hard to see why we should not be in favor of this. After all, we accept intrusions over issues that matter far less than life and death. But we are also not blind to the challenges this approach raises. In part, the scale of the challenge depends on the harm done by inevitable inaccuracy: false-positive results that send notifications to persons who are not suicidal. And what if there were unintended consequences? What if depressed persons stayed away from social media, growing more isolated, rather than risk being singled out?
 
Answers to these questions would take some research, and conducting such research will require that social media companies be open to third-party investigators using their data. Given the reach of social media, and the potential for good that such efforts could achieve, we think such research is long overdue.

Warmly,
Michael Stein & Sandro Galea