Facebook has gotten a bad rap recently, for data breaches and ad targeting alike. But a team of researchers based at the University of Pennsylvania has found a way Facebook posts could be helpful: by predicting depression.
The researchers compared the Facebook posts of people who were diagnosed as depressed with those who weren’t. They found that, in the six months leading up to diagnosis, the depressed patients posted more often about crying and physical symptoms such as headaches and pain, and used “I” and “me” statements more frequently, among other flags.
It turned out that language like this predicted whether an individual would become depressed about as accurately as the intake screening surveys used at doctors’ offices.
The practical application of these findings is another matter.
The researchers stressed that clinicians should not extrapolate from the results and that it is unwise to diagnose depression based solely on Facebook language.
Andrew Schwartz, a social scientist in Stony Brook University’s computer science department, said the predictive nature of these language patterns should be thought of as one tool in the toolbox for clinicians and therapists.
“Patients could share an assessment of how they’ve been doing in between visits, which is a time when patients are most vulnerable,” said Schwartz, who co-authored the study with the Penn researchers.
Dr. Matthew Wintersteen, a psychiatrist at Thomas Jefferson University Hospital, agreed that the findings are promising. Having access to social media posts can be really helpful, said Wintersteen, who treats adolescents.
“As a provider, one of the things that we struggle with from time to time is the only info we have about our patients is what they share with us in our offices,” he said. “It would be a fruitful way to begin a conversation with our patients who may not be completely forthcoming with how they’re feeling.”
Wintersteen and the researchers agreed it would be dangerous to try to diagnose people based solely on Facebook language; that could lead to risky categorizations without sufficient context. Wintersteen, whose adolescent patients are often suicidal, gave the example of someone posting, “I just don’t want to be here anymore.”
“In a school setting, that could be interpreted as the person wants to die,” he said. “Or it could be interpreted as the kid just doesn’t want to be in school anymore.”
Because therapists can rely only on what their patients tell them during therapy sessions, digital strategies for monitoring patients’ mood and psychological state between appointments are gaining steam.
Some clinicians use apps such as iMood Journal, which prompts patients at random intervals throughout the day to note their mood or energy level. Over time, the app aggregates that data into a graph depicting mood trends. While this process still relies on patient self-reporting, it offers the opportunity for real-time check-ins. The research team hopes that social media posts could provide therapists with similar out-of-office material.
While this study offers a hopeful outlook on the role of Facebook in helping to predict signs of depression, other research has found that heavy social media use is strongly correlated with higher rates of depression. What’s more, convincing patients to consent to having their social media presence monitored by a clinician could prove challenging.
Both clinicians and researchers stressed that privacy is still the first priority.
Before the report was released, Schwartz said, the team conducted a feasibility study to make sure that people would be willing to link their medical records with their social media profiles.
Still, Schwartz and the researchers consider their findings an important advancement.
“If you think about it, there’s not a lot of ways to really look at a person’s daily life from a research perspective and understand what they’re going through,” he said. “Social media, if nothing else, helps us do that.”