BY SIMONE J. SMITH
You are on your lunch break, so you decide to jump on social media. As you scroll through your feeds, a post catches your attention. Something about it triggers you, and like most people, you have something you want to say, share, question or rant about.
So you type away.
All of a sudden, a message pops up. It tells you that you are exhibiting signs of a mental health issue. You are then asked to proceed with further digital screening, after which you’ll be given helpful information about where to go for further help.
You think to yourself: how did they know this? How did your phone know you were experiencing distress? You don't even feel distressed.
Dartmouth researchers have built an artificial intelligence model for detecting mental disorders using conversations on the popular online forum Reddit. The AI is part of an emerging wave of screening tools that use computers to analyze social media posts and gain insight into people’s mental states. In a paper presented at the 20th International Conference on Web Intelligence and Intelligent Agent Technology, the researchers show that this approach performs better over time, irrespective of the topics discussed.
“Social media offers an easy way to tap into people’s behaviours,” Xiaobo Guo, co-author of the research paper, said. “This is because social media is voluntary and public.”
Reddit, which offers a massive network of user forums, was the researchers’ platform of choice because it has nearly half a billion active users who discuss a wide range of topics. The posts and comments are publicly available, and the researchers could collect data dating back to 2011.
They focused on what they termed “emotional disorders,” including depression, anxiety and bipolar disorder. They identified users who self-reported these disorders, compared their posts with those of users who had no known disorders, and trained the model to recognize the resulting “signature patterns.”
They trained the model to label the emotions expressed in users’ posts and to map the emotional transitions between posts. Each post could be labelled “joy,” “anger,” “sadness,” “fear,” “no emotion,” or a combination of these. The map is a matrix showing how likely it is that a user moves from any one emotional state to another, such as from anger to a neutral state of no emotion. The model can then identify an emotional disorder by comparing a user’s “fingerprint” to established “signatures” of emotional illnesses.
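To make the idea concrete, here is a minimal Python sketch of what such an emotional-transition “fingerprint” could look like. The emotion labels come from the article, but the function names, the single-label-per-post simplification, the distance measure and the example data are hypothetical illustrations, not the Dartmouth team’s actual code.

```python
from collections import Counter

# Emotion labels mentioned in the article; the paper also allows combinations,
# which this simplified sketch ignores.
EMOTIONS = ["joy", "anger", "sadness", "fear", "no emotion"]

def transition_matrix(post_emotions):
    """Build a user's emotional-transition 'fingerprint'.

    post_emotions: emotion labels for the user's posts, in chronological order.
    Returns a dict mapping (from_emotion, to_emotion) -> estimated probability.
    """
    counts = Counter(zip(post_emotions, post_emotions[1:]))
    matrix = {}
    for a in EMOTIONS:
        total = sum(counts[(a, b)] for b in EMOTIONS)
        for b in EMOTIONS:
            matrix[(a, b)] = counts[(a, b)] / total if total else 0.0
    return matrix

def distance(fingerprint, signature):
    """Simple elementwise distance between a user's matrix and a known 'signature'."""
    return sum(abs(fingerprint[k] - signature[k]) for k in fingerprint)

# Hypothetical usage: build one user's fingerprint from labelled posts.
user_posts = ["joy", "sadness", "sadness", "no emotion", "fear", "sadness"]
fingerprint = transition_matrix(user_posts)
# A 'signature' could be averaged from users who self-reported a disorder,
# and the smallest distance would suggest the closest match:
# print(distance(fingerprint, depression_signature))
```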
The researchers then tested posts not used in training, and the model was able to accurately predict if the user had any of these disorders. The researchers say that using digital screening tools can prompt people to get the help they need to address mental illnesses.
Sounds pretty user friendly, doesn’t it?
The framework for its implementation is being assembled right now by government agencies such as the CDC, political bodies including Congress and the Biden administration, tech companies, and leading university research programs.
Is this truly in our best interest, or is it the next step in our corporate-government overseers’ control?
I guess we will have to wait and see, but for now, know that everything about your person is being watched, scrutinized, and analyzed, sometimes without you even knowing it.