Artificial intelligence (AI) has grown significantly smarter and more ubiquitous over the years. Facial recognition software is a staple of social media sites and smartphones, identifying the person behind a picture at a glance. Voice recognition software, meanwhile, can identify speech patterns and the language being spoken, a precursor to automated translation. Researchers are now turning these nascent abilities of AI toward diagnosing the early onset of depression and other mental illness. A Stanford University research group has carried out a study in which face and speech analysis software was able to identify signals of depression with reasonable accuracy.
The research team was led by Fei-Fei Li, a prominent AI expert who recently returned to Stanford from Google. She and her team fed video footage of depressed and non-depressed people into a machine-learning model, which learned to sort signals, or combinations of them, into the two categories. The signals included facial expressions, voice tone, and spoken words. Once trained, the model was put to the test, where it was able to detect whether someone was depressed more than 80% of the time.
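To make the idea concrete, here is a minimal sketch of how signals from several modalities might be fused and scored. This is purely illustrative: the study's actual system is a trained machine-learning model, and every feature name, weight, and threshold below is an invented assumption, not taken from the paper.

```python
import math

def fuse_features(face, voice, text):
    """Late fusion by simple concatenation of per-modality feature vectors.

    Each argument is a list of numeric features (hypothetical examples:
    smile frequency, pitch variance, counts of certain spoken words).
    """
    return face + voice + text

def depression_score(features, weights, bias=0.0):
    """Logistic score in (0, 1); higher suggests more depression-like signals.

    In a real system the weights would be learned from labeled footage of
    depressed and non-depressed people; here they are made up.
    """
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-modality features for one person (invented values).
face = [0.2, 0.1]      # e.g., facial-expression features
voice = [0.4]          # e.g., voice-tone feature
text = [0.7, 0.3]      # e.g., spoken-word features

features = fuse_features(face, voice, text)
weights = [0.5, -0.2, 0.8, 1.1, 0.3]  # invented, stand-in for learned weights

score = depression_score(features, weights)
flagged = score > 0.5  # a flag would prompt clinical follow-up, not a diagnosis
```

The sketch uses so-called late fusion (concatenating modality features before a single classifier); the real model could combine modalities in more sophisticated ways.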
Millions of people globally suffer from depression, and the problem is made worse by sub-par mental-health support and stigma. Diagnosing depression at an early stage is important, but many mental disorders are difficult to detect. The study's researchers agree, writing, "Compared to physical illnesses, mental disorders are more difficult to detect. The burden of mental health is exacerbated by barriers to care such as social stigma, financial cost, and a lack of accessible treatment options." They justify the need for such technology accordingly: "This technology could be deployed to cell phones worldwide and facilitate low-cost universal access to mental health care."
The findings of the study have garnered both appreciation and caution from experts in the field. Justin Baker, a clinical psychiatrist at McLean Hospital in Cambridge, Massachusetts, was impressed by the way the AI analyzes a patient's face, voice, and language. "It is very cool because that's what humans do very well," he says. Baker says AI and smartphones could have a big impact if used carefully: "It's both exciting and it has to be done really well with a lot of collaboration with clinical experts."
Meanwhile, David Sontag, an assistant professor at MIT who specializes in machine learning and health care, is cautious about the data used to train the model. Because the model could inherit biases from that data, the researchers note that further work is needed to ensure the technology is not biased against a particular race or gender. His concern that the technology might not accurately diagnose depression was also addressed by the researchers, who stress that it would not replace a clinician but rather serve as a first step toward seeing one.
The algorithm could be implemented in software that resides on smartphones, constantly monitoring faces and voices for signs of depression. That would offer a universal, low-cost way of spotting early signs and getting treatment where it's needed. While the new work is at an early stage, the researchers suggest it could someday provide an easier way for people to get diagnosed and helped.