
Should smartphone data be harnessed to track mental health?

Photo by Derick Anies/Unsplash

by Simon D’Alfonso


Mental health symptoms affect your behaviour online. This could be harnessed to improve detection and treatment

Every time you go online, you leave behind a trace. This ‘digital footprint’ or ‘data exhaust’ is used somewhat notoriously by social media companies, which store and mine large volumes of personal data for commercial ends. For example, Facebook customises adverts based on your likes and searches; YouTube recommends videos based on your viewing history. Yet all this digital data could play another role of more direct benefit to users: it could reveal something important about a person’s mental health. This possibility has given rise to an exciting new research area called digital phenotyping, which could offer a transformative new tool for mental health care.

In some respects, the connection between smartphone habits and mental health is obvious. At the simplest level, if an individual starts using an app for anxiety management, this is a strong indication that they’re experiencing anxiety. If someone suddenly starts using their phone a lot in the middle of the night, that could be a sign that they have insomnia.

Other potential links are more subtle. Most of us are familiar with the GPS sensor on our smartphones, which we use to guide us from one place to another – but geolocation data might also offer clues about whether we have depression. For example, some initial evidence suggests that people who are depressed show reduced variety in the places they visit. The types of locations could also be significant: one early study found that participants with lower levels of depression and anxiety tended to spend more time in spiritual locations (eg, temples, prayer rooms) than those with high levels, and that nondepressed participants spent more time at work than depressed ones. However, these relationships were modest and inconsistent, so more research is needed on this underexplored idea.
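
To make the idea of ‘variety in places visited’ concrete, here is a minimal Python sketch of one way such a measure could be computed. It is purely illustrative: it snaps raw GPS samples to a coarse grid as a crude stand-in for proper place clustering, and the function name, grid size and example coordinates are inventions for this example rather than details of any particular study.

```python
import math
from collections import Counter

def location_variety(gps_samples, grid_deg=0.001):
    """Rough 'location variety' score from a list of (lat, lon) samples.

    Each sample is snapped to a coarse grid cell (a crude stand-in for
    proper place clustering); the score is the Shannon entropy of the
    time spent per cell, so higher values mean time spread over more
    distinct places.
    """
    cells = Counter(
        (round(lat / grid_deg), round(lon / grid_deg))
        for lat, lon in gps_samples
    )
    total = sum(cells.values())
    return sum(-(n / total) * math.log2(n / total) for n in cells.values())

# A day spent entirely at home scores lower than a day split across places.
home_day = [(51.5007, -0.1246)] * 20
varied_day = ([(51.5007, -0.1246)] * 8
              + [(51.5194, -0.1270)] * 6
              + [(51.5033, -0.1195)] * 6)
print(location_variety(home_day))    # 0.0
print(location_variety(varied_day))  # roughly 1.57
```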

Levels of social activity could also provide clues about a person’s isolation or mental ill-health. One study of people with schizophrenia found that reductions in the number and duration of outgoing calls made by a participant, as well as reductions in the number of text messages that they sent and received, were associated with relapses of the disorder. Other researchers have proposed that the Bluetooth feature in smartphones could be used to measure the frequency of a person’s offline social interactions – known to be important for mental health – by detecting how many other Bluetooth devices are near the person’s smartphone.
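
Again purely as an illustration, the sketch below counts how many distinct devices a phone ‘sees’ each day from a hypothetical log of Bluetooth scans; the data format, function name and the idea of using a simple daily count are assumptions made for the example, not a description of any research group’s actual pipeline.

```python
from datetime import datetime

def daily_social_proxy(bluetooth_scans):
    """Count distinct nearby devices per day from (timestamp, device_id)
    pairs, a rough proxy for face-to-face social contact. A real system
    would also filter out the user's own peripherals (headphones, watch)
    and hash the device identifiers before storing anything.
    """
    per_day = {}
    for ts, device_id in bluetooth_scans:
        per_day.setdefault(ts.date(), set()).add(device_id)
    return {day: len(devices) for day, devices in per_day.items()}

scans = [
    (datetime(2021, 1, 18, 9, 5), "aa:01"),
    (datetime(2021, 1, 18, 13, 40), "bb:02"),
    (datetime(2021, 1, 18, 13, 41), "aa:01"),
    (datetime(2021, 1, 19, 20, 15), "cc:03"),
]
print(daily_social_proxy(scans))  # two distinct devices on 18 Jan, one on 19 Jan
```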

Language provides a window into our minds, and disturbances in spoken language (such as using an impoverished vocabulary) can be indicators of mental illness. For example, one research project analysed transcripts of face-to-face interviews with young people at risk of psychosis and found that the use of less complex and more incoherent language predicted subsequent onset of the disorder. The same principle can be applied to online language. The newsfeeds and forums of Facebook, Twitter and Reddit, for example, can provide a rich source of linguistic material for detecting mental health problems. In one study, researchers analysed the previous Facebook posts of patients attending an emergency department. Using this information only, they could reliably predict which patients had a diagnosis of depression in their medical records – with an accuracy approximately matching that of screening surveys. Hostile language, negative emotions and a preoccupation with the self were all predictors of depression.
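
The flavour of such linguistic features can be shown with a toy example. The sketch below computes rates of self-focused and negative words in a piece of text; the word lists are invented placeholders, whereas the actual studies rely on validated lexicons and far richer statistical models.

```python
import re

# Toy word lists standing in for validated lexicons; real studies use
# far richer language features than these invented examples.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE = {"sad", "alone", "tired", "hate", "hopeless", "hurt"}

def post_features(text):
    """Crude per-post features: rates of self-focused and negative words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return {"self_focus": 0.0, "negative": 0.0}
    return {
        "self_focus": sum(w in FIRST_PERSON for w in words) / len(words),
        "negative": sum(w in NEGATIVE for w in words) / len(words),
    }

print(post_features("I feel so tired and alone, nobody gets me"))
```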

Even the movements of a person’s fingers and thumbs – typing, tapping, swiping and scrolling – might provide an indication of their mental health. For example, an agitated state or manic episode might be preceded by increased rapidity in screen interaction, or more phone movement while typing, which can be measured using the accelerometer movement sensor common in smartphones. One study showed that both average delays between keystrokes and autocorrect rates (ie, spelling errors) positively correlated with a common depression measure. In another study comparing two groups, one with depressive tendencies and one without, the depressive group exhibited longer intervals between pressing and releasing a key, indicating a slower motor reaction time or psychomotor retardation, a characteristic of depression.
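​
As a rough sketch of how such typing features might be summarised, assuming a research keyboard app that logs press and release timestamps (a hypothetical data format), one could compute the two measures mentioned above like this:

```python
from statistics import mean

def keystroke_features(events):
    """Summarise typing dynamics from (key, press_time, release_time)
    tuples, with times in seconds: the average hold time (press-to-release
    interval) and the average delay between consecutive key presses.
    """
    holds = [release - press for _key, press, release in events]
    gaps = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
    return {
        "mean_hold": mean(holds),
        "mean_gap": mean(gaps) if gaps else 0.0,
    }

# Three keystrokes as they might be logged by a study keyboard app.
events = [("h", 0.00, 0.09), ("e", 0.31, 0.43), ("y", 0.74, 0.88)]
print(keystroke_features(events))  # mean_hold about 0.12 s, mean_gap 0.37 s
```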

The idea that an individual’s mental health can be inferred from their smartphone and internet usage is still in its infancy. But while we wait for the research trials to be conducted, we should reflect on the bigger question of what this means for all of us. In an era of data surveillance and ‘digital capitalism’, where our personal data is mined and collected for commercial purposes, we must give due consideration to the ethical and privacy issues surrounding this technology.

The motives of researchers, at least, are positive: the idea is that these insights could facilitate mental health care. For example, hypothetically, data collected on a person’s phone throughout the day could be shared with their clinician at weekly therapy sessions. Or alerts could be sent to a person’s carer if certain urgent problems were detected. Another interesting possibility is that information collected on a mental health app – something about a person’s thoughts or emotions, or something about their situation and surroundings – could be used to personalise the therapeutic suggestions that the app delivers to the user.

To navigate safety concerns, doctors or therapists can implement systems that maximise data security and respect user privacy. For example, they can ensure that personal client data isn’t shared without the client’s consent and can give them control over which data they wish to share – two practices that are already generally adhered to in research and in broader clinical practice. Clinicians and researchers can also perform minimally sufficient data extractions. For example, suppose that a digital phenotyping system involved capturing a user’s voice during phone calls. Rather than analysing the person’s words, the system could just analyse the nonlinguistic or acoustic characteristics of the speech, such as pitch or tone. In this way, this acoustic information alone could be sent to a central data repository, and the recorded call, with its identifying information, could be deleted from the user’s phone.
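
To illustrate what ‘minimally sufficient’ extraction might look like, the sketch below estimates a single average-pitch figure from a stretch of audio, so that only this number, rather than the recording or the words spoken, would ever need to leave the phone. The autocorrelation method and parameter values here are simplifications chosen for the example; a real system would use a dedicated voice-analysis toolkit.

```python
import numpy as np

def average_pitch_hz(samples, sample_rate=16000, fmin=75.0, fmax=400.0):
    """Estimate a single fundamental-frequency figure (in Hz) from raw audio
    using a basic autocorrelation method, so that only this number, and not
    the recording itself, needs to be sent off the device.
    """
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # autocorrelation, lag >= 0
    lo = int(sample_rate / fmax)  # shortest plausible pitch period, in samples
    hi = int(sample_rate / fmin)  # longest plausible pitch period, in samples
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sample_rate / lag

# A synthetic 150 Hz tone stands in for a short frame of recorded speech.
t = np.arange(0, 0.05, 1 / 16000)
frame = np.sin(2 * np.pi * 150 * t)
print(average_pitch_hz(frame))  # prints a value close to 150
```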

The bigger question is about the potential collection and use of this data for commercial gain. Suppose that a big tech company advertised a product to an individual based on a data-driven inference about mental health – such as advertising an antidepressant or therapist based on a person’s language use or location patterns. Some might say that this increases the chance of getting help to people in need; others would argue that it’s a worrying invasion of privacy. Others still might say it depends on what types of products are advertised. At any rate, users should be able to opt out of such advertising if it were implemented, particularly given the risk that some ads might trigger or exacerbate distress.

Even the best digital phenotyping systems will be imperfect. There is no such thing as a 100 per cent accurate medical test, and it’s important to consider the potential misuse of any mental health data. But to dismiss digital phenotyping because of these limitations is to miss its potential utility. At the very least, a person’s smartphone information could help to inform their clinician’s decision-making and the mental health care that they provide. Even if a test can’t be used to home in on an exact diagnosis, it could narrow down the range of possibilities and ultimately help inform human judgment and treatment-path selection in a field that needs better diagnostic tools.

Traditional mental health care relies on a person self-reporting their symptoms, which can be unreliable. Furthermore, once a therapy session is over, there’s no established way to monitor a patient’s thoughts, feelings or behaviour in their real life. Thus, being able to assess mental health objectively and continuously by analysing a person’s daily digital footprint offers a transformative alternative to traditional methods. Digital phenotyping could very well be a revolutionary tool for the future of mental health care.


19 January 2021