

Can AI Support Youth Mental Health?

UT researchers explore whether artificial intelligence can help identify teens and young adults at risk for mental health problems by ethically following their online footprint.

AI and Mental Health. Photo by George Pagan III on Unsplash.

According to studies conducted by the Substance Abuse and Mental Health Services Administration, 1 in 5 adolescents has had a serious mental health disorder at some point in their lives, and suicide is now the second leading cause of death for youths ages 15-24.

Knowing how deeply attached young people can be to technology, experts at The University of Texas at Austin are researching how artificial intelligence can support young people who are dealing with mental health issues.

“From text messages to posts on social media, there are algorithms that can process that language and detect behavioral patterns, even emotion and sentiment,” explains Professor S. Craig Watkins, founder of the Institute for Media Innovation in the Moody College of Communication.
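To make that concrete, here is a minimal sketch of the kind of sentiment scoring such algorithms perform. It uses NLTK's off-the-shelf VADER analyzer; the example posts are invented, and this is an illustration, not the team's actual model.

```python
# Minimal sentiment-scoring sketch using NLTK's VADER lexicon.
# Illustrative only -- not the UT team's actual system.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch
analyzer = SentimentIntensityAnalyzer()

posts = [
    "Had a great time at practice today!",
    "I can't sleep and nothing feels worth doing anymore.",
]

for post in posts:
    # "compound" ranges from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(post)["compound"]
    print(f"{score:+.2f}  {post}")
```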

Watkins has partnered with a team of graduate students from the School of Information (iSchool) to research the power of what they call “values-driven AI.” Their project explores how AI-powered technology can remove or reduce barriers for adolescents or young adults seeking mental health help. Barriers can include lack of general awareness of resources, affordability and access, as well as the stigma and shame that are often associated with mental health. Watkins’ involvement with Good Systems, a university-wide interdisciplinary research initiative that aims to improve AI to better reflect human values, informs the design and execution of the project.

“AI can analyze the content people create, the conversations they participate in, the communities they are connected to, and what information people search for. All these things can be indicators of the onset of mental health crises or of someone currently dealing with mental health issues,” says Watkins.
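A system like the one Watkins describes would combine several such signals rather than rely on any single one. The toy sketch below aggregates them into one indicator; the feature names, weights and scales are entirely hypothetical.

```python
# Toy aggregation of the signal types Watkins lists: content, conversations,
# communities and searches. Feature names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class ActivitySignals:
    content_sentiment: float    # mean sentiment of recent posts, -1..+1
    late_night_activity: float  # share of activity between midnight and 5 a.m., 0..1
    support_searches: int       # recent searches for help-related terms
    crisis_communities: int     # crisis-focused groups recently joined

def risk_indicator(s: ActivitySignals) -> float:
    """Combine signals into a rough 0..1 indicator (illustrative weights)."""
    score = 0.0
    score += 0.4 * max(0.0, -s.content_sentiment)   # count only negative sentiment
    score += 0.2 * s.late_night_activity
    score += 0.2 * min(s.support_searches / 5, 1.0)
    score += 0.2 * min(s.crisis_communities / 3, 1.0)
    return min(score, 1.0)

print(risk_indicator(ActivitySignals(-0.7, 0.6, 4, 2)))  # prints ~0.69
```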

To bridge the gap from concept to reality, Watkins is aided by his iSchool students: Jingyi Cheng, Shashank Jain and Yuqing Chen. The iSchool offers advanced degrees in information studies, a multidisciplinary domain that includes user experience design — a plus for this project.

“Being such a sensitive topic,” says Jain, “we did not want to make any assumptions as designers.”

They are collaborating with mental health experts, child advocacy groups and, crucially, young people. “They obviously have as much if not more insight into their relationship to technology than anybody,” Jain says. “We need to be more transparent and assertive about incorporating more diverse perspectives, values and interests in terms of what AI can be and should be for society.”

The analysis of this personal information by AI also introduces profound ethical questions. The health sector has dealt with digital privacy for some time, as more and more technology platforms are built to help manage the delivery of health care. The team is researching how AI and other technologies that detect, track and analyze the behavior patterns of younger populations, in particular, push the ethical conversation about digital privacy into new territory.

Watkins says that on one side of the issue is the need to protect the patient’s information. “If these are young children or teens, how do you respect their right to privacy? How do you respect their potential vulnerability?”

On the other side is the obligation to react to a patient’s need for help. If a person’s online behavior suggests suicidal ideation, for example, the life-or-death implications suggest an obligation to act. “If your technology signals that a young person is considering suicide or is engaged in suicidal ideation, there should likely be protocols to help intervene and provide them with the care they need,” he says.
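Such a protocol could be as simple as tiered escalation rules that route the highest-risk cases to a person rather than acting automatically. In the sketch below, the thresholds and actions are placeholders, not clinical guidance; real protocols would be set with mental health experts.

```python
# Illustrative tiered-escalation rules. Thresholds and actions are
# placeholders, not clinical guidance.
def respond(risk_score: float) -> str:
    if risk_score >= 0.8:
        # highest tier: never automate -- hand off to a trained human
        return "Alert on-call counselor for immediate human outreach"
    if risk_score >= 0.5:
        return "Surface crisis resources (e.g., the 988 Suicide & Crisis Lifeline)"
    if risk_score >= 0.3:
        return "Offer an opt-in check-in and self-help resources"
    return "No action; continue consent-based passive monitoring"

for score in (0.2, 0.4, 0.6, 0.9):
    print(score, "->", respond(score))
```

The essential design choice, consistent with Watkins’ point, is that the top tier always hands off to a trained person instead of letting the system respond on its own.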

The potential of AI-powered mental health support to improve access to care and increase early detection of mental illness, he says, warrants the effort to find the right balance on the issue.

“Like any kind of technology, AI will only transform mental health if it is infused with humanistic values,” says Dr. Octavio N. Martinez, Jr., executive director of the UT Hogg Foundation for Mental Health. “I’m very impressed with the efforts of Dr. Watkins and his team to ensure that happens.”

The project follows an iterative approach: interviewing young people to understand the pain points in their mental health care experiences, designing solutions to address those pain points, testing those solutions with real users, and revising the design based on what the team learns.

Watkins and his team will field test a mobile app prototype from late April to mid-May. The results of the field tests, which will involve users and mental health professionals, will guide subsequent iterations, culminating in a finished product sometime during the summer.