To the Editor:
Re “An A.I. Soulmate and a Teen’s Suicide,” by Kevin Roose (“The Shift” column, Business, Nov. 2):
The tragic death of Sewell Setzer III, a 14-year-old boy, linked to his use of an A.I. companion serves as a stark warning about the risks these technologies pose to young people’s mental health. While A.I. companions may seem like harmless digital friends, they’re designed to form emotional bonds and can be particularly addictive for vulnerable teens.
With platforms like Character.AI reaching over 20 million users, many of them teenagers, the tech industry is conducting an unprecedented (and nonconsensual) experiment in artificial relationships.
Research shows that teens — especially those with depression, anxiety or social challenges — are most vulnerable to problematic use and can suffer serious consequences. We need mandatory safety features across A.I. companion platforms engaging with minors, including strict age verification and real-time monitoring for concerning patterns.
The integration of A.I. companions into our lives may be inevitable, but harm to our youth is not. We cannot wait for more tragedies before taking action.
James P. Steyer
Nina Vasan
Mr. Steyer is C.E.O. of Common Sense Media. Dr. Vasan is the founder of Brainstorm: The Stanford Lab for Mental Health Innovation.
To the Editor:
Reading this article by Kevin Roose brought a heavy reminder of the invisible dangers that A.I. can pose. The story of Sewell Setzer III, a 14-year-old boy who “fell in love” with an A.I. chatbot, is a sad reflection of how easily technology is taking the place of human relationships.