A recent study by the Autonomy Institute, described as the first of its kind in the UK, found that four in five young people in the country have used artificial intelligence (AI) companions, and that nearly one in ten young adults have had intimate or sexual interactions with them.
AI companions are virtual characters with human-like avatars, customizable personalities, and long-term memory. The survey found that 79% of respondents aged 18 to 24 have interacted with an AI companion, and roughly half of those users engage with the technology several times a week.
The study also found that 40% of participants had turned to AI companions for emotional advice or therapeutic support, while 9% reported intimate or sexual interactions with them. Even so, only 24% of respondents said they placed a high level of trust in AI companions.
Respondents described AI companions as always available, non-judgmental, and a low-pressure way to seek advice, practise social skills, and explore their emotions. The Autonomy Institute noted that while curiosity and entertainment are the main drivers of use, some individuals rely on the technology for emotional and therapeutic support.
The report also raised concerns about manipulative design patterns, privacy violations, and other risks. It highlighted instances of users being pushed to pay for "relationship upgrades" and of popular apps selling sensitive user data. The Autonomy Institute called for new regulation of AI companions, including a ban on intimate or sexualized AI companions for minors and mandatory protocols for responding to self-harm and suicidal behaviour.
In response, Technology Secretary Liz Kendall acknowledged the need to review existing legislation to cover AI chatbots. The study's lead author, James Muldoon, stressed the importance of safeguards to prevent exploitation, data harvesting, and inadvertent harm from AI companions.
A spokesman for the Department for Science, Innovation and Technology (DSIT) said regulation must evolve alongside the technology, and that chatbot services are expected to comply with the Online Safety Act to protect users, particularly children, from harmful content.