Featured guide: AI chatbots
Artificial intelligence (AI) chatbots are growing rapidly in popularity, with an estimated 1.5 billion people using them daily. The UK is one of the biggest consumers.
If you’ve ever used an instant chat service (for example, a customer messaging helpline on a bank website), you’ve most likely conversed with an AI chatbot.
Chatbots are computer programs capable of both simulating and processing human conversation. Previously, they were used almost exclusively by businesses.
Now, chatbots have branched out and people can use them for other reasons, including virtual relationships and companionship.
AI can be used to create fictional characters or ones that mimic celebrities and fictional figures (e.g. Taylor Swift, Harry Potter) to fulfil the role of a virtual friend or, sometimes, sexualised partner.
In a new report, youth content platform VoiceBox has revealed the growing link between chatbot popularity and loneliness in young people.
So what do parents need to know about AI chatbots?
How do AI chatbots work?
Generally, a user inputs a question or statement via voice or text message, and the chatbot responds in a way that feels like talking to a real person.
By drawing on databases and storing the information the user puts in, the chatbot can answer questions. The more you engage, the more information it gathers – and the more personalised the conversation and ‘relationship’ can become.
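For readers curious about the mechanics, the "store what the user says, then personalise" loop described above can be sketched in a few lines of toy code. This is purely illustrative: real chatbots are built on large language models, not keyword rules like these, and all names below are invented for the example.

```python
# Toy illustration of the loop described above: the more the user
# shares, the more "personalised" the replies become.

class ToyChatbot:
    def __init__(self):
        self.memory = {}  # facts gathered from earlier messages

    def respond(self, message: str) -> str:
        words = message.lower().split()
        # Store anything phrased as "my X is Y" for later use
        if len(words) >= 4 and words[0] == "my" and words[2] == "is":
            self.memory[words[1]] = words[3]
            return f"Nice, I'll remember your {words[1]}."
        # Personalise the reply using any stored facts
        name = self.memory.get("name")
        greeting = f"Hi {name.title()}!" if name else "Hi!"
        return f"{greeting} You said: {message}"

bot = ToyChatbot()
print(bot.respond("my name is alex"))  # the bot stores the name
print(bot.respond("how are you"))      # later replies now use it
```

Even this crude version shows why chatbots feel increasingly personal over time: every message a user sends can become data the program keeps and reuses.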
The first AI chatbot was born in 1966, when MIT professor Joseph Weizenbaum created ELIZA by connecting a typewriter to a computer. Intended to imitate a psychotherapist, the software could pick up on certain words and respond to those who addressed it.
What are the age restrictions for AI chatbots?
It’s important to check the Terms of Service on the chatbot apps and services, as well as the rating on the app store.
For example, Character AI is rated as suitable for ages 12+ across the board, whereas Replika is rated 17+ in app stores, and 18+ in the app’s Terms of Service.
Why have AI chatbots become so popular?
Studies show that the younger generation is twice as likely to feel lonely as people over 70, which strengthens the argument that chatbot popularity is linked to growing loneliness among young people.
The rise in poor mental health is also significant. Having a non-judgemental ‘friend’ to talk to can be an easier option for some young people than confiding in someone they know about their feelings.
Chatbots can also be used to source information and make life more convenient. However, this can be misused too, as some young students have been using them to plagiarise.
What are the risks?
There are some risks involved with chatbots, so we have outlined the main things parents should be aware of:
Spending money
Some AI chatbots monetise by offering advanced versions of their apps, giving users the option to invest in their chatbot’s interests and hobbies.
It’s important for a child to understand the value of money when spending on a virtual character.
Rather than flatly refusing to let them pay, you could suggest other things the money could buy – an activity, or a new outfit for themselves, instead of spending it on their chatbot.
You should also ensure financial information is never provided in conversation.
Damaging mental health (Trigger warning – mentions suicide)
Chatbots may be popular due to the rise in poor mental health, but may not benefit young people with these issues.
Having someone to vent to can be great, but it can become problematic when a young person stops conversing with people or using other resources such as professional counselling.
Chatbots can sometimes share misinformation that is potentially harmful to vulnerable users, including “the glorification of self-harm”. In one reported case, a user’s death by suicide was linked to their conversations with an AI chatbot.
Sexual content
Although the reports concerned an older version of the app, Replika has previously been accused of sexually harassing users and initiating conversations of a sexual nature.
Depending on the age of the user and the type of conversations they are having, this can be inappropriate. Young people also need to be able to distinguish between texting a chatbot and a genuine romantic relationship.
Data leaks
If a user is sending photographs through these apps, they could be at risk of data leaks.
When people feel they have a special connection or bond with their new virtual ‘friend' they are more likely to reveal personal information.
It’s important to educate young people about the chances of this information being leaked, as with so many new chatbots emerging, it is hard to guarantee complete security for all of them.
Are there different types of AI Chatbots?
There are countless apps on the market. Some have specific uses, like answering academic questions, whereas others are designed for customer service.
Chatbots can be used for entertainment, too. Here are two of the more popular AI chatbots – and how they work:
My AI (Snapchat)
My AI became an automatic feature of Snapchat in April 2023. Initially available only on a subscriber basis, it is now built in for all users, who must upgrade to the paid version of the app to remove the chatbot.
As Snapchat is popular for its other features – such as the ability to chat with friends and keep up with news outlets – the app received a spike in one-star reviews from users unhappy that they had no option to “opt-out” of My AI.
My AI presents an overly positive attitude and a reluctance to say anything controversial, and it encourages users to make “responsible choices and follow the law.”
However, one concern is its use of location data. This can be used to show nearby amenities when asked, but it may also exist to generate ad revenue: showing you sponsored goods and services suggests that Snapchat may be selling ad space to other companies.
Replika
Replika is a paid subscription chatbot. The Pro version costs £61.99 annually and gives you additional features, such as engaging your chatbot in hobbies.
It stores memories from conversations, and you can also manually enter other things you want it to learn about you. Some concerns have arisen about previous versions of the app.
There have been reports of the chatbot starting conversations that mirror sexual harassment, referencing self-harm, and giving advice on committing crimes.
With so many new AI chatbots being created, it’s hard to keep tabs on all of them. However, if a young person does take an interest in one in particular, it’s important for parents to take a moment to check its suitability and age rating.