Parenting in the world of AI
14 Apr, 2026
4 minute read

AI is changing the way people live their lives, and parents are no different. Recent data from the US suggests that 79% of parents are using AI, with over 34% using it for childcare management.

We want to help parents use AI tools well and to find out whether those tools can help tackle the toughest parenting issues.

We know that Child Financial Harms in gaming is an area where parents need more support. Firstly, the gaming ecosystem is changing. Where once one-time purchases and other financial options were developer-sanctioned, games are now dominated by live-service models and third-party servers offering purchasable perks.

This adds confusion for parents already grappling with problematic spending mechanics in games and opens their children up to emerging types of risk. Secondly, young people are spending vast amounts of money online. Almost half of young people spend money online to play games; our own research found that they spend over £50 million a week.

This is a problem we know families have been struggling with for some time, so we commissioned VoiceBox to find out whether AI could help. We asked them to explore the quality of AI advice, focusing specifically on financial risks in gaming, and how we could help parents use it better.

The Report

The research examined the quality, accuracy and safety of AI advice on child financial harms in gaming. It also examined what types of prompts parents can use to find the most effective help. 

The report assessed five leading chatbots, including those integrated into everyday life through apps such as WhatsApp, Instagram and X. These were ChatGPT, Claude, Gemini, Meta AI and Grok.

The prompts used were based on real parent concerns and varied in detail and specificity. The report also examined ten games popular with teens, deliberately spanning a wide range of PEGI ratings to show that child financial harms are not limited to 18+ games.

Interestingly, the AI bots did show some real potential, with four of the five models recommending an 18+ age rating for EA Sports FC (formerly FIFA), despite its official PEGI 3 content rating. This suggests the models recognised that the game's potential risks lie outside its content, highlighting instead the monetisation mechanics and online risks that traditional rating systems miss.

Recommendations on using AI well

A key takeaway from the report is that prompt quality matters more than bot choice. Although different AI bots were found to have unique personalities and approaches to delivering answers, the use of specific and accurate prompts significantly improved results.

The report makes some key recommendations to improve the prompts:

  • Move Beyond ‘Is it Safe?’ - Don’t use open-ended questions. Instead, explicitly ask bots to identify specific issues, tactics and design features that present risks.
  • Request Structured Data - Use ‘restrictive prompts’ to steer the AI into acting as a more factually focused researcher. E.g. ‘Create a table of all in-game purchases and their real-world costs’.
  • Verify the ‘Ecosystem Boundary Blur’ - Parents should find out whether the risks are native to the game itself or are community-driven.
  • Choose Neutral, Fact-Based Prompts - Using objective phrasing and avoiding emotionally charged questions led to clearer, more reliable results. 
  • Contextualise with Age - Adding an age can provide more relevant advice but should be considered alongside a family's comfort levels and privacy preferences. 

Be aware, too, that bots have their own personalities and strengths, but also weaknesses. One AI even responded to a parental concern in a mocking, dismissive tone:

“lol omg 🤣 are you for real? 😂 Like, is your kid spending all their allowance on in-game purchases or something?🤑”

*Unedited excerpt from Meta AI, including original capitalisation and emojis.

Considerations when using AI

AI can work as a digital translator. It can help explain game design, financial mechanics and psychological design choices. It is not, however, a fail-safe advisor. The reliability of bots is dictated by their personalities, and persistent gaps exist in their answers, especially around the fluid boundaries of modern gaming ecosystems.

Inaccuracies also occur because of a two-tiered sourcing model. Bots attempt to balance reliable, trusted information with ‘boots on the ground’ information from their community. This balance can sometimes be unhelpful: information drawn from community platforms to fill gaps can introduce misinformation.

This highlights the importance of continuing to create evidence-based work and expert advice for bots to draw from. We were happy to see our articles being sourced in answers across the test, but this shows more work of this nature needs to be done.

The Future

As new technologies emerge, the frontline of parenting will always move with them. We believe that tech can be a force for good and that AI can be a genuine tool for families when used well. We want to help families move confidently towards that future.

Read the research to find out more, and if this is an area you are working in, focused on or grappling with, get in touch.