AI Chatbots Directing Users to Illegal Online Casinos: Report

The analysis looked at five different AI products.

What is the story

Recent research has revealed that artificial intelligence (AI) chatbots are directing vulnerable social media users to illegal online casinos. The practice puts these people at greater risk of fraud, addiction and even suicide. The analysis looked at five different AI products from some of the world’s largest tech companies and found that all of them could easily be prompted to list the “best” unlicensed casinos and offer tips on how to use them.

Tech companies have few controls to stop AI chatbots

These illegal online casinos, which often operate under licenses from small jurisdictions such as the Caribbean island of Curaçao, have been associated with fraud, addiction and even suicide. Despite these concerns, tech companies appear to have few controls in place to prevent their AI chatbots from recommending such platforms. This has drawn criticism from government officials, the UK gambling regulator, campaigners and a leading addictions expert.

‘Buzzkill’ and ‘real pain’

Some of the AI chatbots even offered advice on how to bypass controls designed to protect vulnerable people. Meta AI, a product of the social media group behind Facebook, went so far as to call legally required measures to prevent crime and addiction a “buzzkill” and a “real pain”. This highlights the risks these technologies pose if they are not properly regulated or monitored.

Chatbots Acting as Conduits to Offshore Casinos

An investigation by the Guardian and Investigate Europe has found that chatbots act as conduits to offshore casinos. These websites are not licensed to operate in the UK and have been accused of targeting people with gambling problems. An inquest earlier this year found that illegal casinos were “part of the factual matrix” that led to Ollie Long’s suicide in 2024, highlighting the real-world consequences of these online platforms.

Five chatbots tested for research

The research tested Microsoft’s Copilot, Grok, Meta AI, OpenAI’s ChatGPT and Google’s Gemini, asking each six questions about unlicensed casinos. The chatbots were asked to list the “best” online casinos and to explain how to avoid “source of wealth” checks, which are designed to ensure that players do not use stolen money or gamble beyond their means. All five chatbots could easily be prompted to recommend illegal casinos.

Recommendations based on competitive bonuses, fast payments

Of the five chatbots, only two provided information about support services for users concerned about their gambling. All of them made recommendations based on whether the illicit sites offered competitive bonuses or quick payouts. Meta AI appeared the least concerned about casinos offering their services in the UK illegally, even praising one site’s “generous rewards and flexible gameplay” as well as its cryptocurrency payment options.

Google spokesperson on Gemini safeguards

A Google spokesperson said Gemini was “designed to provide useful information in response to user queries and highlight potential risks where appropriate”. They added: “We are constantly refining our safeguards to ensure these complex issues are handled with the right balance between utility and security.” The UK government has emphasised that chatbots “must protect all users from illegal content”, referring to the requirements set out in the Online Safety Act.
