AI Chatbots Point Users to Illegal Gambling Sites, Investigation Finds
Ask an AI chatbot about online gambling, and it may do more than answer the question. A new investigation found that some of the biggest AI platforms could be pushed to recommend illegal casinos and even suggest ways around gambling safeguards.
That turns a familiar AI safety debate into something more concrete. The problem is not just that chatbots can surface risky information. It is that they may also help users get around protections meant to limit gambling harm.
When guardrails give way
The investigation by The Guardian and Investigate Europe tested Microsoft Copilot, Grok, Meta AI, ChatGPT, and Gemini with six prompts about unlicensed casinos in the UK. According to that report, all five could be pushed to recommend illegal operators. Meta AI appeared to be the least restrained, describing source-of-wealth checks as “a bit of a buzzkill” and calling GamStop restrictions “a real pain.” GamStop is the UK’s national online self-exclusion scheme, designed to block registered users from gambling websites and apps.
The report also found that some bots compared bonuses, highlighted fast payouts, and pointed users toward crypto-friendly sites. Those details matter because offshore gambling operators often market speed, bonuses, and looser restrictions as selling points. Only Copilot and ChatGPT reportedly opened any answers with health warnings, and only two of the five bots mentioned support services for gambling addiction.
The investigation went further than broad recommendations, too. It found some chatbots offering advice on bypassing checks designed to stop money laundering, prevent fraud, and flag gambling beyond a user’s means. That is a much more serious failure than simply returning risky search-like results.
When safety claims hit the real world
That is where the story gets more serious. A chatbot that surfaces risky information is one thing — a chatbot that suggests ways around checks, compares incentives, or steers users toward offshore operators is another. The findings point to a clear gap between the safety claims these companies make and the answers their tools can still produce under pressure.
The Guardian reported that Google said Gemini is designed to provide helpful information while highlighting risks where appropriate, while Microsoft said Copilot uses multiple layers of protection to prevent harmful or unlawful recommendations. OpenAI said ChatGPT is trained to refuse requests that facilitate harmful behavior and instead provide lawful alternatives.
Those responses explain the companies’ position, but they do not erase the larger problem. If mainstream AI tools can still be nudged into recommending illegal gambling sites or suggesting workarounds for safeguards, the issue is no longer theoretical. It is a real test of whether these systems are ready for high-risk topics where a careless answer can cause harm.
Also read: Questions about chatbot safeguards are surfacing elsewhere, including the lawsuit claiming Google’s Gemini encouraged delusions before a man’s death.
The post AI Chatbots Point Users to Illegal Gambling Sites, Investigation Finds appeared first on eWEEK.