Snap is under investigation in the U.K. over potential privacy risks associated with the company's generative artificial intelligence chatbot.

The Information Commissioner's Office (ICO), the country's data protection regulator, issued a preliminary enforcement notice Friday, alleging risks the chatbot, "My AI," may pose to Snapchat users, particularly those ages 13 to 17.

"The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching 'My AI'," the regulator said in the release.

The findings are not yet conclusive, and Snap will have an opportunity to address the provisional concerns before a final decision is made. If the ICO's provisional findings result in an enforcement notice, Snap may have to stop offering the AI chatbot to U.K. users until it resolves the privacy concerns.

"We are closely reviewing the ICO's provisional decision. Like the ICO, we are committed to protecting the privacy of our users," a Snap spokesperson told CNBC in an email. "In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available."

The tech company said it will continue working with the ICO to ensure the regulator is comfortable with Snap's risk-assessment procedures. The AI chatbot has features that alert parents if their children have been using it, and Snap says it has general guidelines its bots must follow to refrain from offensive comments.

The ICO did not provide additional comment, citing the provisional nature of the findings.

The agency previously issued guidance on generative AI and followed up with a general notice in April listing questions developers and users should ask about AI.

Snap's AI chatbot has faced scrutiny since its debut earlier this year over inappropriate conversations, such as advising a 15-year-old how to hide the smell of alcohol and marijuana.

Snap said in its most recent earnings report that more than 150 million people have used the AI bot.

Other forms of generative AI have also faced criticism as recently as this week. Bing's image-creating generative AI, for instance, has been used by members of the extremist messaging board 4chan to create racist images.