
Is Gmail training Gemini AI based on your emails? Google responds to viral rumours

Aman Gupta
Google has issued a clarification after recent reports claimed that it was training its Gemini AI on users' emails. The company refuted the claims and noted that it had not changed any user settings.
The Gmail app icon on a smartphone in this illustration taken October 27, 2025. (REUTERS/Dado Ruvic/Illustration)

Google has responded to viral claims that it is training its AI on users' Gmail data. Notably, a recent report by Malwarebytes had claimed that Google had changed its settings and was now automatically training its AI on users' emails. The report also claimed that users needed to turn off the Smart features and personalisation setting to prevent Gmail from training AI on their emails.

The report led to a barrage of angry comments on social media, with users upset that Google was training AI on their personal data without explicit permission. As it turns out, Google isn't doing so.

The company, in a social media post, clarified that it hasn’t changed any settings and isn’t using Gmail content to train its Gemini AI.

In a post on X, Google said, “Let's set the record straight on recent misleading reports. Here are the facts.”

“We have not changed anyone’s settings. Gmail Smart Features have existed for many years. We do not use your Gmail content to train our Gemini AI model. We are always transparent and clear if we make changes to our terms & policies,” the tech giant added.

Malwarebytes subsequently updated its report to confirm that Google is not, in fact, training on users’ emails, and blamed the misunderstanding on Google’s ambiguous wording.

“The settings themselves aren’t new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google’s AI models, and that users were being opted in automatically. After taking a closer look at Google’s documentation and reviewing other reporting, that doesn’t appear to be the case,” Malwarebytes said in its updated report.

 

Notably, this isn't the first time Google has had to deny false reports about Gmail this year. In September, Google issued a clarification after multiple media reports claimed that the accounts of 2.5 billion Gmail users had been compromised in a leak. The company said those reports were false and were causing unnecessary panic.

 

Where Google does train on your data

While Google has denied training Gemini on users' Gmail content, the company does, by default, train on the conversations users have with its AI chatbot. Users who want to stop Google from training its AI models on their Gemini conversations need to turn off the Gemini Apps Activity setting in their Google account.

Google, however, isn’t alone in this practice. Anthropic recently notified users that it will, by default, train on conversations with Claude. Meanwhile, Meta has also said that it will show ads and personalised content based on conversations users have with its chatbot.

by Mint