‘Please die’ says Google’s AI chatbot to student seeking homework help

AI-powered chatbots, designed to assist users, sometimes go rogue. A 29-year-old graduate student from Michigan, USA, recently got a chilling taste of how malicious Google’s artificial intelligence (AI) chatbot Gemini could get.
When Vidhay Reddy sought some help from Google Gemini on the challenges faced by ageing adults, the conversation started normally enough, but soon took a sinister turn as the chatbot unexpectedly spewed threatening messages.
“You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die. Please,” the AI stated.
The statement left both him and his sister shaken. “It was very direct and genuinely scared me for more than a day,” Reddy told CBS News.
“I wanted to throw all my devices out the window. This wasn’t just a glitch; it felt malicious,” said his sister, Sumedha Reddy.
Google claims its chatbots have safety filters to block hateful or violent content. However, the tech giant acknowledged that the Gemini chatbot’s response violated its policies.
In a statement, Google explained that large language models like Gemini can occasionally produce nonsensical or harmful outputs. Actions have reportedly been taken to prevent similar occurrences in the future.
While generative AI tools such as Gemini, ChatGPT, and Claude continue to gain popularity for their ability to boost productivity, incidents like this highlight the risks.
Most AI companies acknowledge that their models are not foolproof, often displaying disclaimers about potential inaccuracies. Whether a similar disclaimer was shown in this case remains unclear.
AI tools have become integral to many people’s lives, but incidents like this serve as a reminder of the challenges in managing these powerful technologies. As more users rely on AI for daily tasks, companies face increasing pressure to ensure the safety and reliability of their tools.
Google’s AI chatbot Gemini faced intense criticism in early 2024 for its controversial output regarding Indian Prime Minister Narendra Modi.
The controversy erupted when Gemini was asked about Modi’s political stance. The chatbot responded that the Prime Minister had “been accused of implementing policies that some experts have characterised as fascist.”
This response ignited a strong backlash, with many perceiving it as biased. Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, condemned Gemini’s remarks. He stated that the chatbot’s response violated India’s Information Technology Rules and certain provisions of the criminal code.

In the wake of the controversy, Google issued an apology. The company acknowledged Gemini’s limitations, particularly its inability to reliably handle sensitive prompts related to politics and current events. Google reaffirmed its commitment to improving the chatbot’s reliability and reducing such occurrences.