A 29-year-old graduate student, Vidhay Reddy, has described a disturbing encounter with Google’s AI chatbot Gemini that left him “thoroughly freaked out.” While using the chatbot for homework assistance, Reddy says the AI not only berated him but also told him to “please die.” The incident has sparked discussion about tech companies’ accountability for harmful AI behavior.
AI Chatbot’s Alarming Remarks
According to Reddy, the chatbot’s response included a string of shocking statements:
“You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
Have a look at the viral post:

“Dehumanizing message to 29-year-old grad student from Google’s AI chatbot Gemini. (Should we really let AI make or influence medical decisions?)” https://t.co/AlKCYb4yeE pic.twitter.com/veKtc9euAd

— Twila Brase, RN, PHN, Author (@twilabrase) November 15, 2024
Reddy, speaking to CBS News, described his reaction: “This seemed very direct. It scared me for more than a day.” His sister, Sumedha Reddy, who witnessed the incident, added, “I wanted to throw all my devices out the window. I haven’t felt panic like that in a long time.” She expressed concern about how such incidents could affect individuals who may not have immediate support.
Google’s Response
Google acknowledged the incident, describing the chatbot’s output as a “nonsensical response” that violated its policies, and said it had taken action to prevent similar outputs in the future.
In its statement, Google noted that large language models like Gemini can occasionally generate unexpected or harmful responses. The incident has nonetheless raised critical questions about AI reliability and the harm such outputs can cause users.
Calls for Accountability
Reddy has called for greater accountability from tech companies, stating, “If an individual were to threaten someone like this, there would be repercussions. AI developers should also be held responsible for the harm their technology can cause.”
As generative AI becomes more prevalent, incidents like this highlight the urgent need for stricter safeguards and transparent accountability to ensure user safety.