
Google AI Search Generates Bizarre Suggestions: Eating Rocks And Putting Glue On Pizza


Earlier this month, Google unveiled an AI-powered search feature designed to offer users instant answers to their queries. This AI-generated response was intended to summarize an answer to the user's question, eliminating the need to navigate through multiple search results.

Google assured users that its AI overview tool would “do the work for you.” However, the actual outcome differed from expectations.

In recent days, Google’s new AI search feature has faced criticism for providing misleading and sometimes bizarre answers to users’ queries. Notably, the AI tool suggested that people should consume “at least one small rock per day” and recommended applying glue to pizza to enhance cheese adhesion.

Screenshots showcasing both the highlights and low points of Google’s new AI search tool have circulated widely online.

It seems that some of the responses generated by Google AI were sourced from the satirical website The Onion and from posts on Reddit.

For instance, when an Associated Press reporter posed a query, the Google AI search replied: “Yes, astronauts have met cats on the moon, played with them, and provided care.” It went on to claim: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” Obviously, none of this is true.

While these instances of Google AI errors sparked amusement on social media, experts are expressing concerns. According to AP, the new feature has worried experts who caution that it could perpetuate bias and misinformation, potentially endangering individuals seeking assistance in emergencies.

For instance, when Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it confidently responded with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”

Mitchell noted that the summary appeared to support the claim by citing a chapter in an academic book written by historians. However, the chapter did not make the false claim; it merely referred to the debunked theory.

“Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell stated in an email to AP. “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”

Google’s response

Google stated on Friday that it is taking “swift action” to address errors, such as the falsehood about Obama, that violate its content policies. The company said it is utilizing these instances to “develop broader improvements” that are currently being implemented. However, Google maintains that, in most cases, the system is functioning as intended thanks to extensive testing prior to its public release.

In a written statement, Google mentioned, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

The tech giant reiterated this position while speaking to the BBC, stating, “The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences. The vast majority of AI overviews provide high-quality information, with links to dig deeper on the web.”
