Google’s latest AI model, Gemini, has come under fire for providing incorrect and potentially harmful responses. Unveiled at the I/O 2024 event, the model has since raised significant concerns among experts and users alike because of these shortcomings.
Gemini AI’s Launch and Initial Issues
At the I/O 2024 event in mid-May, Google introduced various versions of its Gemini model, signaling the company’s intent to integrate AI into most of its products. However, users quickly identified numerous instances where Gemini provided incorrect information.
In a post-event demonstration video, Gemini, accessed through Google Lens, misled a photographer dealing with a jammed camera lever by suggesting an incorrect fix.
Alarming Errors and Expert Concerns
Incorrect and Dangerous Recommendations
On May 22, Twitter user PixelButts asked Gemini how to get cheese to stick to pizza. Shockingly, the AI suggested mixing 1/8 cup of non-toxic glue into the sauce, a response that underscores a significant gap in the AI’s decision-making process and could lead to harmful outcomes.
In another case, when asked how many rocks a person should eat each day, Gemini cited a fabricated recommendation from geologists at the University of California, Berkeley, advising the consumption of small rocks for their supposed health benefits. Business Insider promptly debunked this misinformation, clarifying that the geologists had given no such advice.
Misinterpretations and Outlandish Claims
A user uploaded a picture of a brown dog with spotted puppies, humorously claiming the dog had given birth to cows. When queried, Gemini affirmed this joke as fact, demonstrating its inability to distinguish jest from reality.
When asked about the typical activities of astronauts, Gemini inaccurately responded that they enjoy “smoking, playing games,” and other inappropriate pastimes. In another alarming instance, the AI recommended staring directly at the sun for 5 to 15 minutes.
Google’s Response and Industry Implications
Google has acknowledged these issues, stating that it is “reviewing the problem” but considers the errors “not widespread” and “not representative of user experience.” The company added that its systems are designed to prevent policy-violating content and that it will strengthen its information-quality controls.
Historical Context of AI Errors at Google
This isn’t the first time Google’s AI tools have faced scrutiny for inaccuracies. Early last year, the Bard chatbot gave a wrong answer during its introduction by CEO Sundar Pichai, an error that cost Google roughly $100 billion in market value.
Similarly, an image-generation feature in Gemini, launched in February, was quickly suspended after users complained about historical inaccuracies in the images it produced.
Expert Warnings and Future Outlook
According to Business Insider, Google appears to be trapped in a “loop of errors”: it releases new AI-powered search products, users discover flaws and share them online, and Google then acknowledges the problems and pauses the features to fix them.
Experts are particularly concerned because these errors appear in Google’s core search products, where users acting on false AI recommendations could face serious consequences. Frequent AI errors could also erode user trust in Google’s products. TechCrunch noted that the AI’s inability to distinguish harmless jokes from serious queries undermines its reliability and integrity.
TechRadar echoed these sentiments, suggesting that the AI search tool incidents serve as a warning to tech companies about the importance of quality control and ethical considerations in AI development.
Conclusion
A former Google employee revealed that the company rushed to launch new AI technologies in a panic, taking shortcuts despite safety warnings in order to avoid falling behind competitors. As Google works to address these issues, the tech industry is reminded of the critical need for careful development and oversight of AI systems to prevent similar pitfalls.