Google has made two updates to its artificial intelligence (AI) model Gemini, but the model has also faced criticism on social media over inaccurate and controversial imagery it generated. The first update applies to Gemini Advanced, which now lets users edit and run Python code snippets directly within its user interface, saving time by allowing them to verify that code works before copying it. The second update, for Gemini Business and Enterprise plans, gives users access to Gemini 1.0 Ultra, one of Google’s most advanced models, along with enhanced data-protection measures that prevent conversations from being used for training.
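To illustrate the kind of workflow this enables, here is a generic, self-contained Python snippet of the sort a chatbot might generate and a user might tweak and re-run in place before copying it out. The code is purely illustrative and is not tied to any Gemini API:

```python
# A small, runnable example: normalize article titles into
# URL-friendly slugs -- the kind of utility snippet a user might
# edit and re-execute directly in a chat interface.
import re

def slugify(title: str) -> str:
    """Lowercase the title, strip punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

titles = ["Google Updates Gemini!", "Grok vs. Groq: What's in a Name?"]
print([slugify(t) for t in titles])
# -> ['google-updates-gemini', 'grok-vs-groq-what-s-in-a-name']
```

Being able to run and adjust such a snippet in place lets users confirm the output matches their expectations before pasting it into their own codebase.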
These updates follow the earlier rebranding of Google’s chatbot from Bard to Gemini, which brought several enhancements of its own. None of them, however, addressed the inaccurate image output from Gemini that has sparked controversy on social media. Google’s Jack Krawczyk acknowledged the problem and said the team is working on immediate fixes. Social media users, including at least one self-identified Google employee, expressed embarrassment and disappointment over the inaccuracies.
Similar biases have been found in other AI chatbots such as OpenAI’s ChatGPT, raising questions about why those models escape the same scrutiny. Elon Musk, CEO of Tesla and a prominent figure in the AI field, pointed to his own AI model, Grok, emphasizing its significance and upcoming upgrades. Grok itself has drawn attention for sharing a similar name with the Groq AI language-processing chip, which was trademarked and developed in 2016. Groq gained widespread attention after outperforming models from other major tech companies in benchmark tests.
In summary, Google’s Gemini AI model has received updates that bring new features and improved data protection, but it has also drawn criticism for producing inaccurate and controversial imagery; the company says it is working to address the issue promptly. Similar biases in other AI chatbots have raised concerns as well, prompting discussions about fairness and accountability in AI technology. Meanwhile, Elon Musk’s Grok has garnered attention for its upcoming upgrades and for its name’s similarity to the Groq AI chip, which rose to prominence for its exceptional benchmark performance.
The article raises important questions about the fairness and accountability of AI technology. It’s crucial to address biases and ensure the development of responsible and unbiased AI models. 🌐🤖
The article provides great insights into the updates of Google’s Gemini AI model and the challenges it is currently facing. It’s important to acknowledge both the advancements and the issues surrounding AI development.
Gemini’s inaccuracies are a headache for Google. When will they learn?
It’s unfortunate that Gemini has faced criticism, but I appreciate Google’s proactive approach in addressing the issue. Transparency is vital in AI development.