Although Google claimed that its latest generative AI model, Gemini, was designed to compete with OpenAI’s GPT-4, early impressions of Gemini Pro are unsatisfactory. Several users complained on X that the model repeatedly returned inaccurate results and even told them to Google the information themselves.
Earlier this week, Google launched Gemini 1.0, stating that it had the potential to revolutionize how we interact with computers and outperform competitors like GPT-4. The company introduced three versions of the generative AI model for different needs.
- Gemini Ultra for big data centers.
- Gemini Pro for everyday use.
- Gemini Nano for mobile devices.
Recently, Gemini Pro was rolled out to Bard, enabling users to test its capabilities, such as reasoning, code generation, analyzing and interpreting scientific data, and processing various input formats, for accuracy, efficiency, and speed.
Bard is Google’s conversational generative AI chatbot, introduced in March 2023. Originally powered by LaMDA, Bard interacts with users much like OpenAI’s ChatGPT: it can answer follow-up questions, compose various types of writing, summarize search results, and more.
Early impressions of Gemini Pro reveal it makes a lot of mistakes
According to anecdotal evidence, Gemini Pro repeatedly failed to give accurate answers to very simple queries.
When X user @benjaminnetter asked Gemini Pro for a six-letter word in French, it returned a five-letter word.
FYI, Google Gemini is complete trash. pic.twitter.com/EfNzTa5qas
— Benjamin Netter (@benjaminnetter) December 6, 2023
For user @vitor_dlucca, the model gave incorrect information about the 2023 Oscar winners, such as Best Actor.
I'm extremely disappointed with Gemini Pro on Bard. It still give very, very bad results to questions that shouldn't be hard anymore with RAG.
A simple question like this with a simple answer like this, and it still got it WRONG. pic.twitter.com/5GowXtscRU
— Vitor de Lucca 🏳️🌈 / threads.net/@vitor_dlucca (@vitor_dlucca) December 7, 2023
Furthermore, @buzzedison found that the model could not write simple code for a Tic Tac Toe game.
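For context, the task Gemini Pro stumbled on is small. A minimal two-player Tic Tac Toe sketch in Python (illustrative only, not the code the user requested or Gemini's output) fits in a few dozen lines:

```python
# Minimal two-player Tic Tac Toe on the command line.
# Illustrative sketch only; not output from Gemini Pro.

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play():
    """Run one interactive game, prompting each player for a square 0-8."""
    board = [" "] * 9
    player = "X"
    while " " in board:
        print("\n".join("|".join(board[i:i + 3]) for i in (0, 3, 6)))
        move = int(input(f"{player}, choose a square (0-8): "))
        if not (0 <= move <= 8) or board[move] != " ":
            print("Invalid square, try again.")
            continue
        board[move] = player
        if winner(board):
            print(f"{player} wins!")
            return
        player = "O" if player == "X" else "X"
    print("Draw.")

# play()  # uncomment to play interactively
```

The game state is just a nine-element list, and the win check is a lookup over the eight possible lines, which is why reviewers treated this prompt as a basic competence test.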
— Edison Ade (@buzzedison) December 6, 2023
Gemini Pro is also poor at summarizing current events. Users seeking updates on the war in Israel and Gaza were told to Google the information themselves.
— Min Choi (@minchoi) December 6, 2023
It appears that GPT-4 remains the leading generative AI model, with better accuracy and efficiency than Gemini Pro, which Google itself positions as comparable to GPT-3.5. Perhaps future updates will improve its performance.