Yesterday, Google made a couple of announcements: the integration of Gemini into Google Workspace (transitioning from ‘Duet AI’ for Workspace), and, most significantly, the release of its open model ‘Gemma’. Built on the same research and technology as the Gemini models and named after the Latin word ‘gemma’, meaning ‘precious stone’, Google’s latest model aims to appease the open-source community.
However, there is a catch. It is not open-source; Google is just trying to fit in.
The year so far has been all about ‘Gemini’ for Google, and the company seems to be on an announcement spree. From releasing Gemini 1.5 Pro last week, which boasts the longest context window of any LLM (1 million tokens), to rebranding Bard as Gemini, Google’s fixation on launching Gemini in various forms continues.
The latest, ‘Gemma’, seems too good to be true. Deliberately calling it an ‘open model’ rather than open source, Google has made Gemma available in two sizes – Gemma 2B and Gemma 7B. It is said to beat Meta’s Llama 2 on several benchmarks, including MMLU, HellaSwag and HumanEval. However, not being open source brings plenty of constraints.
AI advisor Vin Vashishta noted that the open model paradigm allows the release of model weights but places limitations on usage that can restrict developers.
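For developers who want to try the weights regardless, here is a minimal sketch of running Gemma locally via the Hugging Face transformers library. The model ID and generation settings are illustrative assumptions; access to the weights is gated by Google’s terms of use, so check the model card before running this.

```python
# A minimal sketch of running Gemma 2B via Hugging Face `transformers`.
# The repo ID and settings below are assumptions for illustration; the
# weights are gated, so accepting Google's usage terms may be required.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain what an 'open model' is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```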
Google has an ulterior motive in pushing the presence of ‘Gemini’ everywhere. Read here to find out.
The RAG Killer?
LLMs commonly struggle with hallucination, and two solutions have been introduced to address this challenge: an increased context window and RAG (retrieval-augmented generation). Several developers have been experimenting with Gemini 1.5, and some of the results suggest that it performs better than RAG.
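To make the contrast concrete, here is a toy sketch of the two approaches. The word-overlap retriever and the sample corpus are illustrative assumptions, not how either production system actually works: RAG selects only the most relevant chunks for the prompt, while a long-context model like Gemini 1.5 can take the entire corpus at once.

```python
# Toy contrast between the two hallucination mitigations described above.
# RAG retrieves only the top-k relevant chunks; a long-context model can
# instead receive the whole corpus in its prompt.

def word_overlap(query: str, chunk: str) -> int:
    """Score a chunk by how many query words it shares (toy retriever)."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def rag_prompt(query: str, corpus: list[str], top_k: int = 2) -> str:
    """RAG-style: keep only the top-k most relevant chunks."""
    ranked = sorted(corpus, key=lambda c: word_overlap(query, c), reverse=True)
    return "\n".join(ranked[:top_k]) + f"\n\nQuestion: {query}"

def long_context_prompt(query: str, corpus: list[str]) -> str:
    """Long-context style: stuff the entire corpus into the prompt."""
    return "\n".join(corpus) + f"\n\nQuestion: {query}"

corpus = [
    "Gemma comes in 2B and 7B parameter sizes.",
    "Gemini 1.5 Pro supports a 1 million token context window.",
    "Perplexity AI raised over $70 million in January.",
]
print(rag_prompt("What context window does Gemini 1.5 Pro support?", corpus))
```

The trade-off is that retrieval can miss relevant chunks, while stuffing everything into a million-token prompt sidesteps retrieval errors at the cost of compute.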
Could this seal the fate of RAG? Read the full story to find out.
Indic Data is All You Need
In an exclusive interview with AIM, Professor Maunendra Sankar Desarkar from IIT Hyderabad, who is also a core team member of the BharatGPT initiative, spoke about the importance of Indic data.
Desarkar feels that the existing models may not adequately represent Indian cultural and linguistic diversity, which can lead to biases and limitations in their applicability. “Moreover, fine-tuning may not fully address the unique linguistic challenges posed by Indic languages,” he said.
Read the full interview here.
Aravind Bhai’s ‘Perplexity Propaganda’
Aravind Srinivas, the founder and CEO of AI-powered answer engine Perplexity AI, has been on a social media engagement spree ever since the company raised over $70 million this January.
Srinivas briefly changed his X handle to ‘Aravind Bhai’ following friendly banter with Carl Pei, the co-founder and CEO of Nothing, hinting at a possible collaboration between the two companies. Perplexity AI has also announced partnerships with several service and hardware providers since January.
How aggressively has the founder pushed Perplexity AI into the limelight? Read more to find out.