
Friday, October 6, 2023

Researchers Want Galactica Back



Last year, Meta launched a large language model called Galactica. The model could summarise academic papers, solve maths problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more. Notably, it was launched before ChatGPT.


However, unlike ChatGPT, which broke through the hype cycle, Meta's ambitious offering couldn't survive even a week. Just three days after release, Meta realised the model suffered from hallucinations and produced random results, panicked, and withdrew it.


Now, the research community's demand that the model be brought back is only getting louder.


Researchers believe that hallucinations are part of the learning process for LLMs and urge that the model be re-evaluated to assess whether its benefits outweigh the problems caused by occasional hallucinations.


They also connect hallucinations with creativity, suggesting that these might not be entirely harmful. Instead, such models could serve as valuable "co-creative partners", providing imaginative narratives that, while not entirely accurate, may contain useful threads of ideas for further exploration.


Meanwhile, Mustafa Suleyman, CEO and co-founder of Inflection AI, anticipates a substantial reduction in LLM hallucinations by 2025, with implications that extend well beyond fixing present model inaccuracies.


Yann LeCun, the chief scientist at Meta AI, proposes that hallucinations are a result of auto-regressive prediction and suggests a remedy in the form of "Objective-Driven AI", in which systems formulate their responses by optimising various objective functions during inference. LeCun also argues that the term "hallucination" does not accurately characterise this aspect of LLMs and suggests calling it "confabulation" instead.
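To make the idea concrete, here is a minimal toy sketch of inference-time objective optimisation: rather than emitting tokens auto-regressively, the system picks, from a pool of candidate answers, the one that best satisfies a set of weighted objectives. The candidates and the two objectives below are illustrative stand-ins invented for this example, not LeCun's actual proposed architecture.

```python
# Toy sketch: choose the candidate response that maximises a weighted
# sum of objective functions at inference time.

def objective_driven_pick(candidates, objectives):
    """Return the candidate with the highest weighted objective score."""
    def total(c):
        return sum(weight * fn(c) for weight, fn in objectives)
    return max(candidates, key=total)

# Two toy objectives: a crude "factuality" check and a brevity preference.
objectives = [
    (1.0, lambda c: 1.0 if "2023" in c else 0.0),  # stand-in for factuality
    (0.1, lambda c: -len(c) / 100),                # mildly prefer shorter text
]

candidates = [
    "Galactica was withdrawn in 2022; demand for its return grew in 2023.",
    "Something vague happened at some point, probably.",
]
best = objective_driven_pick(candidates, objectives)
```

A real system would optimise over a continuous response space with learned objectives; the point of the sketch is only that the answer is selected by scoring against explicit objectives rather than sampled token by token.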


Researchers believe that after the release of LLaMA and Llama 2, which have also broken through the hype cycle, Galactica is standing in the corner, waiting to be released again.


Read the full story here.




Oracle Brings AI to Healthcare


Oracle aims to revolutionise healthcare with a range of innovations unveiled at its first Oracle Health Conference. These include cloud-based electronic health record capabilities, generative AI services, public APIs, and back-office optimisations, all designed for the healthcare industry. The new Oracle Health EHR platform focuses on improving patient and provider experiences: it simplifies patient engagement, offers self-service options, and reduces administrative burdens.


Additionally, Oracle introduced generative AI capabilities through the Oracle Clinical Digital Assistant, streamlining workflows and enhancing patient engagement. The acquisition of Cerner reflects Oracle's strategic shift towards healthcare, aiming to create a unified, patient-centred healthcare system.


Read the full story here.




AI Replaces Human Feedback


Reinforcement learning from human feedback (RLHF) has proven effective in training machine learning models, particularly large language models like ChatGPT. However, gathering high-quality human preference labels remains a challenge. Google Research has introduced a framework called reinforcement learning from AI feedback (RLAIF), which reduces reliance on human intervention.


RLAIF and RLHF exhibit similar performance, with a slight edge for RLHF. Both tend to produce longer summaries than supervised fine-tuning. While RLAIF seems a viable alternative to RLHF without human annotation, more experiments are needed across various natural language processing tasks. 
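The core move in RLAIF is replacing the human rater with an AI labeler that scores candidate responses and emits preference labels for reward-model training. The sketch below illustrates that labeling step only; the `toy_score` heuristic is a stand-in invented for this example, where a real pipeline would query an off-the-shelf LLM for the preference judgment.

```python
# Sketch of RLAIF-style preference labeling: an "AI labeler" scores two
# candidate responses and emits a preference label in place of a human rater.

def ai_preference_label(prompt, response_a, response_b, score_fn):
    """Return 'A' or 'B' for whichever response the labeler prefers."""
    score_a = score_fn(prompt, response_a)
    score_b = score_fn(prompt, response_b)
    return "A" if score_a >= score_b else "B"

def toy_score(prompt, response):
    # Toy stand-in for an LLM-based scorer: keyword overlap minus a
    # small length penalty (so shorter, on-topic summaries win).
    overlap = len(set(prompt.lower().split()) & set(response.lower().split()))
    return overlap - 0.01 * len(response)

label = ai_preference_label(
    "Summarise: reinforcement learning from AI feedback",
    "RL from AI feedback replaces human preference labels.",
    "This text is about something.",
    toy_score,
)
```

The resulting labels feed the same reward-model and RL training loop as RLHF; only the source of the preference signal changes.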


Read the full story here.




Wrong Comparison

A recent study claimed that AI systems, including ChatGPT, emit significantly less carbon dioxide equivalent (CO2e) than humans performing tasks like writing or making illustrations. The study's methodology raised eyebrows: it equates an hour of writing with an hour of breathing, ignoring the fact that humans continue to emit CO2 even when not writing, whereas AI systems can be turned off when not in use, making this a skewed comparison.


The study also neglects the environmental impact of creating and maintaining the large datasets AI relies on, as well as the ongoing operational costs of AI models. In essence, AI's creative processes rely on human work, making a direct emissions comparison misleading.


Read the full story here.


TAUSIF ALAM & AMIT RAJA NAIK

Friday, Oct 6, 2023 | Was this email forwarded to you? Sign up here


DOWNLOAD OUR MOBILE APP

Stay Connected

info@analyticsindiamag.com

© 2023 Analytics India Magazine

Facebook
Twitter
LinkedIn
Youtube
Instagram
Analytics India Magazine | 280, 2nd floor, 5th Main, 15 A cross, Sector 6, HSR layout Bengaluru, Karnataka 560102
