Thursday, September 21, 2023

OpenAI’s High on Moral Compass


A chatbot generating complicated chemical compositions for biological weapons takes the tech revolution to a whole new (and dangerous) level. But is it moral to make such things accessible to anyone at the stroke of a key? The answer, of course, is no.


In the discussion around responsible AI, OpenAI has shown great maturity by acknowledging the moral issues with chatbots. Taking a step further, the company recently announced plans to set up a Red Teaming Network, inviting domain experts from diverse fields to help improve the safety of its models.


The network will be assembled from individuals whose particular skills can aid various stages of model and product development. The program offers flexibility: members need not participate in every project, and their time commitments vary, sometimes as little as 5-10 hours per year.


CEO Sam Altman has frequently emphasized the extensive safety testing that preceded the release of GPT-4, a rigorous process spanning six months. This testing phase notably involved domain experts forming red teams to assess the product's safety.


Red teamers also played a crucial role in the development of OpenAI's latest image-generation model, DALL·E 3, which was unveiled recently. These domain experts were actively engaged to enhance safety features and to address biases and misinformation generated by the model, as the company confirmed in its blog.


It's worth noting that the concept isn't unique to OpenAI. In July, Microsoft published an article on red teaming large language models, highlighting its significance in the responsible development of systems built on LLMs. Microsoft also revealed that it employed red-teaming exercises, alongside content filters and other mitigation strategies, for its Azure OpenAI Service models.


Read the full story here.




Larry says, Relax!


Contrary to the AI doomsayers, Oracle CTO Larry Ellison has expressed strong optimism about generative AI, countering the fear-mongering prevalent in the tech industry. At Oracle CloudWorld 2023, Ellison called generative AI "probably" the most important new computer technology. He named privacy as a primary concern, highlighting that Oracle ensures data privacy and security while enabling private model training. He also advocated responsible AI use and sustainability, pointing to Oracle's integration of its cloud with Microsoft Azure for efficiency and sustainability.


Read the full story here.




Killing the Internet with AI


Very subtly, Google has shifted its stance on AI-generated content in its recent 'Helpful Content Update'. Where it previously advocated 'helpful content written by people, for people', the updated phrasing, 'content created for people', acknowledges AI's role in content generation. This linguistic pivot underscores Google's recognition of AI's growing content-creation footprint, a contrast with its prior commitment to distinguishing AI-generated from human-written content.


This shift coincides with Google's efforts to combat AI-generated misinformation by contextualizing AI content in its Search results. As Google grapples with the duality of AI content, its actions could shape the future of the AI-driven digital landscape.


Read the full story here.




It’s the CPU Revolution


The usage of AI models is skyrocketing, shifting the cost center from training to inference. CPUs are becoming competitive for inference thanks to advantages such as workloads that distribute efficiently across low-cost hardware. Inference also calls for different optimization approaches and poses its own challenges, such as latency.


AMD's acquisition of Mipsology, an AI software company focused on inference, signals its push into AI inference and CPU-based solutions. Intel is leveraging its CPU capabilities and open-source contributions to challenge GPU-centric AI inference. NVIDIA, meanwhile, is enhancing AI inference with TensorRT-LLM, doubling H100 GPU performance for LLMs.


Read the full story here.


TAUSIF ALAM & AMIT RAJA NAIK

Thursday, Sep 21, 2023 | Was this email forwarded to you? Sign up here


DOWNLOAD OUR MOBILE APP

Stay Connected

info@analyticsindiamag.com

© 2023 Analytics India Magazine

Facebook
Twitter
LinkedIn
Youtube
Instagram
Analytics India Magazine | 280, 2nd floor, 5th Main, 15 A cross, Sector 6, HSR layout Bengaluru, Karnataka 560102
