In the discussion around responsible AI, OpenAI has acknowledged the moral issues raised by chatbots. Taking a step further, the company recently announced plans to set up a Red Teaming Network, inviting domain experts from diverse fields to help improve the safety of its models.
The network will be assembled from individuals whose particular skills can aid various stages of model and product development. The program offers flexibility: members need not participate in every project, and time commitments vary, sometimes as low as 5-10 hours per year.
CEO Sam Altman has frequently emphasized the extensive safety testing that preceded the release of GPT-4, a rigorous process spanning six months. This phase notably involved domain experts forming red teams to assess the product's safety.
Red teamers also played a crucial role in the development of OpenAI's latest image-generation model, DALL·E 3, which was unveiled recently. As the company confirmed in its blog, these domain experts were actively engaged to enhance safety features and to address biases and misinformation generated by the model.
It's worth noting that the concept isn't unique to OpenAI. In July, Microsoft published an article on the subject of red teaming for large language models, highlighting its significance in ensuring the responsible development of systems utilizing LLMs. Microsoft also revealed that red teaming exercises were employed, incorporating content filters and other mitigation strategies for its Azure OpenAI Service models.
Read the full story here.
Larry says, Relax!
Contrary to the AI doomsayers, Oracle CTO Larry Ellison has expressed strong optimism about generative AI, countering the fear-mongering prevalent in the tech industry. At Oracle CloudWorld 2023, Ellison deemed generative AI "probably" the most crucial new computer technology. He identified privacy as a primary concern, noting that Oracle is ensuring data privacy and security while facilitating private model training. He also advocated responsible AI use, pointing to Oracle's dedication to integrating its cloud with Microsoft Azure for efficiency and sustainability.
Read the full story here.
Killing the Internet with AI
Very subtly, Google has shifted its stance on AI-generated content in its recent 'Helpful Content Update'. Previously advocating 'helpful content written by people, for people', the updated phrasing, 'content created for people', acknowledges AI's role in content generation. The linguistic pivot signals Google's recognition of AI's growing role in content creation, a contrast to its earlier commitment to distinguishing between AI-generated and human-written content.
This shift coincides with Google's efforts to combat AI-generated misinformation by contextualizing AI content in its Search results. As Google grapples with the duality of AI content, its actions bear the potential to shape the future of the AI-driven digital landscape.
Read the full story here.
It’s the CPU Revolution