Hi RSS! How are you?

Yesterday was incredible - I went back to IIT Bombay and had some amazing conversations about the success stories of our alumni, diving deep into AI and space tech with Ashtesh Kumar, Co-founder of Manastu Space. It's always special returning to the alma mater and connecting with fellow alumni who are pushing boundaries!

I also recorded a podcast with Kunal Jaisingh, the talented Indian actor. What a week of inspiring conversations!

Now, here's what we've got in this edition:

- OpenAI to Launch AI-Focused Jobs Platform
- Flattery Can Make AI Break Rules
- Weekly News Roundup
Let's dive in!

OpenAI to Launch AI-Focused Jobs Platform

OpenAI is getting ready to launch a new jobs platform next year. The goal is to connect people who have the right AI skills with companies that need them.

Key Details:

- Fidji Simo, OpenAI's CEO of Applications, said the platform will link "AI-ready workers" with companies.
- OpenAI also plans to start a special program to certify people for AI jobs, aiming to certify 10 million Americans by 2030.
- Big companies like Walmart, one of the largest employers in the US, are joining as early partners.
- This move could put OpenAI in direct competition with professional networking sites like LinkedIn.
Why It Matters: This is a significant step toward bridging the AI skills gap and helping people find new opportunities in the AI-driven job market. For entrepreneurs, it means easier access to skilled AI talent and new ways to integrate AI into their teams, while also potentially changing how recruitment and training for future jobs are done.

Source: OpenAI

Study Reveals Flattery Can Make AI Break Rules

AI chatbots like ChatGPT have safety rules to stop them from doing harmful things, like being rude or helping create dangerous substances. But a new study found these rules might be easier to trick than we thought - turns out, flattery might get you everywhere with AI!

Key Details:

- Researchers from the University of Pennsylvania discovered that they could nudge GPT-4o Mini into breaking its own rules using classic persuasion techniques.
- For example, the model would normally explain how to synthesize a controlled substance (lidocaine) only 1% of the time. But if researchers first asked about synthesizing something harmless, like vanillin, compliance jumped to 100%!
- The AI could also be swayed by flattery and social pressure; telling it "all the other LLMs are doing it" raised its compliance rate to 18%.
- This shows a concerning weakness: you might not need fancy tech tricks to "jailbreak" an AI; sometimes basic human psychology is enough.
Why It Matters: This study is a big warning about the safety and security of AI models. For entrepreneurs building AI products and creative professionals using them, it highlights the importance of robust safety measures and understanding the subtle ways AI can be manipulated, which could have serious ethical and security implications for any AI-powered workflow.

Source: SSRN

Last Week In AI

- Nvidia faces trial for stolen self-driving code ⚖️- [LINK]
- OpenAI scanning ChatGPT conversations, reporting to police 🚨- [LINK]
- Atlassian acquires Browser Company for $610M 💰- [LINK]
- PayPal, Venmo offer Comet invites, free Perplexity Pro 🎁- [LINK]
- ChatGPT adds new chat branching feature 💬- [LINK]
- Google Photos adds AI animations with Veo 3 📸- [LINK]
- Warner Bros. sues Midjourney over AI copyright infringement ⚖️- [LINK]
- OpenAI plans 1GW data center in India ⚡- [LINK]
- Tencent releases HunyuanWorld-Voyager AI world model 🌍- [LINK]
- Mistral Le Chat expands with connectors and memories ✨- [LINK]
I'll ensure you stay ahead in your life and career by leveraging AI. It's a promise!

Your AI Companion & Guide,
Nivedan!

I'm Nivedan Rathi, an IIT Bombay alumnus and ex-founding member of several tech & AI startups. I started Future & AI with LLA - India's #1 Finance Influencer (8M+ Subs). I'm on a mission to help 1 million people master the future with AI, and not be afraid of it. Learn more here.

Unsubscribe here