
Monday, July 31, 2023

These new tools could help protect our pictures from AI

Sponsored by Hedonova


The Algorithm

By Melissa Heikkilä • 7.31.23
 

Welcome to The Algorithm!

Earlier this year, when I realized how ridiculously easy generative AI has made it to manipulate people's images, I maxed out the privacy settings on my social media accounts and swapped my Facebook and Twitter profile pictures for illustrations of myself.

 
The revelation came after playing around with Stable Diffusion–based image editing software and various deepfake apps. With a headshot plucked from Twitter and a few clicks and text prompts, I was able to generate deepfake porn videos of myself and edit the clothes out of my photo. As a female journalist, I've experienced more than my fair share of online abuse. I was trying to see how much worse it could get with new AI tools at people's disposal.

While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it an even bigger problem. These systems are much easier to use than previous deepfake tech, and they can generate images that look completely convincing.
 
Image-to-image AI systems, which allow people to edit existing images using generative AI, "can be very high quality … because it's basically based off of an existing single high-res image," Ben Zhao, a computer science professor at the University of Chicago, tells me. "The result that comes out of it is the same quality, has the same resolution, has the same level of details, because oftentimes [the AI system] is just moving things around." 

You can imagine my relief when I learned about a new tool that could help people protect their images from AI manipulation. PhotoGuard was created by researchers at MIT and works like a protective shield for photos. It alters them in ways that are imperceptible to us but stop AI systems from tinkering with them. If someone tries to edit an image that has been "immunized" by PhotoGuard using an app based on a generative AI model such as Stable Diffusion, the result will look unrealistic or warped. Read my story about it. 
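For the technically curious, the core idea behind "immunizing" an image can be sketched in a few lines: add a tiny, bounded perturbation that pushes the image's latent representation far from where it started, so a generative editor operating on latents misbehaves. The toy below is a hypothetical illustration, not PhotoGuard's actual code; PhotoGuard attacks Stable Diffusion's real encoder, while the random linear map here is purely a stand-in.

```python
import numpy as np

# Stand-in "encoder": a fixed random linear map playing the role of a
# diffusion model's image encoder. Illustrative only.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))

def encode(x):
    return W @ x                         # toy latent representation

def immunize(image, eps=0.03, steps=100, lr=0.01):
    """Gradient-ascent search for a perturbation with |delta| <= eps that
    maximizes the latent-space distance from the original image."""
    target = encode(image)
    delta = rng.uniform(-1, 1, size=image.shape) * eps * 0.1  # small random start
    for _ in range(steps):
        # gradient of ||encode(image + delta) - target||^2 w.r.t. delta
        grad = 2 * W.T @ (encode(image + delta) - target)
        delta = np.clip(delta + lr * np.sign(grad), -eps, eps)
    return np.clip(image + delta, 0.0, 1.0)

image = rng.uniform(size=64)             # a flattened toy "photo"
protected = immunize(image)

print(np.abs(protected - image).max())                    # pixels barely move
print(np.linalg.norm(encode(protected) - encode(image)))  # latent moves a lot
```

The pixel change stays within the imperceptibility budget `eps`, which is why the protected photo looks identical to us while an AI editing system "sees" something quite different.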

Another tool that works in a similar way is called Glaze. But rather than protecting people's photos, it helps artists prevent their copyrighted works and artistic styles from being scraped into training data sets for AI models. Some artists have been up in arms ever since image-generating AI models like Stable Diffusion and DALL-E 2 entered the scene, arguing that tech companies scrape their intellectual property and use it to train such models without compensation or credit.

Glaze, which was developed by Zhao and a team of researchers at the University of Chicago, helps them address that problem. Glaze "cloaks" images, applying subtle changes that are barely noticeable to humans but prevent AI models from learning the features that define a particular artist's style. 

Zhao says Glaze corrupts AI models' image generation processes, preventing them from spitting out an infinite number of images that look like work by particular artists. 

PhotoGuard has a demo online that works with Stable Diffusion, and artists will soon have access to Glaze. Zhao and his team are currently beta testing the system and will allow a limited number of artists to sign up to use it later this week. 

But these tools are neither perfect nor enough on their own. You could still take a screenshot of an image protected with PhotoGuard and use an AI system to edit it, for example. And while they prove that there are neat technical fixes to the problem of AI image editing, they're worthless on their own unless tech companies start adopting tools like them more widely. Right now, our images online are fair game to anyone who wants to abuse or manipulate them using AI.

The most effective way to prevent our images from being manipulated by bad actors would be for social media platforms and AI companies to provide ways for people to immunize their images that work with every updated AI model. 

In a voluntary pledge to the White House, leading AI companies have pinky-promised to "develop" ways to detect AI-generated content. However, they did not promise to adopt them. If they are serious about protecting users from the harms of generative AI, that is perhaps the most crucial first step. 

Deeper Learning

Cryptography may offer a solution to the massive AI-labeling problem

Watermarking AI-generated content is generating a lot of buzz as a neat policy solution to mitigating the potential harm of generative AI. But there's a problem: the best options currently available for identifying material that was created by artificial intelligence are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

Meet C2PA: Launched two years ago, it's an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as "provenance" information. The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. Read more from Tate Ryan-Mosley here.
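The cryptographic idea at the heart of a provenance scheme like this can be sketched simply: hash the content bytes, put that hash in a manifest describing who or what made the content, and sign the manifest, so any later edit breaks the binding. The sketch below is a hypothetical toy, not the C2PA protocol itself; real C2PA embeds X.509 certificates and COSE signatures in the file, whereas this uses a shared-key HMAC just to stay standard-library-only.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # placeholder; a real signer holds a private key

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind a 'who made this' claim to the content bytes via a hash + signature."""
    manifest = {
        "claim_generator": creator,  # who, or what, created the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    hash_ok = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

photo = b"...image bytes..."
m = make_manifest(photo, "ExampleCam v1.0")
print(verify(photo, m))              # True: content matches its manifest
print(verify(photo + b"edited", m))  # False: any edit breaks the binding
```

This is the "nutrition label" in miniature: the manifest travels with the content, and anyone can check whether the label still matches what's inside.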
 

Bits and Bytes

The AI-powered, totally autonomous future of war is here
A nice look at how a US Navy task force is using robotics and AI to prepare for the next age of conflict, and how defense startups are building tech for warfare. The military has embraced automation, even though many thorny ethical questions remain. (Wired)

Extreme heat and droughts are driving opposition to AI data centers 
The data centers that power AI models use up millions of gallons of water a year. Tech companies are facing increasing opposition to these facilities all over the world, and as natural resources are growing scarcer, governments are also starting to demand more information from them. (Bloomberg)

This Indian startup is sharing AI's rewards with data annotators 
Cleaning up data sets that are used to train AI language models can be a harrowing job with little respect. Karya, a nonprofit, calls itself "the world's first ethical data company" and is funneling its profits to poor rural areas in India. It offers workers compensation many times above the Indian average. (Time)

Google is using AI language models to train robots
The tech company is using a model trained on data from the web to help robots execute tasks and recognize objects they have not been trained on. Google hopes this method will make robots better at adjusting to the messy real world. (The New York Times)

 
ADVERTISEMENT
Sponsored by Hedonova

Hedonova outperforms S&P 500: Investors enjoy 3X higher returns

Hedonova, a hyper-diversified hedge fund open to accredited investors, has outperformed the S&P 500 by 17%.

Alternative investments are gaining momentum, with 67% of institutional investors predicting that a portfolio including 20% alternatives will outperform the traditional 60/40 stock-bond investment mix with lower volatility.

Hedonova invests in various alternative assets such as equipment finance, litigation finance, startups, wine, and art. The SEC-regulated fund is backed by the likes of Morgan Stanley and is open to accredited investors with a low minimum investment of $5,000.

The fund was also awarded Best Multi-Strategy Hedge Fund at Hedgeweek European Awards 2023.

Learn more →
 
Subscribe for full access

Subscribe for full access to the Accessibility issue and learn how technology can work for everyone. Plus, get in-depth stories on assistive devices, sonification, immigration, and climate change.

SUBSCRIBE NOW

Was this newsletter forwarded to you, and you'd like to see more?

Sign up today →
LinkedIn
Twitter
Facebook
View in browser | This email was sent to alexvarboffin.abbb@blogger.com.

Manage your preferences | Unsubscribe | Terms of Service | Privacy Policy

MIT Technology Review · 196 Broadway, 3rd fl, · Cambridge, MA 02139 · USA

Copyright © 2023 MIT Technology Review, All rights reserved.

Opt out of all promotional emails and newsletters from MIT Technology Review

Cryptography may offer a solution to the massive AI-labeling problem


The Download

Your daily dose of what's up in emerging technology

By Rhiannon Williams • 07.31.23

Hello! Today: the search for a better way to label AI has thrown up an interesting solution: cryptography. Plus Twitter isn't Twitter anymore, but what is X supposed to be?

Cryptography may offer a solution to the massive AI-labeling problem

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label AI-generated content.

There's a big problem, though: identifying material that was created by AI is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate.

But another approach has been attracting attention lately: C2PA. It's an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content. The problem is, it's far from a fix-all solution. Read the full story.

—Tate Ryan-Mosley

 

If you're interested in reading more about the search for a better way to label AI, check out the latest issue of The Technocrat, Tate's weekly newsletter covering policy and power in Silicon Valley. Sign up to receive it in your inbox every Friday.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 Twitter as we knew it is dead
What comes next, in its new guise of X, is anyone's guess. (Wired $)
+ It has reinstated Kanye West's account after an eight-month ban. (WP $)
+ We're not tweeting anymore—we're just posting. (The Verge)
+ Why doesn't Elon Musk understand that he needs permits? (NYT $) 
+ We're witnessing the brain death of Twitter. (MIT Technology Review)

2 It looks like another covid wave is brewing
Cases are slowly creeping up, but we still don't know if covid exhibits a seasonal pattern. (The Atlantic $)
+ Cases are on the rise in the UK, too. (The Guardian)
 
3 Starlink controls nearly all satellite internet services
That level of power doesn't bode well for international relations. (NYT $)
+ Starlink signals can be reverse-engineered to work like GPS. (MIT Technology Review)

4 Amazon is asking some of its remote workers to resign
If they can't join office hubs, they're being asked to leave. (Insider $)
+ Things aren't great for UPS drivers either. (The Atlantic $)
 
5 Evangelical Christians are spying on sex workers online
Their surveillance tactics are helping police to obtain search warrants. (The Intercept)
+ Evangelicals are looking for answers online. They're finding QAnon instead. (MIT Technology Review)
 
6 Why e-bikes keep catching fire
Though lithium-ion batteries are generally safe. (WSJ $)
+ The speed limit on certain e-bikes can be circumvented. (NYT $)
 
7 Military start-ups are booming
AI is supercharging weapons and systems, with potentially deadly consequences. (FT $)
+ Silicon Valley has been capitalizing on the war in Ukraine. (MIT Technology Review)
 
8 Creating prosthetic arms has always been challenging
The Boston Arm was among the first to harness electrical signals from its wearer's muscles. (IEEE Spectrum)
+ These prosthetics break the mold with third thumbs, spikes, and superhero skins. (MIT Technology Review)
 
9 3D-printing is helping to protect rare species
By providing convincing replicas of animal body parts used to decorate traditional headdresses. (The Guardian)

10 Please don't drink laundry detergent
Despite what you might see on TikTok. (Vox)

Quote of the day


"To them, we are like robots rather than people. The little things that make us human, you can feel them being ground out of you."

—An anonymous Amazon worker in the UK describes the punishing reality of life inside the company's warehouses to the Guardian.

The big story

Eight ways scientists are unwrapping the mysteries of the human brain

August 2021

There is no greater scientific mystery than the brain. It's made mostly of water; much of the rest is largely fat. Yet this roughly three-pound blob of material produces our thoughts, memories, and emotions. It governs how we interact with the world, and it runs our body.

Increasingly, scientists are beginning to unravel the complexities of how it works and understand how the 86 billion neurons in the human brain form the connections that produce ideas and feelings, as well as the ability to communicate and react. Here's our whistle-stop tour of some of the most cutting-edge research—and why it's important. Read the full story.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet 'em at me.)

+ The cast of the US version of The Office were avid readers of an early fansite (but if you haven't seen the British original, you really should.)
+ Timelapses of cakes rising are my latest obsession. 🍰
+ Bring back the women's restroom lounge!
+ A pasta recipe for every week of the year is true public service journalism.
+ Clear your mind and your schedule—it's time to take the perfect weekend nap.

Save 25% when you subscribe today

Limited Time Only: Save 25%

Subscribe for unlimited access to our latest Accessibility issue and learn all the ways technology can help build toward a more inclusive future.

SUBSCRIBE & SAVE 25%

Top image credit: SARAH ROGERS/MITTR | GETTY IMAGES

Please send chaise longues to hi@technologyreview.com.

Follow me on Twitter at @yannon_. Thanks for reading!

—Rhiannon

