AI Is Dangerous, but Not for the Reasons You Think | Sasha Luccioni | TED

AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.

Comments

People used to say the internet was dangerous and would destroy us. They weren't wrong. Most of us have a screen in front of us 90% of the day. AI will take us further down this rabbit hole, not because it is inherently bad but because humans lack self-control.

ellengrace

"We're building the road as we walk it, and we can collectively decide what direction we want to go in, together."
I will never cease to be amazed at the utter disregard that scientists and inventors have for *history*. To even imagine that we humans are going to "collectively" make any decision about how this tool -- and this time, it's AI, but there have been a multitude of tools before -- will be developed is ludicrous. It absolutely will be decided by a very few people, who will prioritize their own profit, and their own power.

robertjames

Ultimately, the problem with AI is not that it becomes sentient, but that humans use it in malicious ways. What she's talking about doesn't even take into consideration the case where the humans using AI WANT it to be biased. You feed it the right keywords and it will say what you want it to say. So, no, it's not just the AI itself that is a potential problem, but the people using it. Like any tool.

somersetcace

Art data can't be removed from an AI once the AI has "learned" its data. As I understand it, they would have to retrain the AI from scratch to discard that info. So if you find your work in a database used to train AI, it's already too late. Please correct me if I've misunderstood.

donlee_ohhh

Two people are falling out of a plane.
One says to the other, "Why worry about the hitting-the-ground problem that might hypothetically happen in the future, when we have a real wind-chill problem happening right now?"

donaldhobson

Yes, I believe we are way ahead of ourselves. We should really slow down and think about what we are doing.

michaelvelasquez

So, the answer to bad software is to create more software to police the bad software. What ensures some of the police software won't also be bad software?

bumpedhishead

For artists it should be a choice of "opting IN," NOT "opting OUT." If the artist chooses to allow their work to be assimilated by AI, they can choose to do that, i.e. opt in. Under the opt-out model that several platforms I've seen currently use, an artist uploading their work or creating an account might forget, or miss seeing, the button to refuse AI database inclusion. As an artist, I know we are generally excited and nervous to share our work with the world, but regret and anxiety over accidentally feeding the AI machine shouldn't have to be part of that unless the artist purposefully chooses it.

donlee_ohhh

So basically 'Stop worrying about future harm, real harm is happening right now.' and 'We need to build tools that can inform us about the pros and cons of using various A.I. models.'

sparkysmalarkey

Where it all falls down is that the individual won't get to choose a "good" AI model when AI is being used by a governmental entity, a corporation, etc., without their explicit consent or even their knowledge that AI has been part of a decision about them.

mawkernewek

If we assume that our world is heavily biased, it implies that the data used for AI training is biased as well. To achieve unbiased AI, we'd need to provide it with carefully curated or "unbiased" data. However, determining what counts as unbiased data introduces a host of challenges. 🤔

robleon

The reason people worry about "existential threats" from AI more than what's happening now is that the speed at which the technology is improving is practically beyond human comprehension. The chart she shows at 2:59 looks like a steady increase, but its scale is *logarithmic*, meaning the growth is actually exponential. If you look closely, the abilities of these things are increasing by nearly a factor of 10 every year. In only three years that means AI that can potentially be a _thousand_ times smarter than what we have currently. And that's not even counting any programming improvements.

So we could easily reach the point of no return not in decades but in just a few years. And by the time that happens, it will be FAR too late to do anything about it. And that's just the worst-case scenario. In the meantime it's still having profound effects on art, education, jobs, etc., not to mention the ability to use it to perpetrate identity theft, fraud, espionage and so on.

saken

The "dangers" identified here aren't insignificant, but they are actually the easiest problems to correct or adjust for. The title suggests these problems are more important or more dangerous than the generally well-understood problem of AI misalignment with human values. They are actually sub-elements of that problem, simply extensions of already existing human-generated data biases, and generally less potentially harmful than the doomsday scenarios we are most concerned about.

crawkn

Emissions caused by training AI models are negligible compared to things like heavy industry. I wonder if they also measured how much CO2 is produced by playing video games or by maintaining the whole internet.

Macieks

This is a crucial topic! Like today's internet, it has a good and bad side, so it really boils down to creating tools that help us develop better models. The tools that she made are a great start to addressing the biases in the future. It shows that sustainable, inclusive, more competent, and ethical AI models are possible.

KoiAcademy

My wife is a portrait artist. I just searched for her by name on SpawningAI, and the first two images were her paintings (undoubtedly obtained from her web-based portfolio).

mattp

01:07 🌍 AI has current impacts on society, including contributions to climate change, use of data without consent, and potential discrimination against communities.
02:08 💡 Creating large language models like ChatGPT consumes vast amounts of energy and emits significant carbon dioxide, which tech companies often do not disclose or measure.
03:35 🔄 The trend in AI is towards larger models, which come with even greater environmental costs, highlighting the need for sustainability measures and tools.
04:35 🖼 Artists and authors struggle to prove their work has been used for AI training without consent. Tools like "Have I Been Trained?" provide transparency and evidence for legal action.
06:07 🔍 Bias in AI can lead to harmful consequences, including false accusations and wrongful imprisonment. Understanding and addressing bias is crucial for responsible AI deployment.
07:34 📊 Tools like the Stable Bias Explorer help uncover biases in AI models, empowering people to engage with and better understand AI, even without coding skills.
09:03 🛠 Creating tools to measure AI's impact can provide valuable information for companies, legislators, and users to make informed decisions about AI usage and regulation.

dameanvil

Interesting. So the hypothesis here seems to be that all the electricity used to train large language models came from non-renewable sources, unless her own firm was doing the training. Also, AI models rank image results by probability given a user's query; the fact that some choices are less probable doesn't necessarily mean that other kinds of scientists aren't represented at all.

It sounds more like smart publicity!

streamer

The bit about AI (and other techs) that concerns me the most is the free-for-all personal data harvesting by corporations without any laws to control what they do with it. Only the EU has taken some steps to control this (GDPR), but no other nation protects the privacy of our data. These corporations are free to collect, correlate and sell our profiles to anyone. AI will enable data profiles that know us better than we know ourselves... all in a lawless environment.

nospamallowed

AI prejudice is the least of my concerns. A mother brain in charge of nukes, the grid, cameras, communication satellites, and killer drones = concern.

chetisanhart