Experts warn about rising risk of AI-led human extinction while countries set to establish AI guardrails

The other side of the convenience brought by AI advances: where do AI guardrails stand today?

The AI conversation has been heating up over the past few years, with some voicing concerns that guardrails are needed as technology development speeds up.
Today, we delve deeper into the other side of AI technology with our business correspondent Lee Rae-hyun.
Welcome, Rae-hyun.

Great to be back, Jung-min.

Tell us about these two faces of AI.

Well, Jung-min, there's no doubt that AI has its positives and can make life easier for us.
Think about OpenAI's chatbot ChatGPT, which provides information on various topics and assists with tasks such as composing emails or writing essays.
But some people are really worried about AI technology, including the CEO of the company that made ChatGPT.
Take a listen.

"If this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening."

Last week, top industry leaders, scientists, and experts, including OpenAI CEO Sam Altman and Google DeepMind chief executive Demis Hassabis, signed a one-sentence open letter to the public.
The letter, which expressed concerns about the risks of AI, stated: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
So, it actually warned of a possible AI-led human extinction.

There have been a lot of AI-related issues. Can you tell us more about those?

You're right, Jung-min. Take a look at what's happening in the music industry: in April, a track called "Heart on My Sleeve" by an artist named "Ghostwriter" appeared online but was removed from streaming services almost immediately.
The track sounded just like well-known pop stars Drake and The Weeknd, as it was made with an AI tool that mimicked the two stars' voices.
This is just one of many examples of the copyright issues surrounding AI-generated works.
Last year, Microsoft also faced a lawsuit over its AI-powered coding assistant GitHub Copilot for allegedly violating the rights of programmers who posted code under open-source licenses on GitHub.
A group of artists has sued AI image generators, too.
Now, copyright is not the only problem here, as some are also worried about fake news.
A report released by NewsGuard, a tool that tracks online misinformation, shows that it has identified almost 50 fake news websites generated by AI language models such as ChatGPT.
In fact, last month an AI-generated fake image was posted on Twitter: a picture of an explosion at the headquarters of the U.S. Department of Defense.
The image caused a brief dip in the U.S. stock market after it was quickly spread by other news outlets.
After the image was debunked, Bloomberg reported that this event was “possibly the first instance of an AI-generated image moving the market.”
Other issues include people losing their jobs to AI, as recent data shows that 5 percent of layoffs in the U.S. last month were attributed to AI.
There are even forecasts from Goldman Sachs that generative AI could replace as many as 300 million jobs globally.

Well, you mentioned AI guardrails earlier. Where are we in terms of AI measures?

Well, concerns are growing, which is why major countries are working to establish AI guardrails.
This was one of the topics discussed at the recent G7 summit where countries including the U.S. and Japan agreed to establish generative AI measures by the end of this year.
The EU, for instance, has been discussing the world's first-ever AI Act since 2021.
The act would define AI, classify risk levels and the corresponding responses, and appoint supervisory authorities.
China also revealed its blueprint for AI companies last month, while Washington, building on the AI Bill of Rights it established last year, is preparing more detailed guardrails and kicked off a subcommittee hearing on AI in May.
South Korea, meanwhile, has announced plans to strengthen its hyper-scale AI sector, but it has no AI regulations as of yet.
One expert says establishing AI legal ethics could help address cases where the technology backfires.
Take a listen.

"South Korea is also working on establishing what's called "AI legal ethics." There are some challenges, though, as they may restrict technological development. That's why last year ethical principles in AI development, usage, and distribution were set first."

So, many countries see eye to eye on the need for AI regulation, but it seems further discussion is required before anything is decided.

Alright, thank you for your report today, Rae-hyun.

My pleasure.

#AI #Guardrail #AI_guardrails #AI_technology #Development #Technology #ChatGPT #OpenAI #인공지능 #기술 #발전 #편의 #이면 #Arirang_News #아리랑뉴스

2023-06-08, 18:00 (KST)