ChatGPT's HUGE Problem


Free-to-use, exceptionally powerful artificial intelligences are available to more people than ever, seemingly making some kind of news every day. The problem is, the public doesn't realize the danger of ascribing so much power to systems we don't actually understand.

✅ MANDATORY LIKE, SUBSCRIBE, AND TURN ON NOTIFICATIONS

📲 FOLLOW ME ON SOCIETY-RUINING SOCIAL MEDIA:

😎: Kyle
✂: Charles Shattuck
🤖: @Claire Max
🎼: Mëydan

Comments

This reminds me of a story where Marines trained an AI sentry to recognize people trying to sneak around. When they were ready to test it, the Marines tricked the sentry by sneaking up on it under a tree limb and a cardboard box, à la Metal Gear Solid. The AI only knew how to identify people-shaped things, not sneaky boxes.

Marjax

One of the best examples of this concept is the AI that was taught to recognize skin cancer but, it turned out, hadn't learned to at all. Instead, it learned that a ruler in a photo of skin indicated a medical image, and it began diagnosing other pictures of skin with rulers as cancerous, because it recognized the ruler, not the cancer.

Vaarel
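
A minimal sketch of the shortcut failure described above, assuming NumPy and scikit-learn are available. The data is entirely synthetic: the "ruler" is just a binary flag that tracks the label almost perfectly during training but appears at random in deployment.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                           # 1 = cancerous, 0 = benign (toy labels)
signal = y + rng.normal(0, 2.0, n)                  # weak genuine signal, lots of noise
ruler = np.where(rng.random(n) < 0.95, y, 1 - y)    # "ruler in photo" flag, matches label 95% of the time

X_train = np.column_stack([signal, ruler])
model = LogisticRegression().fit(X_train, y)

# Deployment: rulers now show up independently of the label.
y_new = rng.integers(0, 2, n)
X_new = np.column_stack([y_new + rng.normal(0, 2.0, n), rng.integers(0, 2, n)])

print("training accuracy:  ", model.score(X_train, y))
print("deployment accuracy:", model.score(X_new, y_new))    # drops toward chance

The model scores around 95% while the shortcut holds and falls apart the moment it doesn't, having never learned the thing it was supposedly detecting.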

I love how an old quote still holds, and even better for AI: "The best swordsman does not fear the second best; he fears the worst, since there's no telling what that idiot is going to do."

JoseMartinez-pndy

The biggest achievement wasn't the AI. It was convincing the public that it was actual artificial intelligence.

Doktor_Jones

I’m not afraid of the AI who passes the Turing test. I’m afraid of the AI who fails it on purpose.

Parthornax

I like to think of the current age of AI as like training a dog to do tricks. The dog doesn't understand the concept of a handshake, its implications, or its meaning, but it still gives the owner its paw, because we give it a positive reaction when it does so.

Xendium

I remember reading that systems like this are often more likely to be defeated by a person who has no idea how to play the games they are trained on, because they are usually trained on games played by experts. Thus, when they go up against somebody with no strategy or proper knowledge of the game theory behind moves and techniques, the AI has no real data to fall back on.

The old joke "my enemy can't learn my strategy if I don't have one" somehow went full circle into being applicable to AI.

Elbenzo

This has actually given me a much greater understanding of "Dune".
When I first read it, I thought it was just a bit of fun sci-fi that they basically banned complex computers and trained people to be information stores instead.
But with all this AI coming out now... I get it.

mafiacat

Great video! I am an ML engineer. For many reasons, it's quite common to encounter models in real production that do not actually work. Even worse, it is very difficult for even technical people to understand how they are broken. I enjoy finding these exploits in the data, because understanding the data often leads to huge breakthroughs in model performance. Model poisoning is a risk that not many people talk about. Like any other computer code, at some level this stuff is broken and will fail specific tests.

troymann
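
One hedged, illustrative way to hunt for the kind of break described above is a permutation check: shuffle one column at a time and see what the model actually leans on. The feature names and the "leak" here are invented for the sketch, which assumes NumPy and scikit-learn.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
y = rng.integers(0, 2, n)
honest = y + rng.normal(0, 1.5, n)    # genuine but noisy predictor
leak = y + rng.normal(0, 0.05, n)     # e.g. a post-outcome column that slipped into training
X = np.column_stack([honest, leak])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
base = model.score(X_te, y_te)

for i, name in enumerate(["honest_feature", "suspicious_feature"]):
    X_perm = X_te.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])    # break this one column
    print(name, "accuracy drop:", round(base - model.score(X_perm, y_te), 3))

A model whose accuracy collapses when one suspicious column is shuffled is "working" for reasons that won't survive production.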

I learned that in data ethics, "transaction transparency" means "all data-processing activities and algorithms should be completely explainable and understood by the individual who provides their data." As I was learning about that in the Google DA course, I always had a thought in the back of my head: how are these algorithms explainable when we don't know how a lot of these AIs form their networks? Knowing how something generally works is not the same as knowing how a specific AI really works. This video really confirmed that point.

Leonlion

One of the things I've been saying for a while is that one of the biggest problems with ChatGPT and similar systems is that they're extremely good at producing plausible-sounding statements, and they're right often enough to lure people into trusting them when they're wrong.

Eldin_

Funnily enough, I find this kind of "human". I've seen it so many times in high school and university: instead of learning, people memorize, so when they're asked a seemingly simple question phrased differently than usual, they get extremely confused, even going as far as to say they never studied anything like that. It's a fundamental issue in the school system as a whole.

So it's funny to me that it ends up reflected in AI as well.

Understanding a subject is always superior to memorizing it.

someguy

When I used to tutor math, I'd always try to test the kids' understanding of concepts to make sure they weren't just memorizing the series of steps needed to solve that particular kind of problem.

DogFoxHybrid

Another fun anecdote is the DARPA test between an AI sentry and human Marines.
The AI was trained to detect humans approaching (and then shoot them, I suppose).
The Marines used Looney Tunes tactics, like hiding under a cardboard box, and defeated the AI easily.

As for ChatGPT, Midjourney & co., I'm waiting for the lawsuits over the copyright of the training material. I've no idea where that will land.

XH

I am a student, and I've got to admit, I've used ChatGPT to help with some assignments.
One of those assignments had a literature part, where you read a book that is supposed to help you understand the current project we're working on.
I asked ChatGPT if it could bring me some citations from the book to use in the text, and it gave me one.
But just to stress-test it, I copied the quote and searched for it in the e-book to see if it was there. And it wasn't.
The quote itself was genuinely helpful for writing about certain concepts that were key to understanding the course, and I knew it was right, but it was not in the book; ChatGPT had just made the quote up.
I even asked it for the exact chapter, page, and paragraph the quote came from.
It gave me a chapter, but one completely unrelated to the term I was writing about at the time, and the page number fell in a completely different chapter than the one it had named.
The AI had in principle just lied to me: it gave sources, but they were incorrect and not factual at all.

So yeah, gonna stop using ChatGPT for assignments lol

pinkpuff
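
The manual check the student describes can be automated. A stdlib-only sketch, with toy placeholder strings standing in for the e-book and the model's claimed quote: slide a window across the text and fuzzy-match.

import difflib

def best_match(quote, text):
    # compare the quote against every window of the same word length
    words = text.split()
    span = len(quote.split())
    best = (0.0, "")
    for i in range(max(1, len(words) - span + 1)):
        window = " ".join(words[i:i + span])
        score = difflib.SequenceMatcher(None, quote.lower(), window.lower()).ratio()
        best = max(best, (score, window))
    return best

# placeholders; in practice book_text would be the full e-book loaded from a file
book_text = "Memorizing steps is not the same as understanding the method behind them."
claimed_quote = "true understanding can never be reduced to memorized steps"

score, window = best_match(claimed_quote, book_text)
print(f"best similarity {score:.2f} at: {window!r}")

A best score well below roughly 0.9 suggests the "citation" never appears in the text at all.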

As a computer scientist with a passing understanding of ML-based AI, I was concerned this would focus on the unethical use of mass amounts of data, but I was pleasantly surprised that it made EXACTLY the point I've had to explain to many friends. Thank you so much; this point needs to be spread across the internet so badly.

isaiahhonor

One of the biggest issues is the approach. These AIs are not learning, they're being trained. They're not reasoning about a situation, they're reacting to it, like a well-trained martial artist: no time to think, and it works well enough most of the time. But when martial artists make mistakes, they reflect and practice. We need to recognize these systems for what they are: useful tools. They shouldn't have the last say; they work well enough to surface potential issues, but they still need human review when push comes to shove.

BenjaminCronce

I'm actually deeply worried by the rise of machine learning for studying large data sets in research. Whilst these systems can 'discover' potential relationships, they are nothing but correlation engines, not causation discoverers, and I fear the distinction is being lost.

linamishima
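
A toy illustration of that distinction, assuming only NumPy: a hidden factor Z drives both X and Y, so a correlation engine sees a strong X-Y relationship even though forcing X to change does nothing to Y.

import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=100_000)                    # hidden common cause (confounder)
x = z + rng.normal(scale=0.5, size=z.size)
y = z + rng.normal(scale=0.5, size=z.size)
print("observed corr(X, Y):", round(float(np.corrcoef(x, y)[0, 1]), 2))    # strong, ~0.8

# Intervene: set X ourselves, cutting its link to Z. Y doesn't budge.
x_forced = rng.normal(size=z.size)
print("corr after intervening on X:", round(float(np.corrcoef(x_forced, y)[0, 1]), 2))    # ~0.0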

I like AI systems for regression problems, because we understand how and why those work. I also think that things like Copilot are going in a better direction: the idea is that it's an assistant that can help with coding, but it does not replace the programmer at all and doesn't even attempt to. Even Microsoft will tell you that's a bad idea. These things make mistakes, a lot of mistakes, but used like a pair programmer, you can take advantage of the strengths and mitigate the weaknesses.

What really scares me are people who trust these systems. I had a conversation with someone earlier today about whether they could just trust the AI to write all the tests for some code, and it took a while to explain that you absolutely cannot trust these systems for any task. They should only be used alongside a human with rapid feedback cycles.

Immudzen
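
A contrived example of why that trust is misplaced: a generated test can look plausible and pass while checking nothing, because it derives its expectation from the very code under test. The function and its bug are invented for the sketch.

def apply_discount(price, percent):
    return price * (1 + percent / 100)    # bug: adds the discount instead of subtracting it

def test_apply_discount():
    # plausible-looking but tautological: the expected value comes from the buggy code itself
    expected = apply_discount(100.0, 20.0)
    assert apply_discount(100.0, 20.0) == expected

test_apply_discount()
print("test passed; 20% 'discount' on 100 ->", apply_discount(100.0, 20.0))    # prints 120.0

The test suite is green and the bug ships, which is exactly why generated tests need a human in the loop.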

This was brilliant. Previously, my concerns about these AIs were their widespread use and possible (and very likely) abuse for financial and economic gain without sufficient safety standards and checks and balances (especially against fake information), plus making millions of jobs obsolete. Now I have a whole new concern...

... aside from Microsoft firing its team in charge of AI ethics. Yeah... that isn't concerning.

minutestomidnight