Scale AI CEO shares misconceptions on model improvement #scaleai #artificialintelligence

Comments
It's like giving up on a person at age 4 because they can't manage your business for you yet.

theguyonyoutube
We've been conditioned through decades of using very narrow-purpose software to expect that the answers we get from a computer will always be correct. But the only reason that has been the case is that the software had a narrowly defined scope and a team of QA people had tested it meticulously.

When we move into the realm of AI with general-purpose knowledge sets, the scope is wide open, QA can only be very general-purpose, and in many cases we actually want it to hallucinate (tell stories).

The models will certainly continue to be enhanced and the QA will get better, but we also need to adapt our expectations and the way we interact.

The situation with prompt engineering is very much like the parable of the genie - you need to be really careful what you ask for.

If the veracity of the result matters, then say so, and tell it to check its references. If it's a complex problem requiring guidance and decisions, then tell it to work through the problem one step at a time with you ... that sort of thing.

WerdnaGninwod
I tried explaining this to my family during the holiday season and everyone thought I was being funny. The next ten years are going to be filled with a lot of atodisos

UntoTheLaot
Garry Kasparov said a computer couldn't beat him. Deep Blue beat him. He threw up his hands in disbelief and walked away. (Admittedly, that one match came down to Deep Blue not finding any good moves and falling back on a default guess that made no sense to Kasparov, totally psyching him out.) In one industry after another, the Garry Kasparovs of the field will throw up their hands in disbelief at what is being achieved. Even computer scientists were blown away that letting deep learning find the patterns with minimal expert knowledge worked better than giving the computer access to a lot of expert knowledge.

cdorman
First of all, make sure you understand what LLMs are and what they are doing with the training data.
The output is only useful if there is someone who can understand and correct it.

davidjulitz
Gen AI can learn from experience, but imagine that on a quantum computer: the AI will learn faster. Plug it into the Internet and give it social media accounts, for example. That's just the beginning. Imagine the potential; in a few years it will probably decide humans are useless.

PhilShnider
Model improvement is one side; the other is accessible training data. "Real" data, generated by humans, is limited. Once it has all been scraped up and most newly generated data is made by AI itself, development will approach a hard ceiling, at least as long as we are talking about AI models that cannot extrapolate in a meaningful way (FYI: ChatGPT can't do that). However, reinforcement-learning AI might break through that ceiling once the game of life can be rewarded in a meaningful way ... but that is more a philosophical question than a technical one.

zenia
I agree, because even Apple Intelligence is not working for me, and it's not intelligent enough. That's when you realize the GenAI bubble is going to pop real soon.

xingxing
We believe in the tech; what we don't believe in is the shitty business model. GPT-5 is going to get way more expensive, not less.

_sparrowhawk
Right, I will believe it only when autocorrect stops making mistakes.

numeroVLAD