We're 'at least a decade away' from solving AI, says NYU Professor Gary Marcus

Gary Marcus, New York University professor emeritus, joins 'Squawk on the Street' to discuss artificial intelligence implications, the future of generative AI, investor decisions, and more.
Comments

Finally, somebody who tells it like it is. AI is the future, but we are trying to run before we can even stand up.

Twyscape

He's correct, and if you look at all the newest models being released, it's a race for faster AI and more ways to combine existing technology, but the models aren't necessarily getting a lot smarter. They are a game changer for certain things, like responding to customers and basic writing, and they can improve accuracy somewhat, but it simply isn't anywhere comparable to human creativity and critical thinking.

Zach-ov

Artificial intelligence was cited for 800 job cuts in April, the highest monthly total since Challenger first tracked job cuts for this reason in May 2023, when 3,900 cuts were attributed to it. Since then, companies have cut 5,430 jobs due to AI replacing workers.

HardKore

I agree with his assessment that AI is overhyped, with one exception: I believe the current level of investment in AI is necessary to address the hallucination issue, improve the software's power efficiency, and identify more relevant consumer applications. My business relies on AI for many business and client interactions, and it has likely replaced one full-time employee I would otherwise have hired. However, I must supervise the results it generates because the errors and hallucinations can be extremely frustrating. I do not use AI for any critical client interactions; I can't take the chance with the hallucinations.

haroldpierre

If you view the advancement of generative computing (probably what “AI” really means right now) as just the introduction of LLMs, which are only possible because they USE generative computing, then he's probably pretty accurate, if a bit negative, about how companies will use LLMs to produce profit in the next ~5 years. But if you recognize that generative computing is an entirely new technology with vastly untapped capabilities, and that we are only in the first few years of its introduction into our world, then it seems a bit silly to look only at LLMs to assess the impact the technology will have on our economy and society. If anything, LLMs should be instructional, preparing us for how big the leaps now possible with the underlying technology can be; we have no idea what else will be made possible, or how quickly it can now happen. This is the real reason companies are investing so heavily: it's not that they want to make a bunch of competing GPTs, they want to discover the next application of generative computing.

GenomiMontavani

CNBC people are not happy with what he is saying.

yunusbarna

In my business I am constantly pitched by SaaS companies that use "AI". I always ask them to walk me through real-world applications, with data logs and the like, to prove the efficacy of their expensive "AI" service. None have ever taken me up on it. Just like crypto, this bubble will pop, and in 5-10 years the real companies that leverage this technology will emerge.

tackthekack

AI is not AGI yet, but it does solve a ton of problems. It's not the HoloLens. It is great for analysis, writing, brainstorming, and much more. Really fantastic tool. It is not a solution in search of a problem. Literally everyone I know uses it. Nobody I know uses driverless cars. Never compare by analogy.

Cool-gkmc

No, I would not agree that driverless cars have failed overall. While there have certainly been challenges and setbacks in the development and deployment of autonomous vehicle technology, significant progress has been made and driverless vehicles are very much an active area of research and development.
Some key points about the current state of driverless car technology:

Companies like Waymo, Cruise, Tesla, and others have autonomous vehicles operating in limited geographic areas and conditions, providing ride services to the public.
Advanced driver assistance systems (ADAS) with increasing automated capabilities are becoming more common in new vehicle models from traditional automakers.
Investment and research into driverless technology remains robust from automakers, tech companies, and startups alike.
Regulatory bodies are actively working on developing frameworks to eventually enable broader deployment of fully self-driving vehicles.
Technical hurdles remain, especially around operating in extreme conditions, edge cases, and achieving the ultra-high safety levels required for full autonomy everywhere.

So while the road to full Level 5 autonomy has been longer and more challenging than initially predicted by some, driverless vehicles are very much an active pursuit that has already achieved significant real-world milestones, even if broad consumer deployment is still years away.
Unless Gary Marcus made these comments very recently, I would be somewhat skeptical of a blanket characterization that "driverless cars failed." The technology is still rapidly iterating and advancing, despite the admitted difficulty of the challenge. More nuance is likely required in discussing the progress made so far.

HardKore

The journey of a thousand innovations begins with a single line of code. Here's to embracing the uncertainty and marveling at the surprises AI has in store for us, whether it's a decade away or just around the corner.

EasyAIForAll

Addressing AI hallucinations is an important challenge that can potentially be mitigated to a large degree using current generative AI models and techniques, without necessarily requiring artificial general intelligence (AGI).
Generative AI models like large language models and diffusion models have already demonstrated an ability to generate highly coherent and relevant text, images, etc. when properly trained on curated datasets. With improved training data filtering, retrieval augmentation, constitutional training objectives, and techniques like rejection sampling, we may be able to significantly reduce hallucinations from generative models.
That said, AGI that has a deeper, more general, and multi-modal understanding of the world could potentially solve hallucinations more definitively by having a unified world model to draw from. An AGI system may be less prone to simple pattern completion errors that lead to hallucinations.
However, AGI is still a grand challenge with immense unsolved problems around generalization, reasoning, grounding in reality, and avoiding broader failures beyond just hallucinations. So while AGI could be the ultimate solution, making continued progress with current generative AI to detect and mitigate hallucinations is likely the most viable path forward in the near-to-medium term.
In summary - while AGI may represent the most complete solution by gaining a comprehensive understanding, we can likely make significant strides in reducing AI hallucinations using enhanced generative AI models and techniques without necessarily solving the full AGI challenge first. But both avenues of research are important going forward.
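The rejection-sampling idea mentioned above can be sketched very roughly: draw several candidate answers and keep the majority answer only when enough samples agree, abstaining otherwise. This is a minimal illustration, not a real implementation; the `generate` function below is a hypothetical stand-in for repeatedly sampling an actual language model.

```python
from collections import Counter

# Hypothetical stand-in for sampling a language model several times;
# one of the five canned candidates represents a "hallucination".
CANDIDATES = ["Paris", "Paris", "Lyon", "Paris", "Paris"]

def generate(prompt: str, i: int) -> str:
    return CANDIDATES[i % len(CANDIDATES)]

def sample_with_rejection(prompt: str, n: int = 5, min_agreement: float = 0.6):
    """Keep the majority answer only if enough samples agree;
    otherwise abstain (return None) rather than risk a hallucination."""
    answers = [generate(prompt, i) for i in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best if count / n >= min_agreement else None

print(sample_with_rejection("What is the capital of France?"))  # Paris (4/5 agree)
```

The point of the design is that disagreement among samples is treated as a signal of unreliability, so the system can decline to answer instead of confidently emitting an error.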

HardKore

Most people don’t expect AGI overnight. The development of A.I. will be gradual, but it will be the most rapidly developing “gradual” we’ve ever seen. And as it was almost impossible to predict something like Facebook or Uber with the advent of the internet, there are many future applications of A.I. that will exist within a few years that are hard to imagine now.

People have to remember that today is likely *the worst that A.I. will ever be* and it’s pretty good already.

DynamicUnreal

“It is a solution in search of a problem.” Just what I have been thinking over the last year and a half.

krizz

Listen to Gary Marcus at your own risk.

reedriter

Self-driving cars have also been in operation in Arizona since 2017.

Does Gary actually do any research anymore?

gd

I see the point he is making, but it lacks some nuance. While it's true that hallucinations will prevent A.I. from taking on certain types of responsibilities, they won't prevent A.I. from doing most human jobs. After all, we all make mistakes. I'm sure it will be possible to install checks and balances that bring overall capability up to that of most groups of people.

MichaelForbes-dp

Gary Marcus is a psychology professor, which means this guy studies pseudoscience for a living. Yes, we should listen to him for investment and financial advice. lol

wohola

There are certain questions whose answers are hard to find, but once you have them they are very easy to cross-check or validate.
And there are certain cases where the cost of an error (given its low frequency) is not very high.

These are the only use cases of present-day AI that I see.

manubhatt

I agree so much... the hype is real. Probably another WeWork situation.

ML.

He makes a very good point about hallucinations not being resolved, which is a major part of the problem with current AI models. But I want to point out that, just by scaling up, models will become more capable and smarter. Responsible and sensible humans will continue to use AI as powerful assistants, not just mono-tasking tools. We see this already with GPT-4o, Claude Opus, Pi AI, Gemini, etc. Thus hallucinations are no more of a problem for AI than they are for human minds. Our minds are the source of its hallucinations: we create the data it consumes and reorganizes into new iterations, and we easily profess false information just to be responsive, exactly as AI does.

moderncontemplative