AI is now in the world’s top 200 programmers. Who cares?

An LLM is just a processor.
Comments

I'm not sure that the app layer built on top of LLMs is going to have the level of complexity, lock-in, and ubiquity that operating systems have. The surface area of an operating system is massive and involves low-level programming. The app layer on top of LLMs really only needs to interface with LLMs and can be programmed at the highest levels of abstraction. I think that tight coupling to existing operating systems is probably the only way to see a similar moat. Apple and Microsoft do seem to be working towards this. I don't see OpenAI being able to do meaningfully more than provide good models.

Muaahaa

The calculator is now the world's top mathematician

MyGreenpotato

They are already commodified. I can switch between Sonnet 3.5 and O1 in my Cursor code editor with one mouse click
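A hypothetical sketch of why that switch is so cheap: once every provider sits behind the same prompt-in/text-out interface, the models become interchangeable plugins. The function names here are invented stand-ins for illustration, not real SDK calls:

```python
from typing import Callable

# Hypothetical stand-ins for two providers' completion APIs --
# invented for this sketch, not real vendor SDK calls.
def sonnet_complete(prompt: str) -> str:
    return f"[sonnet-3.5] {prompt}"

def o1_complete(prompt: str) -> str:
    return f"[o1] {prompt}"

# The "one mouse click": a registry keyed by model name.
MODELS: dict[str, Callable[[str], str]] = {
    "sonnet-3.5": sonnet_complete,
    "o1": o1_complete,
}

def ask(model_name: str, prompt: str) -> str:
    # Swapping models is a one-key change; nothing else in the
    # editor/app layer needs to know which vendor is underneath.
    return MODELS[model_name](prompt)

print(ask("o1", "refactor this function"))
```

When the interface is this thin, switching vendors costs nothing, which is exactly what "commodified" means here.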

tedarcher

As someone who purchased the very first IBM PC sold in the state of Missouri and then built a St. Louis-based software company around it, I found your AI video to be the most profound and insightful of any I have yet seen. I lived through the PC explosion. I remember being in a small room with Bill Gates when he still looked like a young kid. I debated with Peter Norton about the best way to use IBM Pascal to write to the PC screen buffer. I'm seeing the next big explosion and, just like the first time, I don't know where I'll fit—I just know I need to be a part of it. I've purchased an AI TLD and I'm starting another company. Your video really refocused how I see the market working, and I know you're right because I was there for the whole ride the first time through.

Thank you so much!

BTW, I paid $24,000 in today's dollars for that PC: a maxed-out 640K, 8088-based computer with two 160K floppy drives and a monochrome screen. I still have it.

TimeForLifeTravel

I like the idea of an "AI Stack." You can interchange the core ingredients of LLM, Thinking/Planning System, and Chat Interface to get different results. Different "flavors" of AI.

benbowers

One of the best AI topic videos I have seen here on YouTube. Thanks!

berthuygens

In contrast to traditional computer programs, the big issue with LLMs is that the response to each prompt is not 100% reliable. This makes them really difficult to apply to real-world problems: the more complex the problem becomes, the harder it is to validate the correctness of the response.
Suppose the calculator app on your phone were known to make random mistakes, so you never really knew whether an answer was correct. Would you ever use it?
In other words, LLMs are only useful in combination with traditional programs that provide reliable outcomes.
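That last point can be sketched as a pattern: let the unreliable model propose, and let deterministic code verify. This is a toy illustration, with a fake "LLM" whose behavior is invented for the example:

```python
import ast
import operator

# Fake "LLM": sometimes wrong -- a stand-in invented for this sketch.
def fake_llm_answer(expression: str) -> str:
    return "5" if expression == "2 + 2" else "unknown"

# Deterministic checker: safely evaluates simple additions without eval().
def true_value(expression: str) -> int:
    node = ast.parse(expression, mode="eval").body
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return operator.add(node.left.value, node.right.value)
    raise ValueError("only simple additions supported in this sketch")

def checked_answer(expression: str) -> str:
    """Accept the model's answer only when reliable code confirms it."""
    proposed = fake_llm_answer(expression)
    return proposed if proposed == str(true_value(expression)) else "rejected"

print(checked_answer("2 + 2"))  # the fake model says "5"; the checker rejects it
```

Of course, this only works where a cheap deterministic check exists; for problems with no such check, the pattern breaks down, which is the commenter's point.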

murphyvanoijen

All in all, the general key to success is to steal someone else's idea and sell it as quickly as possible

notSLy

Dude, you are so good at explaining complex stuff, and yes, I liked your perspective. Keep it up. Anyway, Merry Christmas!

ehza

Bill’s Mom was a director at IBM. It was a nepo contract.

MrMaguuuuuuuuu

Overall I agree with this analysis of the current state of the technology. LLMs are increasingly commoditized, and the interfaces and products built around them increasingly matter more when choosing between them. A bare LLM is very limited compared to one that is well integrated into a chatbot workflow or a coding workflow. The main factor this ignores, IMO, is that there is also a kind of philosophical element to this discussion that doesn't have a great analogue in the development of microprocessors.

LLMs, and systems built around them, have the capability to be intellectual force multipliers. Currently, they are decent at this. They can really speed up coding depending on the context, and in a lot of circumstances they do the same things search engines do, better. The key question is whether LLM systems will ever be a meaningful force multiplier on LLM systems research. Personally, I'm pretty skeptical, because many of the people talking about AI accelerationism have a horse in the race and incentives to exaggerate or downplay the timelines and likelihoods, but it is a possibility. If one lab (whether it is OpenAI, Google, Apple, or some other hyperscaler with a ton of compute) gets a meaningful lead in developing a system that accelerates LLM development (and by LLM development I mean both training base models and creating systemic or algorithmic improvements on top of base models, like reasoning systems), then it could conceivably pull away from the others exponentially.

In that case, the apps and products built around their system would be much less important than the raw competency of their model at key tasks. Time will tell if progress is hitting a wall now, or just starting to speed up.

b

I’m using a top tier model with excellent prompts to code and it is hit and miss. I have to keep a close eye on it and can never use anything as-is even if it doesn’t have hallucinations or syntax errors. When it works it’s great and saves time. But often it is very frustrating.

GSKEVER

I appreciate this perspective. Good stuff, thanks for putting it out there.

Drixidamus

Wow! Loved this way of framing AI within the larger tech space. I've never seen your channel before, but you have such nice camera shots and content

chloe_

Great info for someone who wants to understand AI and how it works. Thanks!

PaulKwitekmusic

This is exactly what I have been thinking. I really appreciate how much thought you put into this. I have often wondered why, when ChatGPT landed, all the other companies started building chatbots... it's not like they were actively doing any research and development behind the scenes. But as soon as OpenAI launched its LLMs, the whole world started building their own models. I thought those models were built on the same architecture as ChatGPT's.
That aside, I would like to know how image generation works.
Also, like you mentioned earlier, software engineers always crowd around the hot thing in the market, so I have a feeling that developers are going to be heavily involved in the development of AI products going forward.
And I have also thought along the lines of what you said concerning the progress of AI. The fundamental architecture of the technology is basically the same. The only thing that can be done to improve it is to tweak and maintain it so it gets faster and more accurate, and possibly eliminate hallucinations... but that's all it would ever be... nothing more than faster and more accurate...

joshuaadewale

My theory has long stood the test of time:

Every tech advancement doesn't save time for the average person. It just makes more work for them at the same or lower salary, and the entrepreneur takes on more risk at the cost of running their business into the ground, since megacorps are the only ones that get bailed out.

SimGunther

A lot of older systems from that time didn't even have a UI, just a terminal.

Nacalal

Benchmarks are one thing, but actual coding problems are much more difficult for o1 or o3 to solve. These things can barely debug.

InternalStryke

The very fact that YouTube finally got your channel into my recommendations is making my day. Awesome and accessible explanations, as well as interesting insights. 10/10, will come back for more

woosh_woosh