Will ChatGPT replace programmers? | Chris Lattner and Lex Fridman

GUEST BIO:
Chris Lattner is a legendary software and hardware engineer, leading projects at Apple, Tesla, Google, SiFive, and Modular AI, including the development of Swift, LLVM, Clang, MLIR, CIRCT, TPUs, and Mojo.

Comments

Fire all your programmers and hire prompt engineers to manage your codebase. Let me know how it goes.

PetrovForever

GPT doesn't possess cognition. I have used it in my own programming projects, and at times it proves to be a better tool than Stack Overflow. However, it occasionally generates incorrect code and apologizes when you point out the mistake. Instead of providing the correct version, it continues to offer additional incorrect suggestions, resulting in an infinite loop of erroneous recommendations. It amuses me when people claim that these issues will be resolved in future versions of GPT. In reality, such problems are inherent to large, mature language models and cannot be completely eliminated unless a revolutionary alternative emerges. Ultimately, when GPT fails, I find myself turning to Stack Overflow to seek human feedback. In simple terms, what GPT creates looks impressive only to the untrained eye and to a mediocre programmer like me.

Mikegeb

Newer (just a year plus) self-taught programmer here, and I'm getting to the point of building real, robust projects and ideas. ChatGPT has been amazing when I realize, for example, that I need to loop over a particular data type or something simple and I don't know how in that moment. But if I ever ask it for something a bit more complicated, I always end up fighting erroneous responses and having to over-explain my ask. This could definitely be worked on in future GPT iterations, but I think this idea of truly understanding what someone wants and needs seems super hard to reproduce with an LLM.

ttc

Before watching: NO, it won't, and I'm honestly tired of these BS discussions.

There is almost infinite demand for software, and the only reason it's not being met is that building software is incredibly expensive - corporate system implementations cost BILLIONS. Tools like GPT will drive the price down: one developer will be able to produce more output, companies will want more stuff, more frequent updates, etc.

piotrjasielski

As of now, LLMs that write code are like low-skill interns: they do tiny pieces of code generation that need to be supervised by actual developers. They need to be guided along, and they need a lot of help to be integrated into an actual project. It is very impressive, don't get me wrong, but it's nowhere near human replacement, and I don't see that changing drastically anytime soon.
Programming, unlike other activities, needs a lot of contextual understanding. It is on the opposite side of the spectrum from highly specialized activities like digital illustration. We have already seen the latter being perfected. I'm going to say that AI will be capable of doing the former sort of activities LAST and the latter first. Especially things like game dev require so many unrelated skills: music, level design, etc. If one AI can do all of that, then I suspect it could do everything else in the world, at which point we have AGI and the singularity. I wouldn't be too worried about programmers... I'm more worried for the world as a whole.

oredaze

“They’ll tell you they need a faster horse when they need a car”

Fucking loved that

ttc

I think future AI models will probably replace some parts of what we do quicker than we think - but I also think anyone with a mind capable of manufacturing complex software will probably find a way of building something interesting and novel with the new tools AI creates, or of extending our capabilities to simply make more fantastical software. I can't imagine it writing any and all complex software projects that we could produce, especially when we consider it as a tool to extend our capabilities.

Just some thoughts:

Even if it were capable of responding to "Generate me a cloud-based web video viewing application", you might want things like "oh, but with support for webm videos", "and support video comments", "but make the comments filter out profane language for users under X age", or "with a REST API and documentation for comments and stats".

So programming could definitely become simplified into product / technical descriptions some day.

Rather than a repository of code, you could have a repository of product descriptions, with caveats and nuances in human-readable, understandable language (perhaps plain English descriptions).

Humans love pushing the limits - so we'll probably use those programmers to push the limits of how complex a prompt we can generate, and generally to solve novel problems in the realm of "what do we want exactly". At least until AI can predict what we want and produce outputs better than we could even think to ask for. Wouldn't be surprised if humans and AI end up in a reinforcing feedback cycle: humans training AI with new and improved inputs, and AI providing new and improved tools for humans to produce new and improved outputs.

Wouldn't be surprised if many of us move (many, many years from now) towards mostly working by training and improving AI/LLMs with high-quality inputs and providing feedback and improvements in the long run - or providing the right prompts/inputs for the desired output. idk - impossible to really predict, but it will be interesting at least to see where things go.

SuperCoolBoymg

What a kind and lovely person Chris Lattner is. 
Thank you for Swift <3 the most amazing programming language out there.

petrulutenco

As a programmer who has spent hours fighting with ChatGPT to get working code for a new problem (and failing)... of course it won't replace programmers. Future developments could change that, but even then it will take years of "co-piloting" with human coders before it could possibly be trusted. To be clear, I'd use this in an IDE co-pilot role at the drop of a hat - but that's a productivity increase, not replacement.

KenOtwell

Let's take a moment to appreciate the variable names in the thumbnail code.

takeuchi

I really admire how down-to-earth Lex is. Despite being really smart and talented, he never comes across as arrogant. It's refreshing to see someone like him who interacts with others in such a humble way. I hope to learn from his example and be more like that in my own personal interactions.

jonnyschaff

In my experience, the jump from GPT-3 to GPT-4 is huge for programming, in Python specifically. I've found it helpful to first communicate back and forth with GPT to design an algorithm, then, once satisfied with the logic, to ask just once to produce the code.
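
A minimal sketch of that design-first, code-second flow, assuming the openai Python package (v1 client) and the gpt-4 model name; the prompts themselves are hypothetical:

# Sketch only: iterate on the design in plain language, then ask for code once.
# Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a careful Python programming assistant."}]

def ask(prompt):
    # Send one turn, keeping the full history so the model retains the design context.
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Step 1: go back and forth on the algorithm itself (no code yet).
print(ask("Propose an algorithm to deduplicate near-identical log lines. No code yet."))
print(ask("Assume the log file does not fit in memory. Revise the design. Still no code."))

# Step 2: once satisfied with the logic, ask a single time for the implementation.
print(ask("The design looks right. Now write the Python implementation."))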

lukemelo

For me it generates incorrect code most of the time.

MonisKhanIM

Reflect on the unique value humans bring to programming at [0:11].
Consider how large language models (LLMs) are changing programming practices at [0:18].
Recognize the potential for LLMs to automate routine coding tasks at [2:32].
Explore the role of LLMs as companions in the coding process at [2:38].
Contemplate the interplay between human creativity and LLM-generated code at [5:28].

ReflectionOcean

Bruh, AI went from zero to 100 in a short period of time and ain't stopping, and ppl are still saying "nah, this is the best it can get" 🤡

soggybiscuit

For me, LLMs work best not at writing code but at figuring out an error: explaining why the error happened and what I can do to fix it.
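
That pattern is easy to wire up; a minimal sketch, again assuming the openai Python package and gpt-4, where the captured traceback is sent with a request for an explanation rather than for code (the failing call is hypothetical):

# Sketch only: ask the model to explain a failure instead of writing code.
# Assumes the openai package (v1+) and OPENAI_API_KEY set in the environment.
import traceback
from openai import OpenAI

client = OpenAI()

def explain_failure(fn):
    # Run fn; on an exception, send the traceback and ask why it happened.
    try:
        return fn()
    except Exception:
        tb = traceback.format_exc()
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": "Explain why this Python error happened and what "
                                  "I can do to fix it. Do not rewrite my code.\n\n" + tb}],
        )
        print(reply.choices[0].message.content)

explain_failure(lambda: int("not a number"))  # hypothetical failing call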

SynthByte_

Jesus Christ, finally someone who's not following a narrative built on irrationality! Nobody can predict the future, but if you program complex systems, you know very well where the limits are. Maybe AGI will arrive, and by then we're all in the same boat - and the issue will always be whether we're ALL in the same boat!

yacce

I think ChatGPT is too high-level and leaves space for a lot of ambiguity. It will obscure a lot of what the machine will be doing and how it will be performing. So a lot of computational knowledge will still be needed, even more than these days, if you want to get something properly sized and performant, without surprises in terms of costs, and of course, something that can be maintained in the future. I don't see big companies leaving all their profit and reputation to a bunch of guys without computational knowledge playing with a prompt. Also, I don't think the prompt will be the future of programming. There will be something in the middle that can deal with ambiguity and that is more precise. I believe programming languages will be needed less and less, but computational knowledge more and more.

rubendariofrancodiaz

I am a novice in programming, but GPT-4 has proven to be extremely useful in my work, and I am still learning. I can imagine that many advanced programmers might be able to accomplish more sophisticated tasks, but for a beginner like me, GPT-4 has been an incredible booster. I believe I have at least doubled the amount of useful code I can produce. Additionally, it has taught me how to learn about programming. My fear was that using GPT-4 would make me complacent or reduce my drive to learn more, but in reality, based on my experience, it has accelerated my learning and helped me make more progress. I think the impact will vary greatly depending on one's level of programming expertise, but this technology is bound to transform the field one way or another.