The limits of AI – a ramble including Sidewinder missiles

Putting Artificial Intelligence into perspective.

References
[1] "Ministers not doing enough to control AI, says UK professor", retrieved 17/May/2023
[2] "AI pioneer warns Government offering little defence against threat of technology", retrieved 17/May/2023
[3] Formula 1 Turkish GP 2021 (User: SAİT71)
[4] Lewis Hamilton visiting fans at the 2018 British Grand Prix at Silverstone (User: Jen_ross83)
[5] Website of the International Mathematical Olympiad, retrieved 23/May/2023
[6] Starship Flight Test by SpaceX
[8] "Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter", retrieved 21/May/2023
[10] A. M. Odlyzko, "Newton's Financial Misadventures in the South Sea Bubble", Notes and Records 73, 29 (2019)
[11] List of prime ministers of the United Kingdom, Wikipedia
[12] Afghan High School Class of 2015 (photo by NATO Training Mission-Afghanistan)
[13] MNIST examples (User: Josef Steppan)
[14] Chandra X-ray Observatory by NASA
[15] HMS Illustrious' Operations Room team during an exercise (photo: POA(Phot) Ray Jones/MOD)
[16] Steelworks of BlueScope Steel Limited company in Port Kembla, Australia by Marek Ślusarczyk
[17] Wolfsburg VW-Werk (User: AndreasPraefcke)
Comments


This is more of a rambling, low-effort video. Bonus points to the first person to correctly identify both references in the thumbnail. I wanted an excuse to talk about the channel, so please go ahead. Let me know if there's a topic you want to see in a regular video, if you want a serious livestream, or even something like a video game stream. If you want to join the Discord, go ahead, but then you have to actually talk in there, and someone needs to volunteer to be a mod, etc.

ImprobableMatter

Hey, I'm just glad we're talking about AI and not "The Blockchain". At least AI has some actual benefits.

mikedrop

I code DNNs, so I thought I'd throw my 5 cents in.

Dijkstra / pathfinding: the latest AIs are already using tools to solve problems, so you would expect them to simply call the function like we do, even an external one. Plus, expressing a linear relationship as a matrix-vector transform is actually efficient. Sure, there's an overhead in deciding to use it, but there's a much bigger overhead in a human doing the same. AIs can fail dramatically without warning, though; I accept that point and it's likely fundamental. Although, so can humans.
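To make the tool-calling point concrete, here is a minimal sketch (my own illustration in plain Python; the graph and function are hypothetical, not from the video) of the kind of exact routine an agent could invoke instead of approximating pathfinding itself:

```python
import heapq

def dijkstra(graph, start):
    """Exact shortest-path distances from `start` (standard Dijkstra).

    `graph` maps each node to a list of (neighbour, edge_weight) pairs.
    This is the kind of cheap, exact, verifiable routine an agent
    could call as an external tool instead of approximating it.
    """
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale entry; a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

The point being: the exact solver is cheap, deterministic and verifiable, so deciding to call it is the only real overhead.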

The Lewis Hamilton thing was important. Let's go back to the 90s and make the same argument with Kasparov. No AI (and really, the architecture is irrelevant, only the results) could beat him in the domain of chess. But it didn't plateau at matching humans, as you seem to be suggesting; it very quickly went superhuman. In fact, this is a general trend, from calculators to traffic management: very subhuman to very superhuman in rapid succession. I can't think of a single example that sat at human-equivalent. Why would it? It's possible, but it's not an outcome we should expect.

"The limiting factor is the speed of building and physically organising things" - there are plenty of problems which aren't limited but that but ok, many certainly are. Are you saying an AI can't produce a more efficient construction and planning process? Is there something special about project management that means it can't be learnt? I doubt it. Now, for as long as you rely on humans to build stuff that's a weak link but, even then, an AI can optimise the hell out of the process. Put them in charge of factories / robots and I don't see why you would predict an inefficient outcome when they are optimised for efficiency.

OK, I've already written too much and I'm only on minute 3! Plus, we haven't even touched on AGI and alignment.

davidmurphy

I think it's pretty incredible that the original Sidewinder missile could hit anything with such a detection method.
Also, your videos never fail to be clear, concise and straight to the point, even with a more rambly tone.

keilerbie

If you're right, then the stakes are small; but if you're wrong, then everything, our entire future, is at risk.

TerryClarkAccordioncrazy

The scariest thing about AI to me is what it puts into perspective: how easily impressed people are, and how vigorously humans jump at the opportunity to indulge in magical thinking when we encounter anything we don't understand. In an uncomfortable number of conversations that I have about AI, I leave depressed and honestly a little afraid, not because of what I fear AI might do with intelligence, but because of what I fear humanity might do with stupidity.

entrootentropy

“Mechanized agriculture? We already can farm by hand, a tractor just does it faster using an alternate method. What’s the benefit?”

yron

Thanks for making this. The fear-mongering and/or simping over AI has become exhausting.

Tahoza

Big fan of this video. Tech companies and VCs are hyping "AI" so hard and it's critical to be realistic about what new capabilities it will bring to the table (likely far fewer than they predict).

EmperorBun

i was about to scroll past this because i thought it didn't have sidewinder missiles in it, but then i read the title and oh boy i was happy

anekdoche

Great video, definitely do more like this, especially with a great intro and satisfying outro. This universe is wild: the deeper and deeper one looks, the more complicated and perplexing it is. Our ancestors would have burned us at the stake for the things we take for granted.

TroyRubert

WOPR is from WarGames, and the eye in the gear is the SMAC logo from Civ.

Another thing that people don't talk about with regard to AGIs is that they are, at their heart, just math, and we know from Gödel that there are limits to math.
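For readers who want the precise result behind that, here is a rough statement of Gödel's first incompleteness theorem (my paraphrase, not part of the comment):

```latex
% Gödel's first incompleteness theorem, informally:
% if a theory T is consistent, effectively axiomatized, and
% interprets basic arithmetic, then there is a sentence G_T with
\[
  T \nvdash G_T \qquad \text{and} \qquad T \nvdash \lnot G_T ,
\]
% i.e. T can neither prove nor refute G_T, so T is incomplete.
% (\nvdash requires the amssymb package.)
```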

jb

AI won't be god, but it'll get close enough.

The possibility of solving logical problems by simply throwing enough energy and compute at them is quite profound.
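As a toy version of "throwing compute at a logical problem", here is a brute-force satisfiability check in Python (a sketch only; real SAT solvers are far cleverer, but the trade is the same: more compute buys a bigger search):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search for a satisfying assignment of a CNF formula.

    Each clause is a list of ints: i means "variable i is true",
    -i means "variable i is false". Runtime is O(2**n_vars * size):
    no insight required, just compute.
    """
    for bits in product([False, True], repeat=n_vars):
        def true(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(true(lit) for lit in clause) for clause in clauses):
            return bits  # satisfying assignment found
    return None  # unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-3]], 3))  # (False, True, False)
```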

EspHack

>it's just speeding up what we can already do
I wanna say "bruh". How is a complete overturn of labor costs not a big deal?!

kicsikacsa

I can't tell if this video is intended to address all the concerns about AI, including the ones expressed by "experts", or only intended to address misconceptions "lay people" have after reading sensational news articles and posts online.

Nothing stated in the video struck me as false, but if it's intended to address the things that I think may be coming in the next decade or two, or the concerns I have about the future, I feel like it's missing the point. I don't think anyone familiar with what's going on is expecting current models to cure cancer, or be able to do new things people couldn't do.

When it comes to jobs, a machine doesn't have to be better than a person or capable of things the person isn't capable of; it simply needs to be able to do the job for lower cost. It can even do a worse job, as long as it does it significantly cheaper. There are lots of examples of jobs that have been automated away by systems that are worse than the humans they replaced, but which do the job so much more cheaply that it doesn't matter. We can see this in products as well: think of the cheap products we buy which are lower quality than the much more expensive version we might have bought 50 years ago, which simply isn't available anymore because it's been wiped out by the cheap version. It is possible that as many or more jobs will be created than are made obsolete, but that certainly isn't guaranteed, and there are reasons to be concerned that it might not happen this time.
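That argument is easy to make concrete. A toy comparison, where every number is invented purely for illustration:

```python
# Every number here is a made-up assumption, for illustration only.
human_cost_per_task = 30.00   # cost of human labour per task
human_quality = 0.98          # fraction of tasks done acceptably
machine_cost_per_task = 0.50  # marginal cost of the automated system
machine_quality = 0.90        # worse than the human

# Cost per *acceptable* outcome, assuming failed tasks are redone
# (expected attempts per success = 1 / quality).
human_effective = human_cost_per_task / human_quality        # ~$30.61
machine_effective = machine_cost_per_task / machine_quality  # ~$0.56

print(f"human:   ${human_effective:.2f} per acceptable task")
print(f"machine: ${machine_effective:.2f} per acceptable task")
# The machine does a worse job yet wins on cost by roughly 55x here.
```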

In terms of capabilities, people are excited (or worried) not about what models can do today, but about what you get by projecting what appears to be a still-accelerating pace of advancement into the future. If we look at the advancements of machine learning in the last 10 years and project that rate of advancement (or more) into the next 10 years, that's what people are worked up about. People are particularly excited now because of the way large language models have already exhibited emergent properties: being able to do things that their makers didn't anticipate they would be able to do; exhibiting theory of mind is an example of this.

It can be somewhat important to distinguish between AGI and super-intelligence; not all AGI is super-intelligent AGI. AGI is usually defined as human-like intellect, especially in terms of its generality: its ability to work across many domains, extrapolate, come up with novel ideas and so on. Super-intelligence is the idea of intelligence as far beyond ours as ours could be considered beyond that of a mouse or an insect. (If you reject the notion that our intelligence is far beyond that of a mouse's or an insect's, then you can just stop reading here.) I think both scenarios boil down to a couple of simple questions:
1. Is it possible for such a thing [AGI / super-intelligence] to exist at all?
2. Is it possible for us to build it with the intelligence, skills, and technology we can develop?

I think both of these are open questions we can each answer for ourselves, but if you think the answer to both questions is yes, then I posit that in that case we _will_ create it, and it is simply a question of when, and of what happens next. I don't claim certainty, but I think the answer to both is "yes", and many people, including myself, think we are likely to build AGI within the next 10-20 years. For super-intelligence, the idea is that once there is AGI, there will come a point where an AGI is able to start engineering itself to make itself more intelligent. People think that if that happens, it will increase its own intelligence exponentially and become super-intelligent very quickly after that. Timelines vary, but most people don't think that AGI will do that immediately. The concern here comes from looking at how we treat mice and insects: it isn't our goal to wipe them out, but we don't treat their feelings or interests with very much respect, particularly when their interests are at odds with our own.

Thanks to anyone who read this wall of text.

adfaklsdjf

Very solid criticism. I appreciate the linked sources too!

MrDowntemp

Loved this opinion piece mixed in with the more "robust" content. More of it please!

As a personal opinion, convolutional neural networks (CNNs) do have a hard limit as AI, as they can only brilliantly, expertly regurgitate their training set. This can only speed up or cheapen current operations which are limited by the man-hours and expert personnel dedicated to performing these tasks.
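For concreteness, a minimal sketch of what such a network looks like, in PyTorch and shaped for MNIST-style digits as in [13] (an illustration only; the random tensor stands in for real training data):

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier: 1x28x28 images in, 10 class scores out.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                   # class scores
)

images = torch.randn(8, 1, 28, 28)  # fake batch standing in for MNIST
logits = model(images)
print(logits.shape)  # torch.Size([8, 10])
```

Whatever one's view on the "regurgitation" claim, everything such a network can express is fixed by its weights after training, which is the architectural point being made here.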

However, I think it is critical to understand that a huge increase in efficiency (in time, cost, and man-hours) results in a different world, one where certain operations which used to require immense resources are now in the realm of feasibility for individuals or small groups. A lot of white-collar jobs are perfectly suited for CNNs, as it's all about data analysis and information regurgitation.

An additional paradigm shift which will arise from CNNs and large language models (LLMs) is the filtering of information. As the vast majority of mankind's knowledge is accessible to every individual with an internet connection, the directed delivery of information becomes the bottleneck. This can dramatically increase productivity and revolutionize teaching. Herein lies what I consider one of the greatest risks of neural networks: it will not be hard to direct and control the flow of information (personalized even to the individual's level) to more or less "control" a population's perception or thinking. That is something more delicate and insidious, and much more dangerous, than classic propaganda.

To conclude, I believe CNNs are not true AI, and due to their architecture they never can be. At the same time, they are an extremely powerful tool, with the power to shape society as we know it. So while I agree with the general thesis presented in the video, I am quite concerned by the power that CNNs (the current cutting edge and the next generation) can bestow on the organizations which control them.

Vatraxotsipoura

You're missing most of the point, I think, even though you described it accurately.
AI won't be able to do anything that a human can't do, at least not in the near future (and barring the specialized pattern recognizers), but the issue is: it CAN do almost everything that a human can do.
And people aren't ready for what an extremely cheap, near-human workforce can achieve. Think of all the amazing things ORDINARY people have done or come up with when they put their minds to the task. And then think how EVERYONE might have access to that power many times over.

MrRolnicek

Thanks for the video. I work as an IT developer, and I'd argue that some of the issues you mentioned aren't quite that simple. For example, you suggested that three teams working 8-hour shifts could achieve the same results as AI. In certain applications, however, it's more akin to three teams of 100,000 each. While that's theoretically possible, it's implausible. Some problems, especially in areas like material science, drug discovery, and certain logistical challenges, are simply solved faster by exploring the entire solution space. I don't view AI itself as a threat, but it certainly has the capability to turbocharge a small group of people, potentially rendering the rest obsolete. Will this lead to a utopia where only a few need to work, or a dystopia where only a few live well? That's a matter of one's perspective on human nature. But if history is any indicator, I wouldn't get my hopes up.
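The scale gap here is easy to put rough numbers on (every figure below is a made-up assumption, for illustration only):

```python
# Back-of-envelope only; every figure is an invented assumption.
candidates = 10**8        # size of a hypothetical screening library
human_rate = 100          # candidates one expert can assess per day
team_size = 30            # a plausible real-world team
model_rate = 10**6        # candidates a trained model can score per day

human_days = candidates / (human_rate * team_size)
model_days = candidates / model_rate
print(f"team of {team_size}: {human_days:,.0f} days")  # ~33,333 days (~91 years)
print(f"model: {model_days:,.0f} days")                # 100 days

# Matching the model's pace by hand would need model_rate / human_rate
# = 10,000 people, which is where implausible head-counts come from.
```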

MrAngry

Good discussion, good points. However, as always, there will be unintended and unforeseen consequences. "May you live in interesting times." The old curse certainly applies.

maxm