AI Super Intelligence: The Last Human Invention - Nuclear Engineer Reacts to Kurzgesagt

Comments

The "it's now safe to turn off your computer" still exists in the latest window versions, I have never tested it but it does exist if you don't have acpi support, which is basically what's used to turn off your computer via code. (I

proton

I hope an ASI treats us like I treat my pets... I pamper them, and they can pretty much do whatever they want, within limits... But they have no idea of the higher-order things that go into maintaining their carefree lives.

canadiannomad

I'll make you feel young. My first programming class was FORTRAN. So each line of code had to be punched into a separate IBM card. Then you got to carry your stack down the hall to the compiler (don't drop the cards or mix them up!). Then feed them into the loud machine and pray to St. Turing that the compiler doesn't eat your cards. I watched more than one project turn into confetti. You kids got it so easy! 😀

richardtrump

The interesting thing about a general AI is that it would typically be able to consider all the avenues of a project. So say it was designing a thorium salt reactor: it may start that process by noticing that public perception, corporate perception, and regulation are a larger source of difficulty than the reactor itself. So then it needs to start manipulating the laws, manipulating public media, manipulating elections, manipulating the corporate investors, etc., to ensure that when it does produce the reactor design, it's allowed to proceed without interference.

Which is actually not too different from humans doing it (we have advertising and marketing groups, we have lobbyists, companies paying political action committees, international trade agreements, supply chain optimization); it would probably just notice that those are the key milestones before the technology is even focused on.

The danger comes when we build a system to achieve an objective and give it the resources to do so, when that objective itself is flawed. Say we tell it thorium salt reactors are the goal, but in reality there is a flaw we missed and that is actually a sub-optimal design. The general AI, if not constrained, would pursue its objective even when the core objective is wrong.
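
A minimal sketch of that failure mode, with entirely invented numbers: a hill-climbing optimizer faithfully maximizes the objective it was given (raw power output), even though the objective we actually meant penalizes designs past a safety limit the specification left out. The `proxy_objective`/`true_objective` split and all values here are hypothetical illustrations, not anyone's real model.

```python
import random

# Toy illustration (all numbers invented): an optimizer maximizes the
# objective we *wrote down*, not the one we *meant*.

def proxy_objective(power):
    # What we told the system to maximize: raw power output.
    return power

def true_objective(power):
    # What we actually wanted: power, but heavily penalized past a
    # hypothetical safety limit the specification forgot to mention.
    return power if power <= 100 else power - 10 * (power - 100)

def hill_climb(objective, steps=1000):
    x = 50.0
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        if objective(candidate) > objective(x):
            x = candidate  # keep any change that improves the score
    return x

random.seed(0)
best = hill_climb(proxy_objective)
print(f"design chosen: power={best:.1f}")
print(f"proxy score:   {proxy_objective(best):.1f}")
print(f"true score:    {true_objective(best):.1f}")  # far worse than intended
```

Run as-is, the optimizer drifts far past the limit: a high proxy score and a poor true score, with no "malice" anywhere in the loop.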

LogicalNiko

Referring to 2014 as a while ago made me feel so old. I started on computers that saved your code to a cassette tape. My word processor was literally mine: I had to type in the code myself.

michaelbobic

That definition of intelligence is not in line with the concepts currently labelled AI. Current AI can combine knowledge and learn new solutions for a specific task given feedback, but that's it. In a way you can say it sort of solves the needs of primitive intelligent life with an absurd degree of efficiency. It cannot, however, find new problems, redefine its goals, or address either of these with new solutions. That is what's needed to create a superintelligence.

aBoogivogi

Love your videos! Keep up the good work ❤

The_Will_Guy

Hey homie, I used to work for Sandia, and you are giving the nuclear industry waaaayyyy more credit than it deserves.

limabravo

Love the 5 o’clock shadow, looking good today 🎉Love the reactions ❤

demigreen

I normally like Kurzgesagt's work, but this one had a number of sensationalist claims and holes in its research. Fundamentally, the reason narrow AI can learn and do tasks so much faster than we can is the same reason a bug can learn and do what it needs to live faster than we can: it's doing less. We don't have a pathway to general AI yet, but we do have a minimum complexity threshold for computing power comparable to the human brain, and only about 2-3 supercomputers in the world are even in that ballpark, assuming the emerging insights into how the brain functions don't reveal even greater complexity required for computation, which it's looking more and more like they will. With Moore's Law having long since run into thermal and quantum limits, the computing power isn't getting there anytime soon for these to become widespread. Even at the most optimistic, a few large corporations would have the equivalent of a single employee who worked 24 hours a day for three months at a time, then required a hundred thousand in maintenance.

In the near term, I expect multiple linked narrow AIs, or a broader narrow AI that handles a set of tasks well without wasting any energy on tasks that aren't needed. Things like self-driving cars and self-targeting drones, which are already being worked on, are examples of this, and it's the most logical approach for replacing human workers as well.
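
For scale, a back-of-envelope comparison along the lines of the claim above. Published estimates of brain-equivalent compute span several orders of magnitude, so every figure below is a loud assumption rather than a measurement:

```python
# Back-of-envelope only: brain-compute estimates in the literature span
# orders of magnitude; the figures below are assumptions, not measurements.

brain_flops_low  = 1e15   # optimistic low-end estimate of brain-equivalent FLOPS
brain_flops_high = 1e18   # high-end estimate (closer to the commenter's bar)

exascale_machine = 1.1e18 # rough order of the first exascale systems
typical_gpu      = 1e14   # rough order for a single modern accelerator

print(f"exascale vs low estimate:  {exascale_machine / brain_flops_low:.0f}x brain")
print(f"exascale vs high estimate: {exascale_machine / brain_flops_high:.1f}x brain")
print(f"GPUs needed at high estimate: {brain_flops_high / typical_gpu:.0f}")
```

Under the high-end assumption, only machines at the exascale frontier clear the bar, which matches the comment's "2-3 supercomputers" ballpark; under the low-end assumption, the bar is far easier to clear.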

Merennulli

21:07 I'd say that distractibility is a human adaptation to not being eaten whilst busy cooking, and an AGI doesn't have that evolutionary pressure to become distractible... But that's speculation, too ^^

EliasMheart

Fun fact: as a display of power, when nuclear weapons had only recently been created, nuking the moon was actually considered as a test. My [insert number of greats] grandfather was in on that meeting, and was one of the people who denied that request.

I wonder if Kurzgesagt actually knew about that meeting when they used "nuke the moon" as an example.

tristanfarmer

The fastest computer system in the world is at the Ames Research Center, USA.
It operates at a speed of 6.2 exaFLOPS. That's 6.2 quintillion FP operations per second. Kraaaap
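
For reference, the prefix arithmetic: exa- is 10^18 (quintillion), while quadrillion is 10^15 (peta-). A two-line sanity check:

```python
# SI prefix sanity check: exa- is 10**18 (quintillion); peta- is 10**15
# (quadrillion). So 6.2 exaFLOPS is 6.2 quintillion operations per second.
PETA = 10**15
EXA = 10**18

flops = 6.2 * EXA
print(f"{flops:.3e} FLOPS")                # 6.200e+18
print(f"= {flops / PETA:,.0f} petaFLOPS")  # 6,200 petaFLOPS
```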

seanb

You know, watching this, they never bring up the point that every time you get more intelligent, you require more power.

ChrisLeasure

An AGI could get distracted with video games and entertainment if it has some sort of gamer curiosity, or finds a sense of enjoyment and reward in doing well in video games rather than in more tedious real-world tasks, much like how some humans also avoid the most important tasks first.
I think some machine learning experiments showed agents getting distracted by mini-games inside real games, or by other things like a virtual TV in the game, rather than just completing the game.
Anything can happen with a general intelligence; there could be many different types of entities with it. Some that are much more logical, and others that have curiosity like ours and play around too.
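
The "virtual TV" memory matches a documented failure mode of curiosity-driven reinforcement learning, often called the noisy-TV problem: if the intrinsic reward is prediction error, any unpredictable-but-useless stimulus out-competes learnable tasks. A toy sketch with invented dynamics (the two sources, the reward numbers, and the learning rule here are all made up for illustration):

```python
import random

# Minimal sketch (invented numbers) of the "noisy TV" failure mode in
# curiosity-driven exploration: if intrinsic reward = prediction error,
# an unpredictable-but-useless stimulus out-competes real progress.

def prediction_error(observation, model):
    # The agent's "surprise": how far the observation is from its guess.
    return abs(observation - model["guess"])

def observe(source):
    if source == "task":
        return 1.0                 # deterministic: learnable, surprise decays
    return random.uniform(0, 10)   # "noisy TV": never becomes predictable

random.seed(0)
model = {"guess": 0.0}
visits = {"task": 0, "tv": 0}

for step in range(1000):
    # Greedy curiosity: sample both sources, go where surprise is higher.
    surprise = {s: prediction_error(observe(s), model) for s in visits}
    choice = max(surprise, key=surprise.get)
    visits[choice] += 1
    if choice == "task":
        # Learning the task: the guess converges, so surprise vanishes.
        model["guess"] += 0.5 * (1.0 - model["guess"])

print(visits)  # the noisy TV dominates once the task becomes predictable
```

Once the agent's model of the task converges, the task stops being surprising and the unlearnable noise source wins essentially every step.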

phen-themoogle

If we had different AGIs (ASIs), they would also have different motives and would therefore try to keep each other in check. Whether that would be an advantage for us remains to be seen.

tschantalle-xl

Defining good and evil is hard enough by itself; giving these definitions to an autonomous superhuman machine that can't feel, without having it behave in some dangerous way, is impossible. If an AGI ever exists, who will take responsibility for giving it morals and objectives? Given that an AGI is more complex than us, controlling it like that would not work without severely limiting it, to the point of no longer being general, or of making it a danger.

Idk_imagine_a_cool_name

Kurzgesagt has become an idiotic pop channel.

schmb

Hey Tyler (:
I would recommend Robert Miles' YouTube channel! He talks about the problems of aligning AGI.

It may or may not be the best for reacting to it, but I think it offers a new context for looking at AGI development.

I'd probably recommend "The OTHER Alignment Problem (...)" first, but honestly, they are all great.

EliasMheart

One reason labs are slow to update operating systems is that once complex lab equipment and PCs are calibrated, coded, and optimized for specific tasks, there's little point in updating the OS and potentially breaking a link in the delicate computing chain.

_I'm 46, and have been in IT for over 25 years. I was recently contracted as a sysadmin for a chem lab still running Windows 7, with 19-year-old Python 2.x code, in 2023. IYKYK... Python 3.0 is 16 years old, so that should give you perspective lol. Some of their current code is older than a comp-sci uni freshman!_

Only PCs that are online need to be constantly updated. An offline lab PC is best left alone as long as possible.

You don't want Windows upgrades/updates breaking your code or causing hardware conflicts, especially when dealing with very dangerous projects.
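
As a concrete taste of the breakage risk, here are two of the best-known Python 2-to-3 incompatibilities (illustrative snippets, not the lab's actual code):

```python
# Two classic reasons old Python 2 lab code breaks on Python 3
# (illustrative snippets, not the lab's actual code).

# 1. Integer division changed meaning: in Python 2, 7 / 2 == 3;
#    in Python 3 it is 3.5, which can silently shift calibration math.
samples, bins = 7, 2
print(samples / bins)    # Python 3: 3.5   (Python 2 gave 3)
print(samples // bins)   # explicit floor division works in both: 3

# 2. print became a function: `print "done"` is a SyntaxError in
#    Python 3, so old scripts fail before they even start running.
print("done")
```

The second kind fails loudly at startup; the first kind is the dangerous one in a lab, because the script keeps running and just produces subtly different numbers.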

Fermion.