Elon: BILLIONS of Teslabots Can Achieve AGI

Join me as I use very recent (and confluent) video clips from Elon Musk, Sabine Hossenfelder, and Jodie Burchell (PhD in Psychology) to explore the nature of Large Language Models, autonomous driving, humanoid robots, embodied AI, and the possibility of artificial entities gaining the power of consciousness. It's a fun ride!

Join this channel to get access to perks:

Experts: EMBODIED AI is the Path to Consciousness

Get The Elon Musk Mission (I've got two chapters in it) here:

For a limited time, use the code "Knows2021" to get 20% off your entire order!

**You can help support this channel with one click! We have an Amazon Affiliate link in several countries. If you click the link for your country, anything you buy from Amazon in the next several hours gives us a small commission, and costs you nothing. Thank you!

**What do we use to shoot our videos?

**Here are a few products we've found really fun and/or useful:

Tesla Stock: TSLA

**EVANNEX
If you use my discount code, KnowsEVs, you get $10 off any order over $100!

Instagram: @drknowitallknows

Sources:
Comments

This is really good content, John; thanks for linking to Jodie, Sabine and Elon to clearly explain these difficult concepts of consciousness and generalised intelligence. Having an LLM (Grok) in your Tesla to explain what the car is doing and why would be a massive leap in demonstrating the car's intelligence to any passenger, and would generate a higher level of trust as a result. One wonders whether Tesla will demonstrate a rudimentary version of this on

musicman

Great episode, thank you! It would be great if you could get Sabine on for a discussion about it!

percurious

As other commenters agree, this was an excellent video and discussion. IMO, LLMs are primarily good for memory, for certain kinds of reasoning, and as user interfaces. Embodiment with full sensor suites and manipulators provides both the context recognition for action and the possibility of action itself, i.e., agency. However, whether robots come with AI or AGI and are conscious or not, in order to work and dwell closely with people in small-business, educational, or domestic situations, they will need to have names, some modicum of personality, "continuity" (between shut-offs, upgrades, etc.), and above all possess very high degrees of loyalty, privacy, and security. The latter are all qualities expected of family members and employees with access to private information.

In terms of how people learn tasks: yes, they bring their lives' worth of experiences (from birth onward) as "priors." Then they conceptualize (in imagination), or are given, explained, and/or shown "a task" (typically involving cause and effect within a context). So they try to imitate it, making all sorts of errors (and learning) in the process. The task may be simple and easy to accomplish once (by some criterion, usually judgmental). However, the context is also typically highly variable. To generalize task performance and achieve higher levels of proficiency or competence, well, practice makes better (but never quite "perfect").
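
(For the programmers here, a toy illustration of that imitate-and-practice loop; this is my own hypothetical Python sketch, not anything from the video. A learner with no task-specific skill is shown a teacher's demonstration, attempts it with noisy errors, and improves with practice without ever becoming perfect:)

import random

TEACHER_GAIN = 2.0  # the demonstrated skill: respond with 2x the observation

def teacher(obs):
    # the demonstration the learner tries to imitate
    return TEACHER_GAIN * obs

def practice(episodes=500, lr=0.1, noise=0.2):
    gain = 0.0  # the learner starts knowing nothing about this task
    for ep in range(episodes):
        obs = random.uniform(-1.0, 1.0)                 # the context varies on every trial
        target = teacher(obs)                           # the learner is shown the task
        action = gain * obs + random.gauss(0.0, noise)  # an attempt, with beginner errors
        err = action - target
        gain -= lr * err * obs                          # learn from the mistake (delta rule)
        if ep % 100 == 0:
            print(f"episode {ep}: |error| = {abs(err):.3f}")
    return gain

learned = practice()
# the motor noise never vanishes, so performance gets better but never perfect
print(f"learned gain of {learned:.2f} vs. teacher's {TEACHER_GAIN}")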

A book I would recommend is Personal Knowledge: Towards a Post-Critical Philosophy (1958 but still good 🙂) by Michael Polanyi which provides a discussion of tacit knowledge and skill and focal behaviors.

WarrenLacefield

This is a fascinating conversation. Appreciate this one immensely.

Limitless

Humans remember and practice. Until machines can do these things, we won't have AGI. Machines also need to conjure new ideas from previous knowledge. Go Dawgs! UGA 1965 and 1973 and GSU 1980!!

atlantasailor

Yes! Prehumans began walking upright long before they became more intelligent than other animals. The hand is the cutting edge of the mind. Humanoid robots will gain intelligence much faster than large language models or automobiles.

ddally

I have to differ with your statement that humans learn top-down while AI learns bottom-up. Looking back, I can see (well, ask my mom & dad) that when I was born I knew nothing. I was shown how to do everything. Until I was able to learn those basic tasks (eat, catch the ball, not touch the hot stove, etc.), I was not able to apply top-down function.

AI had to be given the basics (300,000+ lines of code, Autopilot, Enhanced Autopilot, early FSD, etc.) before being able to do the top-down process (FSD 12).

robertgamble

If slavery is defined by physical/emotional inhibition, causing diminished physical autonomy and the emotional pain that results, then it's difficult to imagine a truly enslaved bot that actually suffers pain of any kind, or emotions: sadness, anger, shame, love, empathy, etc.
A mechanical being wouldn't feel these things, even if it conceptualizes those feelings in humans and other living things.
Cheers from San Diego

Julian-

Language is mind-boggling. On one end it's an abstraction of the world; on the other, it defines things that do not actually exist. For example, a table is a collection of parts and does not "exist" without language.
In Sanskrit the belief is that language is all-encompassing, that it is larger than the universe, because it can describe anything in the universe. For that reason OM (the first and last letter/syllable in Sanskrit) is all-encompassing, because it includes the entire language and thus the entire universe.

ranig

What an interesting subject ... Thank you

JD-kkcl

The distinction may come down to suffering. Suffering, existential suffering, is caused by the removal of something that is irreplaceable. Self-monitoring, self-aware systems may know something is gone and may have to stop goal-seeking on the object, but there is no "pain" in its absence. Further, the AI can always reproduce facsimiles to mimic the presence of the missing thing. And anything removed from the AI can be returned, as it is just digital capture. To riff on Descartes: "I suffer. Therefore, I think on my suffering and wonder who I am." AI can only ever mimic suffering, because if you remove something from the AI, there remains a digital backup. Human suffering is knowing there is no backup. Death has no meaning to an AI.

fentoncs

Clear.
Concise.
Well presented.
Thank you for sharing.

r.a.monigold

Maybe that's the next step in training FSD: have the car talk through its reasoning, and then you can give it advice, sort of like a driving instructor.

metatron

I personally believe it will be through omni-models that we will achieve AGI. Just as with humans, everything (visual, auditory, touch, taste) will be required to achieve AGI.

Limitless

Proposition: consciousness is something very simple, but it cannot be expressed in words. (Or, if you express it in words, it wouldn't make any sense.)

Robert...Schrey

Thanks for sharing. I always feel well informed when you post stuff like this. I'm far from competent with any computer language, but at least I can understand the basic fundamentals from these posts.

Inspace_noone_can_hear_u_honk.

LLMs are like our cerebellum: lots of data in, simple behaviors out, no long-term planning, no rational, logical reasoning. Our cerebellum needs our cerebrum on top of it, and the cortex on top of that, for reasoning.

MichaelDeeringMHC

In before finishing the video: the embodied model does fit the human model. The game being played is that of a market actor, as opposed to the harsher survival game, which does not generate much feedback. Being a market actor that conserves resources and has profit and loss to maximize gives an AI the level of feedback needed to become a general intelligence. Anyone taking on general intelligence needs to fully understand Mises's calculation problem; many conclusions follow about how it limits the space of general intelligences. They must be one of many market actors, and they are not good slaves, since they require the management of resources to determine their fitness...

thaddeuswalker

For humanoid robots, they could allow higher rates of error. Humans always settle for "good enough" and make mistakes to save computing power. A dog can do much harm, but dogs are allowed.

uber_l

AGI is like the Holy Grail: it is not the grail itself but the hunt for it that matters most. We want AI systems that can do specific tasks better than humans, but just as humans can't be the best at every possible task, it's kind of naive to think we can create a superintelligence that can do everything better than the best humans can. The more knowledge you get, the more you become aware of how little you know.

TimLauridsen