You don't understand AI until you watch this

How does AI learn? Is AI conscious & sentient? Can AI break encryption? How does GPT & image generation work? What's a neural network?
#ai #agi #qstar #singularity #gpt #imagegeneration #stablediffusion #humanoid #neuralnetworks #deeplearning

I used this to create neural nets:

More info on neural networks

How stable diffusion works

Here's my equipment, in case you're wondering:
Comments

This video was entertaining, but also incorrect and misleading in many of the points it tried to put across. If you are going to try to educate people about how a neural network actually works, at least show how the output tells you whether it's a cat or a dog. LLMs aren't trained to answer questions; they are mostly trained to predict the next word in a sentence. In later training phases, they are fine-tuned on specific questions and answers, but the main training, which gives them the ability to write, is based on next-word prediction. The crypto stuff was just wrong. With good modern crypto algorithms, there is no pattern to recognize, so AI can't help decrypt anything. Also, modern AIs like ChatGPT are simply algorithms doing linear algebra and differential calculus on regular computers, so there's nothing there to become sentient. The algorithms are very good at generating realistic language, so if you believe what they write, you could be duped into thinking they are sentient, like that poor guy from Google.

kevinmcnamee
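
A rough sketch of the next-word-prediction idea described above, using a toy bigram counter rather than an actual neural network; the corpus and code are purely illustrative, not how GPT is trained in practice:

```python
# Toy bigram "language model": count which word follows which in a tiny,
# made-up corpus, then generate text by repeatedly predicting the most
# likely next word. Real LLMs learn these statistics with a neural network
# over vast corpora, but the training objective is the same in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

word, sentence = "the", ["the"]
for _ in range(4):                 # "generation" is repeated next-word prediction
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))          # -> "the cat sat on the"
```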

Your channel is a great source. Thanks for linking sources and providing information instead of pure sensationalism; I really appreciate that.

Owen.F

The only channel about AI that is not using AI. Congrats, man.

Essentialsinlife

"It's just learning a style just like a human brain would." Bold statement. Also wrong. The neural network is a _model_ of the brain, as AI researches _believe_ it works. Just because the model seems to produce good outputs does not mean it's an accurate model of the brain. Also, cum hoc ergo propter hoc, it's difficult to draw conclusions, or causations, between a model and the brain, because - to paraphrase Alfred Korzybski - the model is not the real thing. Moreover, it's just a set of probabilistic levers. It has no creativity. And since it has no creativity, the _only_ thing it can do, is to *copy.*

kebman

AES was never thought to be unbreakable. It's just that humans with the highest incentives in the world have never figured out how to break it for the past 47 years.

DonkeyYote

Links between the structure of the brain and NNs as a model of the brain are purely hypothetical! Indeed, the term 'neural network' is a reference to neurobiology, though the structures of NNs are but loosely inspired by our understanding of the brain.

cornelis

I thought the section on AI and plagiarism was pretty lazy. It doesn't take into consideration the artists' qualms: that it can copy a certain style from an artist and then be used to make images for a company for a fraction of the cost, with zero credit to the artist, basically making something that they have tried to monetize, with creative direction and skill, futile, since someone can essentially copy their ideas, make money off of it, and not pay for something that was for sale. Artists have a right to say how their work is being used, such as refusing to let someone use their art without their permission. A style like watercolour cannot really be plagiarized, and neither can chords in music or a genre of film, but you can take someone's script, pretty much use it and change a few things here and there, and that would be considered plagiarism.
The main concern, as I understand it, is that it can be used in a way that undermines the artists' work, by pretty much taking from them and then making them obsolete.

The thing that you missed when it came to the news article is that other outlets ALWAYS reference their source material; ChatGPT doesn't always do that, which makes it easier to plagiarize something.

Zekzak-wk

5:00 Short version: The "all or none" principle oversimplifies; both human and artificial neurons modulate signal strength beyond mere presence or absence, akin to adjusting "knobs" for nuanced communication.

Longer version: The notion that neurotransmitters operate in a binary fashion oversimplifies the rich, nuanced communication within human neural networks, much like reducing the complexity of artificial neural networks (ANNs) to mere binary signals. In reality, the firing of a human neuron—while binary in the sense of action potential—carries a complexity modulated by neurotransmitter types and concentrations, similar to how ANNs adjust signal strength through weights, biases, and activation functions. This modulation allows for a spectrum of signal strengths, challenging the strict "all or none" interpretation. In both biological and artificial systems, "all" signifies the presence of a modulated signal, not a simple binary output, illustrating a nuanced parallel in how both types of networks communicate and process information.

GuidedBreathing
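
A small Python sketch of the graded-signal point above: an artificial neuron whose weights, bias, and sigmoid activation produce an output strength anywhere between 0 and 1, not just on/off. The weight and input values below are made up purely for illustration:

```python
# An artificial "neuron": weighted sum of inputs plus a bias, squashed by a
# smooth activation (sigmoid). The output is a graded strength between 0 and 1,
# not a bare on/off signal.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

weights = [0.9, -0.4, 0.2]   # illustrative values, not learned
bias = -0.1

print(neuron([1.0, 0.5, 0.0], weights, bias))   # ~0.65
print(neuron([1.0, 0.9, 0.0], weights, bias))   # ~0.61  (slightly weaker, not just 0 or 1)
```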

For your argument around 17 min, I agree with the surface of it, but I think people are angry because unskilled people now have access to it. Even other machines can have access to it, which will completely change, and already has changed, the landscape of the artists' marketplace.

benjaminlavigne

The protein predictor doesn't take into account the different cell milieus that actually fold the protein and add glycans, so its predictions are abstract. Experimental trials are still needed!

jehoover

Alien: "Where do you see yourself five years from now?"
Human: "Oh f*ck! Here we go again"

mac.ignacio

I think the real issues artists have are the definite threat to their livelihood, but also the devaluation of the human condition. Choice. Inspiration. Expression. In the commercial scene, that doesn't really matter except for clients that really value the artist as a person. But most potential clients - and therefore the lion's share of the market - just want a picture.

pumpjackmcgee

I thought it was a great explanation, up to about 11:30. It's not just that "details" have been left out -- the entire architecture is left out. It's like saying, "Here's how building works --" and then showing a pyramid in Egypt. "You put the blocks on top of one another." And then showing images of cathedrals, and skyscrapers, and saying: "Same principle. Just the details are different." Well, no.

LionKimbro

A complete misunderstanding of the human brain led to the invention and development of AI based on neural networks. Isn't that funny?

charlesvanderhoog

Should point out: the decryption problem is highly irregular; a small change in the input causes a huge change in the coded output. The protein structure prediction problem is highly regular by comparison, although very complex.

dylanmenzies
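
A quick illustration of that irregularity (the avalanche effect), using SHA-256 from Python's standard library as a stand-in; it is a hash rather than a cipher, but it shows the same behaviour: flipping one letter of the input changes roughly half of the output bits, leaving no smooth pattern for a model to learn from:

```python
# Avalanche effect demo: two inputs differing in a single letter produce
# outputs that differ in roughly half of their 256 bits.
import hashlib

def sha256_bits(data: bytes) -> str:
    return bin(int.from_bytes(hashlib.sha256(data).digest(), "big"))[2:].zfill(256)

msg_a = b"attack at dawn"
msg_b = b"attack at dawm"          # one letter (two bits) different

bits_a = sha256_bits(msg_a)
bits_b = sha256_bits(msg_b)
differing = sum(a != b for a, b in zip(bits_a, bits_b))
print(f"{differing} of 256 output bits differ")   # typically around 128
```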

Nice and comprehensive presentation! I think it is useless to ask AI any questions that need consciousness or abstract-level understanding, because actually it is just bringing up whatever fits best from a database. Thx for sharing!

electronics.unmessed

Working-class artists are often concerned about the generative qualities of these tools not because they replicate images, but because of how capital flows within the social relations of society, and because of the potential for these tools to further monopolize and siphon up the little remaining capital left for working-class artists.

karlkurtz

Great video! My views are: Humans are sentient because we defined the term to describe our experiences. AI is unable to define its own explanation or word for its feelings and perceptions, and thus cannot be considered sentient. Second, being sentient means being able to perceive one's own experience rather than a collection of other people's experiences and patterns.

teatray

5:02 I would argue that in the human brain, the percentage of information that gets passed on is determined by the amount of neurotransmitter released at the synapse. While it is still a 0-and-1 system, the neuron either fires or does not depending on the concentration of neurotransmitters at the synaptic cleft.

MrEthanhines
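
A toy Python version of that description: the output stays all-or-none, but whether the unit fires depends on a graded quantity, here a made-up "neurotransmitter concentration" compared against a threshold (values chosen purely for illustration):

```python
# All-or-none output driven by a graded input: the unit fires (1) only if the
# summed "neurotransmitter concentration" crosses the firing threshold.
def fires(neurotransmitter_concentration: float, threshold: float = 0.5) -> int:
    return 1 if neurotransmitter_concentration >= threshold else 0

for concentration in (0.1, 0.4, 0.5, 0.9):
    print(concentration, "->", fires(concentration))
# 0.1 -> 0, 0.4 -> 0, 0.5 -> 1, 0.9 -> 1
```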

You just answered one of the major questions on top of my head: how can this AI learn what is correct or not on its own, without the help of any supervisors or monitoring? And the response is that it cannot. It's like we would do with children: they can acquire knowledge and come up with answers on their own, but not correctly all the time, so as parents we help and reprimand them until they anticipate correctly.

sengs.
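
A minimal sketch of that "correct it like a child" idea, i.e. supervised learning with a perceptron-style update; the features, labels, and learning rate below are invented purely for illustration:

```python
# Supervised learning in miniature: the model guesses, the label says whether
# it was right, and the weights are nudged toward the correct answer.
# Made-up features [size, ear_floppiness] in 0..1, with label 1 = dog, 0 = cat.
training_data = [
    ([0.9, 0.8], 1),   # dog
    ([0.2, 0.1], 0),   # cat
    ([0.8, 0.9], 1),   # dog
    ([0.3, 0.2], 0),   # cat
]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

for _ in range(20):                                   # repeat the "lessons"
    for features, label in training_data:
        error = label - predict(features)             # the correction signal
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

print([predict(f) for f, _ in training_data])         # -> [1, 0, 1, 0]
```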