Neural Network is a Ridiculous Name.

Comments
Computer scientists? Using abstractions? Surely not.

gyinagal

AI history actually had lots of research into smarter "neurons", and the cutting-edge neurons got larger and more complex. Then around 2010 people started seriously asking "what if simple neurons, but lots of them and lots of data?", and since then the neurons have gotten simpler (e.g., sigmoid to ReLU) since it lets them power through more training.
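
The sigmoid-to-ReLU shift mentioned above really is a simplification you can see in one line each. A rough sketch for illustration (the function names and comments are mine, not from the video):

```python
import math

def sigmoid(x: float) -> float:
    # Older-style activation: squashes any input into (0, 1), but its
    # slope vanishes for large |x|, which slows down training.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x: float) -> float:
    # The simpler unit that largely replaced it: negatives become 0,
    # positives pass through unchanged. Cheap to compute, and the
    # gradient stays at 1 for all positive inputs.
    return max(0.0, x)
```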

laremere

Simplifying the transformer architecture to just ANNs is a bit too much, I would say 😂

kaustabhchakraborty

Vast oversimplification of Transformers at the end there. It's not merely the size but the encoder-decoder structure and self-attention that make GPT interesting.
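
For anyone curious what "self-attention" refers to here, the core step can be sketched as scaled dot-product attention. This is a toy single-head version in NumPy; the matrix names and shapes are illustrative only, not GPT's actual configuration:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Each row of X is one token's vector. Every token builds a query,
    # key, and value; it then mixes the values of ALL tokens, weighted
    # by how well its query matches their keys.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # scaled dot products
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)           # softmax over tokens
    return w @ V
```

The point of the comment stands: none of this mixing-between-tokens machinery is captured by describing a single neuron.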

MNTCLURS

Guys, it's fine, guys: the human brain does what it does with 20 watts, surely we're about to reach AGI if we just throw another 1,000 watts at it. It's definitely not like we have our architecture wrong; we just haven't thrown enough money at it.

Dogo.R

Maybe neural networks aren't an overly simplified model; maybe the brain is an overly complicated one.

orterves

Long before the AI guys of the '60s/'70s, statisticians called it "ridge regression."

JiveDadson

All the rest of the complexity of a neuron is about self-repair and physically growing these networks, but the "thinking" bits seem to work just as described. All the chemical messaging just builds up a state, whereas with an LLM like ChatGPT the state is set by programmers with its starting prompts (though this could be done with an LLM too).

wormalism

I think it's important to note that these neurons (weights) are static in each trained model... I hope one day to see real-time training AI... or perhaps that would be the last mistake we make?

abstract_duck

Sir, you just explained a perceptron, which is basically a single neural-network node. Artificial NNs were literally inspired by biological NNs: the outputs of several neurons need to sum to enough energy to pass a threshold, which then makes them fire and activate their output node. This part of the two systems is similar. What's different is that which nodes should fire is found through backpropagation in ANNs. I'm not sure how exactly bio NNs rewire, but it's not backprop.

I don't know *too* much about bio NNs, though; they are pretty damn cool, and I suggest watching a video on them that would explain it way better than I could in a comment.
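
The weighted-sum-and-threshold unit this comment describes is small enough to write down in full. A minimal sketch of a classic perceptron; the AND-gate weights below are just one illustrative choice, not anything from the video:

```python
def perceptron(inputs, weights, bias, threshold=0.0):
    # Weighted sum of inputs plus a bias; the unit "fires" (outputs 1)
    # only if the total crosses the threshold, mimicking a neuron's
    # all-or-nothing spike.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > threshold else 0

# With weights [1, 1] and bias -1.5, the unit fires only when
# BOTH inputs are 1 -- i.e., it computes a logical AND.
and_gate = lambda a, b: perceptron([a, b], [1, 1], -1.5)
```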

philipbutler

All math is simplified reality: if all constants are approximations, then of course it's just a simplified version of what we have. They haven't even incorporated quantum entanglement into the algorithm.

dimitrisivak

It's called a perceptron. Also, I hear people call ChatGPT a large language model more often than a neural network.

nguyenkhoa

So, what are the simplifications, for those of us not as knowledgeable?

ming-lunho

Only true for neurons with a ReLU activation function.

TheNinjinx

So what does the brain do differently?

sergeylyakh

Just nope (you are missing the secret sauce). Simpler models do work this way; Transformers do not, though they do use these neurons. Also, this is just one of the many neuron activation functions out there.

colorpalet

It's... a partially neuronally governed algorithmic system, not a neural network. It's "machine learning" in its simplest sense, if you must, but this is not an accurate depiction of its construction.

Damn, is every video on this page "close, and yet so far?"

Guynhistruck

Too much of an oversimplification to lead to those conclusions. It need not be reductive.

Coppermeshman

I am sorry to disappoint you, but the transformers in ChatGPT are much more complicated.

michaelprinc