SINGULARITY AI: Ray Kurzweil Reveals Future Tech Timeline To 2100

- Ray Kurzweil has made a name for himself with startlingly accurate predictions about the exponential growth of technology, artificial intelligence, and the future of tech in general. Here we explore Kurzweil's concept of the exponential 'Law of Accelerating Returns,' and see how this law impacts the timeline of the singularity.

AI news timestamps:
0:00 Singularity intro
3:57 AI computing cost
6:27 Ray Kurzweil future tech

#ai #future #tech
Comments

Just the fact that we are arguing over dates shows it's time to prepare.

dylan_curious

We really live in very interesting times

ArtificialIntelligenceSapien

Our team works on Tammy AI, building the future of the video experience with AI, and things are moving crazy fast these days. In just 3 short months, the cost of operating AI fell by a factor of 10. We give it 3 years for AI to be as cheap as electricity.
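For scale, here is a quick sketch of what that claim implies if the quoted rate simply continued, which is a big assumption — cost curves rarely stay that steep. A 10x drop every quarter compounds to a trillion-fold drop over 3 years:

```python
# Compound a cost that falls 10x per quarter (the commenter's figure)
# over 3 years. Purely illustrative; assumes the rate stays constant.
start_cost = 1.0       # normalized starting cost per unit of AI work
drop_per_quarter = 10  # cost divides by this every 3 months
quarters = 4 * 3       # 3 years = 12 quarters

final_cost = start_cost / drop_per_quarter ** quarters
print(final_cost)      # 1e-12, i.e. a trillionth of the starting cost
```

Whether "as cheap as electricity" lands on that curve depends entirely on where the starting cost sits, which the comment doesn't specify.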

jmisfdq

Pretty much every single thing mentioned here is inevitable. It's nothing short of astounding: exciting and terrifying at the same time.

xalspaero

What a wonderful time to be alive to witness the birth of AI and the opportunities that will no doubt benefit all of humanity. I don't hear much about the profound impact it will have on governmental institutions, the economy and the military. I hope that humanity can survive such instability.

puppykibble

The development of AI will not take place in a vacuum. If AI progresses as Kurzweil forecasts, then there are likely going to be colossal societal upheavals as millions and then billions of people are thrown out of work. Nation states will be unable to adapt to this kind of exponential change, and once citizens realize that they are going to be without future prospects, income and finally food, there may be a vast uprising against governments, corporations and AI before it gets to the AGI stage. There may be mass citizen campaigns to take down the electrical grid that feeds AI.

If we think that these same corporations and governments are going to give us a Universal Basic Income, we are probably dreaming. But supposing that is possible, how would that work? What would the transition look like? And once AGI is in place, why would UBI continue? What use would 9 billion humans be to a bunch of supercomputers?

People are going to work this stuff out and see that their own future is evaporating. There will be resistance, big time, once the changes start coming, and they are coming in the next 3-10 years. I am not a prophet, and all the above is speculation, but I am certain that this is not going to be the smooth, inevitable ride that Kurzweil seems to contemplate.

Alpwalker-xjdx

In the historical sense, the Singularity can probably be traced back four or five centuries, to the events that deflected the modern world's path away from the rather steady state of civilizations that had seen little technological change for millennia.

Certainly it goes back at least to the early 19th century.

The point where it really starts to go vertical is where AI starts to advance AI in a feedback loop, and that's now, 2022 or 2023. Things may not seem very different yet, but we have already passed the event horizon.

my

The best thing a sentient AI could do for humanity is to prevent us from killing each other, not by force, but by disrupting supply chains, communications, and financial transactions that enable the military machines throughout the world.

edh

"Sarif was right about one thing. It's in our nature to want to rise above our limits. Think about it. We were cold, so we harnessed fire. We were weak, so we invented tools. Every time we met an obstacle, we used creativity and ingenuity to overcome it. The cycle is inevitable... but will the outcome always be good? I guess that will depend on how we approach it. These past few months, I was challenged many times, but more often than not, didn't I try to keep morality in mind, knowing that my actions didn't have to harm others? Time and time again, didn't I resist the urge to abuse power and resources simply to achieve my goals more swiftly? In the past, we've had to compensate for weaknesses, finding quick solutions that only benefit a few. But what if we never need to feel weak or morally conflicted again? What if the path Sarif wants us to take enables us to hold on to higher values with more stability? One thing is obvious. For the first time in history, we have a chance to steal fire from the gods. To turn away from it now - to stop pursuing a future in which technology and biology combine, leading to the promise of a Singularity - would mean to deny the very essence of who we are. No doubt the road to get there will be bumpy, hurting some people along the way. But won't achieving the dream be worth it? We can become the gods we've always been striving to be. We might as well get good at it." — Adam Jensen, Deus Ex: Human Revolution

JesusChristDenton_

The beginning of Marc Andrejevic's book Automated Media has an interesting take on Kurzweil and is certainly worth reading in any case.

russellmason

In this rapidly evolving world, any predictions we make from this moment onwards are bound to be rendered obsolete, as the pace of change outruns our ability to foresee the future accurately. The dynamics of our society, technology, and countless other factors are continually shifting, making it ever harder to stay ahead of the curve with our forecasts. We must therefore accept that our predictions, no matter how well-informed or insightful they may seem at the time, are destined to fall behind the ever-advancing reality that lies ahead.

richardphillipslivemusic

Even with an affordable device equivalent to the computational power of all human brains combined, people will probably still mostly watch videos of cats and dogs.

MonkeyRecords

thank you for sharing your knowledge about AI

power-of-ai

Maybe I'm paranoid, but the narrator sounded like AI, and the way the video abruptly ends makes me think an AI was given a script, the voice was created in ElevenLabs, and the creator of the channel never wrote an ending, so it just stops. Has anyone else considered this?

JBDuncan

Ray Kurzweil has always been a 50 year old man.

glenneric

Let's say the computer revolution has been progressing at an exponential rate, while we human developers have not and are still working at about the same pace, even though progress has doubled each year.
When AGI takes over and starts developing itself, it will double its progress in half the time on each cycle, because it will be twice as capable going into each cycle. Put the other way around: measured against its own improved capabilities, a system that didn't speed up would look twice as slow after every doubling.

An AGI will have exponential growth with an acceleration factor.

Linear growth: 1, 2, 3, 4, 5
Exponential growth: 1, 2, 4, 8, 16
Exponential growth compounded: 1, 4, 64, 16 384, 1 073 741 824

Exponential growth to the power of 2 makes the progress curve of plain exponential growth lie flat as if it were linear.
Our brains can't grasp exponential growth, and when it comes to exponential growth squared, there's no point in even trying.
That's why I don't think we can predict what's going to happen when it finally takes place.

What I am trying to say is that if a system gets twice as efficient, it completes the next step in half the time the previous step took. It's not only the amount of progress that grows exponentially but also the velocity at which it can be made.
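The three sequences can be generated with a short sketch. The "compounded" recurrence used here, x → 4·x², is my own reading of the doubling-of-the-doubling idea (an assumption, not a formula from the comment):

```python
n = 5  # number of cycles to simulate

linear = [k + 1 for k in range(n)]
exponential = [2 ** k for k in range(n)]

# Compounded exponential growth: each cycle the progress so far is
# squared and the system is additionally twice as capable, i.e.
# x_next = 4 * x**2 (one assumed reading of "exponential squared").
compounded = [1]
for _ in range(n - 1):
    compounded.append(4 * compounded[-1] ** 2)

print(linear)       # [1, 2, 3, 4, 5]
print(exponential)  # [1, 2, 4, 8, 16]
print(compounded)   # [1, 4, 64, 16384, 1073741824]
```

Python's arbitrary-precision integers make it easy to watch the compounded sequence escape any intuitive scale within a handful of cycles.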

thomasschon

As a follower of Kurzweil and an AI power user, I have concluded he's likely more than 90% correct. It's just a matter of when, not if. He could be off by a few years here or there on some predictions, but his logic is very sound.

The three potential impediments I see are:

1. Finite physical resources
2. Hindered capital investment
3. Luddite thinking, fighting AI

These will only partly delay the future he envisions, and only at times, in small waves and slow periods.

PhilAndersonOutside

We are currently living through the singularity. It's been less than a year, and LLMs have already shown exponential improvement.

mindoftheoldone

Thank you for the interesting video.

baldassarealessi