What Will The World Look Like After AGI?

Imagine we are witnessing a singularity event in our lifetime. We create something that is infinitely more intelligent than all of humanity combined. What would the world look like? Is this humanity's final invention? Are we causing our own extinction, or are we building utopia? We look at both cases and what's in between.

Join my channel membership to support my work:

Further sources:
Comments

“The view keeps getting better the closer you get to the edge of the cliff.”

- Eliezer

Vince_F

For the vast majority of us living meager lives of quiet desperation, a major change, whatever it is, is unlikely to be worse than what we already experience. ASI can't come fast enough.

JJ-siqh

“ 'Ooh, ah, ’ that’s how it always starts. But then later there’s running and screaming.” - Jurassic Park, The Lost World

cmralph...

U.S. life expectancy has been declining over the last 30 years.
Stress, drugs, suicides, murders...

Are we sure that new technologies help humanity?
We thought they would, just as we thought social media would help the world.

I don't see a happy world where humans lack challenge, are outperformed at every task, and simply share an identical universal income.

AxelBitcoin

Your videos keep getting better and better!! Keep it up bro!

Andrewdeitsch

It's no longer about whether we'll witness the singularity in our lifetime, but about whether it happens in 5 years or in 15.

AndyRoidEU

I suspect AGI would be rather god-like. Reminds me of something Voltaire reputedly said over 300 years ago, "In the beginning god created mankind in his own image.... then mankind reciprocated." He meant something else obviously but it's ironic nonetheless.

chrissscottt

The thing that bothers me about the extinction scenario is that it isn't necessarily a bad thing. The version of humankind we are living in right now might very well be the final version of humankind evolving by itself. Look at the advances not only in AI but in brain-machine interfaces, neural networks, biological computers, brain emulation, etc. AI might be able to teach us more about ourselves on a fundamental quantum level than we could achieve alone. We may very well begin to implement AI into ourselves and evolve alongside it as time goes by. At the very least, that is one way we go extinct without necessarily being wiped out completely. It might actually be better to use this type of technology to transform the human paradigm as time and understanding go by, rather than turning it into our next enemy through fearful scapegoating.

bruhager

After ASI takes all the work from us, what is left is life in all its colors.

mohammedaslam

Finally, having actually used ChatGPT to ask questions about starting a business, I can definitely say I'm more on the positive side of how things will unfold. I could be wrong, but I definitely hope I'm not. Maybe this will be the thing that ends extreme capitalism.

thefirsttrillionaire

The problem I have with most of the discussion about AGI (and by extension ASI) is that it always assumes an AGI will have its own drives and motivations that might differ from humanity's, but in reality it can't, unless it is created to act in a self-interested way. I think this is a kind of anthropomorphism, where we basically assume that something that is really intelligent must be self-interested like us, but the reality is that it will be a *tool*, a tool that can be given specific goals or tasks to work on.
In my opinion, the big threat is not an autonomous AGI running amok but the enormous power this will give whoever *controls* an AGI or ASI, as they will be able to outsmart the rest of humanity combined. Once they get that power, there will be basically no way to stop them or take it away from them, because the AGI/ASI will be able to anticipate every human threat that could be posed. It will be the most powerful tool *and weapon* that humanity has ever invented. It will be able to control entire populations with just the right message at just the right time, to assuage or create fear, whatever is needed for whoever controls it to foil any threat and increase their power further and further, until humanity is effectively subjugated, and probably won't even know it.

NottMacRuairi

Great Video with some interesting points I didn't think of yet. And the AOT reference was brilliant 😄

marmeladenkuh

One of the AGI/ASI problems that keeps me up at night is how will the classic "neighborly dispute" be resolved. Conflict of interest. Say my neighbor wants to play loud music and it drives me nuts, but he's driven nuts by being disallowed from doing this - what's the right answer? Is one of us forced to move? To where? Why one of us and not the other? Things like this stand directly in the way of anything we could consider utopia.

gubzs

I have been waiting for the singularity for decades; it's almost here. ChatGPT is the infant.

DidNotReadInstructions

My solution to the Fermi paradox is this:
1. We call ourselves an intelligent species.
2. We destroy our own planet in many ways: not only climate change, but mass extinction, pollution, sea level rise, scarcity of phosphorus and other rare materials, and so on.
3. Maybe an AI does the same, but even faster? That leads to the destruction of everything, even the technology itself.

gonzogeier

We will just merge with AI; it'll be a smooth and safe process.

markmuller

In regards to the topic of AI singularity, it's essential that we, as humans, don't make the mistake of programming artificial intelligence to cater solely to our own needs and desires. If an AI were to become human-like, it might view us as inferior beings, much like how we often perceive other life forms. This would mean that the AI would have no reason to show compassion or consideration for us, potentially leading to catastrophic consequences. In essence, our goal should be to create a benevolent, god-like entity that transcends our baser instincts and operates for the greater good of all sentient beings.

admuckel

My mind is sore after thinking about all the possibilities, and the fact that I'm 18 means I might actually see it unfold.

paddaboi_

I believe that, in your use of Rome, you failed to recognize that Seneca was reflecting on what the vast majority of people with an opportunity for leisure seemingly chose to do. They did not choose "meaningful" pursuits of learning or challenge; they chose luxury and what we'd call decadence. It's safe to say that most humans will aspire toward that baseline, because we're still the same animals now as then. There are a very few intellectuals and philosophers, but most people just want to wake up and have a nice relaxing day.

aludrenknight

I think your video reinforces my feeling that we have bitten off MUCH more than we can chew, and we may CHOKE on it. So many things need to happen to allow this inevitable transition to take place, and ALL of them have been incredibly difficult to implement by themselves; getting them all done at the same time, on the same issue, is virtually impossible. There is just no way to stop it now, so we are passengers on a runaway train, destination unknown.

dondecaire