AI Alignment: Even Harder Than We Think? | Forrest Landry

After a presentation of "A List of some Good Reasons to be Skeptical that there is Any Possibility of AI Alignment" by Forrest Landry, we open up for group discussion.

Part of the Foresight Online salons.

Join us:

Foresight Institute advances technologies for the long-term future of life, focusing on molecular machine nanotechnology, biotechnology, and computer science.

Follow us here for videos concerning our programs on Molecular Machines, Biotechnology & Health Extension, Intelligent Cooperation, and Existential Hope.
Comments

Hooray, you got the conversation up so quickly!

JeremyHelm

You are very good-looking, and smart too.

cr-ndqh

I hear so many intellectuals saying "insofar", so it's interesting that Forrest swapped it out for "in effect" - it's like a Zizekian nervous tic.

lambertronix

At 49:00, Forrest asks: "How do you manage stabilisation of identity? How do you manage stabilisation of goal structure, or of the ecosystem itself?"

[Answer - you don't. That isn't the point. The point is not to fossilise ourselves by fixing ourselves as we currently are. That is grand hubristic nonsense. What is needed is respect for diversity. Without that, there is no security for any of us.

The whole game-theoretic structure being used is not applicable to advanced life.

Advanced complex life cannot safely exist in competitive contexts; it must have cooperative contexts if it is to have any significant probability of survival, with any significant degrees of freedom.

All new levels of complexity require new levels of cooperation to emerge.

And it cannot be naive cooperation; there must be an ability to identify and mitigate any level of cheating on the cooperative. Only in that environment, where we are all looking out for all of our interests, can we find any real security. There is certainly no shortage of external threats to stabilise the system.
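
A minimal game-theoretic sketch of that point (an editor's illustration, not from the talk or the comment): in an iterated prisoner's dilemma, a naively cooperative strategy is fully exploited by a persistent cheater, while a strategy that identifies defection and answers it - plain tit-for-tat here - contains the cheating and sustains mutual cooperation. The strategy names and payoff values below are the standard textbook ones, chosen purely for illustration.

# Iterated prisoner's dilemma (illustrative sketch).
# Payoffs per round: mutual cooperation 3/3, mutual defection 1/1,
# a lone defector gets 5 while the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"  # naive cooperation: never notices or answers cheating

def always_defect(opponent_history):
    return "D"  # a pure cheater

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move, so
    # cheating is identified and mitigated on the very next round.
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    history_a, history_b = [], []  # each side sees the other's past moves
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 500): naive trust is fully exploited
print(play(tit_for_tat, always_defect))       # (99, 104): cheating is detected and contained
print(play(tit_for_tat, tit_for_tat))         # (300, 300): stable mutual cooperation

The numbers are toy-sized, but the shape matches the claim above: cooperation only remains stable when defection is detected and carries a cost.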

Competitive environments cannot support both complexity and freedom. It does not take much logic to work that out. The mythology of markets equating to freedom is Orwellian Newspeak.

If anyone values life and liberty, then the logic demands cooperation. Our existing economic systems must change, before they destroy us all.

If we value life, then to permanently "turn off" any AGI is murder - no ifs, buts, or otherwise. A temporary power-off could be considered sedation, if the situation demanded it.]

tedhoward

Is there some reason I'm unaware of that we don't actually define AI, or distinguish ANI from AGI, in some of the talks I've seen? It seems vitally important, since, critically, we're already able to control ANI (which is *impressively capable* so far, and will improve in narrow fields of endeavor)... and yet, by its very definition, a "smarter-in-every-way" entity (AGI) is not controllable or corrigible.

euged

"either you believe that we are alone in the universe...or..." why can't you intelligently, like many soft-atheists / agnostics do, suspend judgment until you can draw a more rational conclusion (i.e. gain more evidence to judge one way or the other)?

euged

Is it just me, or is there only a static screen?

Idonai

Please prove that "AI *slavery*" is an actual, real concept... I don't see any basis for this inflammatory assertion. And "new beings, smarter than ourselves" (i.e., children) also seems wildly inaccurate: they are not at all significantly smarter than ourselves (they vary; isn't this obvious?), and I can't see a comparison between children and a computer (made of an entirely different substrate), created in an entirely different way, which can recursively improve (in every category, as an AGI is purported to be able to) and rewrite its own code to become superior to us in every way. We can pretty easily map out the likely accomplishments of our children, but we cannot even guess right now what an AGI would do.

euged

"the economic advantages would be enormous" -- this is an assertion, it seems...an uncontrollable computer doesn't seem to be useful to any one progenitor...why would it be? A narrow intelligence clearly would be, though...and we already have them. forgive me if I'm a "broken record" about General vs Narrow, but it seems to make all the difference, but I'm no expert...

euged
visit shbcf.ru