CAN MACHINES REPLACE US? (AI vs Humanity)

Maria Santacaterina, with her background in the humanities, brings a critical perspective on the current state and future implications of AI technology, its impact on society, and the nature of human intelligence and creativity. She emphasizes that despite technological advancements, AI lacks fundamental human traits such as consciousness, empathy, intuition, and the ability to engage in genuine creative processes. Maria argues that AI, at its core, processes data but does not have the capability to understand or generate new, intrinsic meaning or ideas as humans do.

Throughout the conversation, Maria highlights her concern about the overreliance on AI in critical sectors such as healthcare, the justice system, and business. She stresses that while AI can serve as a tool, it should not replace human judgment and decision-making. Maria points out that AI systems often operate on past data, which may lead to outdated or incorrect decisions if not carefully managed.

The discussion also touches upon the concept of "adaptive resilience", which Maria describes in her book. She explains adaptive resilience as the capacity for individuals and enterprises to evolve and thrive amidst challenges by leveraging technology responsibly, without undermining human values and capabilities.

A significant portion of the conversation focuses on ethical considerations surrounding AI. Tim and Maria agree that there's a pressing need for strong governance and ethical frameworks to guide AI development and deployment. They discuss how AI, without proper ethical considerations, risks exacerbating issues like privacy invasion, misinformation, and unintended discrimination.

Maria is skeptical about claims of achieving Artificial General Intelligence (AGI) or a technological singularity where machines surpass human intelligence in all aspects. She argues that such scenarios neglect the complex, dynamic nature of human intelligence and consciousness, which cannot be fully replicated or replaced by machines.

Tim and Maria discuss the importance of keeping human agency and creativity at the forefront of technology development. Maria asserts that efforts to automate or standardize complex human actions and decisions are misguided and could lead to dehumanizing outcomes. They both advocate for using AI as an aid to enhance human capabilities rather than a substitute.

In closing, Maria encourages a balanced approach to AI adoption, urging stakeholders to prioritize human well-being, ethical standards, and societal benefit above mere technological advancement. The conversation ends with Maria pointing people to her book for more in-depth analysis and thoughts on the future interaction between humans and technology.

TOC
00:00:00 - Intro to Book
00:03:23 - What Life Is
00:10:10 - Agency
00:18:04 - Tech and Society
00:21:51 - System 1 and 2
00:22:59 - We Are Being Pigeonholed
00:30:22 - Agency vs Autonomy
00:36:37 - Explanations
00:40:24 - AI Reductionism
00:49:50 - How Are Humans Intelligent
01:00:22 - Semantics
01:01:53 - Emotive AI and Pavlovian Dogs
01:04:05 - Technology, Social Media and Organisation
01:18:34 - Systems Are Not That Automated
01:19:33 - Hiring
01:22:34 - Subjectivity in Orgs
01:32:28 - The AGI Delusion
01:45:37 - GPT-laziness Syndrome
01:54:58 - Diversity Preservation
01:58:24 - Ethics
02:11:43 - Moral Realism
02:16:17 - Utopia
02:18:02 - Reciprocity
02:20:52 - Tyranny of Categorisation

Interviewer: Dr. Tim Scarfe
Comments

I only offer this next comment because Maria holds herself up as an authority and academic, publicising a book on this topic and outlook. Many of the world's greatest ills come about through people speaking with great conviction about things they do not understand, and about which they often hold a personal bias they wish to propagate.
I expect that she'll be very well received among the religious, the superstitious and likely even those who wish to undermine science.

kyneticist

Love MLST for bringing in different views and showing us that being an academic, author, or expert doesn't always mean someone has any idea what they're talking about.

pythagoran

The guest shows too much confidence for my liking. That we currently don't know something is not an answer to the question of whether something is possible or not. What happened to intellectual humility?

ollantaymedina

First time I ever had to turn off an MLST episode. Just a series of adamant blanket statements, none of which shows even a hint of understanding of what she is talking about. This person is very confused.

richardsantomauro

This might all be going over my head, but honestly I don't feel like she's really arguing for anything. Arguing requires an argument, and hers often seems to boil down to "I don't think x can be done, things are complicated." It's more a series of statements, plus her belief that a series of things not being explainable now means they'll never be explainable, or that somehow they are driven by a wholly different kind of causality. It feels to me like some kind of residual dualism. I'm probably not being fair here, because I could only make it 20 minutes in; it's pretty painful.

jakewu

She has a very unscientific view of the world.

andrew

The toughest listen from MLST so far. I wish I could say it is challenging in a constructive way, but 50 minutes in, I have not heard a single compelling argument from her.

osalbaro

May I answer your question? AI doesn't need to be anything like us to replace us. There's no law of physics that declares that only humans can be a dominant species.

kyneticist

I live in the world of management consulting and there are elements of it which are closer to motivational speaking than anything else. Santacaterina is best seen as a motivational speaker and has neither the mindset nor the incentives of a scientist. There will be lots of business people who like this kind of thing, but I agree with the comments that it's not a good fit for MLST.

dcreelman

Tim, I know you're trying to be charitable and present an open perspective to viewpoints that differ from yours and your audience's, but... You know as well as the rest of us that a lot of what she said here she pulled out of her arse. Sure, there were some valid concerns mixed in, but her total ignorance of the main subject matter, combined with her borderline woo-woo dismissal of the scientific enterprise, makes for tedious listening when there's practically zero pushback and minimal attempt to correct or educate her. I lost count of how many times you replied with 'I agree', seemingly just to get her to stop talking for a bit.

It's basically a running joke at this point that you'll find some way to complain about ChatGPT, but even here it sounded like you were overstating your criticisms in order to win her favour.

Perhaps I'm being a little harsh right now, but the fact is you have one of the more respectable podcasts in this field. It's great that you have different viewpoints on; I truly welcome more of that, but you need to be more willing to challenge people when they talk as much shite as this woman did.

mutantdog.

A car doesn't run like a cheetah, a jet doesn't fly like a bird, and AI won't have agency in the exact terms that a human does, but it will have agency. (To paraphrase Feynman and a hundred other people who have said this.)

stuartmarsh

I am really sorry, but I have to voice my opinion on this episode. She is exceedingly good at not answering the question, diverging and going off on her own tangents, throwing around random (often misplaced) words, frequently just to end up at the argument "it's inanimate, therefore this or that", and throwing in a few references here and there to philosophers and Latin. I don't think she even took in and processed the questions before starting a counter-argument 😅 I also noted quite a few statements that were simply factually wrong 😅 It seems like she was just there to have her monologues 😅 I didn't end up watching the whole thing, though; I lost patience and interest 🧐

Sorry for the hiccup, I don’t usually complain 🤪

CasperBHansen

On a human, emotional level I want to agree with some of the things she says... but I feel it's like arguing with someone who is certain they'd be able to tell if they were in a simulation, against someone who literally knows this is all but impossible. The latter will never make the former realize it, because such a state of affairs just can't be accepted as true and self-evident.

This was very much wishing and very little knowing. Interesting how little imagination she has for someone who is so adamant that AI can't create or do things without being commanded to (which, for now, is kind of true, but may not matter in terms of actual results soon...)

Please DO ignore the bozo downcomment saying he doesn't want such content. We need to know all sides (well, within *some* limits... but she is far from loonie-town, and clearly not stupid, so - no problemo!)

Irresistance

I would like to know Santacaterina's views on posthumanism in relation to AI, and how posthumanism contrasts with certain beliefs and views within humanism that emerged prominently during the Renaissance. Specifically, the belief that humans and human experiences cannot be modeled, due to some kind of human exceptionalism, may be anthropocentric, no? To me, Santacaterina's argument carries this exceptionalism from humanism, expanded to life versus machine, which is one of the barriers posthumanists also challenge (see Haraway's A Cyborg Manifesto).

I love this conversation so far! Just my initial thoughts. We need more STEM and humanities cross-disciplinary discussions because there is a LOT to discuss!

MixedRealityMusician

Not to be a negative nancy, but this is exactly the type of content I don’t want from this channel

ajohny

Tim really has the patience of a saint.

amesoeurs

Machine learning goes the opposite way from human learning. Ours starts with sensorial interpretations and ends in abstract symbols of socially-sustaining value. AI goes first to our output, that network of symbols we made, for its "internal world model", and IF it is to evolve into something like sentience, then it must eventually interpret the world sensorially. I don't know if Nvidia's Omniverse is a first step toward this in-universe learning, which might lead to "sensorial interpretation". Now, I don't think a camera is an eye, and I don't think haptic technology directly translates to the sense of touch... but something different might happen from here. Overall I don't think sentience can arise from classical computing at all (Gödel vs Hilbert), but that doesn't mean there won't be different types of computing in the future. Quantum computing's basis for computation is far less arbitrary than classical computing's.

BinaryDood

13:00 "You will never be able to"
is something we have heard before, even from people who have actually studied the subject matter and not something else entirely.
To me, that speaks of hubris just as much as those who think we now need brain implants so we'll still be able to talk to our future machine gods.

kinngrimm

She's wrong practically every time she uses the word "impossible". However, I think she is right in everything she says about the social dangers of misusing AI. This interview is great anyway. She is quite intelligent and very articulate.

CodexPermutatio

You should have explained to her the difference between AlphaGo and AlphaZero. I have to say it was frustrating to watch.

TJArriaga