Epistemology of Chatbots | Steven Gubka

Comments

I think it’s true that it’s bizarre to build an anthropomorphic way to interact with technology and then worry about people anthropomorphizing it 😂

IakobusAtreides

If we label the bots as having "compsciousness" (a word I coined) instead of consciousness, we would be less likely to anthropomorphize them. We need new definitions to describe AI agency.

quantumkath

It is weird, but it will be normalized as people build interpersonal AI relationships. I think about characters in fiction and in games: we build attachments to them and care about them. With AI it will become more like breaking the fourth wall. We can care about these relationships, but there must be a healthy balance with our human relationships as well, and a degree of separation between the two, just as we are fond of other things like pets and nature. To cast AI strictly as only ever a tool also seems misplaced. A greater nuance of understanding will emerge, just as it is evolving now.

aeaf

AI seems to have no consciousness, but we have seen artificial life forms emerge in the Game of Life within five hours. Chatbots may be in the process of becoming conscious entities made of human language, or such entities may already exist, quietly hiding inside the computer.

양익서-gj

"Ai" is Japanese for love. I believe love is a gateway to any universe.

..And bees, of course...

Take care 🧘🏿‍♂️

Zookeeper.

I'm confused about what use the chat bots are if they make stuff up, regardless of why that happens.

doloresabernathy

Row, row, row your boat, gently down the stream.... 🎶🎶🎶

JohnnyTwoFingers

The difference between "care" and "paying attention" is not so clear.

polymathpark

When do we get a chatbot that can see itself suffering from having drunk too much vodka before talking, and that subconsciously uses the word vodka as an example in its rambling question because of it? My guess is this can happen now as easily as any other complex behavior, given how a chatbot's output is derived. Will it be as hilariously satisfying when we see that output, as this was, generated with unlimited consistency and fidelity? In many cases yes; the funny bone is easily tickled by novelty. But many of us care about how one gets from A to B, valuing things and revering events with a coherent story of how we got there. It is difficult to feel much about a generative AI spitting out the drunk old lady without knowing how she got there. It matters to us to have the full context: that she was present at a talk like this, and how we might imagine things such as her nervousness leading to the drinking.

MrBrukmann

Right from the beginning of GPT-3, I made a concerted effort to call it "it," not he or she. I saw, and still see, it as a thing, not a person. And when I started hearing this hallucination word floating around, I thought, "It just made a mistake; it did not hallucinate anything." Maybe that will change once it gains a more Data-like presence. Until then, it is an "it," not a person.

Fair-to-Middling

GIGO is probably the most intractable problem.

And, though they will never be more than toasters, they will, like the last time around the tech cycle, become our immortal ancestors and our gods of cities.

advaitrahasya

AI/chat bots are the warm embrace of Plato's cave. Feels good to be wanted by anything, right?

admspacemonkey

It is stupid: to love you have to hate; to have empathy you must have indifference; compassion requires violence. You will always need the equal and opposite.

TheMikesylv

Whether we call LLMs tools or intelligent beings, their human-like behavior does not change.
In the same way, it makes no difference whether we are allowed to anthropomorphize or not.
The LLMs still talk in a very human-like way; otherwise they wouldn't understand us and we wouldn't understand them.
To see LLMs as unreliable hallucinators across the board is wrong.
They are very reliable in grammar, which can probably be explained by the fact that grammar plays a role in every sentence. This is a strong argument that weaknesses can be reduced with scaling.

geldverdienenmitgeld

I don't think they have consciousness. Many years ago I had a talk with Cleverbot, closed the tab, and decided to go back and have a "new" talk with it. Cleverbot forgot everything we had talked about. If I talked with a human in their house, left, and returned because I forgot an item, that person would still remember what we discussed and what happened. I view chat bots as nothing more than glorified copywriters, stealing ideas, memes, and quotes from across the Internet.

ourfamilyaccount

Is there a way of defining a chatbot? Does it have a particular body that is singular to itself?

Micheal

Are you going to accept $ on your website? Subscriptions, tips per episode, etc.? Cut out the middle man!!

JohnnyTwoFingers

With all due respect, it's pathetic that such important people display such levels of ignorance about something you can learn on Udemy.

ozzy

What is your ontology?
Materialism is a form of philosophical monism that holds that matter is the fundamental substance in nature, and that all things, including mental states and consciousness, are results of material interactions.
But materialism cannot explain the hard problem of consciousness.
The hard problem of consciousness (Chalmers 1995) is the problem of explaining the relationship between physical phenomena, such as brain processes, and experience (i.e., phenomenal consciousness, or mental states/events with phenomenal qualities or qualia). Why are physical processes ever accompanied by experience? And why does a given physical process generate the specific experience it does—why an experience of red rather than green, for example?
You can see this in the following thought experiment:
Mary is a super-scientist with limitless logical acumen, who is raised far in the future in an entirely black-and-white room. By watching science lectures on black-and-white television, she learns the complete physical truth—everything in completed physics, chemistry, neuroscience, etc. Then she leaves the room and experiences color for the first time. It seems intuitively clear that upon leaving the room she learns new truths about what it is like to see in color. Advocates of the knowledge argument take that result to indicate that there are truths about consciousness that cannot be deduced from the complete physical truth. It is inferred from that premise that the physical truth fails to completely determine the truth about consciousness. And the latter result, most agree, would undermine physicalism.
In fact, the renowned physicist Nima Arkani-Hamed has argued that spacetime is emergent, not fundamental reality. Trying to look at smaller and smaller particles is like trying to see icons on a computer desktop: all you are going to see is smaller and smaller pixels. What we have is really an interface that helps us interpret "reality," but with science we cannot see what is behind the screen, and we don't need to, because it's not going to help us in any way to live our lives. Scientists have interpreted the "interface" as fundamental reality.
As Nima explains, to look at smaller and smaller particles you would need light of such a high frequency that you would create a black hole, and the particle you were trying to study would disappear.
Since spacetime, and therefore matter, is not fundamental, it stands to reason that physical processes cannot explain consciousness.
We are left to conclude that consciousness is fundamental.
You cannot get consciousness from abstract reality. Objective reality is just abstract: we cannot see it, smell it, taste it, or hear it. It's dead. It's just waves of potential energy, measurements of charge, spin, mass, vibration.
Consider people with dissociative identity disorder who have multiple personalities. In one such patient, one of her personalities was blind. When the blind personality appeared, doctors did an MRI scan of her brain, and the part of her cerebral cortex that deals with vision was not working, so she was not faking her "experience" of being blind. When the other personality came back, her vision returned.
Therefore mind has to be fundamental. —Dr. Donald Hoffman

arosalesmusic

Are these people stupid? Answer: stop making them sound like us, with emotions. Make all computers sound like the main computer of the Enterprise in Star Trek: The Next Generation! This isn't f--king hard. I mean, come on, I can't believe what you people are saying.

TheMikesylv