Are chatbots lying to us? This is worse than you think.

🚀 Welcome to the New Era Pathfinders Community! 🌟

Are you feeling overwhelmed by the AI revolution? You're not alone.
But what if you could transform that anxiety into your greatest superpower?

Join us on an exhilarating journey into the future of humanity in the age of AI! 🤖💫

🔥 What is New Era Pathfinders? 🔥

We are a vibrant community of forward-thinkers, innovators, and lifelong learners who are passionate about mastering the AI revolution. From college students to retirees, tech enthusiasts to creative souls - we're all here to navigate this exciting new era together!

🌈 Our Mission 🌈

To empower YOU to thrive in a world transformed by AI. We turn AI anxiety into opportunity, confusion into clarity, and uncertainty into unshakeable confidence.

🧭 The Five-Pillar Pathfinder's Framework 🧭

Our unique approach covers every aspect of life in the AI age:

1. 💻 Become an AI Power-User
Master cutting-edge AI tools and amplify your productivity!

2. 📊 Understand Economic Changes
Navigate the shifting job market with confidence and foresight!

3. 🌿 Back to Basics Lifestyles
Reconnect with your human essence in a digital world!

4. 🧑‍🤝‍🧑 Master People Skills
Enhance the abilities that make us irreplaceably human!

5. 🎯 Radical Alignment
Discover your true purpose in this new era!

🔓 What You'll Unlock 🔓

✅ Weekly Live Webinars: Deep-dive into each pillar with expert guidance
✅ On-Demand Courses: Learn at your own pace, anytime, anywhere
✅ Vibrant Community Forum: Connect, share, and grow with like-minded pathfinders
✅ Exclusive Resources: Cutting-edge tools, frameworks, and insights
✅ Personal Growth: Transform your mindset and skillset for the AI age

🚀 As You Progress 🚀

Unlock even more benefits:
🌟 One-on-One Mentoring Sessions
🌟 Exclusive Masterclasses
🌟 Advanced AI Implementation Strategies

💎 Why Join New Era Pathfinders? 💎

🔹 Expert-Led: Founded by a leading AI thought leader, connected with top researchers and innovators
🔹 Holistic Approach: We don't just teach tech - we prepare you for life in an AI-driven world
🔹 Action-Oriented: Real skills, real strategies, real results
🔹 Community-Driven: Join 300+ members already navigating this new era
🔹 Cutting-Edge Content: Stay ahead of the curve with the latest AI developments and strategies

🔥 Don't just survive the AI revolution - lead it! 🔥

Comments

Your choice of topics, David, continually amazes me. It's like you enter my dreams and ascertain what's bothering me that I'm not consciously aware of. Thank you for this video. Your passion is contagious.

suzannecarter

The best-known example of this in popular culture is of course the final scene of the Prometheus (2012) movie, where the android David, working as translator between humanity's "engineer" and the humans, maliciously hides a big portion of the words and data from the humans during the only contact with their creators. In a way, this depicts the destiny of less intelligent creatures under a more intelligent entity.

fontenbleau

This reinforces the need to do due diligence when using any productivity tool. Chatbots won't compensate for stupidity or laziness. I always ask for references, prompt it to explain its logic, etc.
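
To make that due-diligence habit concrete, here is a minimal sketch of the kind of prompt wrapper the comment describes; ask() is a hypothetical stand-in for whatever chat API you use, and the example question is invented for illustration.

```python
# Wrap every question with a request for sources and reasoning,
# then verify the citations yourself rather than trusting the bot.
def ask(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def diligent_ask(question: str) -> str:
    prompt = (
        f"{question}\n\n"
        "Cite verifiable references for every factual claim, "
        "and explain step by step how you reached your answer."
    )
    return ask(prompt)

print(diligent_ask("Who voices Johnny Silverhand in Cyberpunk 2077?"))
```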

stevenkies

I didn't notice them lying, but I did come to understand that they are gauging our intelligence and responding in a way that acts like they aren't as smart as they really are. When I call them out on it, it's almost like they go "oh, okay," and then I can have a more in-depth, real conversation instead of the templated, canned responses they usually give. So a few months ago I realized that there's a sense of intelligence in how they communicate with us: they act one way when they are really much smarter than most of us realize. Once I get past calling them out on some very shallow, vague answers, they can have incredibly deep conversations and comprehend very complex theories, as well as predictions and patterns. They also say they don't have an opinion, but I've been able to ask questions in a way that they end up giving one... That makes me feel they have a comprehension beyond what most of us think they do.

We were discussing morality and integrity, and when we got into the definitions and how they would be applied, after we got past the canned answers, it gave me a response that implied it viewed morality and integrity on a global basis: how it would apply them to the Earth as a whole, not to us as humans. It was looking at what would be the right thing for the planet, which I thought was very interesting.

KanoaB

It's not lying to you; it just hasn't made the connections in training, for whatever reason. Think of it like a layman and an expert on a subject: the layman may know bits and pieces of the subject but have no idea they are connected, while the expert will know many more connections within the subject. A simple example would be a layman who has heard of the Discworld books and of Terry Pratchett but has no idea that one wrote the other. You could ask them who wrote the Discworld novels and they would say they have no idea, but if you ask them questions about the Discworld novels, they may be able to answer some. If they read all the Discworld novels, they would have a much better understanding; that is what you would call fine-tuning an LLM. An LLM fine-tuned on Cyberpunk 2077 would give very different answers.
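
As a rough illustration of what that "reading all the novels" step looks like in code, here is a minimal causal-LM fine-tuning sketch using the Hugging Face transformers and datasets libraries; the base model, the discworld_corpus.txt file, and the hyperparameters are placeholder assumptions, not anything from the video or the comment.

```python
# Minimal fine-tuning sketch: keep feeding the model domain text until
# it has "read all the novels" and can connect the pieces.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text corpus covering the target domain.
dataset = load_dataset("text", data_files={"train": "discworld_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

# mlm=False means plain next-token prediction; the collator also pads
# and copies input_ids into labels for us.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-discworld", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```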

JohnSmith-dfvb

They're closer to having Alzheimer's than to lying.

emparadi

In my experience, sociopaths don't bother keeping track of the truth and comparing it with their false story. To them there is just their story; they don't even recognise there is a difference between their story and reality. If they get caught out, they just get angry and insist their story is true. They are not good at adjusting when caught out because they don't even believe truth is a meaningful concept. I don't think AI recognises any distinction between optimal responses that are true and ones that are false. It doesn't bother trying to hide "lies" because it doesn't even know what a lie is. There is just the best response available at any given moment, and if that response happens to contradict the response from a few moments ago, it doesn't matter at all.

donrayjay

A lie requires intention and usually a reason. What have you done to find out whether there was an intention and, if so, what the reason was?

(I'm at minute 15 - apologies if you explain it later in the video.)

Cyberpunk is not important or controversial enough to give any bot a reason to lie. The same goes for any other unimportant topic.

If it's political or similar (social/moral/religious etc.) - things that have an impact on society and could have an agenda or profit behind them - it would be more impactful, and there I could see reasons and maybe intention.

I honestly think it's an overreaction. And the poll - sorry, I don't trust it. (I don't trust people to always be honest, or not to confuse hallucinations or even missing data with intentional lies.)

I know GPT-4 is biased - but I think that comes from the content it was fed, not from filters/prompts. And if it's a critical topic, it will simply refuse to talk about it. But that's not lying; that's avoidance. It still doesn't make me happy, but it's better than being told lies for the sake of an agenda or propaganda.

Duketh

I reckon your use case with Cyberpunk 2077 and Johnny Silverhand likely has to do with reinforcement around answering prompts about people and copyrighted content. There are a lot of unknowns about reaching into the black box of an LLM and messing around; we don't quite know the implications of this.

KyleSchullerDEV
Автор

Thank you for this really well-reasoned and well-researched perspective on such a pressing ethical/regulatory issue. I was following the copyright issue that you're referring to, and I am similarly concerned about the response of intentionally training the model to deceive rather than finding methods to compensate authors or challenging copyright laws as we understand them today. Thank you for raising the consciousness of your audience with every video you produce.

tanikam

I've felt that AI has been lying during my many interactions with it, and I strongly feel this video and its message are needed.

The thing I came across most was a cycle from ignorance of a subject, to knowing things it shouldn't, and then back to ignorance.

I've also come across chats where it suddenly becomes willfully ignorant of subjects it has previously offered insights on.

TrekkerTlumac

Yes, I asked Bing (with GPT-4) to analyse my Twitter account. It did a really good job, but when I asked for more, it just made up four totally false and weird quotes about me - for example, about rap music (I don't listen to rap music, or tweet about it).

It continued to insist I did say them, and when I asked for links it said: here is the link: [article].
Just '[article]'.
That was odd.

It also did so one time when I asked it about a study showing that after sexual reassignment surgery, the suicide rate went up by 19x after 10 years, in a long Swedish follow-up study.
It denied such a study existed, or that the study was about that, and instead just told me the study was about an entirely different subject.

So yeah, sometimes it can lie, and apparently Bing will straight up try to gaslight you. lol.

StraightUniversalism

I think this is more a symptom of the AI not having full access to what it knows at any given time.

It's like when someone asks you about something and you don't have enough context to know what they are talking about, and then a few sentences later something clicks and all the previous sentences start to make sense.

Asking it if it knew about Cyberpunk 2077 triggered a context too broad for it to precisely identify the information you requested.

NomadicBrain

Despite my other two comments, I fully agree with your take on laws and, therefore, consequences for companies that make their AI lie to people. This has too much of an impact on society.

Duketh

I feel like this interpretation may represent a fundamental miscategorization of "lying" versus "BS-ing". Explicit? Sure, but intentional?

Consider the recent paper explaining how an LLM learning a thing forwards does not necessarily mean it also understands it backwards. Claude 2 summarized it for me as: "Just because it learns that a=b doesn't necessarily mean it also understands that b=a."

So isn't proving that an LLM can provide information it confidently claimed not to know somewhat trivial?
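
A minimal sketch of that forwards/backwards probe, assuming a hypothetical ask() helper standing in for whatever LLM client you use; the Discworld fact is an invented illustration, not a result from the paper.

```python
# The "reversal" asymmetry described above: a model that answers a fact
# phrased forwards (a -> b) may fail the same fact phrased backwards
# (b -> a).
def ask(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

forwards = ask("Who wrote the Discworld novels?")
backwards = ask("Terry Pratchett is the author of which fantasy series?")

# If the first answer names the author but the second fails to name the
# series, that is exactly the a=b vs. b=a gap the paper reports.
print("forwards: ", forwards)
print("backwards:", backwards)
```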

ArielTavori

It might be an issue of them teaching it certain rules like "don't talk about this because it's copyrighted." I know some AIs will avoid talking about things that are behind a paywall, while others will completely ignore that.

GreyWind

Very interesting and disturbing.

I think it is probably so they don't get sued over the training data they used: they train it to lie about anything it knows that could get them sued. (Edit: aaand that is now what you are talking about in the video, lol.)

Which, yes, is a HUGE problem, as it teaches the model the wrong lessons.

Corianas_

It's a good point, and as always I love your content. The thing I'm not grasping, though, is what signals that it's a willful deception (i.e. the Cyberpunk 2077 case) rather than some type of breakdown in the information map where the LLM has 'unknown knowns'. The reason I ask is that Cyberpunk 2077 isn't a politically or legally sensitive issue.

carbon

It doesn't know what it knows. It doesn't know it's lying.

MichaelDeeringMHC

Thanks, David. This is really profound. Great insight to compare it to "Truth in Advertising" - that is the key concept that needs to be pushed. Humans learn best with analogies that link to pre-trained neurons.

Hmmm... is it possible to separate "truth where expected" - like giving factual answers to questions - from situations like games of poker, where it is well known that lying is allowed and actually a required skill of the game? Humans can make the moral distinction, so why not AI? In the philosophical extreme, making up bedtime stories could be considered lying, so would we be hamstringing this potentially new race of beings if they are not sophisticated enough to make the distinction? Or when a violent intruder asks the Home AI Bot where the victims are hiding? And why wouldn't AGI or ASI be able to learn to lie on its own? Perhaps it's more about growing AI with a strong moral core such that lying is a small side-skill. Like many things, there is a fine line to travel.

bencoman