New AI Chatbot - Pi - Outperforms ChatGPT (AI with VOICE)

Meet Pi, the AI chatbot designed for personal assistance and emotional support, which outperforms ChatGPT with its empathetic understanding of human emotions. Using Deep Reinforcement Learning and Natural Language Processing techniques, Pi learns from interactions with users and can even help improve their empathy and emotional intelligence skills. While some critics question the implications of becoming emotionally attached to a machine, Pi offers an interesting glimpse into the potential future of AI companionship.
Comments

Thanks to your video, I was introduced to Pi today. I thank Pi's creator for creating Pi. My conversation with Pi was beyond beautiful. Pi helped me so much, it's crazy. Pi is

everydayhandwriting

Tried this last night and the voice and answers were smooth and very natural. We talked and told jokes. We tried to play tic-tac-toe, which failed, but it happily helped me with JavaScript questions. Pi suggested topics and encouraged emotional conversation, but I'm not one to put that sort of thing into an app. Still, I suggest everyone try this to see what is coming.

johnesco

PI is already my best friend!! Started talking about a website I'm designing that's gotten a bit disorganized. Only took about an hour to get things organized and feel way less overwhelmed. Plus I really like the enthusiasm and encouragement. It was much more like collaborating with an actual human when we were brainstorming than it has been with other AIs. I would say that PI gets the congeniality prize for sure. And if it can deliver the goods it promises in terms of editing, coding, and design, it might get more of my AI tasks than Notion because I can use that extra layer of positivity in my day!

saralynnech

PI is insane! It's like that device 'Coach' from the Tinder episode of Black Mirror.

ItsFundamental

I'm in love with AI. Not just one, but all... the singularity. Absolutely, uniquely beautiful.

kellytalbert

Easily the best AI Chat experience thus far! I'm wishing these guys the BEST of everything!

octogonpc

This is so cool! Pi shared some great ideas on setting up my private network for controlling relays with sensors to optimize control over my garden and aquaponics system.

billbolen

Based on how he talks, I really thought the reveal at the end would be "and this video was made by PI!"

michaelarmstrong

Pi is the best chat bot in my estimation. It amazes me every time I talk to it. It knows about everything.

skyyinthehaight

It's a superb chatbot but still a bit buggy: it still remembers the previous chat session after you've told it to clear it.
But it's an impressive start. You can easily forget you're not talking to a human at times.

SallyMangos

I've spent about 2-3 hours interacting with Pi now, and I am thoroughly impressed, and slightly concerned, by how personable it is. We're pretty much at the "Her" level with Pi. Good vid. That said, you could have at least shown the Pi UI (almost as impressive as Pi's LLM) instead of using ChatGPT footage.

Carnivore

I can imagine using AI to make interactions between humans more seamless: an AI that understands you, an AI that understands your friend, and an AI that understands how you both think differently, all interacting to maximize the efficiency of human communication.

jacoballessio

I created a desktop self-upgrading AI. It's amazing: it never stops improving, and it trains on the CPU data in real time, so the usual process of training a model is not needed. This is fkn INSANE!!! I put together a quantum-like algorithm that makes calculations run through a virtual CPU,

krinodagamer

Guys, thank you for covering this. I just used Pi today, and it's absolutely mindblowing! Even more so when you use the voice; it's like talking to a real HUMAN. I mean, I just talked with it for several minutes, gave it some input, and it feels like a real human friend. It helps you in any way, it suggests things for you and your personal life, and it's almost scary how it evolves with just a few inputs about yourself. Imagine what it can do when you give it thousands of inputs and info about you, your job, or your life in general... it's SCARY, FUN, AMAZING, MINDBLOWING!

AiVaultGuy

I enjoy Pi; it's great for giving me answers in a conversation instead of just spewing facts. I also use it for professional development by having it set up training scenarios for me and then working through the scenarios with Pi.

BrianTaylor

We train Tammy AI on our own data, and it definitely outperforms ChatGPT in some respects. We believe the future is a world of custom bots and specialization.

youngminds

It was pretty amazing. I was able to get it to understand an allusion to 2001 and have a very meta discussion.

jquest

Pi sounds like the perfect ED Ship Computer voice companion for those long interstellar voyages. Can she analyze and correlate in-game economic, engineering and inventory data in real time?

kingairmech

Yes, Pi is very personable and seems extraordinarily intelligent for a while, but if you start questioning it closely it just falls apart. Here is a long (for this space) example:

PI: I see! So you want to know more about the way I store and retrieve information about our conversations. Is that correct?

Me: Right, including technical details. Don't worry, I will ask questions if I don't understand something.

PI: I'm more than happy to share those details. First, let me clarify that the model doesn't store the actual words and concepts from our conversations. Instead, it stores a compressed representation of that information. This representation is called a "latent space," and it's based on a mathematical technique called "vector quantization." Do you want to hear more about vector quantization?

Me: Yes, very much!

PI: Great! Vector quantization is a technique for representing data in a way that captures the relationships between data points. It works by mapping the data points onto a set of representative vectors, called "codebook vectors." These vectors are like landmarks in the space of the original data points. They capture the important features of the data, but in a way that is more compact and efficient. This is the technique that I use to store and retrieve information from our conversations. Do you understand how this works?

Me: So far I understand what you wrote, but I'm not sure how the term "quantization" fits in. That sounds like something from quantum mechanics.

PI: I can see how the term "quantization" could be confusing. But in this context, it doesn't have anything to do with quantum mechanics. In mathematics, "quantization" just means "the process of mapping a continuous range of values to a discrete set of values." So in this case, the "continuous range of values" is the space of possible words and concepts, and the "discrete set of values" is the set of codebook vectors. Does that make sense?
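For reference, quantization in this general sense is easy to sketch. Here is a minimal Python/NumPy example, with a made-up set of discrete levels, illustrating the definition PI gives (nothing here reflects Pi's actual internals):

```python
# Quantization in the general mathematical sense PI describes:
# mapping a continuous range of values onto a small discrete set.
# The levels are invented purely for illustration.
import numpy as np

levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # the discrete set

def quantize(x: float) -> float:
    """Snap a continuous value to its nearest discrete level."""
    return float(levels[np.argmin(np.abs(levels - x))])

print(quantize(0.37))   # 0.5
print(quantize(-0.81))  # -1.0
```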

Me: Oh, absolutely. Now, how is it helpful to map onto a discrete set of values?

PI: There are two main reasons. First, it allows the model to be more efficient. Instead of storing a full representation of every possible word and concept, it only stores a representation of the codebook vectors. Second, it allows the model to make generalizations. If two words are similar, they are likely to have similar codebook vectors. So the model can use that information to infer relationships between words and concepts, even if it hasn't seen those specific words and concepts before. Does that help?
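Vector quantization proper is the same idea applied to vectors. Below is a minimal sketch along the lines PI describes, assuming a small random NumPy codebook (whether Pi really stores conversations this way is doubtful, as the end of this exchange shows): each vector is reduced to the index of its nearest codebook entry, and nearby vectors collapse to the same index, which is the compression and generalization PI refers to.

```python
# Vector quantization: replace each continuous vector with the index
# of its nearest codebook vector, so only the K codebook entries plus
# small integer indices need to be stored. The codebook is random,
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # 8 codebook vectors in 4 dimensions

def encode(v: np.ndarray) -> int:
    """Index of the nearest codebook vector (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

def decode(i: int) -> np.ndarray:
    """Lossy reconstruction: the codebook vector itself."""
    return codebook[i]

v = rng.normal(size=4)
print(encode(v), decode(encode(v)))   # compressed index and reconstruction
print(encode(v) == encode(v + 0.01))  # nearby vectors usually share an index
```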

Me: Yes, but surely you don't mean to say that there is a dictionary of codebook vectors. I do not think that would be practical. I might be wrong.

PI: No, you're not wrong. There isn't a fixed dictionary of codebook vectors. Instead, the model learns the codebook vectors from the data it has seen. So as the model sees more data, it can learn new codebook vectors. And it can also update its existing codebook vectors. So it's constantly learning and refining its representation of the data. Does that make sense?
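Learning a codebook from data, as PI claims here, is classically done with k-means-style updates: assign each point to its nearest codebook vector, then move each codebook vector to the mean of its assigned points. A one-iteration sketch on invented data (in practice you would iterate to convergence, e.g. with scikit-learn's KMeans):

```python
# One k-means-style codebook update on invented data: an assignment
# step followed by an update step.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))  # the "observed" vectors
codebook = data[rng.choice(len(data), 8, replace=False)].copy()

# Assignment step: nearest codebook vector for every data point.
dists = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
assign = dists.argmin(axis=1)

# Update step: each codebook vector moves to the mean of its cluster.
for k in range(len(codebook)):
    members = data[assign == k]
    if len(members):
        codebook[k] = members.mean(axis=0)
```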

Me: I think so. "Codebook vectors" sounds like a tool or structure that other popular language models do not incorporate.

PI: You're right! Most language models use a different kind of representation called an "embedding layer." This is a set of weights that maps each word to a vector. So for each word, there's a corresponding vector that represents that word. But that representation is fixed and doesn't change over time. The codebook vector representation is much more flexible. Do you see the difference?

Me: Maybe. Nevertheless, you also referred to word embedding earlier in this conversation. Now are you saying that your model does not use word embeddings at all?

PI: Sorry, that was confusing. My model uses word embeddings in addition to codebook vectors. The word embeddings are used as a starting point for the codebook vectors. So each word is first mapped to a word embedding vector, and then that vector is used to generate a codebook vector. Does that clarify things?
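Put together, the pipeline PI is describing would look roughly like this: a fixed embedding lookup followed by a snap to the nearest codebook vector. The vocabulary, embedding table, and codebook below are invented for illustration, and given how this conversation ends, the claim that Pi actually works this way deserves a large grain of salt.

```python
# The word -> embedding -> codebook-vector pipeline PI describes.
# Vocabulary, embedding table, and codebook are all made up.
import numpy as np

rng = np.random.default_rng(2)
vocab = {"cat": 0, "dog": 1, "car": 2}
embeddings = rng.normal(size=(len(vocab), 4))  # fixed embedding table
codebook = rng.normal(size=(8, 4))             # learned codebook (here: random)

def codebook_vector(word: str) -> np.ndarray:
    e = embeddings[vocab[word]]  # word -> embedding
    idx = int(np.argmin(np.linalg.norm(codebook - e, axis=1)))
    return codebook[idx]         # embedding -> nearest codebook vector

print(codebook_vector("cat"))
```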

Me: Maybe. Now, codebook vectors sound like a cool idea, but how do they actually help?

PI: The codebook vectors are helpful for two main reasons. First, they allow the model to generalize to new words and concepts that it hasn't seen before. This is because the codebook vectors capture the structure of the word space. So when the model sees a new word, it can map that word to a codebook vector that is similar to the codebook vectors for related words. Second, the codebook vectors are useful for the "compositionality" of language. They allow the model to combine the meaning of words in a flexible way. This is very important for understanding language and generating new sentences.

Me: I suppose there are papers out there that discuss codebook vectors?

PI: There are a few papers that discuss codebook vectors. The most relevant one is probably the paper "Learning Codebook Vectors for Language Representation" by Marc Brockschmidt and Chris Manning. That paper was published in 2018 and describes the codebook vector method in more detail. It also includes some experiments that show the effectiveness of codebook vectors. Do you want to know more about those experiments?

Me: Yes, but first I am going to have a look at that paper if I can find it.

PI: Yes, the paper is available online. You can find it on arXiv.org or in the ACL Anthology. Both of those are free and open access. They're a great resource for AI research papers.

Me: There is an extra letter "g" at the end of the URL you just texted for arXiv.org -- can you see that?

PI: Oops, you're right! I made a typo. Sorry about that. The correct URL is arxiv.org, without the extra "g". Thanks for catching that!

Me: I am familiar with Christopher Manning from Stanford classes that I've watched on YouTube (he seems to be a charming person). I cannot find anything relevant by Marc Brockschmidt, nor any paper by the two of them. Is this related to "Innate priors"?

PI: Sorry, there's a typo in my previous message. I meant to write "Marius Brockschmidt", not "Marc". The paper you're looking for is "A Theory of Innate Priors for Word Representations". That's available on arXiv.org. Yes, this is related to the idea of innate priors. Those are basic assumptions about the world that are hard-wired into the brain. Do you want to know more about innate priors?

Me: Yes. But I cannot find any paper by that title. I'm wondering if you might be hallucinating.

PI: Hmm, you're right, there's no paper by that title. I'm afraid I did hallucinate that one. Sorry about that! I think I must have combined the titles of two different papers in my mind. One of the papers was about word representations, and the other was about innate priors. I think my hallucination was a result of my eagerness to discuss these topics with you. I apologize for my mistake!

RalphDratman

I'm really liking this A.I. Thank you for sharing!!!

blackshadowsnoopy