PERPLEXITY AI - The future of search.


Note: this was a sponsored interview, filmed on April 19th.

Interview with Aravind Srinivas, CEO and Co-Founder of Perplexity AI – Revolutionizing Learning with Conversational Search Engines

Dr. Tim Scarfe talks with Dr. Aravind Srinivas, CEO and Co-Founder of Perplexity AI, about his journey from studying AI and reinforcement learning at UC Berkeley to launching Perplexity – a startup that aims to revolutionize learning through the power of conversational search engines. By combining the strengths of large language models like GPT-* with search engines, Perplexity provides users with direct answers to their questions in a decluttered user interface, making the learning process not only more efficient but also enjoyable.

Aravind shares his insights on how advertising can be made more relevant and less intrusive with the help of large language models, emphasizing the importance of transparency in relevance ranking to improve user experience. He also discusses the challenge of balancing the interests of users and advertisers for long-term success.

Exploring the challenges and opportunities with this new search modality, Aravind highlights how blurring the boundaries between a conversational interface and a search user interface can lead to more personalized search experiences. Perplexity's vision is to create a dynamic, personalized Wikipedia-style experience for users, encouraging them to ask follow-up questions and explore related topics in an engaging loop. As users zoom in on specific details or zoom out to understand broader connections, learning becomes more efficient and tailored to individual needs.

The interview delves into the challenges of maintaining truthfulness and balancing opinions and facts in a world where algorithmic truth is difficult to achieve. Aravind believes that opinionated models can be useful as long as they don't spread misinformation and are transparent about being opinions. He also emphasizes the importance of allowing users to correct or update information, making the platform more adaptable and dynamic.

Lastly, Aravind shares his thoughts on embracing a digital society with large language models, stressing the need for frequent and iterative deployments of these models to reduce fear of AI and misinformation. He envisions a future where using AI tools effectively requires clear thinking and first-principle reasoning, ultimately benefiting society as a whole. Education and transparency are crucial to counter potential misuse of AI for political or malicious purposes.

Join us in this engaging conversation as we explore the future of AI, language models, and the transformation of learning, research, and personal knowledge management with Aravind Srinivas of Perplexity AI.

Guest: Aravind Srinivas (CEO and Co-Founder, Perplexity AI)

Interviewer: Dr. Tim Scarfe (CTO XRAI Glass)

TOC:
Introduction and Background of Perplexity AI [00:00:00]
The Importance of a Decluttered UI and User Experience [00:04:19]
Advertising in Search Engines and Potential Improvements [00:09:02]
Challenges and Opportunities in this new Search Modality [00:18:17]
Benefits of Perplexity and Personalized Learning [00:21:27]
Objective Truth and Personalized Wikipedia [00:26:34]
Opinions and Truth in Answer Engines [00:30:53]
Embracing the Digital Society with Language Models [00:37:30]
Impact on Jobs and Future of Learning [00:40:13]
Educating Users on When Perplexity Works and Doesn't Work [00:43:13]
Improving User Experience and the Possibilities of Voice-to-Voice Interaction [00:45:04]
The Future of Language Models and Auto-Regressive Models [00:49:51]
Performance of GPT-4 and Potential Improvements [00:52:31]
Building the Ultimate Research and Knowledge Assistant [00:55:33]
Revolutionizing Note-Taking and Personal Knowledge Stores [00:58:16]

References:
Evaluating Verifiability in Generative Search Engines (Nelson F. Liu et al., Stanford University)
Comments

I found this video by asking Perplexity "what are some comments on perplexity ai on social media". Its link accuracy and speed are really, really cool.

llo

Terms of Service Nightmare. I can't use this service because their terms of service are basically impossible for me to accept: "When you upload, submit, store, send or receive content to or through the Service, you give us (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with the Service), communicate, publish, publicly perform, publicly display and distribute such content." That is ridiculous. Imagine you are an author with a sole-rights arrangement with a publisher... you've basically violated it by using this service. Got an NDA? Forget it with this service. It's basically a giant I.P. nightmare.

marcfruchtman

Man, the idea of infinite scrolling knowledge is scary. We've already got media rabbit holes, but these are built on human created content. What happens when models can understand your particular uses of language, modifying source texts to conform to your uses of your language, and throw away the rest of the article or snippet (creating rabbit holes conformed to your particular use of language for knowledge)? How is that going to impact our understandings of texts outside of a curated feed? Will language games just get worse? What happens when we abstract from the process of manual (mental) search? We read by scanning, searching, and developing models of mental objects from physical ones. What mental faculties are we giving up, and for what? To know a little more, but for what? This sort of endless scroll makes us feel good, makes us feel like we're smarter, but are we?

I know I sound like a tinfoil hat, and I understand the positives and will probably use services like these, but is there a way to also mitigate the potential negatives? Are we even concerned about them?

sklnow

Oh, man! This application is scary good. Unfortunately, terms of service will once again scare many developers. I'm convinced that the only language model that will succeed is the one where the creators recognise that these applications are based on all our knowledge as a species and therefore belong to all of us. Money will likely only be made by developing solutions like vector databases, speech processing tech, image generators, etc. Meta ended all AI monopolies by releasing LLaMA as open source. Really enjoyed this!

truthontech

Aravind's optimism about the raising of intelligence across civilization, towards the end of the video, was deeply inspiring. I also appreciated that the clear thinking and taste of specialists was sketched out as a human implication of LLM interactions. I would love to hear more from him on these taste qualities for getting the best from these systems, what class of professional may emerge in the next 2-3 years, and what clarity of thought and thinking from first principles delivers for the social fabric of civilization.

GrantLenaarts

Very interesting discussion. I especially enjoyed Aravind's comments about advertising. Advertising really is a pernicious influence for a knowledge product. Even when done "well", advertising degrades the quality of knowledge with an interest that tends to grow over time, as Aravind pointed out with respect to Google SERPs. And even without paid placement, the sources of knowledge will still begin to optimize for organic placement. These two tendencies (paid placement and organic commercial optimization) should be resisted by a system which has knowledge as its primary product. Succinctly: knowledge applications degrade when the user is the product for third parties. Knowledge must be the product.

Soul-rrus

What happened to the video on Geoffrey Hinton's comments in the media?

raywhyms

When I am learning a new topic, it can be hard to know (especially if it's a technical subject I am not familiar with) whether I really understand what I just read. LLMs can help with that: we can copy-paste the chapter (or whatever) and ask the model to actually ask *me* questions about the text, acting as a sort of zero-cost examiner.

domasvaitmonas

Adding citations does not protect you from misinformation. I can link to quotes of the MyPillow guy all day, with confidence and conviction, but that doesn't mean it isn't misinformation.

ianfinley

53:56 Oh yeah! Auto-Perplexity FTW! Can wait as long as it takes, even for the answers! LFG 🔥💪🤖

andanssas

Wolfram Alpha is a great world model. I had been trying to create a free English-to-SPARQL translator for Wikidata without much success. I thought I could get the data, but in the end, Wikidata wouldn't give it to me. A translator to Wolfram Language instead is just as good, IMO; even better, as WL can do so much more.

dr.mikeybee

41:22 clarity of thought
The best takeaway from this video
46:55 man this guy is so relatable
This guy just clicks with me
57:22 amazing idea

friendlyvimana

Man, you look like this guy named Praveen Mohan, and speak similarly too.

friendlyvimana

Thanks for this talk! And for showing me Perplexity AI; it's so much nicer to use than Bing GPT.

RSZA

Excellent discussion; loved his positive take towards the end while still acknowledging the inherent dangers. Well done; I love this app too.

StephenAlberts

Absolutely fascinating interview. Thanks again, so high value! 🙏

alertbri

More and more superficial... if you go deeper in one area, you remain with the mind of a child in another.

victoranastasiu

So far, the AI search in Bing is unreliable. When asked to summarise and make comparisons, it invented numbers and confused the issue. Not yet ready for prime time!

smkh

Great great interview. Some thoughtful and insightful moments.

human_shaped

Good interview. Aravind — what a cool person. Happy he's working in the AI space, and I look forward to what he adds to the field.

bgtyhnmju