Introduction to Architectures for LLM Applications

In this video, you will learn about current approaches to building custom LLM applications, so you can tailor an LLM-powered application to your specific needs. You will also learn about the tools and technologies available for this, such as LangChain.
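As a small taste of that tooling, here is a minimal prompt-templating sketch using LangChain. It is only an illustration, not material from the talk: the `langchain.prompts` import path follows the long-standing layout and may differ in newer releases, and the ticket-summarization prompt is an invented example.

```python
# Minimal prompt-templating sketch with LangChain's PromptTemplate.
# Note: import paths vary across LangChain versions; this uses the
# classic `langchain.prompts` location.
from langchain.prompts import PromptTemplate

# A reusable template with a named placeholder.
summarize_prompt = PromptTemplate.from_template(
    "Summarize the following support ticket in two sentences:\n\n{ticket_text}"
)

# Filling the placeholder yields the final prompt string that would be
# sent to whichever LLM you choose.
prompt_text = summarize_prompt.format(
    ticket_text="Customer reports that CSV exports fail after the latest update."
)
print(prompt_text)
```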

This video is perfect for anyone who wants to learn more about Large Language Models and how to use LLMs to build real-world applications.

Applications like Bard, ChatGPT, Midjourney, and DALL·E have already found their way into use cases such as content generation and summarization.

However, many tasks present inherent challenges and demand a deeper understanding of trade-offs such as latency, accuracy, and consistency of responses. Any serious application of LLMs requires an understanding of how LLMs work and of the surrounding building blocks: embeddings, vector databases, retrieval-augmented generation (RAG), orchestration frameworks, and more.
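To make those moving parts concrete before you watch, here is a minimal sketch of the RAG pattern. Everything in it is illustrative: the `embed` function is a self-contained stand-in for a real embedding model, the example documents are invented, and the in-memory list stands in for a vector database.

```python
# Minimal retrieval-augmented generation (RAG) sketch: embed documents,
# index them in memory, retrieve the most similar one for a query, and
# build a grounded prompt for an LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical placeholder: a real system would call an embedding model.
    # Here we hash characters into a fixed-size vector just to keep the
    # example self-contained and runnable.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

# "Vector database" stand-in: a list of (document, embedding) pairs.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str) -> str:
    # Cosine similarity reduces to a dot product because vectors are normalized.
    q = embed(query)
    return max(index, key=lambda pair: float(q @ pair[1]))[0]

query = "When can a customer return a product?"
context = retrieve(query)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # This prompt would then be sent to an LLM.
```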

No prior background in Generative AI or LLMs is necessary to attend this talk.

Table of Contents:

0:00 – Introduction + Agenda
2:12 – Learn Canonical Design Patterns
3:20 – What are Embeddings
8:02 – Vector Database, Storing and Indexing of Vectors, Vector Similarity
14:37 – Understand Basics of Large Language Models
19:28 – Learn Prompt Engineering
23:14 – What are Foundation Models
26:26 – Understand Context Window and Token Limits
29:31 – Customizing Large Language Models
43:16 – Questions and Answers

--

#artificialintelligence #llm #generativeai #vectordatabase #chatgpt
Comments

How is structured data dealt with in this environment?

Many companies have both structured and unstructured data to deal with.

CrispinCourtenay