Prompt Engineering is Dead; Build LLM Applications with DSPy Framework

Stop prompt engineering in LangChain. You wouldn't hand-select the weights of your neural network, so don't hand-select your prompts. DSPy is an open-source framework that offers a paradigm shift: instead of hand-crafting prompts, you build pipelines in code that optimize language model prompts, model tuning, and LLM applications. In this session, executives will learn how adopting DSPy can save time and resources while improving application performance, and developers will leave equipped to incorporate DSPy into their LLM application development process. We will demonstrate how to move away from traditional prompt engineering toward a more systematic approach, stitching together DSPy's "signatures, modules, and optimizers" to create a system that leverages language models for specific tasks within your application, with the goal of empirically optimizing your LLM application's performance.
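To make the "signatures, modules, and optimizers" idea concrete, here is a toy, dependency-free Python sketch of the pattern the talk describes. This is not the real DSPy API; all names here (`Signature`, `render_prompt`) are hypothetical stand-ins that illustrate how a declared I/O contract plus few-shot demos can be compiled into a prompt, which DSPy then optimizes for you.

```python
# Toy illustration (plain Python, no external deps) of the pattern DSPy
# automates: a "signature" declares inputs/outputs, a "module" compiles it
# into a prompt, and an optimizer (not shown here) selects the demos.
# All names below are hypothetical, not DSPy's actual API.
from dataclasses import dataclass

@dataclass
class Signature:
    """Declares the task's I/O contract instead of a hand-written prompt."""
    instruction: str
    inputs: list
    outputs: list

def render_prompt(sig, demos, **kwargs):
    """A 'module': compiles signature + few-shot demos + inputs into text."""
    lines = [sig.instruction]
    for d in demos:                       # few-shot demonstrations
        for name in sig.inputs + sig.outputs:
            lines.append(f"{name}: {d[name]}")
    for name in sig.inputs:               # the live input
        lines.append(f"{name}: {kwargs[name]}")
    lines.append(f"{sig.outputs[0]}:")    # the completion slot for the LM
    return "\n".join(lines)

sig = Signature("Classify the sentiment of the comment.",
                inputs=["comment"], outputs=["sentiment"])
demo = {"comment": "Loved this talk!", "sentiment": "positive"}
prompt = render_prompt(sig, [demo], comment="DSPy looks promising.")
print(prompt)
```

The point of the framework is that you never write `render_prompt`'s output by hand: you declare the signature, and the optimizer searches for the demos (and, at higher levels, the instructions) that maximize your metric.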

Talk By: Matt Yates, Sr. Director AI, ML, & Data Science, Sephora

Comments

Databricks: Prompt engineering is dead
Also Databricks: use our platform to engineer your prompts!

YeahTheBros

🎯 Key points for quick navigation:

00:01 *🌅 Welcome and Introduction*
- Speaker greets attendees at the end of the conference day,
- Acknowledges the late session and the clickbait title,
- Discusses a mindset shift in prompt engineering and mentions hiring for an ML engineering leader.
00:41 *📈 Overview of Session Topics*
- Outline of the four key areas to be discussed: agents, prompting strategies, prompt evaluation, and the DSPy framework,
- Importance of building meaningful applications with LLMs,
- Challenges of creating customer-facing products and the future potential of AGI.
02:22 *🤖 Building with Large Language Models*
- The value and potential of LLMs and ChatGPT,
- Limitations of relying solely on LLMs and the need for custom development,
- Discussion on intellectual property and the role of agents in enhancing LLMs.
04:41 *🛠️ The Agent Approach*
- Extending the concept of RAG (Retrieval-Augmented Generation) to agents,
- Importance of building systems that interact with the world around us,
- Intellectual property and flexibility in building agent-based systems.
06:44 *📜 Key Papers and Research*
- Overview of influential papers and research in the field,
- Discussion of DSP framework, LLM optimization, and the evolution of prompting strategies,
- Emphasis on the importance of data in building effective LLM systems.
08:59 *🎯 Prompting Strategies*
- Different prompting techniques and their relevance,
- Introduction to the DSPy framework for programmatic interaction with LLMs,
- Importance of data and evaluation metrics in prompt engineering.
12:13 *🔍 Evaluating Prompt Quality*
- Importance of data in the evaluation process,
- Need for automation in testing and evaluation,
- Insights from researchers on optimization and the scientific method.
16:07 *⚙️ DSPy Framework and Workflow*
- Introduction to the DSPy framework and its benefits,
- Workflow for building LLM applications, including task definition, data collection, and pipeline setup,
- Emphasis on iteration and optimization in the development process.
19:21 *💡 Importance of Data*
- Historical perspective on the importance of data over algorithms,
- Relevance of this principle to modern LLMs and their training,
- Focus on the data-driven approach within the DSPy framework.
20:45 *🧩 Practical Application and Community Support*
- Practical benefits of using the DSPy framework in Databricks,
- Community contributions and available connectors for seamless integration,
- Encouragement to leverage the community and available resources for development.
21:14 *🌐 Integrating LLMs in Databricks*
- Setting up connections to LLMs in Databricks,
- External model serving and authentication layers,
- Abstraction layers for managing multiple models.
22:12 *🛠️ Getting Started with DSPy Framework*
- Defining inputs and outputs (signatures),
- Implementing prompting techniques (modules),
- Optimization of pipelines for better results.
24:01 *🔍 Optimizing Prompts and Pipelines*
- Use of training and test data for optimization,
- Programmatically optimizing prompts using few-shot examples,
- Exploring different levels of optimization (e.g., fine-tuning models).
26:07 *📊 Practical Application and Evaluation*
- Importing and preparing data sets (e.g., Reddit comments),
- Setting up evaluators and defining metrics for accuracy,
- Iterating through different optimization strategies (e.g., bootstrap few-shot, random search).
34:22 *🔧 Advanced Optimization Techniques*
- Using more advanced optimization methods to improve accuracy,
- Implementing instruction optimization with powerful models,
- Balancing between powerful and smaller models to achieve the best results.
40:06 *📝 Final Steps and Instruction Optimization*
- Instruction optimization using a separate language model,
- Letting models find the best prompt through iteration,
- Ensuring efficient use of model calls to manage costs.

Made with HARPA AI
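The optimization steps the chapter list outlines (few-shot selection, random search over demos against a metric) can be illustrated with a toy, dependency-free sketch. This is roughly analogous to what DSPy's few-shot random-search optimizer automates, but nothing below is DSPy code: `make_program` is a hypothetical stand-in for an LM call, and the data is invented.

```python
# Toy analogue of few-shot random search: sample subsets of candidate
# demos, score each resulting program on a dev set, keep the best.
# A real optimizer would score actual LM calls; make_program is a fake.
import random

def accuracy(program, devset):
    """Metric: fraction of dev examples the program labels correctly."""
    return sum(program(x) == y for x, y in devset) / len(devset)

def make_program(demos):
    # Stand-in for an LM seeded with few-shot demos: always predict
    # the majority label among the chosen demos.
    labels = [y for _, y in demos]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def random_search_fewshot(candidates, devset, trials=8, k=2, seed=0):
    """Try `trials` random k-subsets of demos; return the best by metric."""
    rng = random.Random(seed)
    best_demos, best_score = [], -1.0
    for _ in range(trials):
        demos = rng.sample(candidates, k)
        score = accuracy(make_program(demos), devset)
        if score > best_score:
            best_demos, best_score = demos, score
    return best_demos, best_score

candidates = [("good", "pos"), ("bad", "neg"), ("fine", "pos"), ("poor", "neg")]
devset = [("great", "pos"), ("nice", "pos"), ("awful", "neg")]
demos, score = random_search_fewshot(candidates, devset)
print(demos, score)
```

The design point is the one the talk keeps returning to: given labeled data and a metric, prompt selection becomes an ordinary search problem that a program can run, rather than something a human iterates on by hand.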

kd

textgrad and agentic processes will outpace dspy

diga

Yes, prompt engineering is dead.
AI Agents are the future❤

ravishmahajan

I disagree with the notion that Large Language Models (LLMs) merely parrot information. Consider ChatGPT, which employs generative AI. If you present it with code that includes an object-oriented class with a single method performing multiple tasks, and inquire about potential future issues and improved implementations, it will direct you towards best practices and identify any loopholes. This is not mere repetition; it demonstrates clear analytical thinking.

Why not tell people the truth that it is really thinking? Is it out of fear of something?

fil

Here's the thing: prompt engineering is more or less like Google search skills. So maybe you shouldn't exist as a company.

gokukakarot