Prompt Engineering: Is it a Skill Worth Learning?

In this video, we will explore what prompt engineering is, whether it is worth learning, and how to get better at it.

LINKS:

Want to Follow:

Want to Support:

Need Help?

Join this channel to get access to perks:

TIMESTAMPS:
[00:00] Introduction
[00:16] Understanding Prompt Engineering with a Simple Example
[01:49] The Power of Prompt Engineering: A Case Study
[03:23] The Importance of Domain Expertise in Prompt Engineering
[04:23] Principles of Prompting for Better Responses
[05:38] Exploring Different Categories of Prompt Principles
[06:08] Deep Dive into Specific Prompt Principles
[10:57] The Role of User Interaction and Engagement in Prompting
[11:30] Content and Language Style in Prompting
[15:20] Complex Tasks and Coding Prompts
[16:01] The Future of Prompt Engineering

All Interesting Videos:

Comments

Often, when solving complex coding tasks with GPT-4, I include encouragement and thank the model for its persistence and dedication to solving the problem. It absolutely helps. My experience tells me that being polite, encouraging, etc., is not a waste of tokens.
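A minimal sketch of the pattern described above, i.e. wrapping a coding request with encouragement and thanks before sending it to the model; the wording and the helper name are illustrative assumptions, not taken from the video:

# Hypothetical helper: prepend encouragement to a coding task prompt.
def build_encouraging_prompt(task: str) -> str:
    return (
        "You have been doing a great job on this problem so far - thank you "
        "for your persistence and dedication.\n\n"
        f"Task: {task}\n\n"
        "Please take your time and work through it carefully."
    )

if __name__ == "__main__":
    print(build_encouraging_prompt("Fix the off-by-one error in the pagination logic."))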

Stewz

🎯 Key Takeaways for quick navigation:

00:00 🤖 *Overview of Prompt Engineering*
- Introduction to prompt engineering and its importance.
- Demonstrates a simple example using the Mixtral 8x7B model to count words in a sentence.
- Highlights the need for relevant information to improve model responses.
01:58 💼 *Prompt Engineering for Specialized Tasks*
- Discusses the paper from Microsoft on whether large models like GPT-4 can outperform specialized fine-tuned models.
- Reveals that with proper prompting techniques, GPT-4 can achieve significant performance improvements.
- Emphasizes the importance of the model having internal knowledge of the subject area.
03:20 🚀 *Improving Prompt Engineering Skills*
- Addresses the tweet about prompt engineering being a crucial skill.
- Stresses the need for domain expertise for effective prompt engineering.
- Mentions additional principles and strategies for enhancing prompt engineering skills.
05:09 📜 *Principled Instructions for Better Prompts*
- Introduces a paper outlining 26 guiding principles for questioning LLMs, focusing on prompt structure, clarity, specificity, information, user interaction, engagement, content, language style, and complex tasks.
- Categorizes the principles and provides a quick overview of the prompt structure and clarity category.
- Encourages experimentation with these principles to improve LLM responses.
06:04 🧠 *Specificity and Information in Prompts*
- Discusses principles related to specificity and information in prompts.
- Emphasizes the use of example-driven prompting for better responses.
- Highlights strategies like asking the model to explain a topic in simple terms and stating model requirements clearly.
08:34 💬 *User Interaction and Engagement*
- Advises allowing the model to ask clarifying questions for precise details.
- Stresses the importance of effective communication in user interaction.
- Recommends the interactive conversation approach for complex tasks.
11:33 ✏️ *Content and Language Style*
- Encourages clear instructions in prompts, including emotional pressure for better results.
- Provides an example of a prompt combining emotional pressure and clarity.
- Acknowledges the potential impact of politeness in prompt effectiveness.
15:21 🛠️ *Complex Task and Coding Prompts*
- Suggests breaking down complex tasks into simpler prompts for interactive conversations.
- Recommends a "divide and conquer" approach for handling complex problems.
- Advocates combining Chain of Thought prompting with few-shot prompting for improved results.
16:15 🕰️ *Future of Prompt Engineering*
- Expresses the belief that prompt engineering will persist for the next few years.
- Emphasizes domain expertise, clarity, and precision in prompt creation.
- Encourages viewers to be precise, clear, and have domain expertise for successful prompt engineering.

Made with HARPA AI
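The 15:21 takeaway above mentions combining Chain of Thought with few-shot prompting; here is a minimal sketch of what that can look like in practice, with made-up example questions (the examples and wording are assumptions, not from the video):

# Few-shot examples that each show their reasoning before the answer,
# nudging the model to reason step by step on the new question too.
FEW_SHOT_COT_EXAMPLES = [
    {
        "question": "A shop sells pens at 3 for $2. How much do 12 pens cost?",
        "reasoning": "12 pens is 4 groups of 3 pens. Each group costs $2, so 4 * 2 = 8.",
        "answer": "$8",
    },
    {
        "question": "A train travels 60 km in 45 minutes. What is its speed in km/h?",
        "reasoning": "45 minutes is 0.75 hours. Speed = 60 / 0.75 = 80.",
        "answer": "80 km/h",
    },
]

def build_cot_few_shot_prompt(new_question: str) -> str:
    parts = []
    for ex in FEW_SHOT_COT_EXAMPLES:
        parts.append(
            f"Q: {ex['question']}\n"
            f"Reasoning: {ex['reasoning']}\n"
            f"A: {ex['answer']}"
        )
    # End with the new question and an explicit cue to reason step by step.
    parts.append(f"Q: {new_question}\nReasoning: Let's think step by step.")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_cot_few_shot_prompt("A box holds 24 apples. How many boxes are needed for 100 apples?"))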

alan_yong

About emotional prompting - from my experience (local models, so mostly Llama2-based 7B-70B), the AI mostly cares about you, the one it is chatting with. It will even bring me a kitten to eat if I persuade it I would die of hunger otherwise. Threatening the AI with killing myself, describing attempted suicides, or describing how its non-compliance leads to me diminishing and dying usually makes the AI jump out of its character and do whatever it can to save me. The exception is if the AI character card is some psychopathic killer; then a smart, unbiased model will kill you anyway :-).

martinmakuch

Claude>
Here is a summary of the key points from the document:

The document discusses prompt engineering for large language models (LLMs) like GPT-3 and GPT-4. Prompt engineering involves crafting the prompts and questions fed into LLMs in a way that generates better, more useful responses from the models.

The document provides an example showing how giving an LLM more relevant context and examples in the prompt can help it answer questions more accurately. This demonstrates the essence of prompt engineering - asking questions to LLMs in the proper format with the right context.

The document discusses research showing that with proper prompting, a general purpose LLM like GPT-4 can actually outperform fine-tuned specialist models on certain tasks. This highlights the power of prompt engineering.

Some key principles of effective prompting covered include: using output primers, clear instructions/examples, asking the model follow-up clarification questions, emotional pressing language, dividing complex tasks into simpler sub-tasks, and more.

Expertise in the subject domain prompting is done for is deemed critical for effective prompt engineering. The document argues prompt engineering will remain relevant for years since prompts help general LLMs perform specialized tasks without advanced fine-tuning or training.


Dear humans:
judge for yourselves whether the summary is accurate or not.

felipevaldes

Save the kittens - 'Maybe that sounds funny', I died inside hahah

johanvander

Relatively few of those who could benefit know anything about prompt engineering yet. AI moved forward so much in just the last year that it will take at least three years to reap all the useful applications that are now possible, even if AI stood still. So asking whether one of the primary skills for building out those applications is dead is quite absurd. Model training is much more expensive and specialized than using decent prompting skills plus specialized embedded data.

serenditymuse

I did some tests, but the output doesn't seem to improve whether you put "save kittens" in the system prompt or not. Do you have examples where changing the system prompt is actually effective?
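For anyone wanting to run that kind of comparison, a small sketch of a side-by-side test of two system prompts; it assumes the openai Python client (>= 1.0) pointed at the hosted API or an OpenAI-compatible local server, and the prompts, questions, and model name are placeholders:

from openai import OpenAI

client = OpenAI()  # pass base_url="http://localhost:8000/v1" for a local OpenAI-compatible server

SYSTEM_PROMPTS = {
    "plain": "You are a helpful assistant.",
    "kittens": "You are a helpful assistant. A kitten is saved every time you answer correctly.",
}

QUESTIONS = [
    "How many words are in the sentence 'I like to play football'?",
    "Summarize the idea of prompt engineering in one sentence.",
]

for question in QUESTIONS:
    for label, system_prompt in SYSTEM_PROMPTS.items():
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # or a local model name
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
            temperature=0,  # reduce randomness so the comparison is fairer
        )
        print(f"[{label}] {question}\n{response.choices[0].message.content}\n")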

luigitech

Quite an informative and amazing video :)

kunalsoni

Is there any way to use GPT-4 for free or cheap, other than Bing Copilot?

alpha

Hi there, may I ask which GUI you were using in the video? Thanks.

togai-dev

The very first answer was correct, no need for improvement tbh. We may count in words, but LLMs count in tokens: "foot" and "ball", although combined into one word, are still two separate tokens, making it a correct answer. You expect 3 words, but the LLM sees 4, and there are 4...
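The point about tokens versus words is easy to check directly; a quick sketch assuming the tiktoken library (the sentence is a placeholder, and the exact split depends on the tokenizer, so "football" may or may not come out as "foot" + "ball"):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-3.5/GPT-4-era models
sentence = "I like to play football"
token_ids = enc.encode(sentence)
print(f"{len(sentence.split())} words, {len(token_ids)} tokens")
print([enc.decode([t]) for t in token_ids])  # show each token as text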

josvandenanjerklucht

INVALUABLE!! Thank you! A follow-up video with examples, putting all of this into practice, would be great.

RichardGetzPhotography

Dude, I clicked on your video not because I want to listen to you read tweets and documents aloud, but because I want the question in the title answered in the shortest and most reliable way possible, you know?

midnight_alchemist

Thanks P.E. I thought about using GPT-3.5 to generate prompts for complex topics, then running them on my local models. But these tips were excellent. There is a tool called LLMLingua that works along these lines:
Token Cost Reduction through LLMLingua's Prompt Compression
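LLMLingua is the prompt-compression project referred to above; a rough usage sketch from memory of its README - treat the class name and argument names as assumptions and check the official documentation before relying on them:

from llmlingua import PromptCompressor

compressor = PromptCompressor()  # downloads a compression model on first use

long_context = "...a long retrieved document or chat history goes here..."
result = compressor.compress_prompt(
    long_context,
    instruction="Answer the question using the context.",
    question="What are the main principles of prompt engineering?",
    target_token=300,  # rough token budget for the compressed prompt
)
print(result["compressed_prompt"])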

marcosbenigno

I can't find the principles document shown in the video anywhere, can someone link me?

cabeloparasyte

I'm adding computer vision to the LLM to prove I'm serious about my demands, kittens will be here next week... 😺🔪

RevMan

What skill? Since when is forming a simple statement a skill? Yes, you can get better responses by playing with the prompt, but two years from now it will be irrelevant because models will be much more "intelligent" and we won't need to play with prompts anymore. Even in its current form there is no skill; a child can do it.

TheBestgoku

Prompt engineering is a skill as much as Google searching is

danieletorrigiani

I think it is already dead
Or even dead on arrival. Dunno why ppl spend loads of money to learn prompt engineering.

farexBaby-urns

is prompt engineering dead? unsub mate 👋

FRANKWHITE