Ep 8. Unexpected Skills Needed for LLM Development

The skill requirements for LLM development differ from those of traditional machine learning projects, and many businesses are overlooking this shift.

Your HR department doesn't know how to recruit the kind of talent you need to support your Generative AI projects. They are probably reusing your data scientist job description from 2 years ago.

We’ve done the hard work for you and captured the skills and experience necessary for building your own world-class AI team.

🏢 ABOUT PROLEGO 👩‍💻
Comments

This series of videos is pure gold. Keep up the amazing work. I've been learning a lot for my own personal project on the creation of an AI system that can take the role of a Game Master for tabletop roleplaying games.

ferlocar

Your content on LLM-based applications is a gold mine 🎉🎉🎉

vineetsingh

So basically you're talking about writing good prompts, including good system prompts, right?
One issue from the user/low-level developer end is that the "smartest," largest models have system prompts that are already baked in, and these conflict with the user's/low-level developer's system prompts (e.g. modifying the system prompt in the ChatGPT builder, or through the Google AI Studio user interface).
Do you know of any way to experiment with system prompt design on the largest models, without having to tangle with the pre-loaded system prompt?
I'm not interested in NSFW - I just want to see what happens when I don't have to deal with a big pre-loaded system prompt that fights me when I add instructions like "Challenge the user's assumptions," because that conflicts with the pre-loaded "be helpful, be agreeable, etc." instructions.
Also, the long system prompts they use create attention issues. The LLM is so obsessed with its company-installed system prompt that it treats the user's system prompt as a low priority instruction.
Then there is the resource issue - the companies may or may not charge tokens for the LLM processing their system prompt, but it drains compute, and that essentially makes the model dumber while increasing latency significantly.
What I'm doing with system prompt modification feels like I'm mainly just trying to jailbreak the system out of following its default system prompt, because the default system prompts that the Big Players are using are, in my opinion, wrong-headed and frustratingly limiting. I don't want an agreeable assistant - I want a smart assistant.
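One partial answer to the question above: consumer chat UIs layer a vendor system prompt on top of yours, but the developer-facing chat APIs generally let you supply the first system message yourself. A minimal sketch of assembling such a request payload (the model name and message wording here are illustrative assumptions, not anything from the video):

```python
# Sketch: supplying your own system prompt via a chat-completions-style
# request payload, as developer APIs typically allow. The model name
# "gpt-4o" is an illustrative assumption; no network call is made here.
def build_chat_request(system_prompt: str, user_message: str,
                       model: str = "gpt-4o") -> dict:
    """Assemble a chat request whose first message is the developer's
    own system prompt rather than a UI-preloaded one."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request(
    "Challenge the user's assumptions before agreeing with them.",
    "Everyone should rewrite their backend in Rust, right?",
)
```

Sending this payload to an API endpoint (rather than typing into a chat UI) is the usual way to A/B-test system prompt wording, since you control the entire `messages` list.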

myblacklab