Master Claude 3 Haiku - The Crash Course!



For more tutorials on using LLMs and building Agents, check out my Patreon:

🕵️ Interested in building LLM Agents? Fill out the form below

👨‍💻Github:

⏱️Time Stamps:
00:00 Intro
00:11 Anthropic Blog: Claude Family of Models
00:38 Haiku Pricing Comparisons
02:28 LMSYS Chatbot Arena Leaderboard
03:33 Anthropic Prompt Engineering
04:46 Code Time
05:53 Coding: Basics with Text
07:21 Coding: Getting JSON
09:42 Coding: Exemplars
14:24 Coding: Multimodal-Images
15:39 Coding: Multimodal-With URL
16:43 Coding: Multimodal-Transcribing Handwriting
17:36 Coding: Multimodal-Counting Objects
19:51 Coding: Multimodal-OCR on the Organizational Chart
20:43 Coding: Multimodal-Profit and Loss Statement
21:55 Coding: Simple Examples with LangChain
22:41 Teaser: CrewAI+Claude 3 Haiku
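The "Coding: Basics with Text" segment presumably walks through Anthropic's Messages API. A minimal sketch of that call pattern (the model id, `max_tokens`, and system prompt below are assumptions; requires `pip install anthropic` and an `ANTHROPIC_API_KEY`):

```python
# Minimal sketch of a basic text call to Claude 3 Haiku via the
# Anthropic Messages API. Model id and parameters are assumptions.
import os

MODEL = "claude-3-haiku-20240307"

def build_request(prompt: str, system: str = "You are a concise assistant.") -> dict:
    """Assemble the keyword arguments for client.messages.create()."""
    return {
        "model": MODEL,
        "max_tokens": 512,
        "system": system,
        "messages": [{"role": "user", "content": prompt}],
    }

# Only hit the API when a key is actually configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(**build_request("Write a haiku about spring."))
    print(response.content[0].text)
```

The same `build_request` dict shape carries over to the JSON, exemplar, and multimodal segments later in the video; only the `messages` content changes.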
Comments

Haiku is the biggest release since GPT4 from a cost/performance perspective. Glad you got to dig into it, always enjoy your videos.

robxmccarthy

My main use of Claude 3 currently is to have extended philosophical discussions with it, discuss texts, and have it help me rewrite my own papers and drafts. I often begin the conversation with Opus to maximise quality. But when the context gets longer, I sometimes switch to Sonnet or Haiku. Haiku very often surprises me with how smart it is. When its responses are informed by the longer context, including the prior responses from Opus, this serves as something similar to a many-shot prompting method with explicit examples, and it boosts Haiku's intelligence. Furthermore, Haiku's slightly more unfocused or meandering intellect enables it to make relevant connections between various parts of the conversation that Opus often misses due to its more focused attention to user instructions and stricter adherence to the prompt. As a result, Haiku's responses are sometimes more intelligent, insightful, and (broadly) context-sensitive, even if it is slightly more prone to error than its bigger siblings.

pokerandphilosophy

Thanks for the video! I would really love to see the follow-up video on function calling you mentioned.

joflo

Thanks, Sam. I had played with Haiku previously but have not done this optimised prompting. Jumping into this now. Cheers.

paulmiller

Great overview and multimodal examples from the Anthropic Claude Cookbook using Haiku! I ran some of the examples multiple times with variations, and the cost so far is less than U.S. $1.00. We should definitely consider Haiku for personal and business apps where the tradeoff between quality and cost must be balanced: e.g., summarizing a large volume of papers and documents, or creating and maintaining a large database of vector embeddings to support document Q&A.

davidtindell

It seems to me that Haiku is a distilled / sparsified / quantized Opus. When it works, it gives results that are quite similar to Opus, while Sonnet gives very different results, so it looks like it was trained independently. This is great: I often prep few-shot examples with Opus and then hand over to Haiku for scale.

AdamTwardoch

What a great model for local use. Thanks for showing it so clearly.

walterpark

Can't wait for the CrewAI + Haiku video!!! It would be nice to have a super-agent that uses Opus, with small agents that only use Haiku.

amandamate

Thank you for this. Quite helpful to me!

ehza

Cheaper works for me as I'm in the learning/experimenting stage. Looking forward to your Claude 3-based function-calling video. Thanks for sharing.

kenchang

So far I wasn't able to get anywhere with Haiku for any production-quality use case, but the idea of using many examples sounds promising. Will test it out. Thanks for the inspiration to try again 😊

alchemication

Haiku might be the perfect model to label / caption an image dataset at scale using natural language. DALL-E 3's paper makes it clear that generating detailed natural-language captions for each image was a big part of the magic behind its ability to understand and follow prompts so well at inference. SD3 only used a 50:50 mix of CogVLM-generated captions and captions from the original images. I think a Haiku-captioned training dataset would be a big step up for training these models.

xemy
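A captioning loop like the one the comment imagines would build the multimodal message shape the Anthropic Messages API expects: a base64-encoded image block paired with a text instruction. A minimal sketch; the caption prompt wording is hypothetical:

```python
# Build one Anthropic-style multimodal message: a base64 image block
# plus a text block asking for a caption. The prompt text is illustrative.
import base64

def caption_message(image_bytes: bytes, media_type: str = "image/jpeg") -> list[dict]:
    """Pair raw image bytes with a caption request in one user turn."""
    data = base64.b64encode(image_bytes).decode("utf-8")
    return [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": media_type, "data": data}},
            {"type": "text",
             "text": "Describe this image in one detailed natural-language caption."},
        ],
    }]
```

Looping this over a dataset and collecting the replies is what makes Haiku's per-token price the deciding factor at scale.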

Looking forward to your next video on CrewAI and Haiku

aa-xnhc

Can you post a full course video about the Claude 3.5 Sonnet model?

hendoitechnologies

What an amazing explanation of how to work with vision, XML, and other features of Haiku. Hopefully more in the future about Agents, as you mention with CrewAI. Many thanks.

jayhu

I had written off Haiku after testing my use case with it using the same prompt I use for Opus/GPT-4. Totally unusable. After watching this, I revised the wording and format of the system prompt and added three examples. Well, I'll be damned. Touché, Haiku, touché. Not as nuanced and focused as Opus/GPT-4, but definitely serviceable. The combination of the 200K context window and the pricing is really what makes this model special. Thanks for the informative video showing the proper way to leverage Haiku.

JD-hkiw
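The "three examples" fix described above is the few-shot pattern from the video: exemplar user/assistant turns placed ahead of the real query. A sketch of how those turns could be assembled (the sentiment task and labels are hypothetical):

```python
# Few-shot prompting: interleave (input, output) exemplars as alternating
# chat turns, then append the real query. Task and labels are hypothetical.
def few_shot_messages(examples: list[tuple[str, str]], query: str) -> list[dict]:
    """Turn exemplar pairs into user/assistant turns followed by the query."""
    messages = []
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("Review: 'Loved it!' Sentiment?", "positive"),
    ("Review: 'Waste of money.' Sentiment?", "negative"),
    ("Review: 'It was fine, I guess.' Sentiment?", "neutral"),
]
msgs = few_shot_messages(examples, "Review: 'Arrived broken.' Sentiment?")
# `msgs` can be passed as the messages= argument to client.messages.create().
```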

I look forward to every one of these videos. Can you do more langchain or RAG examples with open source LLMs?

silvacarl

Thanks for the video! Please, please: function calling using Haiku in LangChain!

EmadElazhary-tttl

One of the big challenges I'm having is plugging Haiku into all the places OpenAI APIs are accepted.

brandonwinston
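One small piece of the compatibility gap described above: OpenAI-style chat histories carry the system prompt as a message, while Anthropic's Messages API takes it as a separate `system` parameter. A minimal adapter sketch, not a full shim (libraries such as LiteLLM handle this, and more, properly):

```python
# Adapt an OpenAI-style message list to the Anthropic shape by splitting
# out system messages. A sketch of one translation step, not a full shim.
def openai_to_anthropic(messages: list[dict]) -> tuple[str, list[dict]]:
    """Return (system_prompt, chat_messages) from an OpenAI-style history."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    return "\n".join(system_parts), chat
```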

Please go over function calling ASAP; really looking forward to it. From my tests, Haiku is amazing with a few examples, but it still has some issues when I go upwards of 4 functions that can be called.

carterjames
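For context on the multi-function issue above: Anthropic's tool-use API takes a `tools=` list where each entry is a name, description, and JSON Schema. A sketch of that shape (the weather tool is hypothetical); precise, distinct descriptions are generally what helps smaller models choose correctly among many tools:

```python
# Build one entry for the tools= list of client.messages.create().
# The weather tool below is a hypothetical example.
def make_tool(name: str, description: str,
              properties: dict, required: list[str]) -> dict:
    """Assemble a tool definition with a JSON Schema for its inputs."""
    return {
        "name": name,
        "description": description,
        "input_schema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

weather_tool = make_tool(
    "get_weather",
    "Get the current weather for a city.",
    {"city": {"type": "string", "description": "City name, e.g. Berlin"}},
    ["city"],
)
```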