Anthropic’s new 100K context window model is insane!

Anthropic released a new LLM with a 100K-token context window. In this video I'll explain what this means, and we'll look at a demo.
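A 100K-token window means a book-length document can go straight into the prompt instead of being chunked and retrieved. A minimal sketch of packing a document into a single prompt; the Human/Assistant turn format and the token estimate are assumptions for illustration, not Anthropic's exact spec:

```python
# Sketch: embed a large document directly in one prompt for a
# long-context model. Turn format assumed, not verified.
def build_prompt(document: str, question: str) -> str:
    return f"\n\nHuman: Here is a document:\n\n{document}\n\n{question}\n\nAssistant:"

doc = "lorem ipsum " * 30_000  # very roughly on the order of 100K tokens
prompt = build_prompt(doc, "Summarize the document in three bullet points.")
```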

Get your Free Token for AssemblyAI👇


#MachineLearning #DeepLearning
Comments

You literally picked THE PERFECT podcast to summarize. That's literally the one I've been trying to find a time window to digest (having a 2 year old makes this extremely difficult!)
Excellent!!

pmarreck

Great walkthrough/explanation and video format as usual!

lltaha

Great video, this model looks so powerful! Thanks for sharing!

luisxd

Wow, I spent weeks learning about embeddings and this throws it all out the window. 😂

julian-fricker

Pretty incredible, the speed of progress here.

answerai

So the advantages of Claude over e.g. OpenAI with LangChain are: 1) I don't have to summarize parts of larger documents in order to later retrieve the ones deemed relevant for answering a given question, and thus 2) I don't have to worry about any kind of vector store. Did I get this right?
When using Claude, since the input prompt contains the whole large text corpus, am I feeding a lot more tokens during inference and thus paying more?

time.
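The trade-off raised above can be put in back-of-envelope numbers: the full-context approach pays for every token of the corpus on every query, while retrieval pays only for the chunks it sends. The per-token price below is a placeholder, not Anthropic's actual rate:

```python
# Back-of-envelope prompt-token spend: whole corpus every query vs.
# only retrieved chunks. Price is a PLACEHOLDER, check current pricing.
PRICE_PER_1K_PROMPT_TOKENS = 0.01  # assumed placeholder price in USD

def prompt_cost(num_tokens: int) -> float:
    return num_tokens / 1000 * PRICE_PER_1K_PROMPT_TOKENS

cost_full_context = prompt_cost(100_000)  # whole document in the prompt
cost_retrieval = prompt_cost(2_000)       # only the retrieved chunks
```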

Hello, the video is very good. I would like to ask: does Claude's API have a SYSTEM prompt like gpt-3.5? Or is there some strongly weighted prompting technique to keep it from forgetting the identity or situation you give it?

mjxxz
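A common workaround when an API exposes no dedicated system role is to place the persona instructions at the top of the Human turn, sketched here with hypothetical names and an assumed turn format:

```python
# Sketch: emulate a "system" prompt by prepending persona instructions
# to the Human turn. Helper name and format are illustrative assumptions.
def with_persona(persona: str, user_message: str) -> str:
    return f"\n\nHuman: {persona}\n\n{user_message}\n\nAssistant:"

prompt = with_persona(
    "You are a patient coding tutor. Keep this role for the whole chat.",
    "Explain what a context window is.",
)
```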

How expensive is the claude-v1-100k model?

JazevoAudiosurf

Hi @AssemblyAI, excellent video as always! I just had a question: is the Anthropic API free to use, or do we need to buy a subscription for this model?

DeepakSingh-jizo

How long before you create an agent that listens to your YouTube videos and then responds to commenter questions that were already answered in the video?

martinsherry

That section about putting 100K tokens into context was eye-opening. That means you could put in the original three Star Wars scripts and ask it to write a whole sequel?!

terogamer

Do I always need to send the whole data [text] every time when doing prompt engineering?

guptafamily

What was the response time of the prompts you demoed?

MichaelScharf

Yes, but how much does it cost if for each prompt we have to send the 100,000 tokens of the document to the API? It seems quite expensive to use...

aurelienb

Does it work in the chat interface now, without calling the API?

jackzhang

Has Claude improved over the past month? Last time I tested it, GPT-4 was far superior. There's little point switching unless the underlying Claude model has also improved. I'd rather use embeddings with a superior GPT-4 model than a 100k context in a mediocre Claude model.

duudleDreamz

Hi,
1- Do I have to pay for the API key?
2- Can this read PDF files? Thanks

yusufkemaldemir

Does this get better if you give it the same context multiple times?

DistortedV

How much money does it cost to feed it 100k tokens?

Moyano__

I requested access, got accepted, and then nothing. No email, no web page. Nada. Fail. Not ready for prime time.

tangobayus