InternLM - A Strong Agentic Model?

In this video I look at InternLM, an LLM that focuses on math, reasoning, and supporting function calling.
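As a quick taster for the Colab walkthrough later in the video, here is a minimal sketch of loading the model with Hugging Face transformers. It assumes the internlm/internlm2_5-7b-chat checkpoint and the convenience chat() helper that InternLM checkpoints ship via remote code; details may differ from what is shown on screen.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "internlm/internlm2_5-7b-chat"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so it fits on a single GPU
    trust_remote_code=True,
).cuda().eval()

# InternLM chat checkpoints expose a chat() method through remote code.
response, history = model.chat(tokenizer, "What is 17 * 24?", history=[])
print(response)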

For more tutorials on using LLMs and building Agents, check out my Patreon:

🕵️ Interested in building LLM Agents? Fill out the form below

👨‍💻Github:

⏱️Time Stamps:
00:00 Intro
01:33 Hugging Face Leaderboard
01:57 InternLM Github
03:02 InternLM: LMDeploy
04:29 InternLM: Lagent
06:36 InternLM Paper
08:29 InternLM Hugging Face Models and Datasets
08:39 InternLM on Ollama
08:54 Code Time
09:15 InternLM Hugging Face Implementation (Colab)
13:12 InternLM Chat Format
13:39 InternLM Function Calling
15:01 InternLM Running Locally through Ollama
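For the two Ollama timestamps above, a rough sketch of talking to a locally running copy through the ollama Python client; it assumes the internlm2 tag from the Ollama library, which may need adjusting to the exact tag/size you pull.

# Assumes the Ollama server is running and the model has been pulled
# beforehand with "ollama pull internlm2"; pip install ollama for the client.
import ollama

reply = ollama.chat(
    model="internlm2",
    messages=[{"role": "user", "content": "Summarise what an LLM agent is in one sentence."}],
)
print(reply["message"]["content"])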
Comments

Very useful content, thank you Sam for your valuable insights into these topic areas.

keithmatthews

LMDeploy is quite an interesting framework for deploying and quantizing most of the Chinese models. It also works fairly well on Kaggle, given that it also supports older GPUs.
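For reference, a minimal sketch of the LMDeploy pipeline being described, assuming the internlm/internlm2_5-7b-chat checkpoint; pre-quantized (e.g. 4-bit AWQ) variants can be served the same way, but check the LMDeploy docs for the exact options.

# Minimal LMDeploy sketch (pip install lmdeploy); the model id is an assumption.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2_5-7b-chat")  # TurboMind backend by default on CUDA
responses = pipe(["Explain function calling in one paragraph."])
print(responses[0].text)  # each Response object carries the generated text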

LaHoraMaker

Thank you, Sam, for once again highlighting the most interesting new models/techniques in this fascinating field. I note that InternLM 2.5 explicitly states it "supports gathering information from over 100 websites" with an implementation using Lagent. I'm sure a LangChain implementation could easily be created as well. Actually, fine-tuning models with sources for information not in the model (like current weather or news), together with function calling and JSON support, and using LangChain for finer control, would be a great method for using smaller local models. (I feel more comfortable using LangChain than a model-specific framework, if possible.) I would love to see other models add this approach. I wonder how much of this is done in pretraining vs. the base model (guess I'll have to look at the paper 😉).
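On the LangChain point, a hypothetical sketch of binding a tool to a locally served model; it assumes the langchain-ollama integration and an internlm2 pull in Ollama, and whether that particular tag actually emits tool calls is something to verify.

# Hypothetical sketch: pip install langchain-ollama langchain-core.
# The model tag and its tool-calling support are assumptions, not verified.
from langchain_core.tools import tool
from langchain_ollama import ChatOllama

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (stubbed for illustration)."""
    return f"Sunny and 22 C in {city}"

llm = ChatOllama(model="internlm2")
llm_with_tools = llm.bind_tools([get_weather])

msg = llm_with_tools.invoke("What's the weather in Dublin right now?")
print(msg.tool_calls)  # the JSON-style tool calls the model decided to make, if any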

toadlguy

That's a nice SMALL model for function calling, alright... appreciate you bringing it to my attention.

mickelodiansurname

Hello Sam, thanks for bringing this wonderful model to our attention. There is just some confusion in the video between commercial usage and a commercial licence: commercial usage is allowed without submitting any form, but under the open-source licence you might need to open-source any derivative work (i.e. any fine-tuning you make, for example). If you want to make non-open-source stuff with it (why would you 😊?), you will need to submit the form to obtain a commercial licence allowing you to do that.
It is quite a classic business model in open-source software.

omarelfaqir

Great job, mate! This is a bit like GLM-4; I'm not sure about the benchmark comparison. Both are designed for agentic use and could be trained with agentic instructions.

waneyvin

Tried it with CrewAI and Autogen. In the case of CrewAI, it did not work... it could not call the agents properly or pass the right parameters to the tools. Perhaps because the tools were Annotated but it wanted to pass JSON, or it could not map the JSON onto the Annotated function calls. To its credit, it did not hallucinate either, trying to please with answers. I also saw a lot of Chinese coming up in the log file ;). In the case of Autogen, I got the error message: "LLM does not have a tool calling function." Both experiments were with Ollama locally, where Llama 3.1 has been tested successfully (of sorts, with plenty of hallucinations).
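To illustrate the mismatch being described, here is a small hypothetical shim showing how a model-emitted JSON tool call has to line up exactly with an Annotated Python function; the tool name and fields are made up for illustration.

# Hypothetical example: mapping a JSON tool call onto an Annotated function.
import json
from typing import Annotated

def get_stock_price(ticker: Annotated[str, "Ticker symbol, e.g. AAPL"]) -> str:
    return f"{ticker}: 123.45"  # stub value, for illustration only

TOOLS = {"get_stock_price": get_stock_price}

# What a function-calling model typically emits (tool name + JSON arguments).
raw_call = '{"name": "get_stock_price", "arguments": {"ticker": "AAPL"}}'
call = json.loads(raw_call)

# The JSON keys must match the annotated parameter names exactly, or dispatch fails.
result = TOOLS[call["name"]](**call["arguments"])
print(result)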

nikosterizakis

Kind of interesting: if one of the stronger points of InternLM 2.5 is being able to support agents, I wonder what part of the training data makes it more capable of supporting agents, given that function-calling data only accounts for 16%. Thanks for the video, I'll have to find a way to make time to try it out.

kenchang

I couldn't get InternLM to work well with RAG or any embeddings. It gives OK answers to simple prompts.

ManjaroBlack

Thanks! Its Spanish is only so-so, but it's good that it's all evolving :)

SonGoku-pcjl

What is the agentic aspect? Maybe I don't understand something or missed something?

WillJohnston-wgew

Am I the only one who misses a memory module in Lagent? I'm gonna test this ASAP though.

attilavass

If each model gets a higher rating than its predecessors, when will we reach 100? Also, if I don't watch such videos, will this happen later?

lapozzunk

Fun fact: these Chinese models are banned in the USA and can't be used for a commercial product.

TheGuillotineKing