Build the LLM OS | Autonomous LLMs as the new Operating System

Let's build the `LLM OS`, inspired by the great Andrej Karpathy

Can LLMs be the CPU of a new operating system and solve problems using:
💻 software 1.0 tools
🌎 internet browsing
📕 knowledge retrieval
🤖 communication with other LLMs
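The four capabilities above all reduce to one loop: the LLM decides which tool to invoke, and deterministic software-1.0 code does the actual work. A toy sketch of that dispatch loop — every name here is illustrative, not the actual LLM OS API, and the router is a stand-in for a real model call:

```python
# Toy "LLM as CPU" dispatcher: a router chooses a tool, the tool runs.
# route_task stands in for the model's decision; in the real system an
# LLM makes this choice via function calling.

def calculator(expression: str) -> str:
    """Software-1.0 tool: deterministic code the LLM can call.
    eval with empty builtins is sketch-only, not a security boundary."""
    return str(eval(expression, {"__builtins__": {}}, {}))

def knowledge_lookup(query: str) -> str:
    """Stand-in for retrieval from a knowledge base."""
    kb = {"llm os": "An architecture where an LLM orchestrates tools, "
                    "memory, and other LLMs."}
    return kb.get(query.lower(), "no entry found")

TOOLS = {"calculator": calculator, "knowledge": knowledge_lookup}

def route_task(task: str) -> tuple[str, str]:
    """Toy router: anything with digits goes to the calculator."""
    if any(ch.isdigit() for ch in task):
        return "calculator", task
    return "knowledge", task

def llm_os_step(task: str) -> str:
    tool_name, tool_input = route_task(task)
    return TOOLS[tool_name](tool_input)

print(llm_os_step("2 + 3 * 4"))  # -> 14
print(llm_os_step("llm os"))
```

The point of the sketch is only the shape: the "CPU" never computes answers itself, it selects and sequences tools.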

Comments

Super stuff. Hope to see the part where this gets implemented: can be customized and fine-tuned for specific tasks

alextiger

Really like the idea of selecting assistants for a specific task

VTCEnglish

16:08 — you got me right here, this part convinced me to try this.

RagnarVonLodbrok

Wow, this is so cool! Is the chat history also saved to memory or a database? Or maybe an option, when you let it do research, to save the research so you can later ask questions about it? Or a tool, so you can just say "save history" in the chat. Or first ask it to summarise the chat history and save that, or only save the last answer/research. And after that, save the last answer/research to a md or pdf. So cool to see an LLM as a CPU.
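The export this comment asks for is mostly glue code: serialize the message list to a file the user can re-read or re-ingest later. A minimal sketch that dumps a chat history to Markdown — the `save_history_md` helper and its format are hypothetical, not a feature of the app shown in the video:

```python
# Hypothetical "save history" tool: write a list of chat turns to a
# Markdown file. PDF export would be the same idea with a different writer.
from pathlib import Path

def save_history_md(history: list[dict], path: str) -> None:
    lines = ["# Chat history", ""]
    for turn in history:
        lines.append(f"**{turn['role']}**: {turn['content']}")
        lines.append("")
    Path(path).write_text("\n".join(lines))

save_history_md(
    [{"role": "user", "content": "research the LLM OS"},
     {"role": "assistant", "content": "summary..."}],
    "history.md",
)
```

Exposed as a tool, the LLM could call this itself when the user types "save history".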

chriskingston

This is not an "OS"; what you built is an "IDE". Of course, "OS" sounds catchier for a title. But what should really go under the name "LLM OS" is firmware able to boot your computer or phone to a state sufficient for you to communicate tasks to it. Instead of Windows full of bloatware, it might just start with your welcome page and run some Docker containers to spawn and orchestrate the tasks.

DmitryMikushin

Question: I tried LangGraph yesterday and had to make a router node (which can have a lot of options) that generates the next step in the route. Is this the same as the CPU in the LLM OS, or is it different? Thanks for your great work, love it ❤️❤️❤️

chriskingston

Great presentation. Can we use the LLM OS with Ollama?

cedrickahoue

"Research new/updated functionality required to improve this LLM OS App and produce a new version of it every day at noon"

... Delegating task to Research Assistant ---> Redirecting Task to Python Assistant

KCMNJL

Someone said it, I'll say it again: how is this different from the conversational agents LangChain showed how to build like a year ago? Is it a nicer API? (LangChain at this point is like bloatware.) But what is the new pattern here? We've always had agents, where agent = LLM + knowledge + memory + tools. Otherwise, nice demo!!
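The agent = LLM + knowledge + memory + tools equation in this comment can be written down almost literally. A bare sketch with illustrative field names and a stub standing in for a real model call — not any library's actual `Agent` class:

```python
# The classic agent decomposition as a dataclass: one model, plus
# tools, memory, and a knowledge store. All names are illustrative.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    llm: Callable[[str], str]                                 # model call
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    memory: list[str] = field(default_factory=list)           # chat history
    knowledge: dict[str, str] = field(default_factory=dict)   # retrieval store

    def run(self, task: str) -> str:
        self.memory.append(task)                  # remember the request
        context = self.knowledge.get(task, "")    # toy retrieval step
        return self.llm(f"{context}\n{task}".strip())

def echo_llm(prompt: str) -> str:
    """Stub model: echoes its prompt so the wiring is visible."""
    return f"answered: {prompt}"

agent = Agent(llm=echo_llm, knowledge={"hi": "greeting"})
print(agent.run("hi"))  # prompt = retrieved context + task
```

Under this framing the commenter has a point: "LLM OS" is the same four pieces, and the novelty claim rests on how they are orchestrated, not on the pieces themselves.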

sazztazz

Reiterating something he said: DO NOT expose your terminal in production. The Python assistant is very dangerous if you don't know what you're doing and how to isolate it. Users will be able to wipe your machine if you don't take this seriously.
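To make the warning concrete: the bare minimum is to never `exec` generated code inside your own process — run it in a separate interpreter with a timeout, and for real deployments put it in a container or VM with no access to your filesystem or network. A minimal sketch of that bare minimum, using only the standard library (the `run_untrusted` helper is illustrative, not what the video's app does):

```python
# Run untrusted code in a child Python process with a timeout.
# This limits runtime and crashes, NOT filesystem/network access;
# real isolation needs a container, VM, or seccomp-style sandbox.
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode, ignores env/site
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(6 * 7)"))  # prints 42
```

A `subprocess.TimeoutExpired` handler and resource limits (e.g. `resource.setrlimit` on POSIX) would be the next layer; neither makes this production-safe on its own.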

TheDrokon

Can I swap out OpenAI for something like Groq?

liviupopeanga

I think an entire OS is too much for an LLM, but I expect to use one as a Linux shell in the near future.

joseeduardobolisfortes

When can we see the same application running on local LLMs, using the Ollama or LM Studio API?

hilfigerk

Why Postgres for memory? Is there something about memory that makes Postgres a good fit?

wryltxw

Where do you enter the API key? Apologies, I'm pretty new to this.

willwimbiscus

Agents share the same memory, OK, but there's no way to chain agents. Storage and the knowledge base should be tools, btw.

dancoman

Interesting title, but disappointed to see that it's just a website wrapper for GPT-4. I was expecting some level of kernel development that integrates AI at the core, using GPT as an input device like a mouse or keyboard.

santhosh

Now you're just renaming an "agent" as an "OS". There is nothing novel here; you're just taking LLM processes and equating them to OS processes. It's all quite pointless and doesn't expose anything new or interesting.

avi