Using LLMs on the command line

In this video, we're going to learn how to use Simon Willison's llm library, which lets us run almost any LLM from the command line. OpenAI, Claude, Grok, Ollama: they're all covered!

We'll see how to run basic commands, how to have the LLM summarise the output of the tree command, and finally I'll share some shell scripts that use the tool.
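
As a taster, here is a minimal sketch of the kind of commands covered (this assumes the llm tool is installed via pip and an OpenAI key is configured; the prompts are just examples):

    # install the tool and store an API key
    pip install llm
    llm keys set openai

    # run a one-off prompt against the default model
    llm "What does the tree command do?"

    # pipe the output of tree into the model with a system prompt
    tree | llm -s "Summarise what this project appears to be for"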

Comments

This is absolutely awesome. I have been using this to do one-shots and piping the output to Piper for TTS. Mark, I may not always understand everything, but by far you have educated me the most out of everyone I follow. It's good when I don't understand, because it encourages me to push further, so I end up learning more. Thank you so much for putting all this stuff together!
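
For anyone curious, the pipeline looks roughly like this (the Piper voice model is just an example):

    # speak a one-shot answer aloud via Piper TTS
    llm "Summarise grep in one sentence" | piper --model en_US-lessac-medium.onnx --output_file answer.wav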

Steamfactory

Having my companion directly on the command line like grep is awesome. It seems quite straightforward and very easy to use, which is a great plus.

DannyGerst

Hi Mark, this very much reminds me of fabric, but easier to use. As a proposal: create a script that writes pytest unit-test files for a set of files (or a directory), executes them, and debugs them until they run, and if they run, shows the result :)

Basically, feed the results of the execution back into the llm (I think there is even an option for that, -c).
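
Roughly what I have in mind, as a sketch (the file names and prompts are just illustrations):

    # ask for tests, run them, then feed the results back into the same conversation
    llm "Write pytest unit tests for utils.py" > test_utils.py
    pytest test_utils.py 2>&1 | llm -c "Here are the test results; fix any failing tests"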

MaksimBronsky

It is very much like Aider. Does it support OpenRouter API keys? Can it create or update files in the folder where you run it? Does it work fast with local Ollama models? I have tried local Ollama models and they are very slow. Do you know how to speed them up?
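
For reference, this is roughly what I tried for Ollama (plugin and model names may differ on your setup):

    # the llm-ollama plugin exposes local Ollama models to the tool
    llm install llm-ollama
    llm -m llama3.2 "Explain what this directory is for"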

totomomo

Very cool, I have plenty of directories that need checking.

paulmiller

I prefer "aichat" (sigoden/aichat), written in Rust and with a bunch of cool features.

ricardokullock