Open Interpreter 🖥️ ChatGPT Code Interpreter You Can Run LOCALLY!


In this video, I show you Open Interpreter, an open-source alternative to ChatGPT's Code Interpreter. This is one of the most incredible AI projects I've tested in a long time. You can do everything you can with Code Interpreter, except you're able to run it locally on your computer. That means you can interact with any file on your local machine AND even manipulate your operating system.

Enjoy :)

Join My Newsletter for Regular AI Updates 👇🏼

Need AI Consulting? ✅

Rent a GPU (MassedCompute) 🚀
USE CODE "MatthewBerman" for 50% discount

My Links 🔗

Media/Sponsorship Inquiries 📈

Links:
Comments

Once you can get this running with Code Llama, please make another video showing that off. It'd be awesome to see where local models' current limitations are for this sort of advanced usage.

theresalwaysanotherway

Going on vacation and letting this take over all of my Oracle databases, ttyl

sucharandomusername

Hi Matthew, I want to tell you that I just tried it with Llama, and after the error I installed llama-cpp-python (`pip install llama-cpp-python`). It's incredible how well it works. Thank you very much for your videos, which are excellent!!! Sorry for my English. 😅

marceloboedo

When I saw the interpreter rewrite its own code to get the tasks done, I realized we are now living in the future.

Matthew, hats off to you for being here from the start and continuing to deliver such quality content.

NOTNOTJON

For the issue at the end, just copy each parameter and run the install command yourself. Not sure why it's broken, but that works.

arlogodfrey

So I had it scan all my image files and delete duplicates: successful. Then I asked it to convert some images I made with SD into .ico files, and it did it flawlessly. Then I asked it to change the square icons into circle icons: 100%, within seconds. Most use I've gotten out of AI so far. Simply amazing!

jdsguam

Yes, yes please Matthew: when you've got it sorted out with Code Llama, please make a video showing how to install it. Also show more use cases, like reading a CSV file of historical stocks that you previously told it to download, having it run some statistics on it, and plotting it all in a terminal CLI console!!
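The stock-stats idea above can be sketched with the Python standard library alone (no pandas required). The CSV contents, column names, and values below are made-up assumptions for illustration, not data from the video:

```python
import csv
import io
import statistics

# Hypothetical data: a small CSV of historical daily closes.
SAMPLE = """date,close
2023-08-01,189.5
2023-08-02,192.6
2023-08-03,191.2
2023-08-04,187.9
2023-08-05,190.4
"""

def read_closes(csv_text):
    """Parse the 'close' column out of the CSV text into floats."""
    return [float(row["close"]) for row in csv.DictReader(io.StringIO(csv_text))]

def ascii_bars(values, width=20):
    """Scale each value into a bar of '#' chars for a quick terminal plot."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return ["#" * (1 + round((v - lo) / span * (width - 1))) for v in values]

closes = read_closes(SAMPLE)
print(f"mean={statistics.mean(closes):.2f} stdev={statistics.stdev(closes):.2f}")
for bar in ascii_bars(closes):
    print(bar)
```

The bars scale each close between the minimum and maximum of the series, which is enough for eyeballing a trend in a plain terminal.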

hernansanson

Please note: Anaconda is a turn-off for some due to its 1 GB install size, but there's Miniconda, which skips all the UI guff.

johnpope

I was able to run CodeLlama Python 34B on 4 Maxwell GPUs, loaded in 4-bit with float32 for computation. The key is to use the Transformers loader, I think. It takes about 5-6 GB per GPU, so less than 24 GB total. I used text-generation-webui.

pensiveintrovert

We are REALLY close to AI virtual assistants. Like extremely close. I give it 3 months. So cool to see this.

JohnLewis-old

Hey Matthew, I just watched your video and I have to say, it's absolutely amazing! I love how you're showcasing the latest developments in AI and sharing them with the world. It's so encouraging to see how technology is advancing and shaping our future. Keep up the fantastic work! #AIAdvancements

umarfarooque

Running `pip install llama-cpp-python` before running `interpreter --local` worked on my Mac M1 (for the 13B model on Medium)... But its

kalvinarts

To run Code Llama:

LlamaCpp installation instructions:
For MacBooks (Apple Silicon):
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
For Linux/Windows with Nvidia GPUs:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python

DucNguyen-

It would be nice if you could run this within PyCharm or Spyder or some other interface. We're getting closer every day.

bogdanpatedakislitvinov

Wow, thanks! Given the huge benefits and the security concerns of letting this run unattended on a system, I'd love to know how to run it locally inside a Docker container that has GPU access for locally run LLMs. I think that would make an amazingly helpful setup video.
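A minimal sketch of the sandboxed setup this comment asks for, assuming Docker and the NVIDIA Container Toolkit are already installed on the host (the toolkit is what makes `--gpus all` work); the mount path is a made-up example:

```shell
# Mount only a single throwaway directory, so the interpreter can
# touch nothing else on the machine, and pass the GPUs through.
docker run --rm -it --gpus all \
  -v "$PWD/sandbox:/sandbox" \
  -w /sandbox \
  python:3.11 \
  bash -c "pip install open-interpreter && interpreter --local"
```

Everything installed inside the container is discarded on exit (`--rm`); only files written to the mounted `sandbox` directory survive.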

murraymacdonald

Is no one wondering about security? In 2022 we were giving OpenAI some of our data by pasting it into ChatGPT. Now we are installing this on our computers with full access to our lives... I highly recommend using Docker containers and isolating the data that you share with the container... At least then you still control what you give to OpenAI.

H_z_r_

Another great guide, Govender, thank you! I hope everyone gets to see your content!
Truly incredible; this is a game changer!

louisapplewhaite

Hey Matthew, reinstall llama-cpp-python before running `interpreter`. The following snippet will resolve the llama error.

```
pip uninstall -y llama-cpp-python
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
```

I'm loving the content on this channel. Cheers!

bb_ninja_cat

Great video, as usual! The installation went smoothly, but I'm encountering an issue with my API Key. A message keeps appearing, stating that the model either does not exist or I do not have access to it. If anyone has faced a similar situation and could offer some advice, I would be grateful.

wvagner

I'm using Code Interpreter on my MacBook M1 and ran this command:

LlamaCpp installation instructions:
For MacBooks (Apple Silicon):
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir

For Linux/Windows with Nvidia GPUs:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python

alissonryan