AnythingLLM | The easiest way to chat with your documents using AI | Open Source!


AnythingLLM is by far the easiest open-source tool to get started chatting with your documents. Using simple off-the-shelf software like OpenAI's API and a free PineconeDB instance, you can be up and running in seconds.

No local LLM setup, no crazy RAM specifications - AnythingLLM will run on a potato but will give you unlimited power.

Built this because running a local LLM that isn't anywhere close to as performant or as good as GPT-3.5/4 is just ridiculous. Let's do the easy thing first, shall we?

Comes with data collection scripts, an awesome UI, and easy setup tooling.

#openai #chatgpt4 #chatwithdocs #localgpt #privategpt #gpt #useGPT #gpt4all #ai #aitools #aitools2023 #aitoolsforbusiness #opensource #openaiapi #nodejs #reactjs #opensourcesoftware
Comments

Private documents need to stay private. Sending them to ChatGPT API defeats the purpose.

Neolisk

Wow, for someone who has zero experience with LLMs but was looking for an easy solution to chat with documents this looks fantastic. I really like the folder watch so we can just drop all the documents there at once. Keep it up!

vitalis

This looks incredible! Looking forward to local model support. FYI, there is a tool in private beta that simplifies instruction chains and handles 1-click installs, CPU queuing, and memory management across various AI tools. Public release coming soon, possibly within days...

ArielTavori

Great explanation, great video, and great product. Thank you for making it open source. Not many are willing to release this for free.

Versole

Epic bro!!!! OUCH!!
I think two more arms just sprouted out of my back.

I think I can get used to this!!

dubnet

The documents are only private with a locally running LLM, not when you are using OpenAI.

justMeFromDe

Dude THANK YOU. Very excited to test this out today, big time props to you for making this happen!

mmarrotte

Excellent project, thanks! This is perfect for ingesting public info I want to use, like making summaries of YouTube videos, etc. OR chatting with my own data that’s already in public channels.

bzzt

I was looking for something like this and was going to build something similar. Can't wait to try it out; it looks great.

JohnDoe-ieiw

Looks amazing! Excited to give it a try. Thank you! 💕

heather.zenplify

Great project. Really like it. Thanks so much for open sourcing it.

uhtexercises

Dude, that was great. Thanks for this, I'm definitely going to test it out.

jacobusstrydom

This looks awesome, and so useful!!!
Getting this going right now - I was looking for something like this 🙏🙇‍♂🥳

AaronMcCloud_Me

Can't get it to work on Windows 10. Would need more thorough setup instructions.

ZeroTheHero

Looking forward to checking this project out, it looks awesome! Does it have to use an API from an online service, or can you still host your own model and use this? Thanks for the video!

Aaronius_Maximus

I wouldn't say it's the easiest, but it's like localGPT v100, only using OpenAI 😢. I like the level of organization; seems you put in a ton of work, good job 👍 I am trying to build something like it, but backend only for now at least. It is called privategptserver. I would definitely use some of the awesome work you've done 😂❤

yossefdawoad

Hey Tim, I had a question. Say I have a PDF file of 1,000 pages that gets vectorized and stored in the vector database. When I query something, it is checked for similarity against the vector database, the top contexts are retrieved, and those contexts along with the query are sent to GPT-3.5 for an answer? If so, and the document is too big, wouldn't the model crash due to the max sequence limit? How are you handling this? When I try to run inference on LLaMA 2 with huge texts, it errors out with max sequence length. How do you handle this?

NiranjanAkella
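The question above describes the standard retrieval-augmented generation flow, and the usual answer to the max-sequence problem is to split the document into small overlapping chunks before embedding, so that only a handful of short, relevant chunks (not the whole 1,000 pages) are sent to the model. A minimal sketch of that chunking step in plain Node.js - an illustration only, not AnythingLLM's actual code, and the chunk size and overlap values here are arbitrary examples:

```javascript
// Split a long document into overlapping chunks so each embedding/LLM
// call stays well under the model's max sequence length. The overlap
// preserves context that would otherwise be cut at chunk boundaries.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  if (overlap >= chunkSize) {
    throw new Error("overlap must be smaller than chunkSize");
  }
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    start += chunkSize - overlap; // step forward, keeping `overlap` chars shared
  }
  return chunks;
}

// Example: a 2,500-character document yields 4 chunks of <= 1,000 chars.
const chunks = chunkText("x".repeat(2500), 1000, 200);
```

Each chunk is embedded and stored separately; at query time only the top-k most similar chunks are stitched into the prompt, which is why the model never sees anything near the full document length.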

Tim - Thanks for the wonderful work. Any plans for including Llama support?

mohitbansalism

This is super cool!

I know everything is pretty fluid with development, but do you have the updated location of that collector main file?

ktruax

Oh man, please make it work with the Orca LLM! And thank you so much for making a Docker image!

dazzaofficial