How I Created an AI Research Assistant That Costs $0 to Run (Ollama RAG)

Hey there, tech enthusiasts! 🌟 In today's video, I'm thrilled to walk you through creating your very own AI research assistant right on your computer using Ollama! This powerful tool is designed to streamline your research process, from searching for articles to extracting specific insights from them. 🤖💡


We'll start by searching for research articles with the arxiv Python package, then convert those articles into embeddings and store them in a Qdrant database. After that, we'll use a large language model served by Ollama to answer questions directly from the research articles. And to top it off, we'll build a user-friendly interface for the entire process. 🚀
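As a rough sketch of the first step: the arxiv package used in the video wraps the public arXiv Atom API, so the search boils down to building a query URL. A minimal stdlib-only version (hypothetical helper name, not the video's actual code) might look like:

```python
from urllib.parse import urlencode

# Public arXiv Atom API endpoint that the `arxiv` package wraps.
ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(topic: str, max_results: int = 5) -> str:
    """Build a search URL for the arXiv Atom API (hypothetical helper;
    the video calls the `arxiv` package directly instead)."""
    params = {
        "search_query": f"all:{topic}",
        "start": 0,
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

# Fetching this URL returns an Atom feed of matching papers, each with a
# PDF link that the pipeline then downloads, embeds, and stores in Qdrant.
```

The rest of the pipeline (embedding, Qdrant storage, Ollama queries) builds on the PDFs this step retrieves.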

Whether you're a beginner or an experienced developer, this tutorial will provide you with step-by-step instructions, including installing necessary packages, setting up the environment, and coding your way to a sophisticated AI assistant. Don't miss out on enhancing your research with AI - let's dive in! 📚🔍

Timestamps:
0:00 - Introduction to AI Research Assistant
1:10 - Installation and Setup
2:03 - Importing Required Modules
2:12 - Searching and Downloading Papers
3:05 - Converting to Embeddings and Database Storage
4:51 - Setting Up Llama Language Model for Queries
8:23 - Adding a User Interface with Gradio
9:58 - Conclusion and Future Videos

🔗 Resources & Links:

#AIResearch #Assistant #LiteratureReview #AIResearchTools #AIToolsForResearch #AIForResearch #AIToolsForResearchPaper #AIForResearchPaper #AIInResearch #FutureOfAI #AIForAcademicResearch #LiteratureReviewAI #LiteratureReviewAITool #AIToolsForLiteratureReview #AIToolsForResearchWriting #ResearchAITools #BestAIForResearch #BestAIForResearchPaper #AILiteratureReview #AIToolsForResearchers #BestAIToolsForResearchPaper #ElicitAIResearchAssistant

This guide is perfect for anyone looking to integrate AI into their research or curious about the potential of local AI models. Don't forget to hit the like button, subscribe, and click the bell icon to stay updated on more AI-related content. Your support helps make complex tech accessible to a wider audience. Thanks for watching! 🌈✨
Comments

I am not a developer by any means. I understand enough to see the possibilities. Your videos are quick, clean, and show all the steps. Sometimes I struggle with errors I get in Python. I like to experiment and see how things work. Man... this is incredible. I was able to follow all your instructions and code. I got some errors around the embeddings (the PDF install). Anyhow, I just wanted to say THANK YOU!! Great work. I am a fan of your work!

GiovaDuarte

Hey, some feedback on the video if you don't mind: can you show us the end result at the beginning, along with the use case for it?

BillyRybka

Really interesting! It would be nice to expand the project in two ways: 1) use sources beyond arXiv; 2) involve an LLM in searching for complex information in the text (not only a keyword in the title of the article). For example, find those papers that apply "LLM" to "Clustering".

GustoAgrumi
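The second suggestion above could start even before any LLM is involved, by filtering on abstracts rather than titles. A crude keyword stand-in (hypothetical helpers, not from the video; a real version would ask an LLM or use embedding similarity to judge whether a paper applies one concept to the other):

```python
def matches_concepts(abstract: str, concepts: list[str]) -> bool:
    """Return True if every concept appears in the abstract.
    Naive keyword check standing in for an LLM/embedding judgment."""
    text = abstract.lower()
    return all(concept.lower() in text for concept in concepts)

def filter_papers(papers: dict[str, str], concepts: list[str]) -> list[str]:
    """Titles of papers whose abstracts mention all the given concepts."""
    return [title for title, abstract in papers.items()
            if matches_concepts(abstract, concepts)]
```

Swapping the keyword check for an LLM call ("does this paper apply LLMs to clustering?") would get the behaviour the commenter describes.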

You are awesome! I have wanted to build this for so long but just didn't have the time to dig through and learn all the different bits I need. Now I have a simple roadmap to get something running. Thank you, man!

michaelmarkoulides

Your channel is awesome. Please keep up the good work

michaeltinglin

It works after a 5-minute install. Superb AI information, thx!

ukls

This was FANTASTIC! Well explained. Well coded. Both ran first try. Very useful/practical. Big thank you. Keep up the good work.

vincentnestler

We should be able to build iteratively on your benevolence with help from agents like OpenDevin. This is a great time to be navigating the AI galaxy!

oryxchannel

Awesome video. It would be great to add PubMed and Google Scholar, depending on what the user chooses. Great work 🎉

ajeeshsunny

Chief, good tutorials as usual.

Just confused about when we use:
1) text_splitter.split_text &
2) ?

I'm having an error while querying the Qdrant instance to make sure it's working; it fails with:


query = "Zama homomorphic"
found_docs =

AttributeError: 'ScoredPoint' object has no attribute 'collection_name'


Thanks in advance.
Keep up the good work.

ginisksam
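On the question and error above: the two splitter methods differ in input type (`split_text` takes a raw string and returns strings; `split_documents` takes `Document` objects and preserves their metadata), and a raw Qdrant search returns `ScoredPoint` hits whose stored text lives in `.payload`; a point has no `collection_name` attribute, which is what raises the `AttributeError`. A minimal stand-in showing where the text actually lives (plain dataclass, not the real `qdrant_client` model):

```python
from dataclasses import dataclass, field

@dataclass
class ScoredPoint:
    """Minimal stand-in for qdrant_client's ScoredPoint: a search hit
    carries a similarity score and a payload dict, not a collection_name."""
    score: float
    payload: dict = field(default_factory=dict)

def best_chunk(points: list[ScoredPoint]) -> str:
    """Read the stored text chunk from the highest-scoring hit."""
    best = max(points, key=lambda p: p.score)
    return best.payload.get("page_content", "")
```

The exact payload key depends on how the chunks were stored; `page_content` is the convention when LangChain writes to Qdrant, but treat it as an assumption here.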

Great video! Thank you so much.
Why do you concatenate the articles rather than entering them one by one? I would expect spillover from one article to the next (like assigning authors to the wrong text).

MrSuntask

Question: if you ask the same question, will your code search and download again? Is there a check for whether the file has already been downloaded and embedded? I can imagine that papers might be updated from time to time; does your code also update the embeddings? If the code has to download and embed the files for every query, it would be time-consuming. Curious about your approach to solving this. Great video, thanks!

BamiCake

It would also be nice for the output to contain references to the papers in the format [x].

InfinitiveStar

Thanks. To improve the video even(!) further, you could showcase an example query that doesn't work well (yet).

userou-igze

Bravo.

Wondering what other article-hosting databases could be interfaced with, and how easy it would be to rejig the code to access documents locally.

saintsscholars

Instead of arXiv, can we use Markdown files from Marker?

intellect

Hey! Once more, great video! Could you make a similar one for finance, i.e. read financial reports and reply/evaluate? Maybe finance-specific open-source models (finance-chat-gguf) could interpret the reports better than a Mixtral or DocGPT model? Thanks!

mzeinxp

Great job as always! However, I have a small request. I am struggling to use Ollama: I got it running on Linux, but I can't connect to its API when I run the code in VS Code on Windows. I think I just need to do everything in the Ubuntu terminal, but can you please make a short tutorial on how to run Ollama on Linux and connect to it from Windows? I assume not everyone has Apple silicon to run Ollama quickly. I often try to modify your code to run with LM Studio, but it doesn't always work, and Ollama seems the fastest to integrate. Please let us know; maybe there is a YouTube video about it already? Thanks!!

greatsarmad

In order to do research, your output needs references to the articles you downloaded.
How is that achieved after all the PDFs are combined into one large text and split into chunks?

iham
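The last question (and the earlier one about concatenation spillover) comes down to the same fix: instead of merging every PDF into one string before splitting, split each paper separately and keep a source label on every chunk, so retrieved chunks can be cited as [title]. A hypothetical sketch, not the video's code:

```python
def chunk_with_source(papers: dict[str, str], size: int = 500) -> list[dict]:
    """Split each paper on its own and tag every chunk with its source,
    so text from one paper never bleeds into a chunk from another and
    answers can cite [source] for each retrieved chunk."""
    chunks = []
    for title, text in papers.items():
        for start in range(0, len(text), size):
            chunks.append({"source": title, "text": text[start:start + size]})
    return chunks
```

Vector stores like Qdrant can hold this source field as per-point payload metadata, which the answer-generation prompt can then surface as bracketed references.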