Ollama: Run Large Language Models Locally (Llama 2, Code Llama, and Other Models)

Get up and running with large language models, locally.
Run Llama 2, Code Llama, and other models. Customize and create your own.
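Once a model has been pulled, Ollama serves it over a local REST API (by default on port 11434). As a minimal sketch of how an application could query it, assuming `ollama serve` is running and the `llama2` model has been pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single complete JSON response
    # instead of a stream of partial tokens
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the completed text is returned under the "response" key
        return json.loads(resp.read())["response"]
```

With the server running, `ask("llama2", "Why is the sky blue?")` returns the model's reply as a string; everything stays on the local machine.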
----------------------------------------------------------------------------------------------------
Support me by joining the membership so that I can keep uploading videos like this.
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End-to-End RAG LLM App Using LlamaIndex and OpenAI: Indexing and Querying Multiple PDFs

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments

I'm feeling lucky that I got this video in my suggestions.

vishalnagda

Krish, fantastic video and great explanation! Keep it up.

neerajshrivastava

Thank you, Krish sir. In "Building RAG from Scratch", Sunny sir also covered Ollama. Both of you give foundational knowledge and updates on GenAI. It was very useful, sir.

divyaramesh

We need long-form videos like before. Thanks for your efforts ❤

mehdi

Content is helpful, thanks for your effort.🎉

AjaySharma-jvqn

Thanks, Krish, for sharing this knowledge. What an amazing model it is!

rajendarkatravath

Thank you so much for such a great video. I have a query: I am getting very slow responses. Does response speed depend on system configuration? I checked system usage while running, and it isn't using many resources. Can you tell me how we can increase response speed?

manjeshtiwari

Why is Ollama not using the full GPU? It's only using the CPU. Please guide.

SomethingSpiritual

Hey Krish, thanks for doing this video in Windows.

kenchang

Thanks, Krish, the brilliant and innovative master of AI 😊. I have a question about hosting: assume I'd like to deploy my solution on a server. Will I need to run Ollama and my app in two separate Docker containers that communicate with each other, or can they be implemented in a single container?

lionelshaghlil
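For the hosting question above, one common layout keeps Ollama and the application in separate containers on a shared network. A minimal docker-compose sketch (the `app` service and its image name are hypothetical placeholders for your own application; `ollama/ollama` is the official image):

```yaml
services:
  ollama:
    image: ollama/ollama          # official Ollama image
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama # persist downloaded models across restarts

  app:
    image: my-llm-app:latest      # hypothetical: your application's image
    environment:
      # inside the compose network the app reaches Ollama by service name
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama_data:
```

Running both in one container is possible but separating them lets each be scaled, updated, and restarted independently.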

Thanks, it's a great video. I wanted to ask: when we say "local", what configuration does that mean, a CPU- or GPU-based system? Are the models compressed/quantized or the same as the originals? Is there a model size limitation relative to the local system configuration?

NISHANTKumar-ctpb

Can you make a complete video on production-ready open-source LLMs, basically LLMOps?

jacobashwinmathew

Can we get a video about reading tables using Unstructured and similar frameworks?

krishnaprasadsheshadri

Thanks for sharing knowledge. Can we fine-tune a downloaded model on company domain content without the data being shared? I mean, does it comply with IPR if we use it locally?

usingsk

Great tutorial! Can you please make a video on fine-tuning a model on a custom CSV dataset and integrating it with Ollama?
For instance, suppose I have a class imbalance problem in my dataset. Can I fine-tune a model and then ask it, through Ollama, to generate more samples of the minority class using the fine-tuned model?

nasiksami

Sir, please complete the fine-tuning LLMs playlist as much as possible.

velugucharan

Hey sir 😄, please make a video on BioMistral (an LLM trained on medical and scientific data). It would perfectly fit your AI Nutritionist. Thanks for your daily dose of GenAI.

omarnahdi

Make a video on the Python framework for Ollama. Make an end-to-end project and host it somewhere real people can use it.

AjayYadav-xisj

Can we just download and use it, or do we also require a Meta AI API key?

sawankumar

Can this read a document and answer my questions about that document?

sanjaynt