Spring AI with Ollama - Use Spring AI to integrate locally running LLM.

In this video, I explain how to use Spring AI, one of the newest Spring frameworks, with a locally running LLM via Ollama.
The integration is clean, and we can reuse the existing Spring ecosystem for our use cases.
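To give an idea of what wiring this up involves, here is a minimal configuration sketch. The property names below are assumptions based on the Spring AI Ollama starter and may differ across Spring AI versions; the model name `llama3` is just an example.

```properties
# application.properties - point Spring AI at the local Ollama server
# (property names assumed from the Spring AI Ollama starter; verify for your version)
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3
```

With the Spring AI Ollama starter on the classpath, this is typically all the configuration needed for auto-configuration to provide a chat client backed by the local model.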

Download Ollama from:-

While downloading a model locally, please ensure you have sufficient RAM for it.
You should have at least 8 GB of RAM available to run the 7B models, and 16 GB to run the 13B models.
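Once Ollama is installed, pulling and sanity-checking a model looks roughly like this (the model tag `llama3` is an example; pick one your RAM can handle):

```shell
# Download a model into the local Ollama store
ollama pull llama3

# Quick sanity check: run a one-off prompt from the terminal
ollama run llama3 "Say hello in one sentence"

# List the models downloaded locally
ollama list
```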

GitHub link for the examples explained in the video:-

#spring-ai #ollama #llm #largelanguagemodels #llama3

00:00 Introduction - Spring AI
00:45 Ollama
01:40 Ollama setup
05:45 Project setup
07:45 Endpoint for local LLM
14:00 Streaming response
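The two endpoints covered in the chapters above (a blocking endpoint for the local LLM and a streaming one) can be sketched roughly as follows. This is a hypothetical sketch, not the video's exact code: it assumes Spring Boot 3 with the Spring AI Ollama starter on the classpath, and the `ChatClient` fluent API, whose package and method names may vary between Spring AI versions.

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class LocalLlmController {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder against the
    // Ollama model declared in application.properties
    public LocalLlmController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    // Blocking endpoint: waits for the full completion, then returns it
    @GetMapping("/chat")
    public String chat(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }

    // Streaming endpoint: emits chunks as the local model generates them
    @GetMapping(value = "/chat/stream", produces = "text/event-stream")
    public Flux<String> stream(@RequestParam String message) {
        return chatClient.prompt()
                .user(message)
                .stream()
                .content();
    }
}
```

The streaming variant returns a Reactor `Flux<String>` served as server-sent events, which is what lets the browser render tokens as they arrive instead of waiting for the whole response.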
Comments

There are many videos showing how to use Spring AI with OpenAI's GPT-3 or GPT-4, which require an API key and involve costs. It’s great to see you leveraging Ollama and open-source models without any associated costs!

dhavasanth

After importing your code, the server starts up and runs, but when I hit the RAG API from Postman it throws a 500.
Why? All my endpoints are correct.
Can you help me here?

NaveenPrasadKarnam