Get up and running with local ChatGPT/gLLMs with Ollama in R

My presentation of Ollama, first given at the Societal Analytics Lab of the Faculty of Social Science at the Vrije Universiteit Amsterdam.

Content:

0:00 Intro
2:30 What is Ollama?
4:55 Why use generative AI/generative large language models?
8:40 Why run things locally?
13:58 Quick install Ollama
15:52 First look at Ollama
18:39 Example use cases
19:41 Demo of (r)ollama (R package to access Ollama)
52:57 Known Issues
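As a taste of the (r)ollama demo covered in the video, here is a minimal sketch of talking to a local Ollama server from R. It assumes Ollama is already installed and running on the default port, and that the model name used here is available; function names are from the rollama package, so check its documentation for the current interface.

```r
# install.packages("rollama")  # R client for a local Ollama server
library(rollama)

# Check that the Ollama server (default http://localhost:11434) is reachable
ping_ollama()

# Download a model into the local Ollama library (model name is an assumption;
# pick any model listed on the Ollama site)
pull_model("llama3.1")

# Send a single prompt to the local model
query("Summarise what Ollama does in one sentence.", model = "llama3.1")
```

Because everything runs against the local server, no API key or internet connection is needed once the model has been pulled.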
Comments

Many thanks for this tutorial, Johannes!

danielsaldivia

Hi Johannes, can we deploy it locally without an Nvidia GPU, say on a MacBook Pro?

CanDoSo_org

Hi Johannes,

What a fantastic video! I am also an R lover, and I have a question after watching your video.

Is it possible to tune the LLM from R as well? For example, could we load extra information into it, such as academic papers or reports?

Cheers,
Wesley

wesleylohoi