Deploy Large Language Model (LLM) Locally. Zephyr-7b-beta test

This is a simple tutorial on deploying an LLM on your local computer with Windows and an Nvidia GPU.

00:00 - Requirements
00:30 - Miniconda installation
02:47 - Create conda environment
03:43 - Nvidia CUDA
04:50 - Get text-generation-webui
05:28 - Installing required libraries
06:46 - Starting text-generation-webui
07:07 - Finding the right LLM
09:07 - Downloading the model
10:43 - Loading the model
11:23 - Testing
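
The chapter list above can be sketched as a sequence of shell commands. This is a rough outline under assumptions, not the exact commands from the video: the text-generation-webui repo URL is real, but the Python version, CUDA build, and Hugging Face model repo name are illustrative placeholders.

```shell
# Sketch of the steps above, run from an Anaconda/Miniconda prompt.
# Adjust versions and the model repo to match what the video shows.

# Create and activate an isolated conda environment
conda create -n textgen python=3.11 -y
conda activate textgen

# Install PyTorch with CUDA support (CUDA 12.1 build assumed here)
pip install torch --index-url https://download.pytorch.org/whl/cu121

# Get text-generation-webui and install its required libraries
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Download a Zephyr-7b-beta model (repo name is an example, not confirmed)
python download-model.py HuggingFaceH4/zephyr-7b-beta

# Start text-generation-webui (serves a local web UI, port 7860 by default)
python server.py
```

From there, the model is loaded and tested through the web UI in the browser, as in the final chapters of the video.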

Music: Stomping Rock (Four Shots) - AlexGrohl
Comments

Can you show the changes to these steps for running on an Ubuntu server?

bitthal