Few Shot Prompting with Llama2 and Ollama

In this video, we explore the capability of a quantized version of the Llama2 model to determine if a sentence is a question. We work in a Jupyter Notebook, process sentences from a CSV file, and evaluate the model's performance using metrics like precision and recall scores.
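The workflow described above can be sketched roughly as follows. This is a hedged, minimal example, not the video's actual notebook: the prompt wording, example sentences, and labels are invented, and the call to Ollama's local REST endpoint (`http://localhost:11434/api/generate`) assumes `ollama serve` is running with a `llama2` model pulled.

```python
import json
import urllib.request

# Invented few-shot examples: (sentence, is-it-a-question label).
FEW_SHOT_EXAMPLES = [
    ("Is it raining outside?", "Yes"),
    ("The cat sat on the mat.", "No"),
    ("Could you pass the salt?", "Yes"),
]

def build_prompt(sentence: str) -> str:
    """Prepend the same few-shot examples to the sentence under test."""
    lines = ["Decide whether each sentence is a question. Answer Yes or No."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}\nQuestion: {label}")
    lines.append(f"Sentence: {sentence}\nQuestion:")
    return "\n\n".join(lines)

def classify(sentence: str, model: str = "llama2") -> str:
    """Send one prompt to the local Ollama API and return the raw answer."""
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(sentence), "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip()

def precision_recall(y_true, y_pred, positive="Yes"):
    """Plain-Python precision/recall so no scikit-learn install is needed."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

In practice you would read the sentences from the CSV with `csv` or pandas, call `classify` per row, and feed the gold labels and predictions into `precision_recall` (or scikit-learn's `precision_score`/`recall_score`).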

#ollama #LLM #JupyterNotebook #ModelEvaluation #NaturalLanguageProcessing #llama2

Comments

Which model will it pick up? Suppose I wanted to set a system prompt for `llama2:13b-chat-q5_K_M`; where would I need to specify that?
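For what it's worth, in Ollama both the base model and the system prompt live in the Modelfile you pass to `ollama create` with `-f`. A hedged sketch (the system prompt text here is invented for illustration; `FROM` and `SYSTEM` are the documented Modelfile directives):

```
# Modelfile — plain text, no file extension required
FROM llama2:13b-chat-q5_K_M

SYSTEM """You are a classifier that answers only Yes or No."""
```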

bankniftylearning

What is the extension of the file? I am getting an error while trying to execute this command in a Jupyter notebook on AWS SageMaker: `ollama create question-llama2-base -f`.
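For reference, the file passed to `-f` is a plain-text Modelfile with no extension, and `ollama create` is a shell command, not Python. A hedged sketch, assuming the Modelfile sits in the notebook's working directory and the Ollama server is installed and running on the instance:

```
# In a terminal; inside a Jupyter cell, prefix the line with "!" so it
# runs as a shell command rather than Python.
ollama create question-llama2-base -f Modelfile
```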

khangjrakpamarjun

Isn’t it hard to control the output of generative models based only on prompt instructions? They seem to go nuts sometimes even when they work for most examples. It doesn’t seem reliable… at least with the current methods. What’s your opinion?
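One common mitigation (not from the video; a hedged sketch) is to constrain the task to a closed label set and normalize whatever the model returns, falling back to a default when the output doesn't match any known label:

```python
import re

# The closed label set for the question/not-question task.
LABELS = ("Yes", "No")

def normalize_label(raw: str, default: str = "No") -> str:
    """Map free-form model output onto a closed label set.

    Generative models may answer "Yes, it is a question." or add
    punctuation; take the first word that matches a known label and
    fall back to `default` when nothing matches.
    """
    for token in re.findall(r"[A-Za-z]+", raw):
        for label in LABELS:
            if token.lower() == label.lower():
                return label
    return default
```

This doesn't make the model reliable, but it makes failures measurable: unparseable outputs collapse to a known default instead of breaking the evaluation loop.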

cgmiguel

This is great and I was looking for the same thing! Now my challenges are:
1. Locally fine-tuning an LLM on custom CSV data for multi-class classification.
2. Doing multi-class classification on custom data with the fine-tuned model.
3. As I have extreme class imbalance for a few classes, I want to generate synthetic data for the minority classes using the fine-tuned model and test the data quality with step 2.

Please enlighten me if you have any ideas about these tasks, or any relevant sources. Thank you very much for your tutorial.
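On point 3, a common pattern is to prompt the fine-tuned model with a handful of real minority-class examples and ask for new ones. A minimal sketch, where the prompt wording and class names are invented and the actual generation call is left to Ollama's local API:

```python
def build_generation_prompt(class_name: str, seed_examples, n_new: int = 5) -> str:
    """Few-shot prompt asking the model to produce synthetic
    minority-class examples in the style of the seeds."""
    lines = [f"Here are examples of the class '{class_name}':"]
    lines += [f"- {ex}" for ex in seed_examples]
    lines.append(
        f"Write {n_new} new, distinct examples of the same class, one per line."
    )
    return "\n".join(lines)
```

The generated lines would then be run back through the step-2 classifier: synthetic examples the fine-tuned model cannot itself assign to the right class are a cheap quality filter before adding them to the training set.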

nasiksami

This is excellent, thank you. I hadn't seen <s></s> tokens being used in prompt engineering before. It opened my eyes to BOS/EOS tagging.
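For context, Llama 2's chat format wraps each turn in `<s>`/`</s>` (BOS/EOS) markers with `[INST]` tags, and the system prompt sits inside `<<SYS>>` tags in the first turn. A hedged sketch of assembling a few-shot prompt in that format (the example content is invented; check Meta's prompt-format notes before relying on it):

```python
def llama2_few_shot(system: str, examples, query: str) -> str:
    """Build a Llama-2-chat style few-shot prompt.

    Each (input, answer) pair becomes one `<s>[INST] ... [/INST] answer </s>`
    turn; the system prompt appears in the first turn only, and the final
    query is left open (no </s>) for the model to complete.
    """
    turns = []
    for i, (inp, out) in enumerate(examples):
        sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if i == 0 else ""
        turns.append(f"<s>[INST] {sys_block}{inp} [/INST] {out} </s>")
    turns.append(f"<s>[INST] {query} [/INST]")
    return "".join(turns)
```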

mazjaleel

Does the system provide the same three examples for every test sentence during few-shot prompting? Or are the three examples given once for the whole set of test sentences? Thank you
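In the usual few-shot setup (a hedged sketch of the general pattern, not necessarily exactly what the video does), the same fixed examples are re-sent as part of the prompt for every test sentence, because plain completion calls keep no state between requests:

```python
# Invented few-shot block; the same three examples for every sentence.
FEW_SHOT = [
    ("Where is the station?", "Yes"),
    ("The train left at noon.", "No"),
    ("Can I help you?", "Yes"),
]

def prompts_for(test_sentences):
    """One prompt per test sentence, each carrying the full few-shot
    block, since the model sees only what is in the current request."""
    shots = "\n".join(f"{s} -> {label}" for s, label in FEW_SHOT)
    return [f"{shots}\n{sentence} ->" for sentence in test_sentences]
```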

nurbengisucam