Install Qwen2 VL 7B Locally - Step by Step Tutorial - Quality Vision Model

This video shows how to install the Qwen2-VL 7B model locally and test it on image understanding, VQA, OCR, etc.
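For readers who want to try the same local test themselves, here is a minimal sketch of single-image inference with Hugging Face transformers, following the pattern on the Qwen2-VL model card. The model ID and message-dict keys are taken from that card; the helper function names are illustrative, and the heavy `run_inference` call is only defined, not executed (it downloads several GB of weights and needs a GPU).

```python
def build_messages(image_path: str, question: str) -> list:
    """Build the chat payload Qwen2-VL's processor expects: one user turn
    containing an image part followed by a text part."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": question},
        ],
    }]

def run_inference(image_path: str, question: str) -> str:
    """Load Qwen2-VL-7B and answer a question about one image.
    Downloads the weights on first run and needs a CUDA GPU."""
    from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
    from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

    model = Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

    messages = build_messages(image_path, question)
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)

    generated = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens so only the newly generated answer remains.
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

# Cheap to run anywhere: just building the message payload.
msgs = build_messages("receipt.jpg", "Transcribe all text in this image.")
```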

🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:

Coupon code: FahdMirza

#qwen2vl

PLEASE FOLLOW ME:

RELATED VIDEOS:

All rights reserved © 2021 Fahd Mirza
Comments

Wow, absolutely impressive! I can hardly believe it! Did you create the test images? I just want to make sure the model hasn't been trained on them. These were really difficult tasks. In the first one the model got every single character right, even though all the extra symbols wouldn't normally be used that way. It also understood the intention despite the added literal spam! I couldn't read the middle text either, only a few words. It would have been nice if the model had given at least some indication that something was there, but that's a minor issue; it's all still insane! Not that long ago ChatGPT was using Tesseract as a dedicated OCR tool in the background because it couldn't do the task itself.
Btw, did you make a mistake by using the 2B processor at 4:35, or does the 7B model just use the same one as the smaller model? Could you maybe test its understanding of lines and sketches? I remember a recent paper demonstrating how all current vision models fail to understand simple geometrical figures, crossings of lines, and the relations between them.

testales

Suppose I have created a bot with this model: can I chat with it text-to-text? And if I give an image as input along with a prompt and start chatting, will the bot remember the previous history?

arjunreddy

Can this be installed directly on Windows, or does it need WSL?

RubbinRobbin

How do I use the API via gradio_client with Qwen2-VL-72B?

irenemartins

What are the hardware requirements? I tried to run the code on a SageMaker ml.g5.xlarge instance with an NVIDIA A10G GPU and 24 GiB of VRAM, but I got a "CUDA out of memory" error. I upgraded to ml.g5.12xlarge with 4x A10G and still ended up with the same error, which does not make sense to me. I suspect there is something wrong with the memory usage setup, but I don't know how to fix it.
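A plausible explanation for both failures, sketched under the assumption that the model is loaded via `from_pretrained` without a `torch_dtype` or `device_map` argument: the default float32 weights for 7B parameters alone exceed a 24 GiB A10G, and without `device_map="auto"` the whole model is placed on a single GPU even when four are available. The arithmetic and the usual fixes:

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Rough GiB needed just for the model weights (activations and the
    KV cache come on top of this)."""
    return n_params * bytes_per_param / 2**30

# 7B parameters in float32 need ~26 GiB -> does not fit one 24 GiB A10G.
fp32_gib = weight_memory_gib(7e9, 4)
# In bfloat16 the same weights need ~13 GiB -> fits with headroom.
bf16_gib = weight_memory_gib(7e9, 2)

def load_with_less_memory():
    """Not called here (downloads the full model): the two arguments that
    typically resolve this OOM on A10G-class instances."""
    import torch
    from transformers import Qwen2VLForConditionalGeneration
    return Qwen2VLForConditionalGeneration.from_pretrained(
        "Qwen/Qwen2-VL-7B-Instruct",
        torch_dtype=torch.bfloat16,  # halves weight memory vs float32
        device_map="auto",           # shards layers across all visible GPUs
    )
```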

leiyang

Any chance to run it with Ollama & Docker or LM Studio on Windows with 32 GB RAM and 11 GB VRAM?

MediaCreators

It does not run on Colab with a T4, and it does not run on CPU either.

ROKKor-hstg