Run powerful LLMs on NPU with AnythingLLM | Snapdragon X Elite | Promo

In this video, we showcase that AnythingLLM now supports running models directly on the NPU in Microsoft Copilot+ PCs with Snapdragon chips! Running LLMs and other models on the NPU provides an incredible mix of speed and power efficiency compared to their CPU counterparts.

Available in AnythingLLM v1.7.2, coming soon for ARM64 Windows PCs
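For anyone curious how NPU inference like this is typically wired up on Snapdragon machines: the video shows no code, but a common route is ONNX Runtime's QNN execution provider, which dispatches work to the Hexagon NPU. The sketch below is illustrative only; the model file, input name, and token values are assumptions, not AnythingLLM's actual internals.

```python
# Minimal sketch of on-device inference through ONNX Runtime's QNN
# execution provider, which targets the Hexagon NPU on Snapdragon chips.
# The model file and input name are hypothetical; AnythingLLM's real
# implementation is not shown in the video.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "llama-3.2-3b.onnx",  # assumed pre-converted ONNX model
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    provider_options=[
        {"backend_path": "QnnHtp.dll"},  # HTP backend = the NPU
        {},                              # CPU fallback needs no options
    ],
)

# Dummy prompt tokens; real input names and shapes depend on the export.
token_ids = np.array([[1, 15043, 29892]], dtype=np.int64)
logits = session.run(None, {"input_ids": token_ids})
print(logits[0].shape)
```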

----
This video was published for Qualcomm for CES 2025. It is not a deep dive into the NPU or the underlying technology, nor is it meant to be a tutorial. It is a demonstration of AnythingLLM's native ability to use the NPU for on-device inference.
---

#CES2025 #CEO #NPU #LLM #ai #localai #aitools #new2025 #aidesktop #copilot #microsoft #qualcomm #qualcommsnapdragon #qnn #llama #ollama #aitutorial
Comments

man, I am in awe of your mad development skills. I've been following the evolution and sporadically putting AnythingLLM Desktop through its paces with each upgrade, and well, that's why I'm saying this. Love what you're doing and how good you are at it, man. Cheers🍻 and thank you 🙏

PlanetaryPoetsMultimediaGroup

Tim, thank you and the team for building such a great product! I've been working for the past few weeks on NPU-supported models and frameworks, so this is going to expedite a lot of testing in the near future. Cheers!

AEsau

we are so excited to see this integration go live!

QualcommDeveloper

Awesome work, Mr. Carambat. We all appreciate what you're doing. 👍

tyronemiles

I have an Intel Core Ultra 7 155H with an 11 TOPS NPU. Wish the app would support it, although even on CPU it runs surprisingly well. Thank you for making this!

justADeni

Great tool. Thanks for your and everyone's work on it.

PhilEhI

I can see what you have been up to. Looking great. I am using both cloud and desktop.

russellwright

Sweet! You're using an NPU on a product line I don't own much of. I have a bunch of Intel laptops and a bunch of AMD laptops with those new NPUs in them, and I want to try using those with AnythingLLM. When are you going to release a version for them? I do appreciate the focus on the NPUs.

teromeehaley

Excellent timing on this for CES. Looking to get the new ASUS Zenbook 14 laptop.

johnnythegeek

I've been trying to use AnythingLLM for about a year and still don't know whether the AI model can access the whole embedded PDF or just a small part of it. Why don't you show the context window available for the particular model in the Workspace I use? In the AI answer, for example?

zbyszeklupikaszapl

Looking forward to Apple Silicon NPU support

LeeHarrington

Mr. Carambat, is it possible to connect a workspace to a storage folder, local or remote?

tyronemiles

In 1.7.2 it says only X Elite devices are supported; when will the X Plus chips also be supported? Don't they have the same NPU?

elwii

I have downloaded your AnythingLLM app, but it won't install when I double-click on it. What am I missing?

AspenVonFluffer

Although I don’t use AI a lot, I built a machine for this and love this application. Would love to see how well it works with Notion!

HatchdLabs

Is it possible to get LLM models running on an AMD NPU (in my case, the NPU in an AMD Ryzen 7 8845HS) anytime soon? Or is this for ARM NPUs only?

minecraftfanredstone

Excited to try this out on the Surface Laptop 7! Is the selection of models that will run on the NPU limited to Llama 3.2 3B and Llama 3.1 8B? Will it not be possible to run models downloaded from Hugging Face on the NPU?

jsnx

How did you get NPU support on Qualcomm? Is this llama.cpp based?

monstercameron

Is there support for using the NPU on Intel Lunar Lake devices under Windows?

bglaettli

I can't find the NPU version on your homepage. It would be a great tool.

ChrisTian-ezjq