Google Just Turned the RPi into a Supercomputer...



In this video we run real-time video object detection using the Coral AI USB Accelerator stick.

The Coral USB Accelerator adds a Coral Edge TPU to your Linux, Mac, or Windows computer so you can accelerate your machine learning models. Here's a quick guide to getting started.

All you need to do is download the Edge TPU runtime and PyCoral library. Then we'll show you how to run a TensorFlow Lite model on the Edge TPU.
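For example, once the runtime and library are installed, a minimal PyCoral inference script looks roughly like this (a sketch only: the model, label, and image filenames are placeholders, and any Edge TPU-compiled .tflite classification model will do):

```python
# Minimal sketch of running an Edge TPU-compiled TensorFlow Lite model with
# PyCoral. Assumes the Edge TPU runtime and pycoral are installed; the
# model/label/image filenames below are stand-ins for real local files.
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')  # Edge TPU model
interpreter.allocate_tensors()

labels = read_label_file('labels.txt')
size = common.input_size(interpreter)
image = Image.open('parrot.jpg').convert('RGB').resize(size, Image.LANCZOS)

common.set_input(interpreter, image)
interpreter.invoke()  # inference runs on the Edge TPU

for c in classify.get_classes(interpreter, top_k=3):
    print(f'{labels.get(c.id, c.id)}: {c.score:.4f}')
```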

To learn more about the hardware, see the USB Accelerator datasheet.

From $35
You'll recognise the price along with the basic shape and size, so you can simply drop your new Raspberry Pi into your old projects for an upgrade; and as always, we've kept all our software backwards-compatible, so what you create on a Raspberry Pi 4 will work on any older models you own too.
Comments

Lol, couldn't Coral already do face/object recognition with Python TensorFlow 2 years ago?

Handlebrake

Please do not add text in the middle of the video.

ahmetemin

Google has had Coral TPUs out for a few years now. Too bad the USB version is really hard to find these days. Especially at MSRP.

Movies

I have been using the Coral TPU with Frigate for video object recognition (mostly persons and vehicles) with my security cameras for almost 3 years now. It is a nice little device.


For those who just have a Pi and a camera, you can do object tracking and recognition without Coral. Quite a bit depends on your code and what you are trying to achieve.
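One possible sketch of that, using only OpenCV's stock HOG person detector on the CPU (assumes opencv-python is installed and a camera at index 0; this is just one Coral-free option, and it's slow at higher resolutions):

```python
# Rough sketch: person detection on a plain Raspberry Pi with OpenCV only,
# no Coral accelerator. Assumes opencv-python and a camera at device 0.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))  # smaller frames keep the Pi responsive
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('detections', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```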

JohnCanniff

The subtitles in the centre of the screen are annoying. Dislike, don't recommend channel.

staticmin

Remove those captions from the middle of the screen; it's a pathetic idea to put captions there.

ShardulPrabhu

The Coral USB Accelerator is almost 3 years old now. What exactly do you mean by "Google just released its..."?

jeffafaaah

When Google released Coral AI, a Raspberry Pi cost $35. Now, with scalper pricing, it may be cheaper to buy a Cray. I guess the Pi will save you some electricity cost.😉

JohnPMiller

This is fairly old kit. Yes, it does push the Raspi to its limits, but a Raspi's limits are quite low, if you can even get one. The challenge you will run into with these USB accelerators is the bandwidth over USB 3.0, which is about 500 MB/s after overhead. A modern laptop (I can attest to the MacBook Pro M2 Pro being quite good for learning AI) or a cloud AI IDE would probably be better for learning these days.

michaelashby

Annoying subtitles, stopped watching. 😵‍💫

srh

WTF, why do YouTubers think it's a good idea to show those word-by-word captions in the middle of the video? I think just because someone popular started doing it, now pretty much everyone wants to do it. It's soooo annoying. I hate it so much that every time I see it, I immediately hit the "do not recommend channel" option. Your video seemed interesting, but I won't be watching it.

sergiobrito

Sorry, but the big white captioning of your speech is a real distraction while watching. I can already hear you, and YouTube has a closed-captioning option.

stanstocker

I cannot stress enough that you should really remove those subtitles. I found myself frustrated watching it because they were too distracting.

camf

I'm gonna push my raspberry pi to the absolute limit (by doing the computation on something else) and find out if it can handle one of the most challenging computations in all of computer science (citation needed)

thetastefultoastie

The video was ok but the text in the middle of the screen made this very painful to sit through. Subtitles belong at the bottom of the screen.

WYWHfirst

The captioning in the middle of the screen is supremely annoying. Top or bottom please.

ShaunBrown

Hi there, great video! I have a suggestion: I think you shouldn't put the live captions in the center of the screen; it's pretty distracting. Also, I at least don't like seeing spoken text revealed word by word; that feels a bit weird.

johetajava

You speak very clearly, so why the hard-coded subtitles over the center of the video? It's very distracting (for me at least). The content was great. Cool tool.

johnvogt

Interesting video, though to be honest I missed this product when it launched. A few thoughts based on my recent experiences with AI:
Generally, a lot of AI you want to run will run on a modest but modern CPU if needed: support varies by model, but you can often fall back to CPU execution and run these models slowly in software. That's not necessarily useful for high-speed, real-time applications, but it can be great for processing data in a pinch. I will also note that I'm fairly confident a modern multicore CPU will outperform the Coral AI accelerator, as it's more of a small proof-of-concept device for getting an initial application working than a tool for high-performance computing. And Coral only supports TensorFlow Lite (at int8 quantization, and only for CNN-style models, I believe), which means that many models will require tinkering to function on one of these devices. That's certainly doable, but it's good to know going in!
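A minimal sketch of that CPU fallback in PyTorch (the model here is just a stand-in; the same code runs on GPU or CPU, just slower on the latter):

```python
# Minimal sketch of CPU fallback in PyTorch: pick the GPU if present,
# otherwise run the same model (slowly) on the CPU in software.
import torch
import torchvision.models as models

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = models.resnet18(weights=None).to(device).eval()  # stand-in model

x = torch.randn(1, 3, 224, 224, device=device)  # dummy input batch
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])
```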

Also: many popular models have great CPU support and plenty of optimizations. llama.cpp comes to mind, but other models such as GPT4-x-Alpaca, Pygmalion, and so on also have a variety of optimizations that help with CPU-based compute, such as 4-bit quantization, AVX2 support, and so on.
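For instance, the llama-cpp-python bindings make that CPU path a few lines (a rough sketch: the model filename is a placeholder, a 4-bit quantized model is what keeps RAM usage modest, and n_threads spreads the work across CPU cores):

```python
# Rough sketch of CPU-only LLM inference via the llama-cpp-python bindings.
# The model filename is a placeholder for a locally downloaded quantized model.
from llama_cpp import Llama

llm = Llama(model_path='llama-7b.Q4_0.gguf', n_threads=8)  # hypothetical file
out = llm('Q: What does an Edge TPU accelerate? A:', max_tokens=64)
print(out['choices'][0]['text'])
```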
The other thing I'll note is that many models also support either a low-VRAM parameter (Stable Diffusion run in Automatic1111's WebUI comes to mind) or attention slicing (which essentially does the same thing; KoboldAI, a WebUI for large language models, supports it). What these do is let you run part of a model on CPU + system RAM, as opposed to running it all on GPU + VRAM, allowing you to run large models with a surprising degree of competence even on a modest computer, albeit often at a slower speed. You can also split attention between multiple GPUs with some Python-fu, though it's difficult to do so in a way that preserves performance without a high-bandwidth interconnect like NVLink, so that's often limited to professional or extremely high-end cards.
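A minimal sketch of the attention-slicing/offload idea using Hugging Face diffusers (assumes diffusers and accelerate are installed plus a modest CUDA GPU; the model id is just an example):

```python
# Rough sketch of low-VRAM techniques in Hugging Face diffusers: attention
# slicing computes attention in chunks, and CPU offload parks idle submodules
# in system RAM. Both trade speed for a lower peak VRAM footprint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5',  # example model id
    torch_dtype=torch.float16,
)
pipe.enable_attention_slicing()    # chunked attention -> lower peak VRAM
pipe.enable_model_cpu_offload()    # keep idle submodules in CPU + system RAM

image = pipe('a raspberry pi on a desk').images[0]
image.save('out.png')
```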

If I were to suggest a piece of affordable AI hardware for running popular models: you may want to do a video on using an Nvidia Tesla P40 in a small-form-factor build. It's affordable, supports popular frameworks, has a large VRAM pool, and uses less power than comparable modern cards (3090 and 4090, for instance), so it's a very reasonable entry point into machine learning. It would, though, require either a system with an iGPU or a small dedicated GPU, as the P40 does not natively support display out, as I recall.

novantha