Next-Gen Cameras Mimic Our Eyes

Cameras are everywhere around us, capturing data and sending it to servers that are costly to run and environmentally expensive. Scientists at RMIT University are looking at solving this issue by inventing cameras that work more like our eyes. These cameras are built from artificial neurons that can both capture images and store them in memory. While still in development, they hold a lot of promise for the next generation of cameras.
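
As an illustration of the sense-and-store idea described above, here is a minimal Python sketch of a pixel array in which each element only rewrites its stored value when the incoming light changes enough, and otherwise retains a slowly decaying copy of what it already holds. This is a toy model, not the RMIT device; the class name and parameters (decay_rate, threshold) are hypothetical.

```python
# Toy model of a "sense-and-store" pixel array, loosely inspired by the idea of
# neuron-like pixels that capture light and keep the result in place.
# All names and parameters here are illustrative assumptions.
import numpy as np

class NeuromorphicPixelArray:
    def __init__(self, height, width, decay_rate=0.95, threshold=0.1):
        self.state = np.zeros((height, width))  # in-pixel "memory"
        self.decay_rate = decay_rate            # how quickly stored values fade
        self.threshold = threshold              # ignore changes smaller than this

    def expose(self, frame):
        """Update each pixel's stored state only where the scene changed enough."""
        frame = np.asarray(frame, dtype=float)
        change = np.abs(frame - self.state)
        fired = change > self.threshold
        # Decay the old memory, then write new values only at pixels that "fired".
        self.state *= self.decay_rate
        self.state[fired] = frame[fired]
        return fired  # which pixels produced an event this exposure

# Usage: feed successive frames; only changed pixels are rewritten, so static
# parts of the scene persist in the array without generating new data traffic.
arr = NeuromorphicPixelArray(4, 4)
events = arr.expose(np.random.rand(4, 4))
```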

Disclaimer: While I am employed as a researcher at RMIT University, this video is separate from my research and teaching duties.

— References —

— Social —
You can hit me up on some of my socials or check out my research.

— Equipment —
If you are interested in some of the equipment that I use to make these videos, you can find the information below.
Comments

Caltech's Carver Mead began working seriously on this in 1967. A few decades in, he focused on vision specifically. 25 years ago he and others established Foveon, Inc., which basically modeled the retina in semiconductors.

STEAMerBear

Do you know if there are any cameras with a 120-degree field of view like that of a human eye, without distortion? It's difficult to film in small spaces currently.

sirjamesfancy

Superb video and production quality
Keep it up!!

applepeel

Does anybody remember the Foveon image processor?

VictorGallagherCarvings

The eye is efficient because only a small area (about 2 cm in diameter at 1 m) has high resolution. An 8K television has the pixel count needed to provide that high resolution anywhere the eye looks. The problem is: where is the eye looking?

That is where the brain comes in: it identifies what is of interest, so we can direct that high resolution there. The brain fills in a lot of our surroundings, giving us the impression that we see everything in high resolution. If you look at the camera on your phone, how much of the screen can you read? Not much.

An 8K TV could turn off most of its pixels by tracking the viewer's eyes and providing high resolution only where they are looking.

If a space-based camera is looking for submarines, use a low-resolution scan in which a submarine may only cover 10 pixels. Monitor that data with AI to direct the high-resolution camera toward anything that might be a sub.

Figuring out how the human brain does that with the low-resolution data it gets would be a good start.

stevesedio
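
The foveation idea in the comment above lends itself to a short sketch: keep full resolution only in a small window around the gaze point and average everything else into coarse blocks. The gaze coordinates, window size, and block size below are hypothetical inputs (for example, from an eye tracker) and are not tied to any particular camera or display API.

```python
# Illustrative foveated-capture sketch: sharp near the gaze point, coarse elsewhere.
import numpy as np

def foveate(frame, gaze_row, gaze_col, fovea=64, block=8):
    """Return a frame that is sharp near (gaze_row, gaze_col) and coarse elsewhere."""
    h, w = frame.shape[:2]
    # Coarse background: average over block x block tiles, then expand back up.
    coarse = frame[:h - h % block, :w - w % block]
    coarse = coarse.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    out = np.kron(coarse, np.ones((block, block)))[:h, :w]
    # Paste the full-resolution fovea window back in around the gaze point.
    r0, r1 = max(0, gaze_row - fovea), min(h, gaze_row + fovea)
    c0, c1 = max(0, gaze_col - fovea), min(w, gaze_col + fovea)
    out[r0:r1, c0:c1] = frame[r0:r1, c0:c1]
    return out

# Usage: only the region the viewer is looking at keeps its original pixels;
# the rest of the frame carries far less information.
frame = np.random.rand(480, 640)
result = foveate(frame, gaze_row=240, gaze_col=320)
```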

I feel like I just hit some sort of CIA information-distributing platform, wtf.

Salara

Do you know anything about Human Rights? Are YOU willing to lose your freedom, or even the FREEDOM of your loved ones?

Natalia-gvb