Multi-Stream + Multi Neural Network Inferencing on Xilinx Kria SoM-KV260

We are focused on making edge-based AI inferencing cost effective, which we describe as "revolutionizing AI deployment at the edge". To that end, we have built a multi-stream, multi-neural-network inferencing implementation on the Xilinx Kria KV260 board.
This approach reduces the cost per stream (CPS) of AI deployments targeting the edge. With it, edge solutions such as smart cities, retail analytics, video analytics, and computer vision applications become more cost effective while still delivering good performance in terms of power consumption and frame rate.
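The sketch below illustrates the general idea, not the exact pipeline used in the video: several video streams are each paired with their own compiled model and dispatched to the KV260's DPU through the Vitis AI VART runtime. The stream URIs, .xmodel file names, and preprocessing are placeholders; an actual deployment would also handle per-model quantization scaling and post-processing.

```python
# Minimal sketch, assuming the Vitis AI VART Python API and OpenCV on the KV260.
# Stream URIs, model paths and preprocessing below are illustrative placeholders.
import threading
import cv2
import numpy as np
import vart
import xir

def get_dpu_subgraph(xmodel_path):
    """Return the DPU subgraph of a compiled .xmodel."""
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    return [s for s in subgraphs
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]

def stream_worker(stream_uri, xmodel_path):
    """Decode one stream and run one model on it, frame by frame."""
    runner = vart.Runner.create_runner(get_dpu_subgraph(xmodel_path), "run")
    in_t = runner.get_input_tensors()[0]
    out_t = runner.get_output_tensors()[0]
    h, w = in_t.dims[1], in_t.dims[2]  # DPU tensors are NHWC

    cap = cv2.VideoCapture(stream_uri)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Placeholder preprocessing; real code must match the model's quantization.
        blob = cv2.resize(frame, (w, h)).astype(np.int8)
        inp = [np.expand_dims(blob, 0)]
        outp = [np.empty(out_t.dims, dtype=np.int8)]
        job_id = runner.execute_async(inp, outp)
        runner.wait(job_id)
        # ... post-process outp[0] per model (detection, classification, etc.)
    cap.release()

# Hypothetical stream/model pairing; adjust to the actual deployment.
pipelines = [
    ("rtsp://camera1/stream", "yolov3_kv260.xmodel"),
    ("rtsp://camera2/stream", "resnet50_kv260.xmodel"),
]
threads = [threading.Thread(target=stream_worker, args=p) for p in pipelines]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every worker shares the same DPU, the per-stream hardware cost is amortized across all pipelines, which is where the CPS reduction comes from.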
#Kria #MultiML #Inferencing #NN #Video #Streams #FPGA #Edge #AI #Acceleration #Xilinx #smartcity #VideoAnalytics #ComputerVision
Comments

Please share a Hackster project of this implementation.

SahaParikshit