beebucket mask detector

What are the reasons that often prevent the use of Machine Learning and Artificial Intelligence at the network's edge, close to the data originators?

Architectural Cohesion: Many existing offerings are built around centralized ML/AI services. Their concept rests on data centralization, so such solutions depend on continuous connectivity to those services.

Connectivity, Capacity and Costs: Many use cases deal with data-dense streams such as video or audio. Centralizing such data from many data originators is often not viable at larger scale, simply because the implied cost outweighs the anticipated business value.

Privacy: Not every use case permits data exposure outside the data originator's premises.

Resources: Distributed, scalable virtual resources are not a given, especially at the network's edge, or they may be considered too costly to pass a value assessment.

How about minimizing the required resource allocation of ML/AI at the network's edge so drastically that it can run in resource-restricted environments?
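
One common way to shrink a model's footprint for such environments is post-training quantization. The video does not say which technique the beebucket mask detector uses, so the sketch below is a generic, hypothetical illustration in PyTorch: a tiny stand-in classifier whose Linear weights are converted to int8 via dynamic quantization, reducing model size and speeding up CPU inference.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a small mask/no-mask classifier;
# not the actual beebucket model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # two classes: mask / no mask
)
model.eval()

# Post-training dynamic quantization: weights of the listed layer
# types (here nn.Linear) are stored as int8 instead of float32.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Same forward pass, smaller footprint -- suitable for an edge CPU.
dummy = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    print(quantized(dummy).shape)  # torch.Size([1, 2])
```

A real edge deployment would typically go further, for example static quantization with calibration data or a compiled runtime, but the principle is the same: trade a small accuracy margin for a much smaller resource budget.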