How the Associated Press applied AI to deliver automatic descriptions of live and non-live content

The Associated Press processes roughly 20,000 hours of original footage per year, consisting of both live and non-live content. To reduce editing time and to make content available across the organisation and to its customers in the shortest possible timeframe, the Associated Press was looking for a solution that processes audio and video and makes automatically generated shot lists available to its editors and its customers.

Limecraft delivered a pipeline that uses Vidrovr for scene description, facial recognition and gesture detection, and Trint for audio transcription, together producing a single coherent, frame-accurate description of each individual shot. Moreover, Limecraft solved the challenge of running this processing on live streams, several at a time. As a result, the Associated Press frees up several hundred hours of manual labour and significantly reduces turnaround times.
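Conceptually, the shot-list assembly aligns two time-coded metadata streams: per-shot visual analysis and the audio transcript. The sketch below is a minimal illustration in Python of how such a merge could work; the classes, field names and frame rate are assumptions for illustration only, not the actual Limecraft, Vidrovr or Trint interfaces.

```python
# Hypothetical sketch: merge per-shot visual metadata with transcript
# segments into a frame-accurate shot list. All structures and field
# names are illustrative assumptions, not any vendor's API.
from dataclasses import dataclass, field
from typing import List

FRAME_RATE = 25  # assumed frame rate for timecode conversion


@dataclass
class Shot:
    start_frame: int                                  # first frame of the shot
    end_frame: int                                    # last frame (inclusive)
    description: str                                  # scene description from video analysis
    faces: List[str] = field(default_factory=list)    # recognised persons


@dataclass
class TranscriptSegment:
    start_frame: int
    end_frame: int
    text: str


def frames_to_timecode(frame: int, fps: int = FRAME_RATE) -> str:
    """Convert a frame index to an HH:MM:SS:FF timecode string."""
    seconds, ff = divmod(frame, fps)
    minutes, ss = divmod(seconds, 60)
    hh, mm = divmod(minutes, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


def build_shot_list(shots: List[Shot],
                    transcript: List[TranscriptSegment]) -> List[dict]:
    """Attach overlapping transcript text to each shot, yielding one
    coherent, frame-accurate shot list entry per shot."""
    entries = []
    for shot in shots:
        spoken = " ".join(
            seg.text for seg in transcript
            if seg.start_frame <= shot.end_frame and seg.end_frame >= shot.start_frame
        )
        entries.append({
            "in": frames_to_timecode(shot.start_frame),
            "out": frames_to_timecode(shot.end_frame),
            "description": shot.description,
            "people": shot.faces,
            "transcript": spoken,
        })
    return entries


if __name__ == "__main__":
    shots = [Shot(0, 124, "Wide shot of press conference", ["Speaker A"])]
    transcript = [TranscriptSegment(10, 100, "Good morning, everyone.")]
    for entry in build_shot_list(shots, transcript):
        print(entry)
```

For live streams the same alignment would have to run incrementally on partial data rather than on a finished file, which is where most of the engineering effort lies.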

Sandy McIntyre (VP News at AP) and Maarten Verwaest (CEO at Limecraft) were interviewed by Darren Whitehead of IABM for IABM BAM Live! about the business case for using AI and the lessons learned.