Understanding Fundamental Rights Impact Assessments in the EU AI Act | Lunchtime BABLing 27

👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".

Join us in this eye-opening episode of the Lunchtime BABLing Podcast, where host Shea Brown, CEO of BABL AI, teams up with Jeffery Recker, COO of BABL AI, to delve into recent developments in AI regulation, with a focus on the EU AI Act. This episode on Understanding Fundamental Rights Impact Assessments in the EU AI Act is a must-listen for anyone interested in the intersection of AI, regulation, and human rights.

Key Discussion Points:

Introduction to the EU AI Act: Gain insights into the EU AI Act's passing and its significance in shaping the future of AI regulation.

Role of Fundamental Rights Impact Assessments: Understand what these assessments are, why they matter, and how they differ from traditional impact assessments.

Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems.

Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes.

Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation.

Episode Highlights:

Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses.

Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape.

Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject.

Why Listen to Lunchtime BABLing?

Stay Informed: Keep up with the latest in AI regulation and understand its impact on businesses and society.

Expert Opinions: Hear from seasoned professionals with deep knowledge of AI and regulatory landscapes.

Practical Advice: Gain valuable insights into navigating the complexities of AI regulation in your business or organization.

Subscribe to our Channel: Don't miss out on future episodes of Lunchtime BABLing, where we unravel the complexities of AI and technology. Hit the subscribe button and turn on notifications!

Connect with Us:

#techcompliance #EUAIACT #AIRegulation #FundamentalRights #Podcast #AI #TechnologyLaw #DataProtection #BABLing #aiethics #responsibleai #ai
Comments

Thank you for sharing your knowledge! What methodologies can you use for doing fundamental rights or human rights impact assessments? Is it the framework described in one of your publications, "A Framework for Assurance Audits of Algorithmic Systems"? I'm also aware of others, such as the HRIA methodology for digital activities from the Danish Institute of Human Rights. Are they comparable?

lhiwaya

Could you elaborate on who is obligated to carry out a FRIA?
Chapter 3, Section 3, Article 27 says "deployers that are bodies governed by public law, or are private entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall perform an assessment of the impact on fundamental rights", but Section 3 itself is named "Obligations of providers and deployers of high-risk AI systems and other parties", also mentioning providers.

In another episode, you mentioned that carrying out a FRIA shows an enterprise's commitment to its customers, building mutual trust and differentiating it early from competitors that choose to postpone this process. I quote: "Now's the time to do that because pretty soon everybody's going to have to do this and you're just going to be one among a sea of people who are only meeting the floor of that regulation". Do you imply that in the future a FRIA will be mandatory for every business that implements AI systems, or solely for businesses that implement high-risk AI systems?

Thanks for the great content by the way!

reinouttuytten