iFM2022 Keynote 1 - Verifying Autonomous Systems

Invited talk at iFM2022 by Louise A. Dennis.

Abstract: Autonomous systems are increasingly being used for a range of tasks, from exploring dangerous environments to providing assistance in our homes. If autonomous systems are to be deployed in such situations, then their safety assurance (and certification) must be taken seriously.

Many people are seeking to leverage the power of machine learning to directly link inputs and outputs in a variety of autonomous systems via a statistical model. I will examine an alternative, more modular approach in which the decision-making component of the system is constructed in a way that makes it amenable to formal verification. Taking this route requires an integrated approach to verifying the whole autonomous system: validating assumptions about how the environment external to the system behaves, and compositionally verifying the various modules within the system.

Bio: Louise Dennis is a senior lecturer at the University of Manchester, where she leads the Autonomy and Verification group. Her expertise is in the development and verification of autonomous systems, with interests in rational agent programming languages and architectures for autonomous systems, and a particular emphasis on ethical machine reasoning, explainability and creating verifiable systems. Her work on verifiable autonomous systems includes the MCAPL Framework for developing verifiable programming languages.

Louise is currently a co-investigator on the Verifiability Node of the UK Trustworthy Autonomous Systems programme and on the Computational Agent Responsibility project. She is a member of the IEEE Standards working group on Transparency of Autonomous Systems (P7001) and of the Technical Committee working group for the Verification of Autonomous Systems Roadmap.
