Digital Ethics

Data is key to the business of insurance companies. The variety and heterogeneity of the data stored on their servers are increasing, bringing challenges but also new methods and algorithms to light.

Thanks to Big Data, new insurance products can be developed, for example pay-as-you-drive insurance. Furthermore, the process of claims settlement is becoming increasingly automated, and tasks such as fraud management can be handled by AI.

Overall, digitalisation plays an increasingly important role in the insurance industry – and thus for actuaries too. But with the expanding role of data and the new methods and tools it brings, new questions and risks arise as well.

Challenges and questions arise on the user side as well as in relation to company processes.

Both society and consumers pay particular attention to the misuse of data and to data leaks, and worry about losing control of their own data. Consequently, data protection and data ethics have become key concerns for insurance companies.

At the same time, companies have to be able to guarantee that the data they use is of good quality and free of bias.

Data can be flawed and replicate discrimination – and the subsequent algorithms, models and programs can contain inherent biases.

Often, what happens between the input and output of machine-learning algorithms is unknown – it is a black box. It can therefore be difficult to prove, for example, that these algorithms do not discriminate against anyone on the basis of gender or age, something that must be ensured under various laws and professional standards.
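To make this concrete, the sketch below shows one very simple check that could be run on a model's decisions: comparing positive-decision rates across groups (demographic parity). The decisions, group labels and threshold of acceptability here are purely hypothetical placeholders, not a real insurer's model or portfolio, and a single metric like this is only one ingredient of a proper fairness assessment.

# Minimal sketch of a demographic-parity check on hypothetical model decisions.
# 'decisions' and 'groups' below are illustrative placeholders only.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between groups."""
    counts = {}
    for d, g in zip(decisions, groups):
        pos, total = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if d else 0), total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions (1 = favourable outcome) for two groups A and B.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # positive-decision rate per group, e.g. {'A': 0.6, 'B': 0.4}
print(gap)    # 0.0 would mean identical rates across groups; here 0.2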

But to work with big data and generate calculations, conclusions and, ultimately, products, companies are required – not least by law – to solve these black-box mysteries and present documented, reproducible and transparent solutions.

For this reason, the new research field of "explainable AI" is trying to find ways to handle these issues and to provide insight into how the algorithms work.
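As an illustration of the kind of technique explainable AI draws on, the sketch below computes permutation importance by hand: shuffling one feature at a time and measuring how much the model's accuracy drops. The "model" is a deliberately trivial stand-in and the data are synthetic; a real application would apply the same idea to the actual black-box model and portfolio data.

# Illustrative sketch of permutation importance, one common explainability technique.
# The model and data are synthetic stand-ins, not an actual pricing or claims model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # three synthetic features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)    # outcome driven mainly by feature 0

def black_box_predict(X):
    """Stand-in for an opaque model; here just a fixed threshold rule."""
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

baseline = (black_box_predict(X) == y).mean()    # accuracy on unshuffled data

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break the link to feature j
    drop = baseline - (black_box_predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")  # larger drop = more important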

But even if the algorithm is correctly encoded and used, the results produced may be misleading and wrong conclusions may be drawn. In particular, the time lag between gathering the data and applying it in calculations can be extremely long, for example spanning several decades. Therefore, assessment by experienced actuaries is necessary.

In order to address this problem, in 2019 the European Commission published guidelines for trustworthy AI, stating the following seven key requirements:

- human agency and oversight,
- technical robustness and safety,
- privacy and data governance,
- transparency,
- diversity, non-discrimination and fairness,
- environmental and societal well-being and
- accountability.

How measures like these guidelines, standards and new laws are applied within the industry will determine how far digitalisation can go in insurance and the extent to which consumers and users will accept its outcomes.

Actuaries are well prepared to take on these important tasks and will play an important role in this evolution of consumer protection and digital ethics.