
Trustworthiness of AI

Landscape of A.I. regulations, standards and publications


What's in this white paper?

Mapping publications to trust requirements

Standards and technical reports

Legal regulations

Abstract

The development of artificial intelligence (AI) systems is progressing rapidly, and so is their adoption across many industries, products and services. There is no question that AI influences, and will increasingly influence, our societies and lives. Because of this influence and the pace of development and adoption, trust, ethics and social concerns need to be addressed. AI systems need to be reliable, fair and transparent – in short, they need to be trustworthy. This need is recognized by many organizations in government, industry and academia, which have been discussing how trust in AI systems can be established.

In this context, numerous white papers, proposals and standards have been published, and more are still in development. For someone just starting to look into this topic, the number of resources can be overwhelming. This document aims to provide a summary of, and guidance through, the jungle of documents about trust in AI. We look at existing standards, standards in development, reports, regulations, audit and test proposals, certification guidelines, as well as other informative white papers. For each, we provide a short summary and place it in the context of the overall publication landscape. This document should help the reader become familiar with the topic.