Designing trustworthy AI: The KITKA case study 

This case study reports on how a multidisciplinary project team worked to increase trust in AI.

[Image: logos of the collaborating companies, with a symbol for trustworthy AI in the background]

The challenge

The exploratory project KITKA was initiated as part of the FFG Ideas Lab 4.0, which posed the question:

“How can we design AI systems and their algorithms to be as trustworthy as possible, taking ethical principles into account, so that Austrian companies accept them, recognize their potential, and exploit it?”

The project team

In addition to the ONTEC AI team as the industry partner, the IHS (Institute for Advanced Studies), the University of Applied Sciences Upper Austria, and the University of Salzburg are involved as scientific partners.

The initial situation

AI systems offer great potential, yet most Austrian companies are not fully exploiting it.

A lack of trust in and knowledge about these systems is a significant barrier to their adequate use.

The overarching goal of the KITKA project is therefore to increase the transparency of AI systems developed in Austria.

The process

To achieve this, the interdisciplinary project team developed a catalog of criteria.

With the help of additional experts, this catalog was validated as a tool for holistically describing and evaluating AI systems.

In addition to a technical representation of the systems, other perspectives (including ethics, sociology, economics, psychology, data protection, and HCI) were also considered.

The result

During the one-year project, ten AI systems used or offered in Austria were selected and described based on the criteria catalog.

Additionally, a platform was designed to present this information appropriately and make it accessible to interested companies and the broader public in the future.

All details can be found in the KITKA whitepaper.

Conclusion and outlook

The long-term vision of the KITKA team is to make this platform freely accessible to the public.