New Contract

SAFETY-CRITICAL ARTIFICIAL INTELLIGENCE


AIKO is excited to announce that we are part of the SAFEXPLAIN EU project, which aims to increase the reliability of AI software in safety-critical use cases.

Thanks to the European Commission's research and innovation funding programme “Horizon Europe”, a three-year project kicked off in October: SAFEXPLAIN. But what is it?

Building trust in Machine Learning (ML) and Deep Learning (DL) can be problematic: due to their data-dependent and stochastic nature, it is often difficult to understand the process behind their results. As the popularity of ML and DL has grown, driven by their effectiveness in data classification and prediction, the need to qualify and certify these models has become evident.

This project will provide a new and flexible approach that will allow the certification and validation of ML and DL technologies in Critical Autonomous AI-based Systems (CAIS) through:

  • architecting transparent DL solutions and libraries that enable explaining why they satisfy FUSA (Functional Safety) requirements;
  • devising alternative and increasingly complex FUSA design safety patterns for different DL usage levels (i.e. with varying safety requirements), allowing DL to be used in any CAIS functionality at varying levels of criticality and fault tolerance.

As part of the consortium, we are proud to announce that we will provide a space-focused use case as one of the first benchmark algorithms for assessing safety and explainability through the SAFEXPLAIN software stack. The other parties involved in this pioneering project are the Barcelona Supercomputing Center (BSC), Ikerlan, Research Institutes of Sweden, Navinfo and Exida.

We will keep you updated!

SAFEXPLAIN official website

1.11.2022