Explainable and Robust AI (AI Data and Robotics Partnership)


Open Call Reference: HORIZON-CL4-2024-HUMAN-01-06

“Trustworthy AI solutions need to be robust, safe and reliable when operating in real-world conditions; they need to be able to provide adequate, meaningful and complete explanations when relevant, or insights into causality; they must account for concerns about fairness and remain robust when dealing with such issues in real-world conditions, while staying aligned with rights and obligations around the use of AI systems in Europe. To achieve robust and reliable AI, novel approaches are needed to develop methods and solutions that work under other than model-ideal circumstances, while also maintaining awareness of when these conditions break down. To achieve trustworthiness, AI systems should be sufficiently transparent and capable of explaining how the system has reached a conclusion in a way that is meaningful to the user, while also indicating when the limits of operation have been reached.

-Advance AI algorithms that can perform safely and reliably under a wide variety of real-world conditions and predict when these operational circumstances are no longer valid
-Advance robustness and explainability across a broad range of solutions, with only an acceptable loss in accuracy and efficiency, and with known verifiability and reproducibility
-Extend the general applicability of explainability and robustness of AI systems through foundational AI and machine learning research

-Enhanced robustness, performance and reliability of AI systems, including awareness of the limits of operational robustness of the system
-Improved explainability and accountability, transparency and autonomy of AI systems, including awareness of the working conditions of the system”
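The first expected outcome, awareness of the limits of operational robustness, is commonly approached as out-of-distribution detection: the system flags inputs that fall outside the conditions it was trained for. A minimal sketch under one simple assumption (in-distribution features are modelled as a single Gaussian, with an illustrative distance threshold; none of this is prescribed by the call):

```python
import numpy as np

def fit_gaussian(train_feats):
    """Estimate mean and regularised inverse covariance of in-distribution features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Distance of a feature vector from the fitted training distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def is_out_of_distribution(x, mu, cov_inv, threshold=5.0):
    """Flag inputs whose distance exceeds an illustrative threshold."""
    return mahalanobis(x, mu, cov_inv) > threshold

# Synthetic stand-in for features seen during training
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 3))
mu, cov_inv = fit_gaussian(train)

near = np.zeros(3)          # close to the training distribution
far = np.full(3, 10.0)      # far outside it
print(is_out_of_distribution(near, mu, cov_inv))  # expected: False
print(is_out_of_distribution(far, mu, cov_inv))   # expected: True
```

Real systems would replace the Gaussian model with density estimates or uncertainty-aware classifiers, but the structure (a score plus a calibrated operating limit) is the same idea the outcome describes.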
