HORIZON-CL4-2024-HUMAN-03-02 Explainable and Robust AI (RIA)
Trustworthy AI solutions need to be robust, safe and reliable when operating in real-world conditions; they need to be able to provide adequate, meaningful and complete explanations when relevant, or insights into causality; they need to account for concerns about fairness and remain robust when dealing with such issues in real-world conditions; and they need to be aligned with the rights and obligations around the use of AI systems in Europe.
To achieve robust and reliable AI, novel approaches are needed to develop methods and solutions that work under other than model-ideal circumstances, while also being aware of when these conditions break down. To achieve trustworthiness, AI systems should be sufficiently transparent and capable of explaining how a conclusion was reached in a way that is meaningful to the user, enabling safe and secure human-machine interaction, while also indicating when the limits of operation have been reached.
The research should aim at advancing robustness and explainability across a broad generality of solutions, with no more than an acceptable loss in accuracy and efficiency, and with known verifiability and reproducibility. The focus is on extending the general applicability of explainability and robustness of AI systems through foundational AI and machine learning research.
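As an illustration of the last requirement above, the sketch below shows one common way a system can indicate that the limits of operation have been reached: flagging inputs that fall outside the distribution its model was fitted on. This is a minimal sketch assuming a Mahalanobis-distance envelope over feature vectors; the function names and toy data are invented for the illustration and are not part of the call text.

```python
# Minimal sketch (illustrative, not prescribed by the call): signal that
# an input lies outside the model's known operating conditions by fitting
# a Gaussian envelope to training features and thresholding the
# Mahalanobis distance of new inputs.
import numpy as np

def fit_envelope(features: np.ndarray):
    """Fit a Gaussian envelope (mean, inverse covariance) to training features."""
    mean = features.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(features, rowvar=False))
    return mean, inv_cov

def in_operating_envelope(x, mean, inv_cov, threshold: float = 3.0) -> bool:
    """True if x lies within `threshold` Mahalanobis distances of the
    training distribution; False signals the limits of operation are
    exceeded and the model's output should not be trusted blindly."""
    d = x - mean
    mahalanobis = float(np.sqrt(d @ inv_cov @ d))
    return mahalanobis <= threshold

# Toy usage on 2-D features drawn from the "training" distribution.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
mean, inv_cov = fit_envelope(train)

print(in_operating_envelope(np.array([0.2, -0.4]), mean, inv_cov))  # True: in-distribution
print(in_operating_envelope(np.array([8.0, 8.0]), mean, inv_cov))   # False: limits exceeded
```

In a deployed system the same signal would accompany each prediction, so downstream users know when an output was produced outside known operating conditions.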
This topic implements the co-programmed European Partnership on AI, Data and Robotics.
Projects are expected to contribute to one of the following outcomes:
– Enhanced robustness, performance and reliability of AI systems, including generative AI models, with awareness of the limits of operational robustness of the system.
– Improved explainability, accountability, transparency and autonomy of AI systems, including generative AI models, along with an awareness of the operating conditions of the system.
Methods to be considered include, but are not restricted to, the following:
– data-efficient learning, transformers and alternative architectures, self-supervised learning, fine-tuning of foundation models, reinforcement learning, federated and edge learning (a minimal sketch of federated averaging follows this list), automated machine learning, or any combination thereof for improved robustness and explainability.
– hybrid approaches integrating learning, knowledge and reasoning, neurosymbolic methods, model-based approaches, neuromorphic computing, or other nature-inspired approaches and other forms of hybrid combinations which are generically applicable to robustness and explainability.
– continual learning, active learning, long-term learning and how they can help improve robustness and explainability.
– multi-modal learning, natural language processing, speech recognition and text understanding, taking multicultural aspects into account for the purpose of increased operational robustness and the capability to provide explanations in alternative formulations.
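As referenced in the first item above, here is a minimal sketch of federated averaging (FedAvg), one of the learning paradigms named in the list. It is an illustrative toy on a linear least-squares model; the client datasets, hyperparameters and helper names are assumptions made for the example, not anything prescribed by the call.

```python
# Minimal FedAvg sketch (illustrative assumption, not the call's method):
# each client takes a few gradient steps on its private data, then the
# server averages the local models weighted by local dataset size.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def make_client(n):
    """Generate one client's private dataset: y = X @ true_w + noise."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=5):
    """A few local gradient steps on the client's least-squares loss."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [make_client(n) for n in (50, 80, 120)]  # uneven local data sizes
w = np.zeros(2)                                    # shared global model

for _ in range(20):                                # communication rounds
    local_ws = [local_update(w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Server aggregation: size-weighted average of local models.
    w = np.average(local_ws, axis=0, weights=sizes)

print("global model after FedAvg:", w)  # approaches true_w = [2, -1]
```

The design point FedAvg illustrates is that clients never share raw data, only model parameters, which the server combines weighted by local dataset size.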
Activities are expected to start at TRL 2-3 and achieve TRL 4-5 by the end of the project.
A total budget of €15M is foreseen to carry out 2 projects.
To be eligible for funding, applicants must be established in one of the following countries:
– the Member States of the European Union, including their outermost regions
– the Overseas Countries and Territories (OCTs) linked to the Member States
– countries associated to Horizon Europe listed here: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/common/guidance/list-3rd-country-participation_horizon-euratom_en.pdf
– low- and middle-income countries listed here: https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/common/guidance/list-3rd-country-participation_horizon-euratom_en.pdf
Unless otherwise provided for in the specific call conditions, only legal entities forming a consortium are eligible to participate in actions, provided that the consortium includes, as beneficiaries, three legal entities independent from each other and each established in a different country, as follows:
– at least one independent legal entity established in a Member State; and
– at least two other independent legal entities, each established in a different Member State or Associated Country.