AI systems are becoming increasingly integral to sectors such as healthcare, transportation, security, finance, and education, underscoring the need for decision-making processes that are not only reliable but also compliant with legal, ethical, and rational standards. This presentation introduces a rule-based approach to enhancing the transparency and accountability of AI decisions. By encoding rules explicitly within highly expressive logical frameworks, the approach not only strengthens the capabilities of automated reasoning systems but also enables the creation of AI "governors" that verify and enforce compliance in the decisions of intelligent systems.
The discussion will focus on three key applications: the mechanization of non-classical logics to support complex reasoning processes, such as ethical decision-making; the development of AI tools for sensitive healthcare environments that meet stringent transparency requirements; and the creation of platforms that enable the co-creation of trustworthy human-machine interactions. This exploration will demonstrate how structured rule-based methodologies can effectively govern AI functionality across domains, ensuring trustworthy and compliant decision-making.
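As a rough illustration of the "AI governor" idea described above, the sketch below encodes compliance rules as explicit, inspectable predicates that vet a proposed decision before it is acted on. Everything here is hypothetical (the `Decision` fields, rule names, and thresholds are invented for illustration) and stands in for the far richer logical frameworks the talk discusses.

```python
# Minimal sketch of a rule-based "governor": explicit rules vet a
# proposed decision and report which rules, if any, were violated.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Decision:
    action: str
    patient_consent: bool
    risk_score: float  # 0.0 (safe) .. 1.0 (dangerous), hypothetical scale


# A rule returns None when satisfied, or a reason string when violated.
Rule = Callable[[Decision], Optional[str]]


def require_consent(d: Decision) -> Optional[str]:
    return None if d.patient_consent else "missing patient consent"


def bound_risk(d: Decision) -> Optional[str]:
    return None if d.risk_score <= 0.7 else "risk exceeds threshold"


def govern(d: Decision, rules: List[Rule]) -> Tuple[bool, List[str]]:
    """Check a decision against every rule; return (compliant, violations)."""
    violations = [msg for rule in rules if (msg := rule(d)) is not None]
    return (not violations, violations)


ok, reasons = govern(Decision("administer_drug", True, 0.3),
                     [require_consent, bound_risk])
# ok is True here; a non-consented or high-risk decision would instead
# come back with a human-readable list of violated rules.
```

Because each rule is a named, self-contained check, the governor's verdicts are transparent by construction: a rejected decision carries the exact rules it violated, which is the kind of accountability the talk argues for.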
Join at: imt.lu/aula1