By Mattia Petrolo (CFCUL/GI1).
Over the past decade, the widespread integration of computational algorithms into many aspects of human life has led to the emergence of a human-centered approach to Artificial Intelligence (AI). This approach emphasizes the interaction between computational models and human agents, rather than focusing on each element in isolation. In my presentation, I address this paradigm by tackling two key aspects. First, I characterize the interaction between human agents and AI systems in epistemic terms, exploring how AI algorithms may affect and change our epistemic attitudes, such as knowledge and belief, and shedding light on critical concerns regarding the reliability, trustworthiness, and transparency of AI systems. In the second part of the talk, I extend this characterization to the specific issue of the epistemic opacity of AI systems. Here, I present an original epistemological definition of algorithmic opacity, employing a tripartite analysis of its components. Building on this foundation, I introduce a formal framework for reasoning about an agent's epistemic attitudes toward an AI system and investigate the conditions under which epistemic transparency can be achieved.
Streamed via Zoom (pw: 519448).