Lisbon AI Seminar

Is there a Black-Box Issue in AI Medicine?

Room 8.2.19, Ciências ULisboa (with online broadcast)

By Steven S. Gouveia (UPorto).

The difficulty of understanding how AI algorithms reach their judgments or predictions, and how they combine particular inputs to produce a particular result, is usually referred to as the “black-box issue” in AI Medicine. Although this type of technology can produce results that are more accurate and efficient than what we may call “Traditional Medicine,” that is, medicine practiced by humans, the internal workings of these models are typically opaque and difficult for humans to interpret. Because of the structural characteristics of AI models, clinicians and other healthcare professionals often cannot tell why a model recommends a particular diagnosis or course of treatment, which raises questions about the model’s validity, safety, and ethical implications. Furthermore, since new medical treatments and diagnoses are frequently grounded in mechanistic explanations that AI models do not give access to, this lack of transparency can also delay their discovery.

Consider, for example, an AI model built to identify malignant cells in medical imaging. Such models are trained on large datasets of medical images and use what they learn to predict whether malignant cells appear in new, unseen images. Although these models are remarkably accurate, they frequently function as “black boxes”: it may not be evident which features the model uses to distinguish malignant from normal cells, or how it arrived at a specific diagnosis or treatment suggestion. This is particularly problematic in healthcare settings, where professionals and patients must understand the reasoning behind a diagnosis or treatment strategy.
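The opacity described above can be made concrete with a minimal sketch (illustrative only, not from the talk): a tiny two-layer neural network trained on synthetic data standing in for extracted image features of "malignant" versus "normal" cells. It separates the two classes well, yet its decision rests entirely on weight matrices that carry no direct clinical meaning.

```python
# Minimal illustrative sketch of the black-box phenomenon (assumed
# example, not the speaker's): an accurate classifier whose learned
# parameters are not human-interpretable.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for "normal" vs "malignant" feature vectors:
# two clusters of 10-dimensional points with shifted means.
X = np.vstack([rng.normal(0.0, 1.0, (200, 10)),
               rng.normal(1.5, 1.0, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

# Two-layer network: 10 features -> 8 hidden units -> 1 output.
W1 = rng.normal(0, 0.1, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden representation
    p = 1 / (1 + np.exp(-(h @ W2 + b2))).ravel() # predicted probability
    return p, h

# Plain gradient descent on the log-loss.
for _ in range(2000):
    p, h = forward(X)
    grad = (p - y) / len(y)                      # dLoss/dLogit
    dh = (grad[:, None] @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= 0.5 * (h.T @ grad[:, None]); b2 -= 0.5 * grad.sum()
    W1 -= 0.5 * (X.T @ dh);            b1 -= 0.5 * dh.sum(axis=0)

accuracy = ((forward(X)[0] > 0.5) == y).mean()
n_weights = W1.size + W2.size  # 88 parameters, none clinically labeled
print(f"accuracy={accuracy:.2f}, opaque parameters={n_weights}")
```

Even in this toy case, asking *why* the model labels a given sample malignant has no direct answer in terms a clinician could audit; real diagnostic networks have millions of such parameters, which is the interpretability gap the talk examines.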
Moreover, if the model is opaque, it is difficult to detect potential biases and discrimination, let alone correct them. This talk aims to determine whether there is, in fact, a “black-box” issue in AI medicine and, if so, how we might think about possible solutions to address it.

Broadcast via Zoom (pw: 864198).

CFCUL - Centro de Filosofia das Ciências da Universidade de Lisboa