A journey on a broken-down bus is a good way to glimpse the ethical limits of generative Artificial Intelligence (AI). The mechanic called to repair the vehicle can familiarise themselves in advance with every component and how it works, which makes it easier to pinpoint the source of any fault. Preventing the use of buses or of AI is out of the question, but for some more or less unfathomable reason, many people still expect to fix a malfunctioning virtual assistant with the same logic used to resolve a motor vehicle breakdown. And that is where the problem can take on global dimensions.
‘In a virtual assistant, the number of features and usage scenarios is always open-ended and much greater than in a bus. That is why it is more difficult to guarantee the reliability of a chatbot than that of a bus,’ explains António Branco, professor at the Faculty of Sciences of the University of Lisbon (Ciências ULisboa) and coordinator of the NLX Natural Language and Speech Group, which has been developing the virtual assistant (or chatbot, or bot) Evaristo.ai.
This reliability gap notwithstanding, the number of chatbot users has kept growing; given the tools now available on mobile phones, it is no surprise that it has already surpassed the total number of people who travel by bus. This raises another question: who can guarantee that the AI available in the world is not being used to generate malicious bots?
‘This whole process started badly, when AI was allowed to learn all the good things and... also all the bad things that appear on social media and the Internet,’ replies Luís Correia, professor at Ciências ULisboa and researcher at the LASIGE centre.


