Interview: Trusted AI – Trustworthiness and Large Language Models
An interview with Dr. André Meyer-Vitali and Dr. Simon Ostermann

ChatGPT, LLaMA, or Mistral yield astonishing expertise and, quite obviously, wrong answers. What is the reason for this unreliability?

Meyer-Vitali: Generative pre-trained transformers (GPTs) are large, highly interconnected models, which makes them opaque and difficult to control. The underlying technology, deep learning, does not provide a true understanding of the problem but merely maps complex statistical relationships.

Ostermann: Not only is the technology a data-based black box; many vendors also do not make the source code of their systems available. As a consequence, this lack of insight into parameters, training data, training methods, and inference settings makes it difficult to understand how these models arrive at their results.

How can generative AI be made more trustworthy?

Meyer-Vitali: The term "Trusted AI" describes a new overall approach to advancing the development of reliable systems. The goal is a new generation of AI that guarantees functionality, especially in high-risk applications. Trusted AI is characterized by a high degree of reliability, security, transparency, robustness, fairness, and verifiability, all while improving the functionality of existing systems. The performance and reliability of such AI systems can be trusted by developers, users, and regulators, even in complex socio-technical environments.

Do we need a technological reboot?

Meyer-Vitali: Not entirely, but the new generation of AI systems will be based on hybrid systems that do not rely solely on data-driven approaches but rather utilize the full range of AI techniques, including search, reasoning, planning, and symbolic AI methods.

What can we expect from hybrid AI systems?

Meyer-Vitali: The use of neuro-symbolic models facilitates validation, creates more transparency, and promotes greater accountability, whereas causal models provide understandable explanations.

Dr. André Meyer-Vitali researches hybrid and distributed artificial intelligence. He is Principal Investigator at CERTAIN, the Centre for European Research on Trustworthy AI.
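The neuro-symbolic pattern described in the last answer can be illustrated with a minimal sketch: a learned (opaque) component proposes an answer, and an explicit symbolic rule validates it before it is accepted. All function names and the toy rule below are illustrative assumptions, not any system discussed in the interview:

```python
# Hypothetical sketch of a neuro-symbolic hybrid: a statistical model
# proposes a structured answer; a transparent symbolic rule validates it.
from typing import Optional


def neural_extractor(text: str) -> dict:
    """Stand-in for a statistical model (e.g. an LLM) that returns a
    structured but possibly wrong guess."""
    return {"birth_year": 1995, "graduation_year": 1990}  # inconsistent guess


def symbolic_validator(fact: dict) -> bool:
    """Human-readable rule: nobody graduates before being born.
    Unlike the neural component, this check is auditable."""
    return fact["graduation_year"] >= fact["birth_year"]


def trusted_answer(text: str) -> Optional[dict]:
    """Hybrid pipeline: accept the neural guess only if it passes the rule."""
    fact = neural_extractor(text)
    return fact if symbolic_validator(fact) else None


# The inconsistent guess above is rejected by the symbolic layer:
assert trusted_answer("some biography text") is None
```

The point of the sketch is the division of labour: the statistical part stays flexible, while validation and accountability live in an explicit, inspectable rule layer.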