How do we build trust in AI? U of T researchers discuss multidisciplinary approaches

Understanding how trust is built among people, institutions, and technologies is essential for thinking about how AI systems can be trusted to reliably address human needs while mitigating risks.
While the concept of trust is widely referenced in AI policy discussions, consensus remains elusive on its meaning, its role in governance, and how it should be integrated into AI development—especially in public service contexts.
To address this, the Schwartz Reisman Institute for Technology and Society (SRI) hosted a roundtable discussion on February 11, 2025, as part of the official side events at the AI Action Summit in Paris. Titled Building Trust in AI: A Multifaceted Approach, the discussion centered on insights from an upcoming SRI paper, Trust in Human-Machine Learning Interactions: A Multifaceted Approach, led by SRI Research Lead Beth Coleman. The paper examines multidisciplinary approaches to fostering trust in human-machine learning interactions.