As FATES would have it: McMaster researcher working to make AI systems safer for users

From curated playlists and shopping recommendations to resume screening tools and self-driving cars, artificial intelligence (AI) is everywhere.

Despite the prevalence of AI in our everyday lives, however, questions remain about the unintended consequences of its use. For instance, can developers guarantee that the AI assistants they’ve built to screen resumes will select the most qualified candidates? And can the developers of self-driving cars ensure their systems will recognize pedestrians before anyone gets hurt?

While frameworks for fairness, accountability, transparency, ethics and security/safety – collectively known as FATES – exist to address these concerns in AI-based systems, they are not integrated into the development process in a way that makes them accessible to the average developer.

And if Big Tech companies like Amazon or Tesla are struggling to ensure there are no unintended consequences to their AI-based systems, explains Sébastien Mosser, an associate professor in the Department of Computing and Software, “imagine the risks inherent in an app coded by non-tech people with the help of ChatGPT that’s pushed to the Play Store/App Store, immediately available to anyone.”

That’s why Mosser, in collaboration with researchers from Université Toulouse Jean Jaurès, as well as the Centre National de la Recherche Scientifique (CNRS) and Maasai research groups from Inria Sophia-Antipolis, Université Côte d’Azur, is exploring ways to make AI-based systems safer for the end user.

“We need to be more cautious about how we handle AI systems,” says Mosser. “I want developers to understand the consequences of their decisions.”

Drawing inspiration from safety methods used in the nuclear, medical and automotive industries, Mosser and his research team are developing a way to transfer these methods to the development of fair AI models.


Developing a fair AI model is a four-step process where developers must:

  1. Understand what fairness means;
  2. Determine what kind of fairness they need for the project;
  3. Implement it correctly; and
  4. Ensure their decision cannot be accidentally reversed a few months after launch.

And at each stage of the process, “there are numerous ways to do it wrong,” says Mosser.
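To make steps 2 and 3 concrete, here is a minimal sketch, assuming a toy resume-screening model and a single fairness definition (demographic parity); the function name, data and threshold are illustrative and not drawn from Mosser’s project.

```python
# Hypothetical sketch of steps 2 and 3: pick one fairness definition
# (demographic parity) and check a model's predictions against it.
# Names, data and the threshold are invented for illustration.
from typing import Sequence


def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Difference in positive-outcome rates between the groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]


# Toy resume-screening output: 1 = shortlisted, 0 = rejected.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")

# An illustrative threshold a team might justify and document up front,
# so a later change that widens the gap is caught rather than silently shipped.
assert gap <= 0.5, "Fairness constraint violated - review the model"
```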

Building an AI model is easy. Building the right one is harder. Building it the right way is even harder. What we’re trying to help software developers do: write AI models that are not harmful to their users.

Sébastien Mosser, Associate Professor, Department of Computing and Software

That’s why his team has divided the project into short- and long-term goals. In the short term, the team hopes to release tools that support software developers when creating AI models. They are currently working on a project called jPipe – an open-source software language that allows people to build argumentation models justifying the decisions behind their AI models. The system then captures the developer’s justifications and seamlessly integrates them into the development lifecycle so that, as Mosser puts it, “if they deviate from a decision, they’ll know it.” From there, these justifications are turned into reusable assets that can be transferred to less experienced developers.
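The article does not show jPipe’s own syntax, so the following is only a plain-Python sketch of the underlying idea – recording a conclusion, the strategy behind it and its supporting evidence as data that a build step can check. The class and field names are hypothetical and are not jPipe’s actual language.

```python
# Hypothetical illustration of a justification model: capture *why* a
# decision was made as structured data, so a pipeline step can flag when
# the implementation drifts from it. This is NOT jPipe syntax.
from dataclasses import dataclass, field


@dataclass
class Justification:
    conclusion: str                      # the decision being defended
    strategy: str                        # how the team argues for it
    evidence: list[str] = field(default_factory=list)  # supporting facts

    def is_supported(self) -> bool:
        """A conclusion without recorded evidence is an unjustified decision."""
        return bool(self.evidence)


fairness_choice = Justification(
    conclusion="Use demographic parity as the fairness criterion",
    strategy="Match the selection rate across applicant groups",
    evidence=[
        "Hiring context: equal access matters more than calibration",
        "Audit of historical data showed group imbalance",
    ],
)

# In a development pipeline, an unsupported justification could fail the
# build, making a later reversal of the decision visible rather than silent.
if not fairness_choice.is_supported():
    raise RuntimeError("Decision lacks recorded evidence - review needed")
```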

The team’s long-term objective, however, is to develop a score card for AI models. And this time, the team is looking to snack foods for inspiration.

Picture the ‘nutrition facts’ label on your favourite comfort food, explains Mosser: a sticker telling you the snack contains 25% of your recommended daily sugar intake.

“Now imagine the same thing – a FATES fact system, for instance – that comes with any app you’re downloading on your smartphone,” says Mosser. “Would you install an app if the label stated that it made zero effort to respect your privacy?”
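As a rough illustration of the analogy, here is a hypothetical sketch of what a ‘FATES facts’ record for an app might contain; the fields, scores and app name are invented, since no such label currently exists.

```python
# Hypothetical sketch of a "FATES facts" label, by analogy with nutrition
# facts on food packaging. All fields and values are illustrative.
from dataclasses import dataclass, asdict
import json


@dataclass
class FatesFacts:
    app_name: str
    fairness: int        # 0-100: effort made to detect and limit bias
    accountability: int  # 0-100: can harmful decisions be traced and owned?
    transparency: int    # 0-100: is the model's behaviour explainable?
    ethics: int          # 0-100: documented review of intended use
    safety: int          # 0-100: privacy and security safeguards


label = FatesFacts(app_name="ExampleApp", fairness=72, accountability=55,
                   transparency=40, ethics=80, safety=10)

# Rendered alongside an app-store listing, much like a sugar percentage
# printed on a snack wrapper.
print(json.dumps(asdict(label), indent=2))
```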

The recent progress in the field of AI has acted as a sort of “accelerator,” explains Mosser. “What was previously reserved for elite data scientists and engineers is now accessible to almost everyone.”

While the democratization of this technology has driven widespread adoption and made AI easier to use than ever, it has also led to systems being built by people who were never trained to use these technologies correctly.

This realization hit Mosser in 2019 while working with psychologists, psychiatrists, linguists and social workers to develop mental health-related applications. Throughout the process, Mosser explains, he came to realize how easy it was to do things the wrong way and deploy extremely harmful systems, whether intentionally or unintentionally.

“I want developers to understand the consequences of their decisions,” says Mosser. “If a developer makes a harmful decision, it should be a conscious one, and they should be held accountable for it.”


This project is jointly funded by the Natural Sciences and Engineering Research Council (NSERC) and the Agence Nationale de la Recherche (ANR). It is also supported by Hugging Face, a leading player in the AI community that hosts numerous open-source AI models on its platform.