International French-speaking students are subjected to treatment that bears the hallmarks of systemic discrimination. Applicants from Africa, particularly French speakers, face a disproportionate number of study permit refusals in Canada. The artificial intelligence (AI) systems used by federal immigration authorities are blamed for these systemic biases.
MP Alexis Brunelle-Duceppe recalled this month that “French-speaking universities are seeing […] a number of study applications rejected. They are not rejected by the universities themselves, but by the federal government. For example, 79% of applications from international students were rejected at the Université du Québec à Trois-Rivières and 58% at the Université du Québec à Chicoutimi. As for McGill University, […] we are talking about 9%.”
In February, University of Ottawa Vice-Rector Sanni Yaya noted that “over the past several years, many permit applications processed by Immigration, Refugees and Citizenship Canada have been denied, for often incomprehensible reasons, and have involved unusually long delays.” Yet these are students with scholarships guaranteed by their institution and strong records. The Vice-Rector rightly wonders whether there is an implicit bias on the part of the officials responsible for assessing them, who are not convinced that these applicants intend to leave Canada once their study permits expire.
In short, there is a body of evidence leading to the conclusion that the automated decision-making tools used by federal agencies reinforce systemic discrimination against French-speaking students from Africa.
Biased tools
This mess should prompt us to question the prejudices that AI tools reinforce. Everyone is affected, because these technologies have become an integral part of everyday life. Phones with facial recognition, home assistants, even “smart” vacuum cleaners, not to mention the devices embedded in many vehicles, all run on AI.
In an article published on the website of the International Observatory on the Societal Impacts of AI and Digital Technology, Professor Karine Gentelet and student Lily-Cannelle Mathieu explain that AI technologies, although often presented as neutral, are shaped by the social environment from which they originate. They tend to reproduce and even reinforce prejudice and unequal power relations.
The researchers point out that several studies have shown that when these technologies are not adequately designed, they exclude racialized populations, overrepresent them within social categories considered “problematic,” or simply underperform when applied to racialized individuals. They can reinforce discriminatory tendencies in various decision-making processes, such as police surveillance, medical diagnoses, court decisions, hiring, school admissions, or the calculation of mortgage interest rates.
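To make the idea of a “biased result” concrete, here is a minimal sketch, using entirely hypothetical data and group labels, of how an auditor might measure group-level disparities in automated approval decisions with the common four-fifths (80%) disparate impact rule of thumb:

```python
# Minimal sketch of a disparate-impact audit on automated decisions.
# All data, group labels, and thresholds are hypothetical illustrations.

from collections import defaultdict

# Each record: (applicant_group, decision), True meaning "approved".
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

# Approval rate per group.
rates = {group: approvals[group] / totals[group] for group in totals}
for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")

# Four-fifths rule of thumb: flag the system if the lowest group's
# approval rate falls below 80% of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact; decisions warrant review.")
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is exactly the kind of measurable disparity that transparency obligations would force system operators to detect, explain, and correct.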
A necessary law
Last June, the federal Minister of Innovation, Science and Industry tabled Bill C-27 to regulate the use of artificial intelligence technologies. The bill intends to impose transparency and accountability obligations on companies that make significant use of AI technologies.
The bill would ban certain conduct related to AI systems that can cause serious harm to individuals. It contains provisions to hold companies that operate these technologies accountable. The law would require proper management and control of AI systems to prevent physical or psychological harm, or economic loss, inflicted on individuals.
It also aims to prevent biased output that establishes an unjustified negative distinction on one or more of the grounds of discrimination prohibited by human rights law. Users of AI technologies would be required to assess and mitigate the risks associated with their systems. The bill would introduce transparency requirements for systems that can have a significant impact on individuals. Those providing AI systems would be required to publish clear explanations of their operating conditions, as well as of the decisions, recommendations or predictions they make.
The discriminatory treatment of many students from French-speaking African countries highlights the systemic biases that need to be identified, analyzed and eliminated. It reminds us that the use of AI technologies carries significant risks of perpetuating problematic trends in decision-making processes. Addressing such risks requires legislation that imposes strict transparency and accountability requirements on both companies and public authorities. Above all, we must dispel the myth of the supposed “neutrality” of these technical tools.