The latest advances in artificial intelligence (AI) point to new opportunities for our societies. At the same time, they force us to think about the responsible use of AI, so much so that in recent weeks several debates have arisen about the normative framework for AI around the world, and in Canada in particular, through the Artificial Intelligence and Data Act (LIAD), part of Bill C-27.
The passage of this law, however, is far from achieving consensus. For some, especially within the small world of Canadian AI, the adoption of the LIAD is urgently needed. They believe the current version strikes "a fair balance between protecting Canadian people and the needs of innovation." Suffice it to say that these computer specialists, who include executives of private companies, have above all an economic interest in the speedy passage of this bill.
Other researchers, particularly in the social sciences, caution against rushing things, arguing that, on the contrary, the law should be significantly improved before it is passed. My recent comparative analyses of the LIAD, the proposed EU AI legislation, and the proposed US Algorithmic Accountability Act lead me to the same conclusion. It should also be noted that passing the bill too quickly could be problematic, as it is very difficult to change a law once it has come into force.
It is clear that, at the moment, the European and American measures are tougher than the Canadian LIAD, which is far from leading the pack, contrary to what supporters of the current version of the bill claim. For example, unlike the European AI legislation, the LIAD does not prohibit the creation of social rating systems for individuals on the initiative, or on behalf, of government agencies.
The second group of researchers reminds us that the LIAD likewise fails to prohibit unacceptable systems of remote, real-time biometric recognition. Yet both of these practices violate the principle of non-discrimination and the right to dignity. They undermine the values of justice and equality and interfere with the freedoms and rights of the people affected. It is therefore imperative that the LIAD take these specificities into account and prohibit systems intended for such practices.
The LIAD should also tighten its requirements with regard to the governance of "algorithmic" data. Currently, they focus mainly on anonymizing this data and on the use of anonymized data by AI systems. By comparison, the European legislation is concerned with analyzing potential biases and with how data is selected, collected, and used. The LIAD should at least do the same.
In terms of transparency, the LIAD should go further and promote openness, that is, access, to the data, including training data, used by algorithms whose decisions are likely to impact human rights or are ethically questionable, while respecting trade secrets (conditions of access could be enacted) and security restrictions.
Machine learning algorithms require upfront training on data, which plays a fundamental role in their ability to reason. It is therefore important to promote access to the databases that form the basis of this learning, and to the conditions under which they were selected. This would be an effective and legitimate way to monitor the scope of these AI systems and to understand any biases.
Transparency through opening up algorithmic processing data, or creating what I call "open algorithmic data", would ensure that AI systems are part of a dynamic aimed at preventing certain significant technological drifts, both current and future. Ultimately, this represents an opportunity for the Canadian government, which promotes open data and pro-technology policies, to redefine the rules of the game.
Maintaining Canada’s leadership in responsible AI and protecting the integrity of Canadians while encouraging technological innovation also requires regulatory requirements that measure up. If Canada truly wants to establish itself as one of the frontrunners in the normative framing of these technologies, and thereby mark this defining moment in history, it must strengthen the LIAD rather than adopt it in its current version.