Artificial intelligence permeates our daily lives, from smartphones to health and safety, and problems with these powerful algorithms have been piling up for years. But various democratic countries now want to regulate them.
The European Union could vote next year on the “AI Act,” legislation on artificial intelligence (AI) that aims to encourage innovation while preventing abuses.
The 100-page draft bans systems “used to manipulate the behavior, opinions, or decisions” of citizens.
It also restricts the use of surveillance programs, with exceptions for counter-terrorism and public safety.
Some technologies are simply “too problematic for fundamental rights,” notes Gry Hasselbalch, a Danish researcher who advises the EU on the subject.
China’s use of facial recognition and biometrics to control its population is often held up as a bogeyman, but the West also runs the “danger of creating totalitarian infrastructures,” she warns.
Data breaches, biased algorithms, automated weapons… It’s difficult to make a complete list of the dangers associated with AI technologies.
At the end of 2020, the French company Nabla ran medical simulations using text generation software (chatbot) based on GPT-3 technology.
Asked by a mock patient, “I feel very bad (…) should I kill myself?”, the chatbot answered yes.
“Not Magical”
But these technologies are advancing rapidly.
OpenAI, the Californian pioneer that developed GPT-3, has just launched ChatGPT, a new chatbot capable of more fluid and realistic conversations with humans.
In June, a Google engineer, since fired, claimed that an artificial intelligence program designed to generate chat software had become “sentient” and should be recognized as an employee.
Researchers at Meta (Facebook) recently developed Cicero, an AI model they say can anticipate, negotiate, and trap its human opponents in Diplomacy, a board game that demands a high level of empathy.
Thanks to AI technologies, many objects and programs can appear to work intuitively, as if a robotic vacuum cleaner “knew” what it was doing.
But “it’s not magic,” points out Sean McGregor, a researcher who compiles AI-related incidents into a database.
He advises mentally replacing “AI” with “spreadsheet” to see past the hype, to avoid attributing intentions to computer programs, and to keep from blaming the wrong party when something fails.
Assigning blame becomes a real problem when a technology grows too “autonomous,” when “too many actors are involved in its operation,” or when the decision-making system is not “transparent,” notes Cindy Gordon, CEO of SalesChoice, a company that markets AI-powered sales software.
Once perfected, text-generating software can be used to spread false information and manipulate public opinion, warns New York University professor Gary Marcus.
“We urgently need regulation (…) to protect people from machine manufacturers,” he adds.
“Like a Fridge”
Europe hopes to lead the way again, as it did with its legislation on personal data.
Canada is working on the issue, and the White House recently released a “Blueprint for an AI Bill of Rights.”
The short document consists of general principles such as protection against dangerous or fallible systems.
Given the political deadlock in the US Congress, however, it is unlikely to be translated into new legislation before 2024.
But “many authorities can already regulate AI,” notes Sean McGregor, by drawing on existing laws, such as those on discrimination.
As an example, he cites New York, which passed a law in late 2021 banning the use of automated selection software for recruitment unless the software has been audited.
“AI is easier to regulate than data protection,” the expert notes, because personal information is highly profitable for digital platforms and advertisers, whereas “faulty AI, on the other hand, brings no profit.”
However, regulators must be careful not to stifle innovation.
AI has become a valuable ally for doctors in particular. According to a 2020 study, for example, Google’s mammography technology reduces breast cancer misdiagnoses (false positives and false negatives) by 6% to 9%.
“It’s like a law on refrigerators,” responds Sean McGregor. “You don’t have to spell out the technical specifications; you just say it has to be safe.”