PARIS: Artificial intelligence has made its way into every facet of modern life, from driverless cars and “intelligent” vacuum cleaners to cutting-edge methods for diagnosing diseases.
The technology’s proponents claim that it is changing the human experience, but its detractors warn that it risks handing life-changing decisions over to machines.
Regulators in North America and Europe are concerned.
The European Union is likely to pass its AI Act next year, legislation that aims to rein in the age of the algorithm.
A blueprint for an AI Bill of Rights was recently published in the United States, and Canada is also considering legislation.
China’s use of biometric data, facial recognition, and other technology to build a powerful control system has loomed large in the debates.
Gry Hasselbalch, a Danish academic who advises the EU on contentious technology, argued that “totalitarian infrastructures” could also be created in the West.
She told AFP: “I see that as a huge threat, regardless of the benefits.”
But before regulators can act, they face the daunting task of defining what AI actually is.
‘Mug’s game’
Suresh Venkatasubramanian of Brown University, a co-author of the AI Bill of Rights, described trying to define AI as “a mug’s game.”
He tweeted that the bill should cover any technology that affects people’s rights.
The 27-nation EU is taking the harder route of attempting to define the sprawling field.
Its draft law classifies as AI virtually any automated computer system.
The problem stems from the shifting meaning of the term.
For decades, AI described attempts to create machines that mimic human thought.
However, in the early 2000s, funding for this research—known as symbolic AI—largely dried up.
With the rise of Silicon Valley’s titans, AI was reborn as a catch-all label for their number-crunching programs and the algorithms they produced.
This automation allowed them to target users with advertising and content, helping them make many billions of dollars.
Meredith Whittaker, a former employee of Google who co-founded the AI Now Institute at New York University, told AFP that AI “was a way for them to make more use of this surveillance data and to mystify what was happening.”
As a result, both the EU and the US have concluded that any definition of AI must be as broad as possible.
‘Too challenging’
However, the two Western superpowers have largely diverged since then.
The AI Act proposal from the EU is over 100 pages long.
Among its most eye-catching proposals is a complete ban on certain “high-risk” technologies, the kind of biometric surveillance tools used in China.
Additionally, it severely restricts the use of AI tools by immigration officials, law enforcement, and judges.
Some technologies, Hasselbalch said, are “simply too challenging to fundamental rights.”
In contrast, the AI Bill of Rights is a succinct set of idealistic principles that include statements such as “you should be protected from unsafe or ineffective systems.”
The White House issued the bill, which is based on existing law.
Due to Congress’ impasse, experts predict that no AI legislation will be passed in the United States until at least 2024.
‘Flesh wound’
Opinions differ on the merits of each approach.
According to New York University professor Gary Marcus, “We desperately need regulation.”
He points out that “large language models,” the AI that powers chatbots, translation tools, predictive text software, and many other applications, can be used to spread harmful false information.
Whittaker, however, questioned the value of legislation aimed at AI rather than at the “surveillance business models” that underpin it.
If that is not addressed fundamentally, she said, “I think you’re putting a band-aid over a flesh wound.”
However, other experts have generally praised the US strategy.
Sean McGregor, a researcher who maintains the AI Incident Database, a catalogue of technological failures, said AI was a better target for regulators than the more abstract concept of privacy.
But he warned of the risk of over-regulation.
He told AFP that “the authorities that exist can regulate AI,” pointing to the Federal Trade Commission in the United States and HUD, the housing regulator.
Experts do broadly agree, though, that the hype and mysticism surrounding AI technology must be dispelled.
McGregor compared AI to a highly sophisticated Excel spreadsheet, stating, “It’s not magical.”
Source: AFP