Europe attempts to take the lead in regulating the uses of AI
If the proposal is adopted, and after a transition period of roughly two years, EU citizens will be protected by law from certain harmful uses of AI, such as real-time camera surveillance of public spaces or government-run social scoring systems.
This week, Brussels set out its plans to become the first jurisdiction in the world with laws governing how artificial intelligence may be used, in an attempt to put European values at the heart of the fast-developing technology.
Over the past decade, AI has become increasingly important to countries around the world, and the two global leaders in the field – the US and China – have taken very different approaches.
China’s government-led programme has driven heavy investment in the technology and the rapid rollout of applications that have helped the state expand its surveillance and control of the population. In the US, AI development has been left largely to the private sector, which has focused on commercial applications.
“The US and China have long been seen as the innovators, as well as the leading investors, in AI,” said Anu Bradford, a professor of law at Columbia University.
“But this legislation seeks to put the EU back in the game. It is trying to challenge the idea that the EU cannot be technologically ambitious and rival China and the US without compromising its European values or fundamental rights.”
EU officials expect others around the world to follow suit, and Japan and Canada are already monitoring the draft.
While the EU wants to rein in how governments and companies can use AI, it also wants to encourage developers to innovate.
Officials said they hoped the clarity of the new rules would give confidence to innovators. “We will be the first continent to give guidelines. So if you want to build AI applications, go to Europe. You will know what to do and how to do it,” said Thierry Breton, the bloc’s French commissioner who oversees digital policy.
In an effort to foster innovation, the proposal acknowledges that regulation often weighs more heavily on smaller companies, and it includes incentives to help them. These include regulatory “sandboxes”, in which developers can use real data to test new applications – in areas such as justice, health and environmental management – without fear of severe penalties if errors occur.
Alongside the regulation, the Commission published a road map for increasing funding for AI and for pooling data from across the bloc to help train machine-learning systems.
The proposal must now be debated by the European Parliament and the member states, both of which must approve it before it becomes law. The legislation is expected to take effect in 2023 at the earliest, according to people following it closely.
But critics say that, in its eagerness to support commercial AI, the law does not go far enough to curb harmful uses such as discriminatory AI-driven policing, algorithmic control of migration, and biometric categorisation by race, gender and sexual orientation. These have merely been classed as “high risk”, meaning that anyone deploying them must inform users and clearly explain how the algorithms reach their decisions – but their use remains permissible, including by private companies.
Other applications that are deemed high-risk, but not prohibited, include the use of AI in recruitment and workforce management, as practised by companies including HireVue and Uber; AI that tests and assesses students; and AI used to allocate welfare support and services.
Access Now, a Brussels-based digital rights group, also noted that the outright bans on facial recognition and social scoring apply only to public authorities, leaving out companies such as Clearview AI, or credit-scoring start-ups such as Lenddo and ZestFinance, whose businesses operate worldwide.
Others also pointed to an apparent disregard for civil liberties. “The whole aim is to regulate the relationship between providers (those developing [AI technologies]) and users (those deploying them). Where are the people?” wrote Sarah Chander and Ella Jakubowska of European Digital Rights, an advocacy group, on Twitter. “There seems to be very little by way of redress for those affected or harmed by AI systems. This is a big miss for public administrations, marginalised groups, consumers and workers.”
At the other end of the spectrum, lobby groups representing Big Tech’s interests also opposed the proposal, saying it would stifle innovation.
The Center for Data Innovation, a think-tank whose parent organisation receives funding from Apple and Amazon, said the legislation dealt a blow to the EU’s ambition of becoming a global leader in AI, and warned that the new rules would cripple companies building cutting-edge technology.
In particular, it took issue with the ban on AI that “manipulates” human behaviour, and with the compliance burdens placed on “high-risk” AI systems, such as mandatory human oversight and proof of safety and effectiveness.
Criticism notwithstanding, the EU worries that if it does not act now to establish rules around AI, global technology norms will emerge that conflict with European principles.
“The Chinese have been making progress in a way that worries the west. Their technology is being exported, mainly for law-enforcement purposes, and it reinforces authoritarian regimes,” Bradford said. “The EU is very conscious that it must play its part in preventing the entrenchment of norms that violate international human rights, so there is a contest over ethical principles.”
Petra Molnar, a lecturer at York University in Canada, agreed, saying the legislation goes much further and is more humane than earlier proposals in the US and Canada.
“There is a lot of hand-wringing around AI ethics in the US and Canada, but [the proposals] do not have much bite.”
Ultimately, the EU is betting that the development and adoption of AI will be driven by public trust.
“If we can have well-regulated AI that consumers trust, that also creates a market opportunity, because… it will be a competitive advantage for European companies [as] they are seen as trustworthy and safe,” said Bradford of Columbia University. “Then you are not competing on price alone.”