A quick guide to the most important AI law you’ve never heard of

What about outside the EU?

The GDPR, the EU’s data protection regulation, is the bloc’s most famous tech export, and it has been copied everywhere from California to India.

The approach the EU has taken, which targets the riskiest uses of AI, is one that most developed countries broadly agree on. If Europe can create a coherent way to regulate the technology, its framework could serve as a template for other countries hoping to do the same.

“US companies, in their compliance with the EU AI Act, will also end up raising their standards for American consumers with regard to transparency and accountability,” says Marc Rotenberg, who heads the Center for AI and Digital Policy, a nonprofit that tracks AI policy.

The bill is also being watched closely by the Biden administration. The US is home to some of the world’s biggest AI labs, such as those at Google AI, Meta, and OpenAI, and it leads multiple global rankings in AI research, so the White House wants to know how any regulation might apply to these companies. For now, influential US government figures such as National Security Advisor Jake Sullivan, Secretary of Commerce Gina Raimondo, and Lynne Parker, who leads the White House’s AI effort, have welcomed Europe’s effort to regulate AI.

“This is a sharp contrast to how the US viewed the development of GDPR, which at the time people in the US said would end the internet, eclipse the sun, and end life on the planet as we know it,” says Rotenberg.

Despite some inevitable caution, the US has good reasons to welcome the legislation. It is deeply anxious about China’s growing influence in tech. For America, the official stance is that retaining Western dominance of tech is a matter of whether “democratic values” prevail. It wants to keep the EU, a “like-minded ally,” close.

What are the biggest challenges?

Some of the bill’s requirements are technically impossible to comply with at present. The first draft requires that data sets be free of errors and that humans be able to “fully understand” how AI systems work. The data sets used to train AI systems are vast, and having a human verify that they are completely error-free would take thousands of hours of work, if verifying such a thing were even possible. And today’s neural networks are so complex that even their creators do not fully understand how they arrive at their conclusions.
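To get a sense of the scale involved, here is a rough back-of-envelope sketch in Python. The data set size and per-example review time are illustrative assumptions, not figures from the bill or from any real audit:

    # Back-of-envelope estimate of manually reviewing a training data set.
    # All numbers are illustrative assumptions, not official figures.
    examples = 10_000_000        # assumed size of a modest training set
    seconds_per_check = 5        # assumed time to verify one example by hand
    hours_per_work_year = 2_000  # roughly one full-time reviewer-year

    total_hours = examples * seconds_per_check / 3600
    print(f"{total_hours:,.0f} hours of review")                       # ~13,889 hours
    print(f"{total_hours / hours_per_work_year:,.1f} reviewer-years")  # ~6.9 years

Even at these conservative numbers, a single modest data set swallows years of full-time human review, and the largest data sets in use today are orders of magnitude bigger.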

Tech companies are also deeply uncomfortable with the requirement to give external auditors or regulators access to their source code and algorithms in order to enforce the law.

