The government wants to measure your trust in AI

Harvard University assistant professor Himabindu Lakkaraju studies how trust affects the decisions people make in the workplace. She is working with about 200 doctors at hospitals in Massachusetts to understand how trust in AI can change how doctors diagnose a patient.

For common illnesses such as a cold, AI offers little help, since human experts can recognize them easily. But Lakkaraju found that AI could help doctors diagnose more complex illnesses. In recent work, she and her colleagues presented doctors with records for nearly 2,000 patients, along with AI predictions, and asked them to predict whether each patient would have a stroke within six months. They varied the information provided about the AI system, including its accuracy, its confidence interval, and an explanation of how it works. They found that doctors’ predictions were most accurate when they were given the most information about the AI.

Lakkaraju says she is glad to see NIST trying to measure trust, but she says the agency should also consider the role explanations can play in people’s trust in AI systems. In her experiments, the accuracy of doctors’ stroke predictions declined when they were given an explanation but no data to inform the decision, meaning that an explanation alone can lead people to over-rely on AI.

“Explanations can bring about a lot of trust even when it isn’t warranted, which is a recipe for problems,” she says. “But once you start putting numbers on how good an explanation is, people’s trust gets calibrated gradually.”

Other nations are also grappling with the question of trust in AI. The US is one of some 40 countries that have signed on to AI principles that emphasize trustworthiness. A document signed by about a dozen European countries states that trustworthiness and innovation go hand in hand and can be “two sides of the same coin.”

NIST and the OECD, a group of 38 wealthy nations, are working on tools for designating AI systems as high or low risk. The Canadian government created an algorithmic impact assessment in 2019 for businesses and government agencies. There, AI falls into four categories, ranging from no impact on people’s lives or the rights of communities to very high risk, where harm to individuals and groups cannot be undone. Scoring an algorithm takes about 30 minutes. The Canadian approach requires developers to notify users of all but the lowest-risk systems.

Lawmakers in the European Union are considering AI regulations that could help define global standards for which kinds of AI are considered low or high risk and how to regulate the technology. Like Europe’s GDPR privacy law, the EU’s AI rules could lead the world’s largest companies that deploy artificial intelligence to change their practices worldwide.

The draft law requires the creation of a registry of high-risk forms of AI in use, kept in a database managed by the European Commission. Examples of AI deemed high risk in the document include AI used for education, employment, or as safety components for utilities such as electricity, gas, or water. The draft could be revised before it passes, but it calls for a ban on social scoring of citizens by governments and on real-time facial recognition.

The EU report also recommends allowing businesses and researchers to experiment in so-called “sandboxes,” designed to ensure that the legal framework is “innovation-friendly, future-proof, and resilient to disruption.” Earlier this month, Biden administration officials introduced the National Artificial Intelligence Research Resource Task Force, which aims to share government data for research on issues such as health care or self-driving cars. Final plans would require approval by Congress.

For now, such trust metrics are being made by and for AI practitioners. Over time, though, they could empower ordinary people to avoid untrustworthy AI and nudge the market toward robust, tested, reliable systems. That is, if they know AI is being used at all.

