Deleting unethical data sets is not enough

The study also found that Labeled Faces in the Wild (LFW), introduced in 2007 and the first face-recognition data set built from images scraped from the web, has morphed several times over nearly 15 years and has ended up in real-world applications. This is despite a warning on the data set's website cautioning against such use.

Most recently, the data reappeared in a derivative data set called SMFRD, which adds face masks to each of the photos to advance facial recognition during the pandemic. The authors argue that this could raise new ethical challenges: privacy advocates have criticized such applications for fueling surveillance, for example by enabling the identification of masked protesters.

“This paper is really important, because people’s eyes have not generally been open to the complexities, and the potential harms and risks, of data sets,” said Margaret Mitchell, an AI ethics researcher and a leader in responsible AI, who was not involved in the study.

For a long time, the culture within AI has assumed that data exists to be used, she adds. This paper shows how that can lead to problems down the line. “It’s really important to think through the various values that a data set encodes, as well as the values that having a data set available encodes,” she says.

Moving forward

The authors also offer recommendations for the AI community moving forward. First, data set creators should communicate clearly about the intended use of their data, both through licenses and through detailed documentation. They should also place harder limits on access, perhaps by requiring researchers to sign terms of agreement or to fill out an application, especially if they intend to build a derivative data set.
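Part of that documentation can be machine-readable, so that access tools can enforce it rather than rely on goodwill. Below is a minimal sketch of the idea; the field names, license label, and `check_request` helper are illustrative assumptions, not a standard schema or anything proposed in the study.

```python
# Illustrative datasheet-style record for a dataset release.
# All field names and values here are hypothetical examples.

DATASHEET = {
    "name": "example-faces",
    "license": "research-only",  # no commercial or deployment use
    "intended_use": ["academic research"],
    "prohibited_use": ["surveillance", "commercial face recognition"],
    "derivatives_require_application": True,  # e.g. a signed agreement
}

def check_request(purpose: str, is_derivative: bool) -> bool:
    """Return True only if the stated purpose is allowed by the datasheet."""
    if purpose in DATASHEET["prohibited_use"]:
        return False
    if is_derivative and DATASHEET["derivatives_require_application"]:
        # A real workflow would verify a signed application here; this
        # sketch simply denies automatic access to force a manual review.
        return False
    return purpose in DATASHEET["intended_use"]
```

With these example values, a plain research request passes, while a surveillance use or an unreviewed derivative data set is refused. The point is not the three lines of logic but that stated terms become checkable at access time.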

Second, research conferences should establish norms about how data should be collected, labeled, and used, and they should create incentives for responsible data set creation. NeurIPS, the biggest AI research conference, already includes a checklist of best practices and ethical guidelines.

Mitchell suggests going even further. As part of BigScience, a collaboration among AI researchers to develop an AI model that can parse and generate natural language under a rigorous standard of ethics, she has been experimenting with the idea of data stewardship organizations: teams of people who not only handle the curation, maintenance, and documentation of a data set, but also work with lawyers, activists, and the general public to make sure it complies with legal standards, is collected only with consent, and can be removed if someone chooses to withdraw personal information. Such stewardship organizations would not be necessary for every data set, but certainly for those that may contain biometric or personally identifiable information, or intellectual property.
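One concrete duty of such a stewardship team is honoring withdrawal of consent. The sketch below shows the smallest version of that obligation, assuming a hypothetical record layout with a `subject_id` field; it is not how any real stewardship organization or the BigScience project implements this.

```python
# Illustrative sketch: a steward purges every record tied to a subject
# who withdraws consent. The record structure is a hypothetical example.

dataset = [
    {"subject_id": "a1", "image": "a1_01.jpg"},
    {"subject_id": "a1", "image": "a1_02.jpg"},
    {"subject_id": "b2", "image": "b2_01.jpg"},
]

withdrawn = set()

def withdraw_consent(subject_id):
    """Record a withdrawal and remove the subject's data immediately."""
    global dataset
    withdrawn.add(subject_id)
    dataset = [r for r in dataset if r["subject_id"] not in withdrawn]

withdraw_consent("a1")
# Only records for subjects who have not withdrawn remain.
```

In practice the hard part is everything around this function: tracking which derivative data sets also hold the records, and making removal propagate to them, which is exactly why Mitchell frames this as an organizational role rather than a script.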

“Collecting and maintaining a data set is not a one-off task for one or two people,” she says. “If you’re doing this responsibly, it breaks down into many different tasks that require deep thinking, deep expertise, and a range of different people.”

In recent years, the field has increasingly embraced the belief that more carefully curated data sets will be key to overcoming many of the industry’s technical and ethical challenges. It is now clear that constructing more responsible data sets is not enough. Those working in AI must also make a long-term commitment to maintaining them and using them ethically.

