Adapting to a new normal with AI

A pandemic that has swept the globe over the past year has proven to be a stark, wide-ranging test of many things: countries' varying degrees of preparedness to respond; collective attitudes toward health, technology, and science; and deep economic and social inequalities. As the world continues to navigate the covid-19 health crisis, and some places begin a gradual return to work, school, travel, and recreation, it is critical to resolve the competing demands of protecting public health while preserving privacy.

The crisis has forced rapid changes in how we work and live, along with a greater reliance on technology. It is now more important than ever that companies, governments, and individuals use that technology responsibly and handle personal data with care. The rapid, sprawling expansion of artificial intelligence (AI) shows how quickly transformative technologies can intersect with individuals and social institutions in risky or inequitable ways.

“Our relationship with data is changing dramatically coming out of the pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation among people, businesses, government, and technology about how their data flows between them, and that will be part of a new social contract.”

Putting AI to work

When covid-19 first emerged in early 2020, scientists turned to AI to support a range of medical uses, such as identifying promising vaccine or treatment candidates, helping to detect covid-19 symptoms, and triaging scarce resources like hospital beds and ventilators. In particular, they relied on the analytical power of advanced AI systems to help develop effective vaccines and therapies.

While advanced analytical tools can help extract insight from vast amounts of data, the results are not always fair or accurate. AI-powered tools, and the data sets they are trained on, can amplify systemic bias or embed new bias into the system. Throughout the pandemic, organizations such as the Centers for Disease Control and Prevention and the World Health Organization have gathered enormous amounts of data, but that data does not necessarily accurately represent the populations that have been disproportionately and negatively affected, including Black, brown, and Indigenous people, nor do some of the diagnostic tools built on it, says Schlesinger.

For example, biometric wearables such as Fitbit or Apple Watch show promise in their ability to detect potential covid-19 symptoms, such as changes in temperature or blood oxygen saturation. Yet those analyses rely on flawed or limited data and can introduce bias or unfairness that disproportionately affects the most vulnerable people and communities.

“There is research showing that green LED light has a harder time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the light source used by the wearables’ sensors. “So it might not work equally well at catching covid signals for those with Black and brown skin.”
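
To make that kind of disparity concrete, here is a minimal sketch of the subgroup audit such findings call for: it compares a detector's false-negative rate across skin-tone groups. The column names (`skin_tone`, `actual_positive`, `predicted_positive`) and the tiny data set are hypothetical, not taken from any real device or study.

```python
# Hypothetical audit: compare a wearable-based covid-19 detector's
# false-negative rate across skin-tone groups. Data is illustrative only.
import pandas as pd

records = pd.DataFrame({
    "skin_tone":          ["light", "light", "dark", "dark", "dark", "light"],
    "actual_positive":    [1, 0, 1, 1, 0, 1],
    "predicted_positive": [1, 0, 0, 1, 0, 1],
})

def false_negative_rate(group: pd.DataFrame) -> float:
    """Share of true positives the detector missed within one group."""
    positives = group[group["actual_positive"] == 1]
    if positives.empty:
        return float("nan")
    return float((positives["predicted_positive"] == 0).mean())

# A large gap between groups would flag a potential disparity worth investigating.
for tone, group in records.groupby("skin_tone"):
    print(tone, false_negative_rate(group))
```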

AI has also proven highly effective in helping to analyze large data sets. A team at the University of Southern California’s Viterbi School of Engineering developed an AI framework to help speed the analysis of covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the set to the 11 most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 known epitopes drawn from more than 3,600 species.
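
The USC team's actual pipeline is not described here; as a hedged illustration of the general filter-and-rank pattern that this kind of screening involves, the sketch below drops flagged candidates and keeps the top scorers. The `Candidate` fields, the scores, and the safety flag are all invented for illustration.

```python
# Illustrative only: NOT the USC method, just a generic filter-and-rank
# pattern for narrowing a candidate pool using hypothetical scores.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_response: float  # hypothetical model score in [0, 1]
    safety_flag: bool          # hypothetical screening result

def shortlist(candidates: list[Candidate], top_k: int) -> list[Candidate]:
    """Drop flagged candidates, then keep the top_k by predicted response."""
    viable = [c for c in candidates if not c.safety_flag]
    viable.sort(key=lambda c: c.predicted_response, reverse=True)
    return viable[:top_k]

pool = [
    Candidate(f"candidate-{i}", predicted_response=i / 26, safety_flag=(i % 7 == 0))
    for i in range(1, 27)
]
print([c.name for c in shortlist(pool, top_k=11)])
```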

Other Viterbi researchers are applying AI to decode cultural codes more accurately and better understand the social norms that guide the behavior of ethnic and racial groups. That can have a significant impact on how a given population fares during a crisis such as the pandemic, because of religious ceremonies, traditions, and other customs that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on Moral Foundations Theory, which describes the “intuitive ethics” that underpin a culture, such as caring, fairness, loyalty, and authority, to help explain the behavior of individuals and groups.

“Our goal is to develop a framework that allows us to understand the dynamics that drive a culture’s decision-making process at a deeper level,” Morstatter said in a report released by USC. “And by doing so, we generate more culturally informed forecasts.”
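
As a loose illustration, and not the researchers' model, one could treat per-region Moral Foundations scores as features for a simple classifier that forecasts a behavior of interest. Everything below, including the scores, the labels, and the choice of logistic regression, is an assumption made for the example.

```python
# Hedged sketch: use per-region Moral Foundations scores
# [care, fairness, loyalty, authority] as features to forecast a
# hypothetical behavior label (e.g. high adherence to a guideline).
# All numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [0.8, 0.7, 0.4, 0.5],
    [0.6, 0.5, 0.7, 0.8],
    [0.9, 0.8, 0.3, 0.4],
    [0.5, 0.4, 0.8, 0.9],
])
y = np.array([1, 0, 1, 0])  # 1 = high adherence (hypothetical label)

model = LogisticRegression().fit(X, y)
new_region = np.array([[0.7, 0.6, 0.5, 0.6]])
print(model.predict_proba(new_region))  # toy "culturally informed" forecast
```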

The research also examines how to deploy AI in ways that are both efficient and ethical. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level: what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Exposing ethical dilemmas

It is important to interrogate the assumptions behind our data and AI techniques, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you are making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of designing ethical AI, and it has to be addressed wherever AI touches the public interest.”

Part of that challenge involves taking a hard look at the data that informs AI systems. It is essential to understand the data’s sources and composition, and to answer questions such as: How was the data produced? Does it represent a diverse range of stakeholders? What is the best way to weight that data to minimize bias and maximize fairness?
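
A minimal sketch of one such check, assuming a simple categorical group label: compare each group's share of the training data with a reference population and derive reweighting factors. The group names, shares, and reference distribution are hypothetical.

```python
# Representation audit: how does each group's share of the training data
# compare with a reference population, and how should samples be reweighted?
from collections import Counter

samples = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
reference_population = {"group_a": 0.55, "group_b": 0.30, "group_c": 0.15}

counts = Counter(samples)
total = sum(counts.values())

for group, target_share in reference_population.items():
    observed_share = counts.get(group, 0) / total
    # Weight > 1 means the group is under-represented and its samples should
    # count for more during training; weight < 1 means over-represented.
    weight = target_share / observed_share if observed_share else float("inf")
    print(f"{group}: observed {observed_share:.2f}, target {target_share:.2f}, weight {weight:.2f}")
```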

As people return to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughing or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, physical-distancing rules, and mask requirements.

Such monitoring and analysis systems not only raise questions of technical accuracy but also pose core risks to human rights, privacy, security, and trust. The pressure for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas such as airports to help trace the movements of people who may have contracted or been exposed to covid-19 and to establish virus transmission chains.

“The first question that needs to be answered is not just whether we can do this, but whether we should,” says Schlesinger. “Scanning people for their biometric data without their consent raises ethical concerns, even if it is framed as serving the greater good. We should have a robust conversation as a society about whether there is good reason to do it in the first place.”

What the future holds

As people return to something closer to normal life, it is a good time to fundamentally re-examine our relationship with data and to establish new norms for how data is collected, as well as for its appropriate use, and potential misuse. When building and deploying AI, technologists will continue to weigh what is possible against what is responsible, but they also need to question the fundamentals: Is the data legitimate? Who collected it? What assumptions is it based on? Is it presented accurately? How can the privacy of citizens and consumers be preserved?

As AI is deployed more widely, it becomes even more important to think about how to build trust. Using AI to augment human decision-making, rather than to replace human judgment entirely, is one way.
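
One common way to express that augment-not-replace pattern in code is an abstention rule: the model acts only on high-confidence cases and routes everything else to a person. The confidence threshold and the `human_review` callback below are assumptions made for illustration, not part of any particular system.

```python
# Human-in-the-loop sketch: automate only high-confidence cases,
# escalate the rest to a human reviewer.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff set by policy, not by the model

def decide(case_id: str,
           model_confidence: float,
           model_decision: str,
           human_review: Callable[[str], str]) -> str:
    """Return the automated decision only when confidence is high enough."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return model_decision
    # Low confidence: a person makes the call.
    return human_review(case_id)

# Toy usage: the reviewer stands in for a clinician or loan officer.
print(decide("case-001", 0.97, "approve", human_review=lambda cid: "needs review"))
print(decide("case-002", 0.62, "approve", human_review=lambda cid: "needs review"))
```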

“There will be a lot of questions about the role AI should play in society, its relationship with human beings, and what tasks are appropriate for people versus what tasks are appropriate for AI,” says Schlesinger. “There are certain areas where AI’s capabilities, and its ability to augment human capabilities, will accelerate our trust and confidence. Where AI is not replacing people but augmenting their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision. “In regulated industries, for example, such as health care, banking, and finance, there has to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just let AI make care decisions without a clinician’s involvement. As much as we would like to believe AI is capable of that, AI doesn’t have empathy yet, and probably never will.”

It is critical that the data collected, and the AI built from it, does not exacerbate inequity but works to reduce it. There must be a balance between pursuing AI solutions that help society move forward and promote fair treatment and outcomes, and simply recognizing that some problems will require a human touch.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

