
To be safe, we need to build distrust into AI systems


It's interesting that you say that, because in cases like this, you have to build distrust into the system to make it safer.

Yes, that's what you have to do. We're experimenting with it right now, around the question of denial of service. We don't have results yet, and we're wrestling with some ethical concerns. Because once we talk about this and publish the results, we'll have to explain why sometimes you might not want to give an AI the ability to deny a service. How do you take a service away from someone who really needs it?

But take the Tesla example again. Denial of service would look like this: I'm tracking your trust in the system, which I can estimate based on how often you've disengaged or taken your hands off the wheel. Given that disengagement history, I can model when you're over-trusting the system. We didn't do this with Tesla's data, but with our own. And at some point, the next time you get in the car, you would be denied service. You can't use this system for some period of time.

It's like punishing a teenager by taking away their phone. You know that teenagers will avoid doing whatever it is you don't want them to do if you tie it to how they communicate.

What are some of the ways you've explored getting people to be appropriately skeptical of these systems?

Some of the methods we've studied are called AI explanations, where the system expresses its risks or uncertainties. Because none of these systems are guaranteed — none of them are 100%. And a system knows when it isn't sure. So if that information is communicated in a way people can understand, people change their behavior accordingly.

For example, say I'm riding in a self-driving car, and I know from the data that some intersections are more dangerous than others. As we approach one of them, the car says, "We're approaching an intersection where 10 people died last year." You explain it in a way that makes someone go, "Oh, wait, maybe I should pay more attention."

We've talked about some of the things that worry you most about our reliance on these systems. What are they? And on the flip side, are there benefits?

The negatives are really tied to bias. That's why I always talk about bias and trust together. Because if I trust these systems blindly and they make decisions with different outcomes for different groups of people — say, a medical diagnosis system that performs differently for women than for men — we are now building machines that amplify the inequities we already have. That's the problem. And when you tie them to things linked to health or transportation, any of which can be life-or-death, a bad decision can lead to something you can't recover from. That's why we have to fix it.

The positive is that automated systems are, in general, better than people. I think they can be even better, but I personally would rather interact with an AI system in some situations than with certain people. Like, I know it has some flaws, but give me the AI. Give me the robot. It has more data; it's more accurate. Especially if you have a novice person doing the task. It's a better outcome. It just may be that the outcome isn't equal for everyone.

In addition to your research on robotics and AI, you've been a strong advocate for increasing diversity in the field throughout your career. You started a program to mentor at-risk girls 20 years ago, long before most people were thinking about such things. Why is that important to you, and why is it important for the field?

It's important to me because I can point to times in my life when someone gave me access to engineering and science. I didn't even know it was a thing. And because of that, later on, I never had a problem knowing that I could do it. So I always felt it was my responsibility to do for others what had been done for me. As I got older, I also noticed that a lot of the people in the room didn't look like me. So I realized: Wait, there's definitely a problem here, because people simply don't have the access — they don't have the exposure, they don't know this is a thing.

And the reason it's important for the field is that everyone brings a different experience. I was thinking about human-robot interaction before it even became a thing. It wasn't because I was brilliant. It was because I looked at the problem in a different way. And when I talk with someone who has a different viewpoint, it's like, "Oh, let's try to combine these and figure out the best of both worlds."

Airbags kill more women and children. Why is that? I'd say it's because someone wasn't in the room to say, "Hey, why don't we test this on women in the front seat?" There's a whole set of problems that have killed or endangered certain groups of people. And I would claim that if you go back, it's because there weren't enough people who could say, "Hey, have you thought about this?" — people who speak from their own experience and their own environment and community.

Do you anticipate that AI and robotics research will change over time? What's your vision for the field?

If you think about coding and programming, pretty much anyone can do it now. There are so many organizations, like Code.org. The resources and tools are out there. I'd love to have a conversation with a student one day where I ask, "Do you know about AI and machine learning?" and they say, "Dr. H, I've been doing that since third grade!" I want to be shocked like that, because that would be wonderful. Of course, then I'd have to think about what my next job is, but that's a whole other story.

But I think that once you have the tools to code with AI and machine learning, you can create your own projects, create your own future, create your own solutions. That would be my dream.
