Can Crowdsourcing Help Social Media Cope With Misinformation?

The study found that with a group of only eight laypeople, there was no statistically significant difference between the crowd’s performance and the fact-checkers’. Once the groups grew to 22 people, they began to outperform the fact-checkers. (These numbers describe what happened when the laypeople were told the source of the story; when they didn’t know the source, the crowd did slightly worse.) It helps that the fact-checkers themselves often disagreed with each other. Fact-checking is genuinely difficult.

It may seem implausible that randomly selected groups of laypeople could match the work of trained fact-checkers, especially based on nothing more than a headline, first sentence, and publication. But that is the whole idea behind the wisdom of crowds: gather enough people acting independently, and their aggregate results can beat the experts’.

“Our sense of what is going on is that people are reading this and asking themselves, ‘How does this line up with everything else I know?’” Rand said. “When you average all of those independent judgments together, you end up with a much better rating than you would get from any one person.”
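To make that averaging intuition concrete, here is a minimal simulation sketch. None of it comes from the study; the true accuracy, noise level, and crowd size are all illustrative assumptions.

```python
import random

def simulate_crowd(true_accuracy=0.3, crowd_size=22, noise=0.4, trials=10_000):
    """Average many noisy, independent ratings of one story.

    Each rater reports the true accuracy plus independent noise.
    Averaging cancels the noise, so the crowd mean lands much closer
    to the truth than a typical individual rating does.
    """
    crowd_error = 0.0
    individual_error = 0.0
    for _ in range(trials):
        ratings = [true_accuracy + random.gauss(0, noise) for _ in range(crowd_size)]
        crowd_error += abs(sum(ratings) / crowd_size - true_accuracy)
        individual_error += abs(ratings[0] - true_accuracy)
    print(f"mean individual error: {individual_error / trials:.3f}")
    print(f"mean crowd-of-{crowd_size} error: {crowd_error / trials:.3f}")

simulate_crowd()
```

The averaging only helps because each rater’s error is independent; if raters could see one another’s votes, their errors would correlate and would not cancel out.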

This is not the same as Reddit-style upvoting and downvoting, nor the Wikipedia model of citizen editors. In those cases, small, non-representative subsets of users self-select to curate material, and each one can see what the others are doing. The wisdom of crowds materializes only when groups are diverse and individuals make their judgments independently. And relying on randomly assembled, politically balanced crowds, rather than a corps of volunteers, makes the researchers’ approach much harder to game. (This also explains why the approach differs from Twitter’s Birdwatch, a pilot program that enlists users to write notes explaining why a given tweet is misleading.)
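As a rough illustration of that design, a politically balanced panel of independent raters could be drawn like this. The field names and pool schema are hypothetical, not from the study.

```python
import random

def draw_balanced_panel(rater_pool, size=22, seed=None):
    """Draw a politically balanced panel of independent raters.

    rater_pool: list of dicts like {"id": 17, "party": "D"}
    (a hypothetical schema). Half the panel comes from each side,
    and raters never see one another's judgments, which keeps
    their errors independent.
    """
    rng = random.Random(seed)
    dems = [r for r in rater_pool if r["party"] == "D"]
    reps = [r for r in rater_pool if r["party"] == "R"]
    half = size // 2
    return rng.sample(dems, half) + rng.sample(reps, size - half)
```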

The paper’s bottom line is simple: social media platforms such as Facebook and Twitter could use a crowd-based approach to dramatically scale up their fact-checking operations without sacrificing accuracy. (The laypeople in the study were paid $9 an hour, which translated to a cost of about $0.90 per article.) Crowd ratings might also be seen as more legitimate across the political spectrum than professional fact-checks. (According to 2019 Pew research, Republicans overwhelmingly believe fact-checkers “tend to be biased.”) Facebook is already dabbling in something similar, paying groups of users to “work as researchers to find information that can contradict the most obvious online hoaxes or corroborate other claims.” But that effort is designed to inform the work of official fact-checkers, not to replace them.
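As a back-of-the-envelope check on that cost figure, here is a minimal sketch. The crowd size and seconds-per-rating below are illustrative assumptions, not numbers reported in the paper; only the $9 hourly wage and the roughly $0.90 per-article total come from the study.

```python
# Back-of-the-envelope: what does $9/hour imply per article?
WAGE_PER_HOUR = 9.00
CROWD_SIZE = 12          # assumed raters per article
SECONDS_PER_RATING = 30  # assumed time to rate a headline and lede

paid_hours_per_article = CROWD_SIZE * SECONDS_PER_RATING / 3600
cost_per_article = WAGE_PER_HOUR * paid_hours_per_article
print(f"${cost_per_article:.2f} per article")  # -> $0.90
```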

Generating accuracy ratings is one thing. The trickier question is how the platforms should use them. Should false stories be banned? What about stories that contain no outright falsehoods but are nonetheless misleading or manipulative?

The researchers argue that platforms should move away from both the true/false binary and the leave-it-up/take-it-down binary. Instead, they suggest that platforms incorporate “continuous crowdsourced accuracy ratings” into their ranking algorithms. Rather than setting a single true/false cutoff, treating everything above it one way and everything below it another, platforms should weight the crowd’s scores proportionally when deciding how prominently a given link appears in users’ feeds. In other words, the less accurate the crowd judges a story to be, the more the algorithm demotes it.
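A minimal sketch of what that could look like in a ranking function follows. The score names, example stories, and weighting scheme are assumptions for illustration, not the researchers’ proposal or any platform’s actual algorithm.

```python
def rank_score(engagement, crowd_accuracy, accuracy_weight=1.0):
    """Blend engagement with a continuous crowd accuracy rating.

    crowd_accuracy is the mean crowd rating rescaled to [0, 1].
    There is no true/false cutoff: the lower the crowd's rating,
    the more the story is demoted, proportionally, rather than
    being removed outright.
    """
    return engagement * (crowd_accuracy ** accuracy_weight)

# Illustrative stories: (title, engagement, mean crowd accuracy in [0, 1])
stories = [
    ("viral but dubious", 1000, 0.2),
    ("solid local report", 400, 0.9),
]
feed = sorted(stories, key=lambda s: rank_score(s[1], s[2]), reverse=True)
for title, *_ in feed:
    print(title)  # "solid local report" now outranks the dubious story
```

Raising accuracy_weight would penalize low-rated stories more aggressively; a platform could tune that single parameter instead of drawing a hard removal line.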

