Last October, Twitter promised to reexamine its automated image cropping after users complained that it appeared biased. On Wednesday, the company published the findings of that study. When people first began writing critically about the problem, Twitter said its initial research had found no evidence of racial or gender bias. The new research paints a different picture.
Comparing how the algorithm treated images of men versus women, as well as white versus Black individuals, the company found an 8% difference favoring women and a 4% difference favoring white individuals. What it did not find was evidence that the algorithm exhibited a built-in "male gaze." Testing it against images of women, Twitter found that the algorithm cropped only about three out of every 100 images at a point other than the face, and when it did, it did not focus on other parts of the body. Instead, it tended to fixate on things like the numbers on sports jerseys.
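The percentages above describe a demographic-parity comparison: the gap between how often the crop favors one group versus another. A minimal sketch of that calculation (the function name and toy data are hypothetical, not Twitter's actual methodology or numbers) might look like:

```python
# Hypothetical sketch: comparing how often an automated cropper
# produces a favorable crop for each group. Data and names are
# illustrative only.

def parity_difference(outcomes_a, outcomes_b):
    """Difference in favorable-outcome rates between two groups.

    Each outcomes_* argument is a list of booleans: True if the
    crop favored that group's subject in a paired comparison.
    """
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b

# Toy data: group A favored in 56 of 100 trials, group B in 48 of 100.
group_a = [True] * 56 + [False] * 44
group_b = [True] * 48 + [False] * 52

print(f"{parity_difference(group_a, group_b):+.0%}")
```

A positive result means the crop favored group A more often; a value of zero would indicate parity between the groups.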
The company said one possible reason for the algorithm's preferences is a tendency to favor high-contrast images, but acknowledged that this is not an excuse. "Machine learning-based cropping is fundamentally flawed because it removes user agency and restricts users' ability to express their own identity and values, instead focusing only on the part of the image it deems most interesting," the company said. "One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by a person."
To that end, the company recently rolled out full-size photo previews on Android and iOS, and it is reportedly planning further changes to how the platform handles media in the future.