Social networking sites such as Facebook, Twitter, and YouTube have given people a platform to express their views and share them with the world. The reach of these platforms is vast, which also makes them prone to misuse: they have been used to spread hate speech and incite violence. The investigation into Russian meddling in the 2016 U.S. presidential election through Facebook and Twitter is ongoing, and extremist groups have been using these platforms to recruit people and inspire violence across the world. Social networking platforms have been widely criticized for not doing enough to prevent the dissemination of hate speech and other violence-inciting activity. However, they have come up with a new way to counter extremist content: counterspeech.
Social media companies including Facebook, Twitter, and YouTube say they have moved beyond screening and removing extremist content and are now working to forestall violent messages. Recently, representatives from these companies appeared before the Senate Committee on Commerce, Science, and Transportation on Capitol Hill. In their testimony, the representatives said their efforts are aimed at identifying people who are likely to be targeted by extremist messages and putting content in front of them that counters those messages.
Facebook said it has been investing heavily in ‘counterspeech’, working with universities, community groups, and nongovernmental organizations to amplify positive voices. The firm has stepped up its counterspeech efforts to combat hate speech and extremist messages, alongside deploying language-analysis and image-matching techniques.
“We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence,” said Monika Bickert, Facebook’s head of global policy management, according to an advance copy of her testimony obtained by CNBC. “That’s why we support a variety of counterspeech efforts.”
Google’s YouTube has been using the ‘Redirect Method’ to serve anti-terror messages to people inclined to seek out extremist content. The video streaming service identifies users searching for extremist and hate content through their search history and offers them content that contradicts the propaganda served by extremist groups. It has also been adopting methods to deal with videos that are offensive but do not violate its community guidelines, and it removes re-uploaded extremist content as soon as possible.
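The core idea behind a redirect-style approach can be illustrated with a minimal sketch: match a user's search query against a watchlist of extremist-related terms and, on a hit, surface counter-narrative content instead of ordinary results. The watchlist terms, video names, and matching logic below are hypothetical placeholders, not YouTube's actual implementation.

```python
from typing import List, Optional

# Hypothetical watchlist of search terms associated with extremist content.
WATCHLIST = {"extremist propaganda", "join militant group"}

# Hypothetical playlist of counter-narrative videos to surface instead.
COUNTER_PLAYLIST = [
    "testimony_of_former_recruit.mp4",
    "community_voices_against_violence.mp4",
]

def redirect(query: str) -> Optional[List[str]]:
    """Return counterspeech videos if the query matches the watchlist,
    or None so that ordinary search results are served instead."""
    normalized = query.strip().lower()
    if any(term in normalized for term in WATCHLIST):
        return COUNTER_PLAYLIST
    return None

# A matching query is redirected to the counter playlist:
print(redirect("how to join militant group"))
# An unrelated query is untouched:
print(redirect("cat videos"))  # None
```

In practice, real systems reportedly combine search history with curated playlists rather than a simple keyword set, but the redirect principle is the same.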
Juniper Downs, YouTube’s head of public policy, said that YouTube’s new algorithms have removed extremist and hateful content faster over the past year. Downs added, “Our advances in machine learning let us now take down nearly 70% of violent extremism content within 8 hours of upload and nearly half of it in 2 hours.”
Carlos Monje Jr., Twitter’s director of public policy and philanthropy in the U.S. and Canada, said the company has taken part in nearly 100 training events focused on fighting extremist content since 2015. He added, “It is a cat-and-mouse game and we are constantly evolving to face the challenge.”
These platforms say they have put more human reviewers and more artificial intelligence to work screening content and removing hateful material as quickly as possible. They have also created a group, the Global Internet Forum to Counter Terrorism, for sharing information on extremist groups; its database contains nearly 40,000 photos and videos. Social media companies have realized the importance of removing hate speech and have rolled up their sleeves to face the challenge.
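A shared database of flagged media lets one platform's takedown benefit every member: content is typically represented by a digital fingerprint (hash) rather than the media itself, so each service can check uploads against the pool. The sketch below illustrates that idea with a plain SHA-256 hash; this is a simplification of how such a consortium database might work, since real systems are described as using perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Hashes of flagged content contributed by member platforms (illustrative).
shared_hashes: set = set()

def contribute(content: bytes) -> None:
    """A member platform flags content; its hash joins the shared database."""
    shared_hashes.add(hashlib.sha256(content).hexdigest())

def is_known_extremist(content: bytes) -> bool:
    """Check an upload against the shared database before publishing."""
    return hashlib.sha256(content).hexdigest() in shared_hashes

# One platform contributes a flagged video's bytes...
contribute(b"flagged-video-bytes")
# ...and every member can now block re-uploads of the identical file:
print(is_known_extremist(b"flagged-video-bytes"))  # True
print(is_known_extremist(b"new-upload-bytes"))     # False
```

An exact cryptographic hash only catches byte-identical re-uploads, which is why perceptual hashing matters in practice for re-encoded copies.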