The Chief Executive Officers of three Silicon Valley tech giants will appear before the U.S. Congress to testify at a hearing on misinformation titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation.” Mark Zuckerberg of Facebook, Sundar Pichai of Google, and Jack Dorsey of Twitter are scheduled to appear at the hearing on Thursday, March 25, 2021. The tech leaders will be questioned by two subcommittees of the House Energy and Commerce Committee, chaired by Democratic legislators, over the management, spread, and containment of misinformation on their platforms.
Following the 2020 U.S. general election, Twitter took the bold decision to suspend, and later permanently ban, the account of former U.S. President Donald Trump in the wake of the Capitol Hill attack. The company found that the content shared by former President Trump was misleading and incited violence. Although former President Trump had long tweeted controversial statements from his personal Twitter account, it was the first time a tech company held him accountable for violating its policies. Facebook took similar action against the former President. Now the CEOs will testify before the committee on their misinformation policies.
Facebook, Google, and Twitter to explain their policies on misinformation to Congress
The Capitol Hill attack sent shock waves across America, as it was viewed not only as lawbreaking but as an attack on U.S. democracy. Lawmakers are most concerned that the attack was planned and organized on social media platforms, and that even during the attack, perpetrators coordinated their movements on apps like Facebook and Twitter.
The hearing, which was announced in February, will be held virtually on Thursday. Insider reports that at the time, the committee chairs said,
“For far too long, big tech has failed to acknowledge the role they’ve played in fomenting and elevating blatantly false information to its online audiences. Industry self-regulation has failed. We must begin the work of changing incentives driving social media companies to allow and even promote misinformation and disinformation.”
Experts told Insider in January that Facebook and Twitter are “indirectly involved” in the US Capitol siege since the platforms’ laissez-faire approach to content moderation gave the far-right a place to congregate for years.
In addition, lawmakers will also question the tech leaders over COVID-19-related misinformation. YouTube, owned by Google, has become a hub of videos spreading false information about the transmission and treatment of the coronavirus.
Ahead of the hearing, Facebook explained the measures it has taken to contain misinformation on its platform.
“Tackling misinformation actually requires addressing several challenges, including fake accounts, deceptive behavior and misleading and harmful content. As the person responsible for the integrity of our products, I wanted to provide an update on how we approach each of these challenges.”
The company claims to block millions of fake accounts daily (“Between October and December of 2020, we disabled more than 1.3 billion of them”), to monitor and disrupt the economic incentives behind deceptive behavior such as clickbait, and, to tackle misinformation itself, to have built a global network of 80 independent fact-checkers who review content.
“For the most serious kinds of misinformation, such as false claims about COVID-19 and vaccines and content that is intended to suppress voting, we will remove the content. When they rate something as false, we reduce its distribution so fewer people see it and add a warning label with more information for anyone who sees it. We know that when a warning screen is placed on a post, 95 percent of the time people don’t click to view it. We also notify the person who posted it and we reduce the distribution of Pages, Groups and domains that repeatedly share misinformation.”