Jack Dorsey, the chief executive of Twitter, has said that banning Donald Trump from the platform was the “right decision” but that it sets a dangerous precedent.
Speaking out for the first time since the social network took the remarkable step of permanently suspending the president’s account following a violent attack on the US Capitol, Dorsey said the company faced “an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety”.
“I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here,” Dorsey admitted on Wednesday in an extended Twitter thread. “I feel a ban is a failure of ours, ultimately, to promote healthy conversation. And a time for us to reflect on our operations and the environment around us.”
Dorsey said that it was the right decision for the company but that such actions “fragment the public conversation”.
“They divide us,” he continued. “They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”
Last week Twitter suspended the president, who was impeached for the second time on Wednesday for inciting a mob of his supporters, due to “the risk of further incitement of violence”. The decision came as other big tech companies, including Facebook, Reddit, Pinterest, and YouTube, suspended Trump’s accounts temporarily and in some cases permanently over the attack.
Silicon Valley has faced a reckoning over its role in spreading disinformation and serving as a platform for planning the insurrection. For years, Dorsey has resisted moderating high-profile users of the platform, arguing that the public has the right to hear from newsworthy figures.
But in 2020 the company began flagging Trump’s tweets for misinformation, disabling the ability to retweet them without adding commentary, and in some cases removing tweets that appeared to incite violence. In the months surrounding the US presidential election, Twitter also tested a number of policies to limit the spread of hate speech and misinformation.
Still, it faced criticism for failing to address the growing danger posed by Trump’s account, which boiled over after the president incited a mob to storm the Capitol building on 6 January.
Following the violent events, which left five dead, Trump tweeted what appeared to be an explanation or justification for the mob while continuing to push a false narrative that the election was not legitimate, saying: “These are the things and events that happen when a sacred landslide election victory is so unceremoniously and viciously stripped away.”
On Friday, Trump’s account was permanently suspended. The president frantically jumped from account to account, attempting to tweet from @POTUS and his campaign account @TeamTrump before those accounts were restricted as well.
Twitter explained its reasoning for removing Trump in an extensive blogpost on Friday evening. It said tweets from Trump could easily be interpreted as encouragement or justification to “replicate the violent acts that took place on January 6, 2021”.
Dorsey underscored in his tweets a need for a new “open decentralized standard for social media”.
“It’s important that we acknowledge this is a time of great uncertainty and struggle for so many around the world,” he said. “Our goal in this moment is to disarm as much as we can, and ensure we are all building towards a greater common understanding, and a more peaceful existence on earth.”
Reuters contributed to this report
Google investigating A.I. researcher, AWU concerned
Google employees’ newly formed union, the Alphabet Workers Union, said it is concerned about Google’s decision to lock Margaret Mitchell, a senior AI ethics researcher, out of her account.
Google locked Mitchell out of her account after it found she was downloading material related to Timnit Gebru, another AI ethics researcher who was forced to leave the company early last month.
The news was first reported Wednesday by Axios, which said Google was investigating Mitchell’s recent actions. Mitchell was reportedly using automated scripts to look through her messages to find examples of discriminatory treatment of Gebru before she was locked out of her account.
“The Alphabet Workers Union (AWU) is concerned by the suspension of the corporate access of Margaret Mitchell, AWU member and lead of the Ethical AI team,” the union wrote in a statement. “This suspension comes on the heels of Google’s firing of former co-lead Timnit Gebru; together these are an attack on the people who are trying to make Google’s technology more ethical.”
Google did not immediately respond to a CNBC request for comment, but a spokesperson told Axios: “Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered.”
They added: “In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.”
Gebru, a well-known artificial intelligence researcher and technical co-lead of Google’s Ethical AI team, tweeted on Dec. 3 that Google fired her over a disagreement about a research paper that scrutinized bias in artificial intelligence. The researcher, who had been outspoken about the company’s treatment of Black employees, claimed the treatment was indicative of a broader pattern at Google. It led to a wave of support from across the industry, including a petition signed by thousands of Google employees and industry peers.
Alphabet CEO Sundar Pichai emailed employees, apologizing for distrust sown in the company and the industry amid Gebru’s departure, while pledging the company would launch a “review” of what went wrong.
Roughly a week later, Google’s Ethical AI team sent Google executives a list of demands to “rebuild trust” following Gebru’s removal from the company.
The team, which states it advises on research, product and policy, wrote a six-page letter to Pichai, AI chief Jeff Dean and engineering vice president Megan Kacholia. The letter, titled “The Future of Ethical AI at Google Research” and seen by CNBC, lists demands of executives, including removing Kacholia from the group’s reporting structure, abstaining from retaliation, and reinstating Gebru at a higher level.
Mitchell founded Google’s Ethical AI team and is one of its co-leads. The AWU described her as a “critical member” of academic and industry communities around the ethical production of AI. She has been with Google for just over four years and is based in Seattle, according to LinkedIn.
“Regardless of the outcome of the company’s investigation, the ongoing targeting of leaders in this organization calls into question Google’s commitment to ethics — in AI and in their business practices,” said the AWU. “Many members of the ethical AI team are AWU members and the membership of our union recognizes the crucial work that they do and stands in solidarity with them in this moment.”
Referring to Google’s statement to Axios, the AWU said it marked a “notable departure from Google’s typical practice of refusing to comment on personnel matters.”
The AWU announced its launch on Jan. 4. Executive Chair Parul Koul and Vice Chair Chewy Shaw co-authored a piece in The New York Times titled: “We built Google. This is not the company we want to work for.”
It took its first public stance on Jan. 7, calling on YouTube executives to take stronger action against former President Donald Trump.
The union criticized Google-owned YouTube for not banning Trump’s account from the platform after the pro-Trump riots in Washington, D.C., which resulted in several deaths and scores of injuries. The group called the company’s decision to reactively remove his videos “lackluster” and said the company should ban his account.
— Additional reporting by CNBC’s Jennifer Elias.