Facebook Employee Leaks Show Betrayal By Company Leadership

On July 1, Max Wang, a Boston-based software engineer who was leaving Facebook after more than seven years, shared a video on the company’s internal discussion board that was meant to serve as a warning.

“I think Facebook is hurting people at scale,” he wrote in a note accompanying the video. “If you think so too, maybe give this a watch.”

Employees on their way out of the “Mark Zuckerberg production” typically post photos of their company badges along with farewell notes thanking their colleagues. Wang opted for a clip of himself speaking directly to the camera. What followed was a clear-eyed, 24-minute hammering of Facebook’s leadership and decision-making over the previous year.

The video was a distillation of months of internal strife, protest, and departures that followed the company’s decision to leave untouched a post from President Donald Trump that seemingly called for violence against people protesting the police killing of George Floyd. And while Wang’s message wasn’t necessarily unique, his assessment of the company’s ongoing failure to protect its users — an evaluation informed by his lengthy tenure at the company — provided one of the most stunningly pointed rebukes of Facebook to date.

“We are failing,” he said, criticizing Facebook’s leaders for catering to political concerns at the expense of real-world harm. “And what’s worse, we have enshrined that failure in our policies.”


Obtained by BuzzFeed News

Max Wang, an engineer who worked at Facebook for seven years, posted an internal video message on July 1 as he prepared to leave the company, arguing that “we are failing” and that “we have enshrined that failure in our policies.” BuzzFeed News obtained audio of that message, which has been edited to remove a portion in which Wang thanks former colleagues but is otherwise left intact.

While external criticisms of Facebook, which has roughly 3 billion users across its various social platforms, have persisted since the run-up to the 2016 presidential election, they’ve rarely sparked wide-scale dissent inside the social media giant. As it weathered one scandal after another — Russian election interference, Cambridge Analytica, the Rohingya genocide in Myanmar — over the past three and a half years, Facebook’s stock price rose and it continued to recruit and retain top talent. In spite of the occasional internal dustup, employees generally felt the company was doing more good than harm. At the very least, they avoided publicly airing their grievances.

“We are failing, and what’s worse, we have enshrined that failure in our policies.”

“This time, our response feels different,” wrote Facebook engineer Dan Abramov in a June 26 post on Workplace, the company’s internal communications platform. “I’ve taken some [paid time off] to refocus, but I can’t shake the feeling that the company leadership has betrayed the trust my colleagues and I have placed in them.”

Messages like those from Wang and Abramov illustrate how Facebook’s handling of the president’s often divisive posts has caused a sea change in its ranks and led to a crisis of confidence in leadership, according to interviews with current and former employees and dozens of documents obtained by BuzzFeed News. The documents — which include company discussion threads, employee survey results, and recordings of Zuckerberg — reveal that the company was slow to take down ads with white nationalist and Nazi content reported by its own employees. They demonstrate how the company’s public declarations about supporting racial justice causes are at odds with policies forbidding Facebookers from using company resources to support political matters. They show Zuckerberg being publicly accused of misleading his employees. Above all, they portray a fracturing company culture.

Frustrated and angry, employees are now challenging Zuckerberg and leadership at companywide meetings, staging virtual walkouts, and questioning if their work is making the world a better place. The turmoil has reached a point where Facebook’s CEO recently threatened to fire employees who “bully” their colleagues.

As it heads into a US presidential election where its every move will be dissected and analyzed, the social network is facing unprecedented internal dissent as employees worry that the company is wittingly or unwittingly exerting political influence on content decisions related to Trump, and fear that Facebook is undermining democracy.

“Come November, a portion of Facebook users will not trust the outcome of the election because they have been bombarded with messages on Facebook preparing them to not trust it.”

Yaël Eisenstat, Facebook’s former election ads integrity lead, said the employees’ concerns reflect her experience at the company, which she believes is on a dangerous path heading into the election.

“All of these steps are leading up to a situation where, come November, a portion of Facebook users will not trust the outcome of the election because they have been bombarded with messages on Facebook preparing them to not trust it,” she told BuzzFeed News.

She said the company’s policy team in Washington, DC, led by Joel Kaplan, sought to unduly influence decisions made by her team, and the company’s recent failure to take appropriate action on posts from President Trump shows employees are right to be upset and concerned.

“These were very clear examples that didn’t just upset me, they upset Facebook’s employees, they upset the entire civil rights community, they upset Facebook’s advertisers. If you still refuse to listen to all those voices, then you’re proving that your decision-making is being guided by some other voice,” she said.

Do you work at Facebook or another technology company? We’d love to hear from you. Reach out at [email protected] or via one of our tip line channels.

In a broad statement responding to a list of questions for this story, a Facebook spokesperson said the company has a rigorous policy process and is transparent with employees about how decisions are made.

“Content decisions at Facebook are made based on our best, most even, application of the public policies as written. It will always be the case that groups of people, even employees, see these decisions as inconsistent; that’s the nature of applying policies broadly,” the spokesperson said. “That’s why we’ve implemented a rigorous process of both consulting with outside experts when adopting new policies as well as soliciting feedback from employees and why we’ve created an independent oversight board to appeal content policy decisions on Facebook.”

In his note, Abramov, who’s worked at the social network for four years, compared Facebook to a nuclear power plant. Facebook, unlike traditional media sources, can generate “social energy” at a scale never seen before, he said.

“But even getting small details wrong can lead to disastrous consequences,” he wrote. “Social media has enough power to damage the fabric of our society. If you think that’s an overstatement, you aren’t paying attention.”


Facebook

President Donald Trump’s Facebook post from May 29.

“I Know Doing Nothing Is Not Acceptable”

On May 28, as protests against police brutality raged in Minneapolis and around the country, President Donald Trump posted identical messages to his Facebook and Twitter accounts, which have a collective 114 million followers.

“Just spoke to Governor Tim Walz and told him that the Military is with him all the way,” the president wrote that night. “Any difficulty and we will assume control but, when the looting starts, the shooting starts.”

Within a matter of hours, Twitter placed Trump’s post behind a warning label, noting it violated its rules around glorifying violence. Meanwhile, Facebook did nothing. It had decided that the phrase “when the looting starts, the shooting starts” — which has historical ties to racially oppressive police violence — did not constitute a violation of its terms of service.

In explaining the decision the next day, Zuckerberg said that while he had a “visceral negative reaction” to the post, Facebook policies allowed for “discussion around the state use of force.” Moreover, he argued that, in spite of the phrase’s historical context, it was possible that it could have been interpreted to mean the president was simply warning that looting could lead to violence. (Axios later reported that Zuckerberg had personally called Trump the day after the post.)

Employees, already angered by the company’s failure to take action against a post from Trump earlier that May containing mail-in ballot misinformation, revolted. In a Workplace group called “Let’s Fix Facebook (the company),” which has about 10,000 members, an employee started a poll asking colleagues whether they agreed “with our leadership’s decisions this week regarding voting misinformation and posts that may be considered to be inciting violence.” About 1,000 respondents said the company had made the wrong decision on both posts, more than 20 times the number of responses for the third-highest answer, “I’m not sure.”

“There isn’t a neutral position on racism.”

“I don’t know what to do, but I know doing nothing is not acceptable,” Jason Stirman, a design manager at Facebook, wrote on Twitter that weekend, one of a lengthy stream of dissenting voices. “I’m a FB employee that completely disagrees with Mark’s decision to do nothing about Trump’s recent posts, which clearly incite violence. I’m not alone inside of FB. There isn’t a neutral position on racism.”

The following Monday, hundreds of employees — most working remotely due to the company’s coronavirus policies — changed their Workplace avatars to a white and black fist and called out sick in a digital walkout.

As Facebook grappled with yet another public relations crisis, employee morale plunged. Worker satisfaction metrics — measured by “MicroPulse” surveys that are taken by hundreds of employees every week — fell sharply after the ruling on Trump’s “looting” post, according to data obtained by BuzzFeed News.

On June 1, the day of the walkout, about 45% of employees said they agreed with the statement that Facebook was making the world better — down about 25 percentage points from the week before. That same day, Facebook’s internal surveys showed that around 44% of employees were confident in “Facebook leadership leading the company in the right direction” — a 30 percentage point drop from May 25. Responses to that question have stayed around that lower mark as of earlier this month, according to data seen by BuzzFeed News.


BuzzFeed News

Two charts recreated by BuzzFeed News show employee satisfaction metrics measured by Facebook’s “MicroPulse” surveys.

Culturally, Facebook has become increasingly divided, with some company loyalists pushing the idea that a silent majority supported the call made on the Trump post.

On Blind, a forum app that requires work emails for people to anonymously discuss their employer, some unnamed employees demanded that those involved in the walkout be fired. One thread focused specifically on Jason Toff, a director of product management at Facebook, who had tweeted following the Trump decision that he was “not proud of how we’re showing up.”

“His major responsibility is to acquire top talent and motivate his team, and that’s impossible to do when you publicly share that you aren’t proud of where you work at,” wrote one Blind user on a thread titled “Fire Jason Toff.”

“If you’re bullying your fellow colleagues into taking a position on something, then we will fire you.”

Other anonymous Facebook employees vilified May Zhou, a software engineer who had asked at an all-hands meeting how many Black employees had been involved in the Trump “looting” post decision. They called her “disrespectful” and a “sjw” — pejorative shorthand for “social justice warrior.” (In response to Zhou’s question, Zuckerberg said only one Black employee in the company’s Austin office had been consulted and that Facebook’s political speech policies needed tweaking.)

Toff declined to speak for this story. Zhou did not return a request for comment.

This ongoing contention and erosion of Facebook’s culture has infuriated Zuckerberg. In a June 11 live Q&A with employees, he pointedly addressed it.

“I’ve been very worried about … the level of disrespect and, in some cases, of vitriol that a lot of people in our internal community are directing towards each other as part of these debates,” he said. “If you’re bullying your fellow colleagues into taking a position on something, then we will fire you.”


Josh Edelson / Getty Images

Facebook employees walk past a sign reading “hack” at Facebook’s corporate headquarters campus in Menlo Park, California, Oct. 23, 2019.

“Our Community Standards Are Fundamentally Broken”

Facebook employees who spoke to BuzzFeed News pointed to the company’s lack of consistency and poor communication around enforcement of its community standards as a key frustration. In late May, following a decision by Twitter to place a misleading post about mail-in ballots from Trump behind a warning label, Zuckerberg appeared on Fox News to chastise his competitor for trying to be an “arbiter of truth.”

“I think in general, private companies probably shouldn’t be — or especially these platform companies — shouldn’t be in the position of doing that,” he said in the interview. Trump had made the same false claim on Facebook, which took no action against it.

Later that week, when Trump made his “looting” statement and Twitter moderated it and Facebook did not, Zuckerberg — the “ultimate decision-maker” according to Facebook’s head of communications — defended his position. “Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” he wrote in a post on May 29.

It took just four days for Zuckerberg to change his mind. In comments at a companywide meeting on June 2 that were first reported by Recode, Facebook’s founder said the company was considering adding labels to posts from world leaders that incite violence. He followed that up with a Facebook post three days later, in which he declared “Black lives matter,” and made promises that the company would review policies on content discussing “excessive use of police or state force.”

“Are you at all willing to be wrong here?” 

“What material effect does any of this have?” one employee later asked on Facebook’s Workplace, openly challenging their CEO. “Commitments to review offer nothing material. Has anything changed for you in a meaningful way? Are you at all willing to be wrong here?”

On June 26, nearly a month later, Zuckerberg posted a clarification to his remarks, noting that any post that is determined to be inciting violence will be taken down.

The company’s June 18 decision to remove Trump campaign ads that featured a triangle symbol used by the Nazis to identify political prisoners didn’t make things any better. Speaking at an all-hands meeting hours after the ads were taken down, Zuckerberg said the action was an easy call and evidence of Facebook’s commitment to applying its policies equally.

“This decision was not a particularly close call from my perspective,” he said. “Our position on all this stuff is we want to allow as wide open an aperture of free expression as possible … but if something is over the line, no matter who it is, we will take it down.”

Documents reviewed by BuzzFeed News, however, reveal Facebook’s decision to remove the Trump ads was not as simple as Zuckerberg claimed. In fact, the company failed to act on them until after it faced outside pressure, and despite internal alerts from its own employees.

On the morning before the ads were removed, a heated discussion took place among Facebook employees after at least nine different people said they had reported the content but were told it did not violate company policy.

“I reported it and called it out as both possible hate speech and threat of violence. This is apparently not a violation of our CS because our CS are fundamentally broken,” Natalie Troxel, a user experience employee, wrote on Workplace, referencing the company’s community standards.

Kaitlin Sullivan, whose LinkedIn profile says she leads “the Americas branch of Facebook’s content policy team,” replied, saying the ad was still being evaluated and that “the triangle without any more context doesn’t clearly violate the letter there.” It wasn’t “helpful” to describe the company’s process as broken, she added.

“The fact that this has even remained on our platform for this long is troubling.”

“I stand by what I said … A world leader is promoting content on our platform that uses explicit Nazi imagery,” Troxel responded.

“The fact that this has even remained on our platform for this long is troubling,” another employee added.

Troxel and Sullivan did not respond to requests for comment for this story.

A Facebook employee who spoke to BuzzFeed News anonymously for fear of retaliation said they were “flabbergasted” the ads weren’t immediately ruled violative. “There’s a real culture within Facebook to assume good intent,” this person said. “To me, this was a case where you cannot assume good intent for a symbol that could be Nazi imagery.” They also were bothered that Facebook took action on the ad only after receiving questions from the Washington Post — more than 12 hours after it had been flagged by employees.

“It certainly looks like a decision would not have been made the way it was had there not been media pressure and a larger than normal involvement of Facebook employees on those posts and threads on Workplace,” the anonymous employee said.

The incident was yet another example of Facebook declining to act on violative content only to change its mind following public criticism. And once again, it raised questions about how decisions are made, and whether policies are applied consistently or in reaction to negative publicity or political concerns.

Eisenstat, the former Facebook election ads integrity lead, said the company’s failure to act quickly on the Trump ads, and to moderate his “looting” post, are further evidence that political considerations and outside pressure are influencing policy decisions.

“They put political considerations over enforcing their policies to the letter of the law,” she said. “I can say for my time there that more than once the [Washington] policy team weighed in on appeals and decisions that made it clear there was a political consideration factoring into how we were enforcing our policy.”

“They put political considerations over enforcing their policies to the letter of the law.”

A related scenario played out earlier this month over an ad run by a white nationalist Facebook page, “White Wellbeing Australia.” “White people make up just 8% of the world’s population so if you flood all white majority countries with nonwhites you eliminate white children forever,” the ad proclaimed.

Facebook removed the ad and prevented the page from running paid content in the future only after being contacted by BuzzFeed News on Wednesday, July 8. But that move too came after a company employee flagged the ad and was told it wasn’t violative.

“I reported the page and the post and was told that it doesn’t meet the threshold for a content violation,” Facebook project manager Matthew Brennan wrote on Workplace after the publication of BuzzFeed News’ story. “There is no doubt this is a white supremacy group and the post in question is not trying to hide that fact.”

Brennan, who saw the ad appear in his News Feed, said friends and family in Australia raised concerns about the ad over the weekend, leaving him frustrated that “there was nothing that could be done about it.”

“It wasn’t exactly filling me with pride to work at Facebook,” Brennan concluded.

“I get the same from my wife, friends, and family,” another employee responded. “These decisions are going to be on the wrong side of history.”

Brennan did not respond to a request for comment.

The page, however, continues to exist on the social network.


Chip Somodevilla / Getty Images

Zuckerberg testifies before the House Financial Services Committee in the Rayburn House Office Building on Capitol Hill in Washington, DC, Oct. 23, 2019.

“Diversity Is A Huge Problem”

In his farewell video, Wang accused Zuckerberg of “gaslighting” employees and of pulling a “bait and switch” during an early June meeting in which the Facebook CEO explained the decision on the Trump “looting” post. Why, he asked, was Zuckerberg talking only about whether Trump’s comments fit the company’s rules, and not about fixing the policies that allowed threats that could hurt people in the first place?

“Watching this just felt like someone was sort of slowly swapping out the rug from under my feet,” Wang said. “They were swapping concerns about morals or justice or norms with this concern about consistency and logic, as if it were obviously the case that ‘consistency’ is what mattered most.”

What the departing engineer said echoed what civil rights groups such as Color of Change have been saying since at least 2015: Facebook is more concerned with appearing unbiased than making internal adjustments or correcting policies that permit or enable real-world harm.

“Watching this just felt like someone was sort of slowly swapping out the rug from under my feet.”

In a June 19 companywide meeting, for example, Zuckerberg turned a question about the decision-making influence of Joel Kaplan, Facebook’s vice president of global policy and a former member of President George W. Bush’s administration, into a discussion about the need for ideological diversity. Kaplan, already a controversial figure within Facebook because of his public support for Supreme Court Justice Brett Kavanaugh during his heated 2018 confirmation hearings, has drawn ire from external and internal critics alike, who say that his moves to placate conservative Facebook power users like Trump are driven by politics and not any dedication to policy.

Eisenstat told BuzzFeed News that a member of Kaplan’s Washington policy team had attempted to influence ad enforcement decisions being considered by her team, which she considered highly inappropriate. In one example, her team was evaluating whether to remove an ad placed by a conservative organization.

“But then a policy person chimed in and gave the both-sides argument. They actually wrote something like, ‘There’s bad behavior on both sides.’ And I remember thinking, What does that have to do with anything?” she said.

“When you have the policy folks weighing in heavily on how you are enforcing that, to me, is where it makes it crystal clear this is not a strict letter-of-the-law enforcement. Because if it was, then policy should never intervene,” Eisenstat added.

“They actually wrote something like, ‘There’s bad behavior on both sides.’ And I remember thinking, What does that have to do with anything?”

Instead of answering the question about Kaplan’s influence at the meeting, Zuckerberg argued that his vice president brought an important conservative viewpoint to the table. “The population out there in the community we serve tends to be on average ideologically a little bit more conservative than our employee base,” Zuckerberg said. “Maybe ‘a little’ is an understatement.”

For Zuckerberg, Kaplan’s Republican leanings contributed to “a good diversity of views” within the company. But for critics, the idea that moderating calls for violence or hate speech or Nazi imagery is little more than wrangling a political disagreement is disingenuous and prevents the company from taking a stand on civil rights.

“He uses ‘diverse perspective’ as essentially a cover for right-wing thinking when the real problem is dangerous ideologies,” Brandi Collins-Dexter, a senior campaign director at Color of Change, told BuzzFeed News after reading excerpts of Zuckerberg’s comments. “If you are conflating conservatives with white nationalists, that seems like a far deeper problem because that’s what we’re talking about. We’re talking about hate groups and really specific dangerous ideologies and behavior.”

She said Zuckerberg’s assertion that the team making key content and policy decisions is diverse was “a huge problem.”

“The fact that he would look at the people around him on the board and in decision-making positions and pat himself on the back saying it has diversity is a huge problem right there,” Collins-Dexter added, referencing the company’s overwhelmingly white C-suite.

Facebook’s recently completed civil rights audit reiterated concerns that the company’s leadership and decision-making process are falling short. “Many in the civil rights community have become disheartened, frustrated and angry after years of engagement where they implored the company to do more to advance equality and fight discrimination, while also safeguarding free expression,” the auditors wrote after their two-year assessment.

“If you are conflating conservatives with white nationalists, that seems like a far deeper problem.”

In light of that report, Facebook has promised change. The company added diversity and inclusion responsibilities to biannual performance reviews and elevated its chief diversity officer to report to Sheryl Sandberg, Facebook’s chief operating officer. In mid-June, Sandberg said the company would commit $200 million to Black-owned businesses and organizations on top of a $10 million commitment to racial justice organizations made earlier that month.

Even that decision was not easy, according to internal communications. Asked during a June 11 all-hands meeting if Facebook would introduce a donation-matching program, presumably for Black justice organizations, as many of the company’s Silicon Valley peers had done, Zuckerberg asked employees to be realistic because they were in “the middle of a global recession.”

“Our revenue is significantly less than we expected it to be,” he said, a week before Sandberg announced the $200 million commitment.

Employees who have tried to use company resources on racial justice causes have also been frustrated. Every month, each of the company’s full-time employees gets $250 in ad credits to be used on the platform. Some employees, however, found that they were unable to use those ads to help boost civil rights groups that had drawn attention during the police brutality protests that swept the nation earlier this summer.

An internal document provided to BuzzFeed News noted that employee credits may not be used for ads “related to politics or issues of national importance.” For employees in the US, that means issues relating to “civil and social rights,” “environmental politics,” “guns,” “health,” and several other defined categories are off-limits.

“It’s infuriating,” a current employee told BuzzFeed News.


Kyodo News / Getty Images

Zuckerberg speaking at a developers conference in San Jose, California, in April 2019.

Good Politics, Bad Leadership

Outside calls for Facebook to change its policies — like the Stop Hate For Profit ad boycott that now includes Coca-Cola, Starbucks, and Verizon — have further ratcheted up internal tensions.

“Two years ago, I wouldn’t have had conversations with my colleagues where I would be supporting the advertising boycott,” one employee told BuzzFeed News. “But we are having those conversations now.”

This is largely unprecedented for Facebook, a company unaccustomed to widespread internal dissent. But Wang’s departure video, the virtual walkout, and the recent drop in employee satisfaction show that Zuckerberg’s fumbling explanations and reversals are beginning to take an internal toll.

“I think Facebook is getting trapped by our ideology of free expression, and the easy temptation of just trying to stay consistent with an ideology,” Wang said in his video.

Wang’s departure thread kicked off a discussion, attracting comments from people like Yann LeCun, Facebook’s head of artificial intelligence. After thanking Wang for his thoughts, the executive pointed employees to his own June 2 post in which he expressed concern that Facebook’s policy and content decisions are not geared toward protecting democracy.

“I would submit that a better underlying principle to content policy is the promotion and defense of liberal democracy.”

“American Democracy is threatened and closer to collapse than most people realize,” LeCun wrote on June 2. “I would submit that a better underlying principle to content policy is the promotion and defense of liberal democracy.”

LeCun did not respond to requests for comment.

Other employees, like Abramov, the engineer, have seized the moment to argue that Facebook has never been neutral, despite leadership’s repeated attempts to convince employees otherwise, and as such needs to make decisions that limit harm. Facebook has proactively taken down nudity, hate speech, and extremist content, while also encouraging people to participate in elections — an act that favors democracy, he wrote.

“As employees, we can’t entertain this illusion,” he said in his June 26 memo titled “Facebook Is Not Neutral.” “There is nothing neutral about connecting people together. It’s literally the opposite of the status quo.”

Zuckerberg seems to disagree. On June 5, he wrote that Facebook errs on the “side of free expression” and made a series of promises that his company would push for racial justice and fight for voter engagement.

The sentiment, while encouraging, arrived unaccompanied by any concrete plans. On Facebook’s internal discussion board, the replies rolled in.

“There is exactly one impressive thing about this post: by apparently committing to nothing at all but acting like you have, you’ve managed to placate some of us,” wrote one employee. “It’s good politics. It’s bad leadership.” ●


Part human, part machine: is Apple turning us all into cyborgs?


At the beginning of the Covid-19 pandemic, Apple engineers embarked on a rare collaboration with Google. The goal was to build a system that could track individual interactions across an entire population, in an effort to get a head start on isolating potentially infectious carriers of a disease that, as the world was discovering, could be spread by asymptomatic patients.

Delivered at breakneck pace, the resulting exposure notification tool has yet to prove its worth. The NHS Covid-19 app uses it, as do others around the world. But lockdowns make interactions rare, limiting the tool’s usefulness, while in a country with uncontrolled spread, it isn’t powerful enough to keep the R number low. In the Goldilocks zone, when conditions are just right, it could save lives.

The NHS Covid-19 app has had its teething problems. It has come under fire for not working on older phones, and for its effect on battery life. But there’s one criticism that has failed to materialise: what happens if you leave home without your phone? Because who does that? The basic assumption that we can track the movement of people by tracking their phones is an accepted fact.

This year has been good for tech companies, and Apple is no exception. The wave of global lockdowns has left us more reliant than ever on our devices. Despite being one of the first large companies to be seriously affected by Covid, as factory shutdowns in China hit its supply chain and delayed the launch of the iPhone 12 by a month, Apple’s revenue has continued to break records. It remains the largest publicly traded company in the world by a huge margin: this year its value has grown by 50% to $2tn (£1.5tn) and it is still $400bn larger than Microsoft, the No 2.

It’s hard to think of another product that has come close to the iPhone in sheer physical proximity to our daily lives. Our spectacles, contact lenses and implanted medical devices are among the only things more personal than our phones.

Without us even noticing, Apple has turned us into organisms living symbiotically with technology: part human, part machine. We now outsource our contact books, calendars and to-do lists to devices. We no longer need to remember basic facts about the world; we can call them up on demand. But if you think that carrying around a smartphone – or wearing an Apple Watch that tracks your vitals in real time – isn’t enough to turn you into a cyborg, you may feel differently about what the company has planned next.

A pair of smartglasses, in development for a decade, could be released as soon as 2022, and would have us quite literally seeing the world through Apple’s lens – putting a digital layer between us and the world. Already, activists are worrying about the privacy concerns sparked by a camera on everyone’s face. But deeper questions, about what our relationship should be to a technology that mediates our every interaction with the world, may not even be asked until it’s too late to do anything about the answer.


The word cyborg – short for “cybernetic organism” – was coined in 1960 by Manfred E Clynes and Nathan S Kline, whose research into spaceflight prompted them to explore how incorporating mechanical components could aid in “the task of adapting man’s body to any environment he might choose”. It was a very medicalised concept: the pair imagined embedded pumps dispensing drugs automatically.

In the 1980s, genres such as cyberpunk began to express writers’ fascination with the nascent internet, and wonder how much further it could go. “It was the best we could do at the time,” laughs Bruce Sterling, a US science fiction author and futurist whose Mirrorshades anthology defined the genre for many. Ideas about putting computer chips, machine arms or chromium teeth into animals might have been very cyberpunk, Sterling says, but they didn’t really work. Such implants, he points out, aren’t “biocompatible”. Organic tissue reacts poorly, forming scar tissue, or worse, at the interface. While science fiction pursued a Matrix-style vision of metal jacks embedded in soft flesh, reality took a different path.

“If you’re looking at cyborgs in 2020,” Sterling says, “it’s in the Apple Watch. It’s already a medical monitor, it’s got all these health apps. If you really want to mess with the inside of your body, the watch lets you monitor it much better than anything else.”

The Apple Watch had a shaky start. Despite the company trying to sell it as the second coming of the iPhone, early adopters were more interested in using their new accessory as a fitness tracker than in trying to send a text message from a device far too small to fit a keyboard. So by the second iteration of the watch, Apple changed tack, leaning into the health and fitness aspect of the tech.

Now, your watch can not only measure your heart rate, but scan the electric signals in your body for evidence of arrhythmia; it can measure your blood oxygenation level, warn you if you’re in a noisy environment that could damage your hearing, and even call 999 if you fall over and don’t get up. It can also, like many consumer devices, track your running, swimming, weightlifting or dancercise activity. And, of course, it still puts your emails on your wrist, until you turn that off.

Apple believes that it can succeed where Google Glass failed. Illustration: Steven Gregor/The Guardian

As Sterling points out, for a vast array of health services that we would once have viewed as science fiction, there’s no need for an implanted chip in our head when an expensive watch on our wrist will do just as well.

That’s not to say that the entirety of the cyberpunk vision has been left to the world of fiction. There really are people walking around with robot limbs, after all. And even there, Apple’s influence has starkly affected what that future looks like.

“Apple, I think more than any other brand, truly cares about the user experience. And they test and test and test, and iterate and iterate and iterate. And this is what we’ve taken from them,” says Samantha Payne, the chief operating officer of Bristol’s Open Bionics. The company, which she co-founded in 2014 with CEO Joel Gibbard, makes the Hero Arm, a multi-grip bionic hand. With the rapid development of 3D printer technology, Open Bionics has managed to slash the cost of such advanced prosthetics, which could have cost almost $100,000 10 years ago, to just a few thousand dollars.

Rather than focus on flesh tones and lifelike design, Open Bionics leans into the cyborg imagery. Payne quotes one user describing it as “unapologetically bionic”. “All of the other prosthetics companies give the impression that you should be trying to hide your disability, that you need to try and fit in,” she says. “We are a company that’s taking a big stance against that.”

At times, Open Bionics has been almost too successful in that goal. In November, the company launched an arm designed to look like the one worn by the main character in the video game Metal Gear Solid V (red and black, shiny plastic and, yes, unapologetically bionic), and the response was unsettling. “You got loads of science fiction fans saying that they really are considering chopping off their hand,” Payne says.

Some disabled people who rely on technology to live their daily lives feel that cyberpunk imagery can exoticise the very real difficulties they face. And there are also lessons in the way that more prosaic devices can give disabled people what can only be described as superpowers. Take hearing aid users, for example: deaf iPhone owners can not only connect their hearing aids to their phones with Bluetooth, they can even set up their phone as a microphone and move it closer to the person they want to listen to, overcoming the noise of a busy restaurant or crowded lecture theatre. Bionic ears, anyone?

“There’s definitely something in the idea of everyone in the world being a cyborg today,” Payne says. “A crazy high number of people in the world have a smartphone, and so all of these people are technologically augmented. It’s definitely taking it a step further when you depend on that technology to be able to perform everyday living; when it’s adorned to your body. But we are all harnessing the vast power of the internet every single day.”


Making devices so compelling that we carry them with us everywhere we go is a mixed blessing for Apple. The iPhone earns it about $150bn a year, more than every other source of revenue combined. In creating the iOS App Store, it has assumed a gatekeeper role with the power to reshape entire industries by carefully defining its terms of service. (Ever wonder why every app is asking for a subscription these days? Because of an Apple decision in 2016. Bad luck if you prefer to pay upfront for software.) But it has also opened itself up to criticism that the company allows, or even encourages, compulsive patterns of behaviour.

Apple co-founder Steve Jobs famously likened personal computers to “bicycles for the mind”, enabling people to do more work for the same amount of effort. That was true of the Macintosh computer in 1984, but modern smartphones are many times more powerful. If we now turn to them every waking hour of the day, is that because of their usefulness, or for more pernicious reasons?

“We don’t want people using their phones all the time,” Apple’s chief executive, Tim Cook, said in 2019. “We’re not motivated to do that from a business point of view, and we’re certainly not from a values point of view.” Later that year, Cook told CBS: “We made the phone to make your life better, and everybody has to decide for his or herself what that means. For me, my simple rule is if I’m looking at the device more than I’m looking into someone’s eyes, I’m doing the wrong thing.”

Apple has introduced features, such as the Screen Time setting, that help people strike that balance: users can now track, and limit, their use of individual apps, or entire categories, as they see fit. Part of the problem is that, while Apple makes the phone, it doesn’t control what people do with it. Facebook needs users to open its app daily, and Apple can only do so much to counter that tendency. If these debates – about screen time, privacy and what companies are doing with our data, our attention – seem like a niche topic of interest now, they will become crucial once Apple’s latest plans become reality. The reason is the company’s worst-kept secret in years: a pair of smartglasses.

It filed a patent in 2006 for a rudimentary version, a headset that would let users see a “peripheral light element” for an “enhanced viewing experience”, able to display notifications in the corner of your vision. That was finally granted in 2013, at the time of Google’s own attempt to convince people about smartglasses. But Google Glass failed commercially, and Apple kept quiet about its intentions in the field.

Recently, the company has intensified its focus on “augmented reality”, technology that overlays a virtual world on the real one. It’s perhaps best known through the video game Pokémon Go, which launched in 2016, superimposing Nintendo’s cute characters on parks, offices and playgrounds. However, Apple insists, it has much greater potential than simply enhancing games. Navigation apps could overlay the directions on top of the real world; shopping services could show you what you would look like wearing the clothes you’re thinking of getting; architects could walk around inside the spaces they have designed before shovels even break ground.

Smartglasses could leave us quite literally seeing the world through Apple’s lens. Illustration: Steven Gregor/The Guardian

With each new iPhone launch, Apple has demonstrated new breakthroughs in the technology, such as “Lidar” support in new iPhones and iPads, a tech (think radar with lasers) that lets them accurately measure the physical space they are in. Then, at the end of 2019, it all slotted into place: a Bloomberg report suggested that the company hadn’t given up on smartglasses in the wake of Google Glass’s failure, but had spent five years honing the concept. The pandemic put paid to a target of getting hardware on the shelves in 2020, but the company is still hoping to make an announcement next year for a 2022 launch, Bloomberg suggested.

Apple’s plans cover two devices, codenamed N301 and N421. The former is designed to feature “ultra-high-resolution screens that will make it almost impossible for a user to differentiate the virtual world from the real one”, according to Bloomberg’s Mark Gurman. This is a product with an appeal far beyond the hardcore gamers who have adopted existing VR headsets: you might put it on to enjoy lifelike, immersive entertainment, or to do creative work that can make the most of the technology, but would probably take it off to have lunch, for instance.

N421 is where the real ambitions lie. Expected in 2023, it’s described only as “a lightweight pair of glasses using AR”. But, argues Mark Pesce in his book Augmented Reality, this would be the culmination of the “mirrorshades” dreamed up by the cyberpunks in the 80s, using the iPhone as the brains of the device and “keeping the displays themselves light and comfortable”. With the glasses worn all day, every day, the idea of a world without a digital layer between you and reality would eventually fade into memory – just as living without immediate access to the internet has for so many right now.

Apple isn’t the first to try to build such a device, says Rupantar Guha of the analysts GlobalData, who has been following the trend in smartglasses from a business standpoint for years, but it could lead the wave that makes it relevant. “The public perception of smartglasses has struggled to recover from the high-profile failure of Google Glass, but big tech still sees potential in the technology.” Guha cites the recent launch of Amazon Echo Frames – sunglasses you can talk to, because they have got the Alexa digital assistant built in – and Google’s purchase of the smartglasses maker North in June 2020. “Apple and Facebook are planning to launch consumer smartglasses over the next two years, and will expect to succeed where their predecessors could not,” Guha adds.

If Apple pulls off that launch, then the cyberpunk – and cyborg – future will have arrived. It’s not hard to imagine the concerns, as cultural questions clash with technological: should kids take off their glasses in the classroom, just as we now require them to keep phones in their lockers? Will we need to carve out lens-free time in our evenings to enjoy old-fashioned, healthy activities such as watching TV or playing video games?

“It’s a fool’s errand to imagine every use of AR before we have the hardware in our hands,” writes the developer Adrian Hon, who was called on by Google to write games for their smartglasses a decade ago. “Yet there’s one use of AR glasses that few are talking about but will be world-changing: scraping data from everything we see.” This “worldscraping” would be a big tech dream – and a privacy activist’s nightmare. A pair of smartglasses turns people into walking CCTV cameras, and the data that a canny company could gather from that is mindboggling. Every time someone browsed a supermarket, their smartglasses would be recording real-time pricing data, stock levels and browsing habits; every time they opened up a newspaper, their glasses would know which stories they read, which adverts they looked at and which celebrity beach pictures their gaze lingered on.

“We won’t be able to opt out from wearing AR glasses in 2035 any more than we can opt out of owning smartphones today,” Hon writes. “Billions have no choice but to use them for basic tasks like education, banking, communication and accessing government services. In just a few years’ time, AR glasses do the same, but faster and better.”

Apple would argue that, if any company is to control such a powerful technology, it ought to. The company declined to speak on the record for this story, but it has invested time and money in making the case that it can be trusted not to abuse its power. The company points to its comparatively simple business model: make things, and sell them for a lot of money. It isn’t Google or Facebook, trying to monetise personal data, or Amazon, trying to replace the high street – it’s just a company that happens to make a £1,000 phone that it can sell to 150 million people a year.

But whether we trust Apple might be beside the point, if we don’t yet know whether we can trust ourselves. It took eight years from the launch of the iPhone for screen time controls to follow. What will human interaction look like eight years after smartglasses become ubiquitous? Our cyborg present sneaked up on us as our phones became glued to our hands. Are we going to sleepwalk into our cyborg future in the same way?



OANN suspended from YouTube after promoting a sham cure for Covid-19


YouTube has suspended the conservative news outlet One America News Network from posting new videos for a week and from making money off its existing videos after it promoted a sham cure for Covid-19.

The video was removed under YouTube’s policies to prevent the spread of Covid-19 misinformation, which prohibit saying there is a guaranteed cure to the virus. OANN has been suspended for “repeated violations” of this policy, said YouTube spokesperson Ivy Choi.

“Since early in this pandemic, we’ve worked to prevent the spread of harmful misinformation associated with Covid-19 on YouTube,” Choi said.

YouTube’s Covid-specific misinformation policies prohibit content that disputes the existence of the virus, discourages someone from seeking medical treatment for Covid, disputes guidance from local health authorities on the pandemic, or offers unsubstantiated medical advice or treatment.

Under these policies, an offending account will receive one warning for posting misinformation and then three strikes before it is permanently removed from the platform. The strikes carry progressively more severe penalties, including de-monetization. OANN previously received a warning for “similarly violating our Covid-19 misinformation policy,” according to YouTube.

The company said it has manually reviewed and removed 200,000 videos related to dangerous or misleading Covid-19 information since February 2020, including, for example, a widely condemned viral video published by the rightwing media outlet Breitbart featuring dubious claims from people identifying themselves as doctors, who told viewers not to wear masks.

While the suspension from posting new videos is temporary, YouTube says the de-monetization of all OANN content will be permanent unless the network addresses its issues.

Tuesday’s removal comes after four Democratic senators sent a letter that same day to YouTube’s chief executive officer, Susan Wojcicki, pressing the company to do more to crack down on election-related misinformation.

Meanwhile, after breaking with longtime ally Fox News, Donald Trump has urged his supporters to turn to news outlets such as Newsmax and OANN. These outlets openly support Trump and, without evidence, cast doubt on the validity of the election of Joe Biden. YouTube said it does not consider OANN an “authoritative news source”, meaning under its policies the account will not surface high up in search results for broad queries about Covid-19 nor be promoted in recommendations.



HP (HPQ) earnings Q4 2020


Enrique Lores, CEO, HP

Scott Mlyn | CNBC

HP shares rose as much as 9% in extended trading on Tuesday after the PC maker reported fiscal fourth-quarter earnings that beat analysts’ estimates and provided an optimistic earnings forecast.

  • Earnings: 62 cents per share, adjusted, vs. 52 cents per share as expected by analysts, according to Refinitiv.
  • Revenue: $15.3 billion vs. $14.7 billion as expected by analysts, according to Refinitiv.

Revenue declined for the fourth consecutive quarter on an annualized basis. It fell about 1% in the quarter, which ended on Oct. 31, according to a statement.

HP is forecasting 64 cents to 70 cents in adjusted earnings per share in the fiscal first quarter, higher than the Refinitiv consensus of 54 cents.

The company’s largest business segment, Personal Systems, which includes PC notebooks and desktops, delivered $10.4 billion in revenue, flat year over year and below the $10.5 billion consensus among analysts polled by FactSet. Within that unit, sales of notebooks rose 18% to $7.41 billion, but the overall segment was pulled down by desktop and workstation declines.

HP more than doubled unit sales of and revenue from Chromebook PCs running Google’s Chrome OS operating system, said Marie Myers, HP’s chief transformation officer and acting chief financial officer, on a conference call with analysts. She replaced Steve Fieler, who left in the quarter to join Google.

Also on Tuesday, PC maker Dell reported fiscal third-quarter results and said sales of consumer devices, including PCs, were up 14% from a year earlier in the quarter that ended on Oct. 30.

Excluding the after-hours jump, HP shares are up 6% since the start of the year, while the S&P 500 has gained about 13% over the same period.


