

Microsoft’s GitHub has become a magnet for thorny issues like the RIAA takedown



In 2018, Microsoft made one of its priciest acquisitions ever, spending $7.5 billion on code-sharing site GitHub. It wasn’t the cleanest fit. GitHub is used by over 50 million developers who tend to be outspoken, including when it comes to things they dislike about Microsoft.

The deal continues to pose unexpected challenges, like a recent spat with the Recording Industry Association of America. In October, the RIAA asked GitHub to take down youtube-dl, a piece of open-source software that enables people to download videos from YouTube and other online services.

The software disappeared from the internet, and users objected.

One GitHub user, on the site, described the incident as “a shame for GitHub” and said “that Microsoft acquisition was really a mistake.” Another called for Microsoft to resign from the RIAA, an organization that consists primarily of record labels and musicians. The removal by GitHub so angered yet another user that the person responded by posting part of GitHub’s own proprietary software on the area of the site where digital copyright takedown requests are reported.

The project’s maintainer adjusted the code so that it no longer violated the RIAA’s claim. GitHub then restored youtube-dl and announced a new process for handling similar takedown requests.

Like fellow tech giants Amazon, Apple and Google, Microsoft faces all sorts of challenges related to its bigness, whether from its many rivals, millions of customers, profit-hungry investors or politicians concerned about competition. GitHub, as a storehouse of open-source projects and a virtual lifeline for programmers, creates tension of a different sort.

Some problems GitHub can solve by acceding to the demands of protesting users. Others are more sensitive, like the company’s work with U.S. Immigration and Customs Enforcement.

GitHub has refused to cut ties with ICE, leading employees to resign after the agency renewed its contract to use GitHub software. Key GitHub users published an open letter late last year insisting that GitHub end the contract, citing the agency’s separation of children from their parents and other activities. Hundreds of GitHub’s own workers signed an internal petition to have GitHub stop work with ICE last year, too, said two former employees who were not authorized to talk about internal affairs.

GitHub did not respond to a request for comment.

In addressing the ICE issue, GitHub expressed opposition to family separation. The company said it doesn’t have a services agreement with the agency, provides no consulting work and “has no visibility into how this software is being used, other than presumably for software development and version control.”

Microsoft has faced criticism, separate from GitHub, for its work providing cloud services to ICE, even though the company said in 2018 that it was “dismayed” by the practice of family separation.

For GitHub, the latest incident involving the video downloading tool has provided an opportunity for users to reignite the ICE controversy. Former GitHub engineer Zach Holman responded to an explanation provided by Nat Friedman, the company’s CEO, by bringing up the past incident.

Friedman’s tweets often receive replies to the effect of “Drop ICE.”

“The whole thing permeates everything they do now,” said Holman, who left GitHub in 2015 and now invests in start-ups. He said the easiest resolution would be to end the contract, which Friedman has described as “not financially material for our company.”

Earlier this year, GitHub was among the technology companies that showed support for the Black community following the killing in May of George Floyd while in police custody, and the nationwide protests that ensued.

A few GitHub users suggested that the company could rename part of its service so that “master,” a racially sensitive word, could be retired. The term referred to the primary area where developers store their code.

GitHub announced a plan to do exactly that one week later, changing the name to “main.” Even so, the well-intentioned change drew a fresh batch of comments about the ICE contract.

Holman summed it up this way: “How do I reconcile your position with ICE and what you’re saying about support for diversity in tech?”




Facebook Pauses Ads For Gun Accessories And Military Gear After Complaints From Lawmakers And Employees




Following complaints from Senators and employees, Facebook on Saturday said it was temporarily halting ads for gun accessories and military gear in the US through next week’s inauguration of President-elect Joe Biden.

The move follows a BuzzFeed News story that revealed the world’s largest social network displayed ads for gun holsters, body armor, and other military-related paraphernalia in the News Feeds of people who had engaged with content about the attempted coup at the US Capitol building earlier this month.

“Out of an abundance of caution, we are temporarily banning ads promoting weapons accessories and protective equipment in the U.S. until at least January 22nd,” said Facebook spokesperson Liz Bourgeois. “We appreciate our employees, BuzzFeed’s reporting, and policymakers for raising this concern.”

While Facebook already prohibited advertisements for guns, ammunition and weapon enhancements like silencers, the company has still allowed ads for accessories including holsters and attachable flashlights and laser sights. Facebook users and employees noticed an uptick in ads for these products and military gear, including armored vests, on their newsfeeds in the days following a riot at the Capitol that left five people dead.

Ads for tactical gear were also shown to people who followed right-wing extremist pages or groups on the social network. Research from the Tech Transparency Project, a nonprofit industry watchdog group, showed that a Facebook account set up by the organization to follow pages belonging to right-wing extremists and militant organizations was regularly served ads for military gear in between posts casting doubt on the presidential election and praising the assault on the US Capitol.

Developing …



‘Law unto themselves’: the Australian battle to curb Facebook and Twitter’s power




Nationals MP Anne Webster and Labor MP Sharon Claydon are less concerned with why Donald Trump was taken off social media, and more concerned with what platforms such as Facebook are doing to stop online defamation and abuse.

Webster and Claydon are the co-chairs of the Parliamentary Friends of Making Social Media Safe, a group to “highlight the environment of social media and the risks associated” and to make the platforms more accountable. It now boasts more than 50 members thanks partly to Twitter and Facebook’s response to last week’s attack on the US Capitol.

For Webster, it’s personal. After winning a defamation case against a conspiracy theorist who falsely accused her of being “a member of a secretive paedophile network”, she wants Facebook treated as a publisher.

The decision of Twitter and other social media platforms to first remove posts and then suspend Trump’s account prompted outrage among some conservatives, including Nationals MP George Christensen and Liberal MP Craig Kelly.

The outspoken pair both favour changes to stop social media platforms from censoring any lawful content created by their users – a push in the direction of more free speech and less responsibility for content on the part of the platforms.

Webster tells Guardian Australia that although she’s glad the Trump controversy and the Chinese foreign ministry tweet accusing Australia of war crimes in Afghanistan have “put fire under the debate”, there is now a broader discussion about the regulation of social media to be had.

Webster says social media companies “are a law unto themselves, largely”. Her defamation case “cost me dearly, both financially and emotionally”, and she says most aggrieved people cannot afford to fight defamatory posts in court.

The legal position on social media defamation is unclear. University of Sydney professor David Rolph, a defamation law expert, says that “in principle” the social media companies can be liable.

Just as media companies were held liable for comments on their Facebook pages in the Dylan Voller case because they were “responsible for everything that flows” from having set up a public page, “that analysis might extend to the social media platform itself”, Rolph says.

He says there are also “problems of jurisdiction and enforcement” in taking on overseas-based companies, so plaintiffs rarely go after the internet giants, and a possible immunity in the Broadcasting Services Act if social media platforms can argue that they are an “internet content host”.

Webster says in her case Facebook’s handling was “appalling – it took months” and was only prompted by her taking legal action.

“Freedom of speech must be valued but it shouldn’t give people the right to incite a riot or lie about people.

“Social media companies have profited from online conversations but there are rights and responsibilities … If they’re not held responsible the number of falsehoods will increase at the rate of knots.”

Nationals MP Anne Webster won a defamation case against a conspiracy theorist who falsely accused her of being ‘a member of a secretive paedophile network’. Photograph: Mike Bowers/The Guardian

Mia Garlick, Facebook’s director of policy in Australia and New Zealand, has told a parliamentary committee the company did geo-block some posts from Webster’s accuser and the account was removed after repeated breaches of community standards. She blamed “additional legal complexities in that case” for the delay.

Claydon got involved due to her constituents’ experiences of “online harassment, posting intimate photos, cyber-stalking, and of women who were found by family violence perpetrators through social media platforms”.

“I had a growing interest because there were posts and pages that allowed the abuse of women – and when people complained they fell into a deep dark void somewhere, and the complaints didn’t really go anywhere.”

According to Claydon, users agree not to peddle hate speech, incite violence, or deliberately spread dangerous misinformation – so the platforms are not doing anything wrong by removing users who breach the terms, such as Trump.

For Claydon, the de-platforming of Trump raises the question “why it took four years when he’s clearly in breach of their terms” – and the fact social media platforms have found courage only on the eve of a new presidency shows the limits of self-regulation.

“They regard themselves as big global entities, and are not particularly accountable to anyone,” she says.

According to the e-safety commissioner, 14% of Australians have faced online hate speech. Claydon wants to build cross-party support to prevent social media becoming “a dangerous weapon for half our citizens”, rather than “let those with the biggest mouths rush out and determine the shape” of the reform conversation.

Despite calls from Christensen to swing back in the direction of free speech, creating a safer space is also the direction the government is heading in.

In December, the communications minister, Paul Fletcher, released a draft online safety bill proposing to give the e-safety commissioner powers to order the take-down of harmful content.

The e-safety commissioner, Julie Inman Grant, has said the bill would ensure moderation of social media is applied “fairly and consistently” but does not address concerns from some in the Coalition about de-platforming.

The legislation would be the first of its kind to tackle not just illegal content “but also serious online harms, including image-based abuse, youth cyberbullying and … serious adult cyber-abuse with the intent to harm”.

There is also a voluntary code on disinformation, to be devised by the social media giants and enforced by the Australian Communications and Media Authority, expected to be finalised by mid-year.

While senior Coalition figures including the acting prime minister, Michael McCormack, and the deputy Liberal leader, Josh Frydenberg, expressed disquiet at Trump’s removal, there were no suggestions the government would change course to accommodate Christensen’s call to abolish community standards in favour of allowing anything but unlawful speech.

Fletcher has signalled he is cold on the idea of going beyond the existing package, arguing that it already creates “a public regulatory framework within which decisions about removing content are made by social media platforms (and, if necessary, can be overridden by decisions of government)”.

One common strand in reform calls is that participants want to see greater transparency around decisions that are made to block posts or remove users.

Labor MP Sharon Claydon says the de-platforming of Donald Trump only on the eve of a new presidency shows the limits of self-regulation. Photograph: Andre M Chang/ZUMA Wire/REX/Shutterstock

The Australian Competition and Consumer Commission chairman, Rod Sims, who led the digital platforms review, has said that, given the degree of control the platforms exercise over what we see and read, “we definitely need the government to get to grips with this; we can’t just leave it with the digital platforms”.

The e-safety commissioner says the platforms “aren’t always transparent in how they enforce and apply these policies and it’s not always clear why they may remove one piece of content and not another”.

Transparency would be improved by the online safety bill’s basic online safety expectations, which would “set out the expectation on platforms to reflect community standards, as well as fairly and consistently implementing appropriate reporting and moderation on their sites”, she tells Guardian Australia.

“This could include, for example, the rules that platforms currently apply to ensuring the safety of their users online, including from threats of violence.”

Liberal MP Trent Zimmerman supported the platforms’ decision to remove Trump, whom he accused of “stoking the flames” of a threat to the peaceful transition of power in the US.

Yet the episode demonstrated the “inconsistent standards being applied” as Trump was removed while “many authoritarian leaders remain able to use these platforms for their propaganda”.

“We need clear, transparent rules. And it would be helpful to clarify what avenues there are to seek explanation or appeal those decisions.”

Despite unease at the highest levels of the Australian government about de-platforming, the prevailing mood is still for more – not less – regulation.

For those such as Webster or Claydon’s constituents, basic enforcement of existing standards would be an improvement.



How Parler deplatforming shows power of Amazon, cloud providers




Andy Jassy, CEO of Amazon Web Services.


Getting kicked off Amazon Web Services is rare, but it has enormous consequences.

It happened this week, when Amazon dropped Parler, a social network that gained traction from conservatives after Twitter banned President Donald Trump and housed content that encouraged violence. Parler filed suit against Amazon in federal district court in an attempt to stop Amazon from suspending Parler’s account, and Amazon pushed back, requesting that the court deny Parler’s motion.

The incident demonstrates a type of power that Amazon wields almost uniquely, because so many companies rely on it for computing and data storage. Amazon controlled 45% of the cloud infrastructure market in 2019, more than any other company, according to estimates from technology research firm Gartner. Parler’s app survived removal from Apple’s and Google’s app stores, but being dropped from Amazon’s cloud has left the service absent from the internet for days.

Parler’s engineering team had built software that drew on computing resources from Amazon Web Services, and the company had been in talks with Amazon about adopting proprietary AWS database and artificial intelligence services, the company said in a district court filing on Wednesday.

It would take time to figure out how to perform similar functions on Parler’s own servers or a cloud other than AWS. And in Parler’s case, time is critical, because the suspension came as the service was gaining attention and new users following Twitter’s Trump ban.

Parler’s engineers could learn to use other computing infrastructure, or the company could hire developers who already have that knowledge. But because no cloud provider is as popular as Amazon, people skilled in, say, Oracle’s cloud aren’t as easy to find as those who know how to build on AWS.

The warnings were there

The swiftness with which Amazon acted shouldn’t come as a shock. Companies have been disclosing details about their deals with Amazon that warn of these kinds of sudden discontinuations for years.

In 2010, DNA sequencing company Complete Genomics said that “an interruption of services by Amazon Web Services, on whom we rely to deliver finished genomic data to our customers, would result in our customers not receiving their data on time.”

Gaming company Zynga warned about how its AWS foundation could quickly vanish when it filed the prospectus for its initial public offering in 2011. At the time, AWS hosted half of the traffic for Zynga’s games, such as FarmVille and Words with Friends, the company said.

“AWS may terminate the agreement without cause by providing 180 days prior written notice, and may terminate the agreement with 30 days prior written notice for cause, including any material default or breach of the agreement by us that we do not cure within the 30-day period,” Zynga said.

AWS can even terminate or suspend its agreement with a customer immediately under certain circumstances, as it did in 2010 with WikiLeaks, pointing to violations of AWS’ terms of service.

Parler started using AWS in 2018, long after the Wikileaks incident and the first corporate disclosures about the possibility of cloud interruptions.

When AWS told Parler it planned to suspend Parler’s AWS account, it said Parler had violated the terms repeatedly, including by not owning or controlling the rights to its content.

Over the course of several weeks, AWS alerted Parler to instances of user content that encouraged violence, Amazon said in a court filing. More of that content surfaced after protesters stormed the Capitol building in Washington on Jan. 6, interrupting Congress’ confirmation of the Electoral College results from the 2020 presidential election. AWS conveyed that Parler wasn’t doing enough to speedily remove that sort of information from its social network.

Parler could have protected itself better. Large AWS customers can sign up for more extensive agreements, which give customers more time to come into compliance if they break the rules.

Gartner analyst Lydia Leong spelled out this difference in a blog post: “Thirty days is a common timeframe specified as a cure period in contracts (and is the cure period in the AWS standard Enterprise Agreement), but cloud provider click-through agreements (such as the AWS Customer Agreement) do not normally have a cure period, allowing immediate action to be taken at the provider’s discretion,” she wrote.

Other cloud providers have their own terms their customers must follow. AWS now has millions of customers, though, and it holds more of the cloud infrastructure market than any other provider. As a result, many organizations could be exposed to the sort of treatment Parler received, rare as it is, if they don’t behave in accordance with Amazon’s standards.

Parler recognized the drawbacks of being beholden to a cloud provider, but ultimately, the flexibility clouds offer was too appealing to ignore. “I’m personally very anti-cloud and anti-centralization, though AWS has its place for high-burst traffic,” Alexander Blair, Parler’s technology chief, wrote in a post on the service.

Parler and Amazon did not immediately respond to requests for comment.

