Technology

Chinese driverless car start-up WeRide raises $310 million in funding

A car equipped with WeRide autonomous driving technology in Guangzhou, China. (Image: WeRide)

GUANGZHOU, China — Chinese driverless car start-up WeRide has raised $310 million in a new round of funding as it pushes to commercialize its technology.

Yutong Group, a Chinese company that manufactures commercial vehicles including electric buses, led the funding round.

A number of other new investors jumped on board while existing venture capital firms also participated.

WeRide did not disclose its valuation and declined to comment when contacted by CNBC.

The Guangzhou-based firm is among a handful of companies in China vying to become a leader in driverless car technology. WeRide focuses on building the technology that enables vehicles to be autonomous rather than actually manufacturing automobiles.

WeRide’s funding round follows a cash injection into rival Pony.ai in November, which saw the latter’s valuation top $5 billion. Other companies, including search giant Baidu, start-up AutoX and ride-hailing firm Didi, are competing in the same space.

In a press release, Tony Han, CEO of WeRide, said the funding and new investors will give the company the “strategic resources to fulfill the goal of commercializing self-driving technology.”

In late 2019, WeRide launched a robotaxi project in Guangzhou and has since expanded to allow users to hail a ride through Alibaba’s Amap app. The company says it will launch trial operations for “Mini Robobuses” on Friday.

Han previously told CNBC that he predicts large-scale deployment of robotaxis will take place between 2023 and 2025, and that WeRide will begin to make money from the business in 2025.



Technology

Facebook News rolls out in UK as tech giants start paying for journalism

The logos of Facebook and Google apps displayed on a tablet. (Denis Charlet | AFP via Getty Images)

LONDON — Facebook announced it will start rolling out its Facebook News product in the U.K. on Tuesday, and will pay publishers for their content.

Facebook News is a dedicated section within the Facebook app that features curated and personalized news from hundreds of national, local and lifestyle publications.

The product, which competes with Apple News, launched in the U.S. last June and the U.K. is the second country to get access to it.

Facebook claims the product delivers “informative, reliable and relevant news” to users “while also highlighting original and authoritative reporting on pressing topics.”

Jesper Doub, Facebook’s European director of news partnerships, said in a blog post on Tuesday: “This is the beginning of a series of international investments in news.”

He added: “The product is a multi-year investment that puts original journalism in front of new audiences as well as providing publishers with more advertising and subscription opportunities to build sustainable businesses for the future.”

Facebook announced the U.K. launch of Facebook News in November, saying it would feature content from media partners including Conde Nast, Hearst, The Economist, and Guardian Media Group.

On Tuesday, Facebook said it has now signed up Channel 4 News, Daily Mail Group, DC Thomson, Financial Times, Sky News and Telegraph Media Group.

Some content that is normally behind a paywall is free to view on Facebook News, which is expected to launch in more countries this year.

“We’ll continue to learn, listen and improve Facebook News as it rolls out across the U.K. and into other markets, including France and Germany, where we are in active negotiations with partners,” said Doub.

Tech giants like Facebook and Google are under increasing pressure to pay media companies for their content.

A Facebook spokesperson told CNBC that the company will pay certain U.K. publications to feature their content in Facebook News, but he was unable to reveal how much.

“We will pay some publishers to participate in Facebook News,” he said. “We’re paying for content which is not already on the platform in order to achieve a diverse set of coverage across a range of topic areas.”

He added: “Monetization for the majority of publishers appearing in Facebook News will be similar to monetization via other Facebook tabs, from referral traffic to your sites or ads in Instant Articles, pushing people to hit a paywall.”

Google’s battle

Last week, Google signed a deal to pay French publishing companies and news agencies for their content.

The agreement comes after several months of talks between Google France and the media groups, which are represented by France’s Alliance de la Presse d’Information Generale lobby.

Google said it would negotiate individual licenses with members of the alliance that cover related rights and open access to a new mobile service from the company called News Showcase.

The search giant said last year that it would pay news publishers for the first time, a change of tack from the internet giant which for years had refused to do so. The company agreed to a raft of initial deals in Germany, Australia and Brazil, and now appears to be extending that to France.

But when the Australian Government proposed a new law that would force Google and Facebook to pay news publishers for the right to link to their content, Google threatened to pull its widely used search engine from the country.

“Coupled with the unmanageable financial and operational risk if this version of the Code were to become law, it would give us no real choice but to stop making Google Search available in Australia,” Mel Silva, managing director for Google Australia and New Zealand, told a senate committee last week.

Scott Morrison, the Australian prime minister, told a press conference “we don’t respond to threats.”

— Additional reporting by CNBC’s Ryan Browne.



Technology

Apple taps hardware chief to lead new mystery project


Apple promoted its hardware chief, Dan Riccio, to a new role “focusing on a new project.” He will report to CEO Tim Cook, Apple announced on Monday.

Riccio was previously the company’s senior vice president for hardware engineering, signing off on the physical aspects and electrical engineering of Apple products, including iPhones. He’s been on Apple’s “executive team” reporting to Cook since 2012.

Apple did not mention what project Riccio would work on in its announcement.

“Next up, I’m looking forward to doing what I love most — focusing all my time and energy at Apple on creating something new and wonderful that I couldn’t be more excited about,” Riccio said in a statement.

Apple rarely discusses future products, but in recent years, the tech giant has been working on unreleased electric cars as well as virtual reality and augmented reality headsets.

John Ternus will take over for Riccio. Ternus was previously a VP at Apple, and his public profile has been growing in recent years. Last year, he was a key presenter at the company’s livestreamed launch event announcing the transition of its laptops from Intel processors to Apple’s own M1 chips.



Technology

These Brave Corporations Did What No Social Platforms Could Do, And I’m Weeping

There’s this cliché in crime movies where the ace FBI agent steps under the yellow caution tape surrounding the scene of a murder and tells the bumbling local police, “OK, boys, we’ll take it from here.”

For over a decade now, when it comes to content moderation, social media platforms have played the cop — accidentally shooting themselves in the dick with their own gun, letting the bad guys operate with impunity, doling out mere speeding tickets to Mafia capos, and barely bothering to dust the donut crumbs off themselves when law-abiding citizens come in to file a noise complaint.

Facebook, YouTube, and Twitter have failed over and over to stamp out hate groups, disinformation, and the QAnon mass delusion, allowing them to fester and metastasize into our politics and culture. The mob that stormed the Capitol was a manifestation of this failure: organized online, bloated on disinformation smoothies gavage-fed to them via “up next” sidebars, and whipped into a frenzy by the poster in chief everyone knew the mods wouldn’t ever touch. That there were some people who were immune to the platforms’ moderation was common knowledge; the companies spent years designing contorted “community standards,” endlessly writing and rewriting their content moderation guidelines, and establishing supreme courts to review, approve, and legitimize each decision.

And then the FBI stepped under that yellow tape.

In the end, it was the big-money brands that had never dirtied themselves with the thankless and dismal task of moderating posts and banning users that stepped in. Capitalism drained the fever swamp.

The right to free speech is fundamental, but it is not absolute or — crucially — free from consequences. This is something Amazon, Apple, and Google have made definitively clear in acting the way they have. Which makes it all the more lol that the platforms whose business is content have struggled for so long. No one wants the decisions about what we see online to be made by opaque corporations. But this is what happened, and where we are right now.

The companies that run the infrastructure of social media pulled out their seldom-used banhammers and swung mightily. When it became clear that Parler, a “free speech” alternative to Twitter, had been a gathering place for some who participated in the storming of the Capitol and had continued to host discussions of violent threats against politicians and tech executives, Apple quickly removed it from the App Store, and Google removed it from its Google Play storefront. The same day, Amazon terminated Parler’s cloud hosting service, effectively knocking it offline. (Parler tried to take Amazon to court, but a judge tossed its case.) Apple and Amazon aren’t social platforms — and while they do some light content moderation in places like product reviews, this is not what they do.

Other companies that are not Facebook, Google, or Twitter quickly followed suit. Fearing its rentals might be used by insurrectionists, Airbnb blocked all stays in the DC area during Joe Biden’s inauguration. It also said its political fundraising group was halting campaign donations to lawmakers who had voted against certifying the election results. Other companies did the same: AT&T, American Express, Hallmark, Nike, Blue Cross Blue Shield, Cisco, Coca-Cola, Microsoft, and dozens more. Did you catch that? Hallmark!

Financial services firms were also quick to act after the Jan. 6 riot. Stripe, the bloodless online payment processor used by many e-commerce sites, dumped the official Donald Trump website. GoFundMe banned fundraisers for travel to Trump rallies. E-commerce platforms PayPal and Shopify booted the Trump campaign and associated sites that were promoting lies about the election. Financial services companies may not be in the moderation game, but they are beholden to stockholders and their bottom line. Because of that, they acted swiftly to deal a bigger body blow to Trump’s power than Facebook and Twitter could do in a thousand disclaimers about election results.

The Great Deplatforming of 2021 that saw removals of Trump from social sites and Parler from app stores isn’t the first time financial firms have done in days the moderation that platforms failed to do for years. In December, it was revealed that Pornhub was hosting nonconsensual pornography, some of which included child sex abuse materials. Visa and Mastercard pulled their payment processing from the site. Pornhub had *for years* done an appalling job of policing its platform for such material, complaining it was difficult to eradicate. But after the credit card companies acted, it quickly did just that, removing any content not posted by a verified account.

Tech companies that aren’t social platforms have also taken sweeping steps against extremist content in the recent past. In 2019, Cloudflare, an internet infrastructure company, dropped 8chan after discovering its association with a gunman who killed 23 people at a Walmart in El Paso, Texas. In response to the violent far-right protest in Charlottesville, Virginia, in August 2017, Cloudflare stopped providing services to hate sites like the Daily Stormer. Apple Pay and PayPal have terminated their services for a number of hate groups. Squarespace booted hate sites built on it, as did GoDaddy, which would later kick the social platform Gab off its service.

It’s worth noting that these companies only seem to leap into action following high-profile violence, like a killing committed by a mob of extremists. Mere public pressure or petulant, whiny news stories don’t move the dial.


That these companies have likely spent little time considering the free speech nuances of content moderation is, uh, not ideal. There are troubling implications. Groups like the Electronic Frontier Foundation have warned against giving companies like Visa or Cloudflare too much power over what is allowed to exist on the open internet.

The Great Deplatforming was a response to a singular and extreme event: Trump’s incitement of the Capitol attack. As journalist Casey Newton pointed out in his newsletter, Platformer, it was notable how quickly the full stack of the tech companies reacted. We shouldn’t assume that Amazon will just start taking down any site because it did it this time. This was truly an unprecedented event. On the other hand, do we dare think for a moment that Bad Shit won’t keep happening? Buddy, bad things are going to happen. Worse things. Things we can’t even imagine yet!

Some of you will inevitably note that there’s a common variation on the “OK, boys, we’ll take it from here” trope: the underappreciated but smart, highly capable, and principled town sheriff intent on solving the case on their own, FBI be damned. But that fails as a metaphor here because Facebook, YouTube, TikTok, and Twitter certainly haven’t proven themselves to be highly capable when it comes to content moderation (see earlier description of shooting oneself in the dick with their own gun, repeatedly, as if they had many Hydra-like dicks that kept regrowing when shot). Twitter’s booting of various hate and misinformation peddlers this past year came after more than a decade of widespread and widely known harassment and abuse. TikTok, the newest and possibly most vital platform, hasn’t quite figured out its moderation strategy yet, and it seems to fluctuate between deleting videos that are critical of China and allowing sketchy ads. There’s something almost comical about YouTube issuing a “strike” on Trump’s account as if he’s Logan Paul in the Japanese “suicide forest.” And Facebook? Well, Facebook is Facebook.

The first six cases basically read like a greatest hits of Facebook content moderation controversies: hate speech, hate speech, hate speech, female nipples, Nazis and COVID health misinfo. https://t.co/JwByovVT1S


Long before Facebook, Twitter, and YouTube were excusing their moderation failures with lines like “there’s always more work to be done” and “if you only knew about all the stuff we remove before you see it,” Something Awful, the influential message board from the early internet, managed to create a healthy community by aggressively banning bozos. As the site’s founder, Rich “Lowtax” Kyanka, told the Outline in 2017, the big platforms might have had an easier time of it if they’d done the same thing, instead of chasing growth at any cost:

We can ban you if it’s too hot in the room, we can ban you if we had a bad day, we can ban you if our finger slips and hits the ban button. And that way people know that if they’re doing something and it’s not technically breaking any rules but they’re obviously trying to push shit as far as they can, we can still ban them. But, unlike Twitter, we actually have what’s called the Leper’s Colony, which says what they did and has their track record. Twitter just says, “You’re gone.”

That it took the events of Jan. 6 and five deaths to finally ban Trump from social platforms is, frankly, shameful, especially given the elaborate and endlessly tweaked justifications from these social sites for permitting posts that are unmistakably, conspicuously malignant. They’ve created their own wonk-filled supreme courts where the judges make six figures to do 15 hours of work per week to argue over what kind of nipples are banned. They have created incomprehensible bibles of moderation rules for throngs of underpaid, outsourced workers who are treated horribly. They’ve written manifestos about plans for “healthy conversations.” They flip-flop over whether to ban neo-Nazis or remonetize the channel for an anti-gay hate-monger. They respond to threats to democracy and public health with “the more you know”–style labels and information “hubs.” They have worked their heads so far up their asses that they’ve forgotten they can just smash that “ban” button.

Is it admirable that Amazon, Apple, et al., stepped in to do the moderation work that Facebook, YouTube, and Twitter have failed to do for so long? Not necessarily! Big yikes!

But that’s what happened. Drano works to unclog my shower, but my landlord tells me it ruins the whole pipe system. I don’t expect the plumbing system of the internet to improve; there will always be more monster turds clogging it up. Happy flushing ●


