Nearly one third of the seven million Australians who downloaded the Covidsafe app have not updated to the most recent version, as new figures show the government spend on the contact tracing app has risen to $14m.
The Covidsafe contact tracing app relies on as many people as possible running it, but new data reveals that more than two million users do not have the most up-to-date version.
In the nine months since the federal government launched the app, it has only identified 17 close contacts that were not found through manual contact tracing methods, all of them in New South Wales.
Part of the reason for the app’s lack of success in identifying contacts could be that almost one third of users are running an older version – if they are still using the app at all. Since April, a dozen software updates have been pushed out to fix bugs and security flaws, and to make the app work more effectively on iPhones when the screen is locked.
The Digital Transformation Agency – which is responsible for the development of the app – told a Senate estimates committee in response to questions on notice that as of the end of October last year, 66% of users had the most up-to-date version: 63% of iPhone users and 74% of Android users.
Guardian Australia has asked the DTA whether this figure has changed in the past few months. The Covidsafe app routinely sends push notifications telling users to “improve the performance of the app” by opening the app and ensuring the phone is connected to the internet.
Labor’s acting shadow communications minister, Tim Watts, said the government should have been urging people to update the app from the start.
“Instead they rushed the rollout because Scott Morrison wanted an announcement to beat the Apple and Google solutions and refuse to admit they bungled it,” he said.
“Seven million Australians did their bit and downloaded the app, but 2.1 million of them are currently left exposed in the sun because Scott Morrison doesn’t want to look bad politically.”
The agency was also asked in Senate estimates how many of the 7.1 million who downloaded the app initially continued to run it on their phones, but the agency claimed it did not record such information.
“At present, the DTA does not measure active users because, as required under the Privacy Impact Assessment, the DTA does not track information about which users have the app open and running, or where any users are located.”
However, the DTA does get regular analytics and usage data reports, including information on the number of users running the app. Last year DTA refused a freedom of information request from Guardian Australia for this information on the grounds it would compromise public safety.
Guardian Australia appealed the decision, and has asked DTA whether it misled the Senate by claiming it does not have the answer.
The app has so far cost $7.1m to develop and run, with the largest share of the money going to Amazon, which was paid $2.4m for professional services, development and ongoing usage fees. Technology consultancy Shine Solutions received $1.9m, while Delv received $1.7m, Cevo $1m and Boston Consulting Group $800,000.
The Department of Health also told Senate estimates that it has spent $6.9m on advertising the app, taking the total cost of the app to $14m.
When the app was launched, prime minister Scott Morrison referred to the app as a form of “sunscreen” necessary to have in place before easing restrictions. However, contact tracing apps have had limited success around the world. Many jurisdictions, including in Australia, have adopted QR code venue check-ins as a better way to keep track of where people have been.
Many tech experts have argued that the Australian government should switch to an Apple-Google exposure notification framework to fix issues with Covidsafe, but the government has so far resisted.
Government services minister Stuart Robert said the government was concerned the Apple/Google version would bypass contact tracers, because the app’s default is to alert close contacts directly.
The framework can, however, be modified to allow the contact details of users – with their consent – to be provided to contact tracers in the event they’re identified as a close contact.
The BBC reported earlier this week that only about two in five people in Scotland who tested positive to Covid-19 and had the Protect Scotland app – which uses the Apple/Google exposure notification framework – had chosen to alert their close contacts through the app.
Google investigating AI researcher, AWU concerned
A newly formed union of Google employees, known as the Alphabet Workers Union, said it is concerned by Google’s decision to lock Margaret Mitchell, a senior AI ethics researcher, out of her corporate account.
Google locked Mitchell out of her account after it found she was downloading material related to Timnit Gebru, another AI ethics researcher who was forced to leave the company early last month.
The news was first reported Wednesday by Axios, which said Google was investigating Mitchell’s recent actions. Mitchell was reportedly using automated scripts to look through her messages to find examples of discriminatory treatment of Gebru before she was locked out of her account.
“The Alphabet Workers Union (AWU) is concerned by the suspension of the corporate access of Margaret Mitchell, AWU member and lead of the Ethical AI team,” the union wrote in a statement. “This suspension comes on the heels of Google’s firing of former co-lead Timnit Gebru; together these are an attack on the people who are trying to make Google’s technology more ethical.”
Google did not immediately respond to a CNBC request for comment, but a spokesperson told Axios: “Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered.”
They added: “In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.”
Gebru, a well-known artificial intelligence researcher and technical co-lead of Google’s Ethical AI team, tweeted on Dec. 3 that Google fired her over a disagreement about a research paper that scrutinized bias in artificial intelligence. The researcher, who had been outspoken about the company’s treatment of Black employees, claimed the treatment was indicative of a broader pattern at Google. It led to a wave of support from across the industry, including a petition signed by thousands of Google employees and industry peers.
Alphabet CEO Sundar Pichai emailed employees, apologizing for distrust sown in the company and the industry amid Gebru’s departure, while pledging the company would launch a “review” of what went wrong.
Roughly a week later, Google’s Ethical AI team sent Google executives a list of demands to “rebuild trust” following Gebru’s removal from the company.
The team, which says it advises on research, product and policy, wrote a six-page letter to Pichai, AI chief Jeff Dean and engineering vice president Megan Kacholia. The letter, titled “The Future of Ethical AI at Google Research” and seen by CNBC, lists demands of executives, including removing Kacholia from the group’s reporting structure, abstaining from retaliation, and reinstating Gebru at a higher level.
Mitchell founded Google’s Ethical AI team and is one of its co-leads. The AWU described her as a “critical member” of academic and industry communities working on the ethical production of AI. She has been with Google for just over four years and is based in Seattle, according to LinkedIn.
“Regardless of the outcome of the company’s investigation, the ongoing targeting of leaders in this organization calls into question Google’s commitment to ethics — in AI and in their business practices,” said the AWU. “Many members of the ethical AI team are AWU members and the membership of our union recognizes the crucial work that they do and stands in solidarity with them in this moment.”
Referring to Google’s statement to Axios, the AWU said it marked a “notable departure from Google’s typical practice of refusing to comment on personnel matters.”
The AWU announced its launch on Jan. 4. Executive Chair Parul Koul and Vice Chair Chewy Shaw co-authored a piece in The New York Times titled: “We built Google. This is not the company we want to work for.”
It took its first public stance on Jan. 7, calling on YouTube executives to take stronger action against former President Donald Trump.
The union criticized Google-owned YouTube for not banning Trump’s account from the platform after the pro-Trump riots in Washington, D.C., which resulted in several deaths and scores of injuries. The group called the company’s decision to reactively remove his videos “lackluster” and said the company should ban his account.
— Additional reporting by CNBC’s Jennifer Elias.