Police seize on COVID-19 tech to expand global surveillance

Majd Ramlawi was serving coffee in Jerusalem’s Old City when a chilling text message appeared on his phone.

“You have been spotted as having participated in acts of violence in the Al-Aqsa Mosque,” it read in Arabic. “We will hold you accountable.”

Ramlawi, then 19, was among hundreds of people who civil rights attorneys estimate got the text last year, at the height of one of the most turbulent recent periods in the Holy Land. Many, including Ramlawi, say they only lived or worked in the neighbourhood, and had nothing to do with the unrest.

What he didn’t know was that the feared internal security agency, the Shin Bet, was using mass surveillance technology mobilised for coronavirus contact tracing, against Israeli residents and citizens for purposes entirely unrelated to COVID-19.

In the pandemic’s bewildering early days, millions worldwide believed government officials who said they needed confidential data for new tech tools that could help stop coronavirus’ spread. In return, governments got a firehose of individuals’ private health details, photographs that captured their facial measurements and their home addresses.

Now, from Beijing to Jerusalem to Hyderabad, India, and Perth, Australia, The Associated Press has found that authorities used these technologies and data to halt travel for activists and ordinary people, harass marginalised communities and link people’s health information to other surveillance and law enforcement tools.

In some cases, data were shared with spy agencies.

The issue has taken on fresh urgency almost three years into the pandemic as China’s ultra-strict zero-COVID policies recently ignited the sharpest public rebuke of the country’s authoritarian leadership since the pro-democracy protests in Tiananmen Square in 1989.

For more than a year, AP journalists interviewed sources and pored over thousands of documents to trace how technologies marketed to “flatten the curve” were put to other uses. Just as the balance between privacy and national security shifted after the September 11, 2001 terrorist attacks, COVID-19 has given officials justification to embed tracking tools in society that have lasted long after lockdowns.

“Any intervention that increases state power to monitor individuals has a long tail and is a ratcheting system,” said John Scott-Railton, a senior researcher at Citizen Lab, the Toronto-based Internet watchdog. “Once you get it, it is very unlikely it will ever go away.”

Code red

In China, the last major country in the world to enforce strict COVID-19 lockdowns, citizens have been required to install cellphone apps to move about freely in most cities. Drawing from telecommunications data and PCR test results, the apps produce individual QR codes that change from green to yellow or red, depending on a person’s health status.

The apps and lockdowns are part of China’s sweeping pandemic prevention policies that have pushed the public to a breaking point.

Over the past few years, Chinese citizens have needed a green code to board domestic flights or trains, and in some cities even to enter the supermarket or to get on to a bus. If they were found to have been in close contact with someone who tested positive for COVID-19, or if the government imposed a local quarantine, the code would turn red, and they were stuck at home.

There’s evidence that the health codes have been used to stifle dissent.

In early September, former wealth manager Yang Jiahao bought a train ticket to Beijing, where he planned to lodge various complaints with the central government.

The night before, a woman he described as a handler invited him to dinner. Handlers are usually hired by state security as part of “stability maintenance” operations and can require people to meet or travel when authorities worry they could cause trouble.

Yang had a meal with the handler, and the next morning Guangzhou health authorities reported a COVID-19 case less than a kilometre from where they dined, he said.

Based on city regulations, Yang’s code should have turned yellow, requiring him to take a few COVID tests to show he was negative.

Instead, the app turned red, even though tests showed that he didn’t have COVID. Yang was ordered to quarantine and a paper seal was placed on his door.

“They can do whatever they want,” he said.

An officer at the Huangcun station of the Guangzhou police referred comment to city-level authorities on Yang’s case, saying he required proof that the caller was from the AP. Guangzhou’s Public Security Bureau and the city’s Center for Disease Control and Prevention did not respond to faxed requests for comment.

In another show of how the apps can control lives, a group of bank customers was effectively corralled by the health codes in June when they tried travelling to Zhengzhou, the capital of Henan province, to protest being unable to access their online bank accounts.

A notice said the problem was due to a system upgrade. But the customers soon found out the real reason: a police investigation into stockholders in the parent bank had rendered CN¥40 billion in funds inaccessible, according to local media reports. Frustrated after months of complaints, a group of customers decided to hold a protest in Zhengzhou at the provincial banking commission.

Customer Xu Zhihao uploaded his itinerary to get the Henan province health code after he tested negative for COVID-19 in the coastal city of Tianjin, southeast of Beijing. As he got off the train in Zhengzhou, Xu was asked to scan his QR code at the station, and it immediately turned red. A train station employee called security and took him to a police booth.

Xu said police took him to the basement to quarantine. Three other people joined him, and all four realised that they had come to get their money back.

“They had set the net in place, waiting for us,” Xu said.

From a group chat, Xu and others learned that many protesters had met a similar fate, at the high-speed rail station, at the airport and even on the highway. A government inquiry later found that red codes were given to 1,317 people, many of whom had planned to protest.

China’s National Health Commission, which has led the COVID response, did not reply to a fax requesting comment. The Henan provincial government did not respond either.

In February, police in northeastern Heilongjiang province sought to upgrade their local health code so they could search PCR test results for anyone in China, in real time, according to procurement documents provided exclusively by ChinaFile, a digital magazine published by the Asia Society.

A company whose parent is government-owned won the non-competitive bid to connect that app to a national database of PCR data run by the State Council, China’s Cabinet, fulfilling a national directive, the documents show. The same company, Beijing Beiming Digital Technology, also claims on its website that it has developed more than 30 pandemic apps.

“It’s the governance model, the philosophy behind it is to strengthen social control through technology. It’s strengthened by the health app, and it’s definitely going to stay after COVID is over,” said Yaqiu Wang, a senior researcher with Human Rights Watch. “I think it’s very, very powerful.”

‘360-degree surveillance’

Technologies designed to combat COVID-19 were redirected by law enforcement and intelligence services in other democracies as governments expanded their digital arsenals amid the pandemic.

In India, facial recognition and artificial intelligence technology exploded after Prime Minister Narendra Modi’s right-wing Hindu nationalist Bharatiya Janata Party swept into power in 2014, becoming a tool for police to monitor mass gatherings. The country is seeking to build what will be among the world’s largest facial recognition networks.

As the pandemic took hold in early 2020, state and central governments tasked local police with enforcing mask mandates. In some places, fines of up to US$25 were introduced, as much as 12 days’ pay for some labourers and unaffordable for the nearly 230 million people estimated to be living in poverty in India.

In the south-central city of Hyderabad, police started taking pictures of people flouting the mask mandate or simply wearing masks haphazardly.

Police Commissioner C.V. Anand said the city has spent hundreds of millions of dollars in recent years on patrol vehicles, CCTV cameras, facial recognition and geo-tracking applications and several hundred facial recognition cameras, among other technologies powered by algorithms or machine learning.

Inside Hyderabad’s command and control centre, officers showed an AP reporter how they run CCTV camera footage through facial recognition software that scans images against a database of offenders.

“When (companies) decide to invest in a city, they first look at the law-and-order situation,” Anand said, defending the use of such tools as absolutely necessary. “People here are aware of what the technologies can do, and there is wholesome support for it.”

By May 2020, the police chief of Telangana state had tweeted about his department rolling out AI-based software that uses CCTV to zero in on people not wearing masks. The tweet included photos of the software overlaying coloured rectangles on the maskless faces of unsuspecting locals.

More than a year later, police tweeted images of themselves using hand-held tablets to scan people’s faces using facial recognition software, according to a post from the official Twitter handle of the station house officer in the Amberpet neighbourhood.

Police said the tablets, which can take ordinary photographs or link them to a facial recognition database of criminals, were a useful way for officers to catch and fine mask offenders.

SQ Masood, a social activist who has led government transparency campaigns in Hyderabad, sees more at stake. Masood and his father-in-law were seemingly stopped at random by police in Shahran market, a predominantly Muslim area, during a COVID-19 surge last year. Masood said officers told him to remove his mask so they could photograph him with a tablet.

“I told them I won’t remove my mask. They then asked me why not, and I told them I will not remove my mask.” He said they photographed him with it in place. Back home, Masood went from bewildered to anxious: Where and how was this photo to be used? Would it be added to the police’s facial recognition database?

Now he’s suing in the Telangana High Court to find out why his photo was taken and to limit the widespread use of facial recognition. His case could set the tone for India’s growing ambition to combine emerging technology with law enforcement in the world’s largest democracy, experts said.

India lacks a data protection law and even existing proposals won’t regulate surveillance technologies if they become law, said Apar Gupta, executive director of the New Delhi-based Internet Freedom Foundation, which is helping to represent Masood.

Police responded to Masood’s lawsuit and denied using facial recognition in his case, saying that his photograph was not scanned against any database and that facial recognition is only used during the investigation of a crime or suspected crime, when it can be run against CCTV footage.

In two separate AP interviews, local police demonstrated both how the TSCOP app carried by police on the street can compare a person’s photograph to a facial recognition database of criminals, and how from the Command and Control Center police can use facial recognition analysis to compare stored mugshots of criminals to video gathered from CCTV cameras.

Masood’s lawyers are working on a response and awaiting a hearing date.

‘The new normal’

What use will ultimately be made of the data collected and tools developed during the height of the pandemic remains an open question. But recent uses in Australia and the United States may offer a glimpse.

During two years of strict border controls, Australia’s conservative former Prime Minister Scott Morrison took the extraordinary step of appointing himself minister of five departments, including the Department of Health.

Authorities introduced both national and state-level apps to notify people when they had been in the vicinity of someone who tested positive for the virus.

But the apps were also used in other ways. Australia’s intelligence agencies were caught “incidentally” collecting data from the national COVIDSafe app. News of the breach surfaced in a November 2020 report by the Inspector-General of Intelligence and Security, which said there was no evidence that the data was decrypted, accessed or used. The national app was cancelled in August by a new administration as a waste of money: it had identified only two positive COVID-19 cases that wouldn’t have been found otherwise.

At the local level, people used apps to tap their phones against a site’s QR code, logging their individual ID so that if a COVID-19 outbreak occurred, they could be contacted.

The data was sometimes used for other purposes. Australian law enforcement co-opted the state-level QR check-in data as a sort of electronic dragnet to investigate crimes.

After biker gang boss Nick Martin was shot and killed at a speedway in Perth, police accessed QR code check-in data from the health apps of 2,439 drag racing fans who attended the December 2020 race. It included names, phone numbers and arrival times.

Police accessed the information despite Western Australia Premier Mark McGowan’s promise on Facebook that the COVID-related data would only be accessible to contact-tracing personnel at the Department of Health. The murder was eventually solved using entirely traditional policing tactics, including footprint matching, cellphone tracking and ultimately a confession.

Western Australia police didn’t respond to requests for comment. Queensland and Victoria law enforcement also sought the public’s QR check-in data in connection with investigations. Police in both states did not address AP questions regarding why they sought the data, and lawmakers in Queensland and Victoria have since tightened the rules on police access to QR check-in information.

In the United States, which relied on a hodge-podge of state and local quarantine orders to ensure compliance with COVID rules, the federal government took the opportunity to build out its surveillance toolkit. That included two contracts in 2020 worth US$24.9 million with the data-mining and surveillance company Palantir Technologies Inc to support the US Department of Health and Human Services’ pandemic response.

Documents obtained by the immigrant rights group Just Futures Law under the Freedom of Information Act and shared with the AP showed that US federal officials contemplated how to share data that went far beyond COVID-19.

The possibilities included integrating “identifiable patient data”, such as mental health, substance use and behavioural health information from group homes, shelters, jails, detox facilities and schools.

The US Centers for Disease Control and Prevention does not use any of that individual-level information in the platform the CDC now manages, said Kevin Griffis, a department spokesman. Griffis said he could not comment on discussions that occurred under the previous administration.

The protocols appeared to lack information safeguards or usage restrictions, said Paromita Shah, Just Futures Law’s executive director.

“What the pandemic did was blow up an industry of mass collection of biometric and biographical data,” Shah said. “So, few things were off the table.”

Last year, the CDC purchased detailed cellphone location data revealing people’s daily whereabouts, nationwide. “Mobility insights” data from at least 20 million devices could be used to “project how much worse things would have been without the bans,” such as stay-at-home orders and business closures, according to a July 2021 contract obtained by the non-profit group Tech Inquiry and shared with the AP.

The contract shows data broker Cuebiq provided a “device ID”, which typically ties information to individual cell phones. The CDC also could use the information to examine the effect of closing borders, an emergency measure ordered by the Trump administration and continued by US President Joe Biden, despite top scientists’ objections that there was no evidence the action would slow the coronavirus.

CDC spokeswoman Kristen Nordlund said the agency acquired aggregated, anonymous data with extensive privacy protections for public health research, but did not address questions about whether the agency was still using the data. Cuebiq did not immediately respond to a request for comment.

For Scott-Railton, that sets a dangerous precedent.

“What COVID did was accelerate state use of these tools and that data and normalise it, so it fit a narrative about there being a public benefit,” he said. “Now the question is, are we going to be capable of having a reckoning around the use of this data, or is this the new normal?”

AP
