By AI Trends Staff
Efforts in contact tracing to try to control the spread of the Covid-19 virus had been going on before Google and Apple in early April announced their partnership on contact tracing technology. However, the two tech giants have proposed a way to share data while keeping user privacy central to the design.
Recent news out of Singapore suggests where this is likely to go: toward the surveillance state.
The pursuit of effective contact tracing embodies a confluence of issues around AI and surveillance, data privacy and public safety, and the roles of government and industry.
Most contact tracing apps installed on smartphones use Bluetooth radio technology to record when other phones running the same app are detected nearby. When a user shows symptoms or tests positive for Covid-19, alerts can be sent to everyone who was in proximity over the previous week or two, along with suggestions for how to respond. The thinking is that this will help people get back to work.
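The decentralized flow described above, and the on-device matching Apple and Google require, can be sketched roughly as follows. This is an illustrative simplification, not the actual Apple/Google protocol; all class and method names here are assumptions for the sake of the example.

```python
import os
import time

WINDOW_SECONDS = 14 * 86400  # alerts cover roughly the previous week or two


class Phone:
    """Hypothetical sketch of one handset in a decentralized tracing scheme."""

    def __init__(self):
        self.my_ids = []   # random identifiers this phone has broadcast
        self.seen = []     # (identifier, timestamp) pairs heard over Bluetooth

    def new_broadcast_id(self):
        # Each phone periodically broadcasts a fresh random identifier,
        # so observers cannot track one device over time.
        rid = os.urandom(16).hex()
        self.my_ids.append(rid)
        return rid

    def record_contact(self, rid, ts):
        # Identifiers heard nearby are stored only on this device.
        self.seen.append((rid, ts))

    def check_exposure(self, published_ids, now):
        # After a positive test, the patient's identifiers are published;
        # each phone matches against them locally, so no central server
        # ever learns who was in contact with whom.
        cutoff = now - WINDOW_SECONDS
        return any(rid in published_ids and ts >= cutoff
                   for rid, ts in self.seen)


# Two phones pass each other:
alice, bob = Phone(), Phone()
now = time.time()
bob.record_contact(alice.new_broadcast_id(), now)

# Alice tests positive and publishes her identifiers; Bob matches locally.
print(bob.check_exposure(set(alice.my_ids), now))  # True
```

Because the match happens on Bob's phone against a published list, nothing in this sketch requires a central database of contacts, which is the core of the privacy design described below.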
In an announcement on April 10, Apple and Google proposed strict privacy rules around contact tracing technology. Data would be held only on devices, with no central data repository or tools, according to an account in Forbes. The tech giants have also banned location pings based on the tracing information.
Health services in the UK, France and elsewhere believe this more protective approach prevents analysis of infection hotspots and rates. Here the conflicting desires to guard data privacy and to stem the spread of the virus play out.
Data modelers have said up to 80% of smartphone owners in any country need to be sharing data for the tracing technology to work. This is an extremely unlikely rate of adoption.
Singapore in March launched its TraceTogether Bluetooth app, which has reached 20% to 25% of the population, not close to enough. For the app to be effective, 75% of the country's 5.5 million people would need to sign up.
In response, Singapore on May 12 launched SafeEntry, which will be required of residents who want to visit workplaces, schools, stores, hotels, and healthcare facilities. Those who fail to check in, or to check in visitors, face penalties. Check-in will be by either a national form of ID or a QR code on the smartphone. Recorded SafeEntry data will include names, IDs, phone numbers, and times of entry to and exit from locations.
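The contrast with the on-device Bluetooth model is the shape of the record a system like SafeEntry keeps centrally. The sketch below is purely illustrative; the field names and example values are assumptions, not SafeEntry's actual schema.

```python
from dataclasses import dataclass


@dataclass
class CheckInRecord:
    """Hypothetical centralized check-in record of the kind the article
    describes: identity, contact details, and location entry/exit times."""
    name: str
    national_id: str   # national ID, or identity linked to a smartphone QR code
    phone_number: str
    location: str
    entry_time: str    # ISO 8601 timestamps (illustrative choice)
    exit_time: str


# Example (all values invented for illustration):
rec = CheckInRecord(
    name="Jane Tan",
    national_id="S1234567A",
    phone_number="+65 8000 0000",
    location="Downtown office tower",
    entry_time="2020-05-12T09:00:00+08:00",
    exit_time="2020-05-12T18:00:00+08:00",
)
print(rec.location)
```

Unlike the anonymous rotating identifiers in the decentralized scheme, every field here is directly identifying and held by the state, which is exactly the trade-off the article describes.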
This is effective contact tracing, but it trades off personal privacy and, some would suggest, implements the surveillance state.
The COVIDSafe tracing app in Australia, similar to Singapore’s TraceTogether, is hosting its data on Amazon AWS, leading to fears the data repository could be accessed by the US government. Australia has six million users of COVIDSafe; the goal is 10 million, 40% of the population.
Private Enterprise Sees Opportunity, Government Pursues Oversight
Private firms with AI expertise are scrambling to help out with contact tracing, perhaps altruistically and perhaps to make a buck, or both.
One of them is Clearview AI, which has received attention for its technology that scrapes images from social media sites and makes them available to facial recognition systems used by law enforcement agencies. CEO Hoan Ton-That acknowledged recently that his company is in discussions with federal and state agencies to help with contact tracing to stem the virus spread, according to an account from NBCNews.com.
“We mainly serve government and law enforcement, and we’ve seen more demand for solutions around coronavirus and how to help things like tracing people who may have had the virus,” Ton-That stated. “What we understand now is we’re in the stage where if we are to open up the economy in a way that’s safe for everybody, that we need to be able to test quickly and also trace the people who have been infected and find out who they’ve been in contact with.”
It sounds good, but it raised a red flag for Sen. Edward Markey, D-Massachusetts, who sent a letter asking that Clearview AI reveal which state and federal agencies it was in discussions with about contact tracing, and outline any role the company sees itself playing in such a program.
“Clearview has failed to demonstrate that it can be trusted to protect Americans’ privacy,” stated Markey, a member of the Senate Commerce, Science and Transportation Committee, in a statement. “I’m concerned that if this company becomes involved in our nation’s response to the coronavirus pandemic, its invasive technology will become normalized, and that could spell the end of our ability to move anonymously and freely in public.”
Clearview is formulating a response.
For Contact Tracing to Work, the Public Must Trust AI
If the public is willing to sacrifice some privacy, industry must make a commitment to ethical AI, suggests Simon Greenman, co-founder and partner in Best Practice AI, consultants who work with businesses seeking a competitive advantage from AI.
Writing recently for the World Economic Forum, Greenman stated, “Where privacy is curtailed, it’s important that all dimensions of AI ethics are considered to maintain public trust in its use over the medium to long-term. If organizations hope to ensure the public’s continued participation, they must ensure the data being willingly offered in the spirit of offering a social good is treated with the utmost responsibility.”
A trustworthy AI governance architecture to support contact tracing would include: time limits, with data deleted after a set time; use limits, so that use of the personal data is restricted to contact tracing or quarantine enforcement; fairness, so that all citizens are treated equally, especially important since certain minorities might be disproportionately affected by the virus; transparency, in the types of personal data in use and for what purpose; accountability, to named figures in the government or agencies; and oversight, to ensure AI is implemented responsibly.
Greenman added, “Appropriate ethical AI architecture can ensure that we leverage the best that AI can offer to the present situation without exploiting an anxious public’s desire to find fast solutions. Good AI governance was needed long before COVID-19 arrived. Now, it’s that much more critical.”