Hard questions for policy-makers about digital contact tracing
Canada has a reputation as a second- or third-mover in many policy arenas, to the frustration of some and the admiration of others. When it comes to COVID-19 digital contact tracing, it would be helpful to hold the line on that approach. But if Canada does want to prioritize any work on digital contact tracing, it should be the public policy work needed to guide its potential use, which is much more challenging than the technical mechanics of any app.
This piece lays out some of the fundamental questions that governments should be answering when designing this type of policy. Ultimately, no matter what the final decision is about the use of digital contact tracing, it shouldn’t be confused with the political decisions and investments needed to prioritize public health. Apps don’t do that.
History and context
News reports have revealed that some Canadian governments are considering deploying digital contact-tracing apps, even though these apps have not yet proven significantly effective in any of the several countries that have used them. These apps are intended to complement the established, effective practice of manual contact tracing that public health institutions use to identify, test and, where applicable, treat patients. The theory is that if health authorities have the information and capacity to respond quickly to potential new cases, it could help us safely manage the re-emergence from lockdown.
Before we go too far in exploring the nuances of any related policy, though, let’s be clear about the basics:
there are no successful examples of re-emergence from lockdown without measurable increases in infections — with or without contact-tracing technologies;
there are potentially large amounts of asymptomatic transmission — meaning that no contact-tracing system can work without strong limits on movement;
the strongest known predictors of COVID-19 deaths are health-system capacity, protective gear and worker safety — and a number of health systems report shortages or vulnerabilities on all three fronts; and
every large-scale deployer of contact-tracing technologies — from Singapore to South Dakota, Iceland to Australia — has said that the technology hasn’t made much difference.
That’s the backdrop for what we face today: the dangerous political balancing act of running an open society under pandemic lockdown. That backdrop demands policy focus on the substantive decisions and trade-offs to be made. This is the pressing policy work.
A defensible policy approach for digital contact-tracing apps will openly and publicly ask and answer questions about the proper conditions for their use, and the cessation of their use. At a high level, these include: What do we want use of the app to achieve? How will we know when it’s working? How will we know when it’s being abused? How will governments make sure compromises made during extraordinary circumstances don’t become normalized when they’re over? And how will governments use the information gleaned from a health crisis to shape access to the economy and social support programs? Two particular policy lenses that help further frame these considerations in more detail are efficacy and equity.
Questions of efficacy
Digital contact-tracing technology has been deployed in several countries to limited, if any, effect. In places like South Korea, Iceland and Singapore — the early adopters of digital contact tracing — the leaders of those programs have suggested that the technology hasn’t been particularly helpful. More importantly, Hong Kong is the only place to have returned to public life without a significant increase in infections or a return to lockdown. In context, it’s clear that the technology is experimental and marginally useful at best.
Canada’s chief public health officer, Dr. Theresa Tam, has, encouragingly, raised questions about the efficacy of digital contact tracing, citing false positives as one of several potential problems: for example, “when you just happen to, you know, maybe drive by, swing by, pass someone and suddenly your phone goes bing. Those kinds of characteristics (are) not what you want to have, because that would alarm a whole bunch of people. They may then show up to be tested when in fact they were not at risk.”
If an app were to be used to alert people to a possible exposure to COVID-19, policy questions about efficacy would include, first and foremost: Does this tool work? What is the track record of this tool elsewhere? What information does the app contribute to the public health response that might justify its deployment? Does the public health-care system have the capacity to support it? Does it have adequate testing available? Does it have the capacity, across a range of support types, to treat an influx of new potential but unconfirmed patients? Whose pre-existing health care needs might be compromised by growing waves of requests from unconfirmed patients? How does the app affect other public health communications and guidelines? Does it create a false sense of confidence in the public? How does its use affect our understanding of disease transmission and research? Are we over-indexing on those needs or hopes by assuming that granular, highly sensitive data collection will resolve bigger problems?
Questions of equity
Then there is the matter of the known harms and disparities in technology adoption and use among marginalized groups. Policy questions related to equity include: If major outbreaks are known to be occurring in settings such as long-term care homes, factories and prisons, what do contact-tracing apps add to the public health response? Who has access to smartphone technology and the requisite digital literacy to use apps such as these? How will public authorities compensate for these disparities so they don’t repeat the mistakes of other governments, which assumed that marginalized groups would have the same access, rates of infection or means of recovery?
Racialized and low-income communities continue to suffer the impacts of historical oppression, which place them at higher risk from COVID-19. What, then, is the government’s rationale for using public resources to invest in a tool that may exclude them from the start? Or one that may increase their risk, or expose them to further harm in the future, by attaching the tool to a different use, such as deconfinement?
We have seen socio-economic divides play out in a wide range of COVID-19 responses. As such, details about government plans should be publicly available to make sure that data-driven approaches don’t replicate failures that have plagued public-interest technologies for years, such as reducing access to public services by gatekeeping them behind digital tools.
What role do governments expect the app, and the risk scores or notifications it creates, to play in their larger decision-making around the pandemic? What future uses do they see for these apps, and how would any change to these uses be agreed upon? This is also why it’s critical not to treat apps as stand-alone objects, but rather as infrastructures that can support multiple uses.
Are political leaders, who are clearly under pressure to “reopen the economy,” ready to set hard numerical targets that govern the deployment of this technology, as well as how and when to de-escalate or withdraw it? Workers, rather than sick people, may ultimately become the target users of these (unreliable) apps as mechanisms to support their return to work, potentially increasing their exposure to the disease through the use of a novel technology with known efficacy problems.
One of the main reasons we need rigour in the development of policy that defines how governments make large, sweeping decisions — like when to lock down, test and trace people — is to ensure that we’re framing response efforts around the public interest, not the imperfect information we get through any single source. Contact-tracing apps are designed to inform public health authorities about the presence of the virus; those authorities are then supposed to make clear, defensible decisions about when and why to take rights-infringing measures (such as forcing people back to — or out of — work, food or community). We don’t want those decisions to be framed by the limits of any app, let alone a totally untested one. We know apps introduce bias through uneven adoption: a number of civic technologies have shown that they tend to amplify the voices of those who are already connected. Here, those digital divides are not only matters of equity; they are matters of mortality.
It was less than encouraging, for example, to hear Prime Minister Justin Trudeau share the following comments on digital contact-tracing apps last month: “We have a number of proposals and companies working on different models that might be applicable to Canada. But as we move forward on taking decisions, we’re going to keep in mind that Canadians put a very high value on their privacy, on their data security.” The troubling implication of this statement was the narrowness of its scope: that the deployment of these apps rests solely on issues of privacy and security. Within such a narrow frame, one can imagine growing public demand for these apps from those who aren’t at risk of being harmed by them, and without adequate public knowledge of their efficacy. Instead, we should be having a much fuller and richer public discussion about efficacy, rights, other uses for these technologies, and more.
In closing
It’s critically important for the policy community to do what it does best in this challenging context: bring a broad range of expertise and perspectives to defining, prioritizing and achieving public-interest health outcomes. If technologies present a credible opportunity, then we should be able to explain that credibility and that opportunity. Once the goalposts are clear, then we have to get to the even harder work of understanding how different implementations and follow-on supports affect different communities, and how to plan for escalation and de-escalation if the decision is made to deploy these apps. The goal is not a good tech deployment; the goal is a healthy, free public.
Most importantly, Canada can learn from first-mover countries about how not to deploy digital contact tracing. It should not try to work out the technology options before it has answers to questions about policy options. Even though there are likely ample offers from tech vendors to support contact tracing with their tools, it is the government’s leadership on policy that is urgently required. Technology and data can and will be put to many good ends in this pandemic, but contact-tracing apps don’t have to be part of it. Their use is not inevitable, though some are making it seem so.
Canada prides itself on being part of the open government movement. This should involve being open about its thinking and decision-making processes along the way through this pandemic, including milestones and decision points related to any use of these apps, should they be deployed. By taking the time to document a full policy approach to the potential use of these apps prior to any decision about deployment, and making it public for scrutiny, governments can lean into the ongoing development of public trust — trust they cannot afford to lose as they continue to adapt to ever-changing information about the disease and a raft of other unknowns.