How COVID-19 Has Forced Us to Adopt a Long-Term Perspective on a Short-Term Solution
When challenging and unprecedented situations occur, the initial reaction is born out of survival instinct. These short-term solutions often sideline a calculated set of long-term considerations. In the battle against COVID-19, technology and AI-driven solutions built to help reduce the virus's spread have proliferated rapidly. However, this reactive approach warrants a broader ethical debate.
Though privacy and ethics in AI-driven advancements are not new subjects, the dialogue has recently intensified around contact tracing. In an effort to reduce the spread of the virus, technologies that provide contact-tracing capabilities have proliferated on a global scale, most notably those deployed via mobile phones. Countries have taken a range of approaches, many shaped by their stance on the ethics of data privacy. For example, Singapore's TraceTogether uses the Bluetooth Received Signal Strength Indicator (RSSI) to record interactions between individuals who have the application installed on their mobile devices, so that the country's Ministry of Health can retrace 14 days of activity should an individual contract COVID-19. However, the application's success depends on adoption by roughly 75% of the nation's population; with uptake currently at only 17%, it delivers inadequate effectiveness while still infringing on citizens' privacy rights.
Singapore is not alone: other countries, including Germany, Italy, and Israel, are exploring similar uses of AI in contact-tracing applications. Many of the European efforts are using the recently established Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative as a guideline during application development. PEPP-PT aims to standardize the structure of contact-tracing applications by taking a centralized approach to data collection while limiting the normalization and overuse of location-tracking tools during the pandemic.
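To make the mechanism concrete, the sketch below shows, in simplified form, how a Bluetooth-RSSI-based app might flag close contacts within a 14-day window. The identifiers, thresholds, and function names are hypothetical illustrations under stated assumptions, not TraceTogether's actual protocol.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical thresholds for illustration; real apps calibrate these per device model.
RSSI_CLOSE_CONTACT_DBM = -65           # a stronger (less negative) reading suggests closer proximity
MIN_CONTACT_DURATION = timedelta(minutes=15)
RETENTION_WINDOW = timedelta(days=14)  # mirrors the 14-day retracing window described above

@dataclass
class BleSighting:
    peer_id: str        # rotating anonymous identifier broadcast by the other device
    rssi_dbm: int       # received signal strength of that broadcast
    seen_at: datetime

def close_contacts(sightings: list[BleSighting], now: datetime) -> set[str]:
    """Return peer IDs seen at close range for a sustained period within the retention window."""
    recent = [s for s in sightings
              if now - s.seen_at <= RETENTION_WINDOW and s.rssi_dbm >= RSSI_CLOSE_CONTACT_DBM]
    # Track the first and last time each peer was sighted at close range.
    seen_span: dict[str, tuple[datetime, datetime]] = {}
    for s in recent:
        start, end = seen_span.get(s.peer_id, (s.seen_at, s.seen_at))
        seen_span[s.peer_id] = (min(start, s.seen_at), max(end, s.seen_at))
    return {peer for peer, (start, end) in seen_span.items()
            if end - start >= MIN_CONTACT_DURATION}
```

The key design point is that only rotating anonymous identifiers, signal strengths, and timestamps are stored; no location data is required for this style of proximity tracing.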
This raises the question: where do we draw the proverbial line between safeguarding the public and surveilling the public? How can we reap the benefits that AI technologies provide in predicting transmission patterns while ensuring the input data is obtained consensually and used only for specified purposes?
To answer these questions, we must reinforce the objective, or specified purpose: maintaining public safety and global health. With this objective in mind, we must establish a set of controls and parameters around the collection and use of input data. These include:
- Defined Retention Periods. Data collected as part of COVID-19 contact-tracing efforts should be used and retained only within the context of the pandemic. Once the pandemic is over and public health concerns return to the status quo, the collected data no longer needs to be kept. Establishing set retention periods, and communicating them via privacy policies, is imperative to building trust and transparency with the general public (a minimal sketch of an automated purge routine follows this list).
- Documented Handling Procedures. As with retention periods, the purpose of and procedures for obtaining personal data must be clearly stated. This should be supported by a detailed map or data flow of the data's lifecycle to ensure it is used only by the necessary parties and only for relevant purposes. The primary objective of maintaining public health must bound any further collection or use of the data.
- Effective Governance. Whether the approach taken is centralized or decentralized, privacy must be governed and monitored for the benefit of global citizens. This extends to providing transparency around the first two controls, at both the organizational and the federal or national government level. Though technical advancements are integral to flattening the curve, human assistance and expertise must support any machine solutions developed. Human intervention as part of AI governance will play an integral role in ensuring that the line between safety and surveillance is not crossed.
- Equality as a Cornerstone. All data inputs considered within the application's algorithm must be treated equally, with no bias based on socio-economic, demographic, or personal attributes. Though already viewed as a cornerstone of AI governance, this becomes even more important in a crisis scenario marked by unknown variables and rapid development that may lack ethical oversight.
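As a companion to the retention-period control above, the following sketch shows one way a scheduled job might purge contact-tracing records once they age past the communicated retention period. The table name, column, and retention value are hypothetical placeholders; any real policy would be driven by the published privacy notice, not hard-coded values.

```python
import sqlite3
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention period and schema, for illustration only.
RETENTION_PERIOD = timedelta(days=25)   # e.g. the 14-day tracing window plus case-processing time

def purge_expired_contacts(db_path: str, now: Optional[datetime] = None) -> int:
    """Delete contact-tracing records older than the communicated retention period."""
    now = now or datetime.now(timezone.utc)
    cutoff = (now - RETENTION_PERIOD).isoformat()
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            "DELETE FROM contact_events WHERE recorded_at < ?", (cutoff,)
        )
        return cursor.rowcount   # number of rows purged, useful for an audit log
```

Logging how many records were purged, and when, gives auditors evidence that the stated retention policy is actually being enforced.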
The following recommendations may appear to be at odds, but they are complementary. Together they aim to balance public safety with good hygiene around data privacy and AI governance.
- Continue to Evaluate Governance: While the guidelines listed above have become increasingly pertinent during the COVID-19 pandemic, there is a need for a more permanent regulatory structure around AI and data privacy. Info-Tech recently evaluated the maturing relationship between AI and data privacy measures and, in response to the call for public consultation by the Office of the Privacy Commissioner of Canada, submitted feedback on the proposals centered on increasing governance of responsible AI use. Our feedback to the OPC included specific callouts for human involvement in automated decision making, the right to explanation and increased transparency, and the application of Privacy by Design and Human Rights by Design in the AI integration process.
- Re-Evaluate Temporary Measures: Many aspects of crisis response are, by nature, acute. While some operational changes made during this pandemic may become standard once the status quo resumes, many will not, and the collection and processing of personal data through contact-tracing technology falls in the latter category. This sentiment is echoed in the statement that "there's a strong argument that much of what we build for this pandemic should have a sunset clause—in particular when it comes to the private, intimate, and community data we might collect."
We find ourselves walking the ultimate fine line between public safety and public surveillance. Any misstep in our present actions could have a profound and lasting impact. We must approach today's issues with the mindset that current measures should only support the immediate need for crisis response. In doing so, we strive to avoid opportunistic long-term changes to privacy governance that shift away from supporting the rights of the individual and toward mass collection and consumption of the public's personal information.
Resources
MIT Technology Review – We need mass surveillance to fight covid-19—but it doesn’t have to be creepy
World Economic Forum – Governments must build trust in AI to Fight COVID-19
World Economic Forum – AI can help with the COVID-19 crisis
Related Info-Tech Materials
Info-Tech’s COVID-19 Resource Center