Until the onslaught of Covid-19 paralyzed the world, shuttering businesses and killing tens of thousands, two of the most pressing ethical flashpoints in Artificial Intelligence ("AI") were how to effectively (a) protect individual privacy and (b) eliminate bias in AI and its myriad applications.
Now, however, in the race to save lives and resuscitate the economy, those ethical priorities have taken a back seat as AI researchers rush to create algorithms and pursue solutions to Covid-19 that require suspending privacy protections, accepting bias, or both.
To be sure, rapid advances in AI have long run up against the strong headwind of individual rights, including the right to privacy and the right to bias-free institutional decisions made by AI, such as loan and health insurance approvals.
But in the fog of war on the pandemic and the universal rush to diagnose, treat, and cure Covid-19, there appears to be a more collective tolerance of diminished individual rights.
For example, despite the clear risk of "selection bias" (which arises when the data used to develop an algorithm does not accurately reflect the population being analyzed), AI developers scrambling to "flatten the curve" have been left with little choice but to use such biased data to detect Covid-19 in lung scans. More specifically, because the largest available dataset comes from Chinese patients infected in the early weeks of the Covid-19 outbreak, it is unclear at best whether models trained on that data will apply accurately to patients from the diverse racial, ethnic, economic, and environmental backgrounds found in the United States.
At the same time, hospitals are using AI to identify those individuals likely to develop Covid-19’s most severe symptoms. Facial scanning and recognition AI is being used to check patients’ temperatures and fitness wearables data is being used to identify and track individual cases, as well as potential clusters of cases. Only a few weeks ago, those uses of AI would have been described by privacy advocates as an assault on our individual rights. Today, nary a peep.
In the not-too-distant past, fueled by dystopian portrayals of AI and robots in movies and books (Terminator, Blade Runner, I, Robot, HAL 9000 from 2001: A Space Odyssey, and Robopocalypse), the public did not have a positive impression of AI. Now, however, more than half of the global population views AI in a positive light.
The extent to which "benevolent" uses of AI, such as those being applied to the Covid-19 pandemic, will shape the public's embrace of AI will likely hinge on whether they ultimately prove effective. The balance between a person's willingness to tolerate the loss of certain individual rights and that person's embrace of AI varies from country to country and culture to culture.
Given how little time has passed, it is too early to tell. But it is not at all unreasonable to predict that AI's effectiveness in forecasting and diagnosing, if not entirely curing, the coronavirus will permanently alter the ongoing ethical debate.
About the contributor:
Mitchell is a commercial litigation and trial attorney and a Senior Partner at Zumpano Patricios & Popok, PLLC. He has extensive experience handling matters around the world in a variety of industries, including technology, artificial intelligence, robotics, investment banking, commercial banking, healthcare, fashion, entertainment, and hospitality, and across myriad substantive areas. He is a recognized authority on the legal and ethical issues in artificial intelligence and robotics.
You can learn more about the law firm of Zumpano Patricios & Popok, PLLC here.