

Big Data, Artificial Intelligence, and the Ethics of Passive Surveillance in Health

Jul 16, 2020

A constant flow of technological advances has brought new solutions for health care monitoring and delivery, but it also raises challenges and questions of privacy. Ethicists play a crucial role in guiding the responsible development and implementation of these advances, preserving the benefits of innovation while identifying and mitigating unintended consequences.

Last month, Dr. Anita Ho, Associate Professor at the School of Population and Public Health (SPPH), delivered a plenary address to the World Congress of Bioethics, where she discussed the ethical considerations of how large amounts of data are collected and used in business, health care, society, and government. In particular, her research looks at Artificial Intelligence (AI) – where machines and computers are programmed to mimic human decision-making processes – and passive surveillance, where large amounts of data are collected and later analyzed.

Dr. Anita Ho

Dr. Ho conducts her research at the W. Maurice Young Centre for Applied Ethics, an interdisciplinary research centre within SPPH that plays a crucial role in complementing health research within the School. “The discipline of ethics allows us to step back from the way we do things with regard to the provision of health,” explains Dr. David Silver, Director of the Centre. “From this distance we can not only critically evaluate what we do, but we can also imagine more humane and empowering ways to deliver health to individuals, societies and the entire world.”

There are countless examples of technology improving health care delivery, through telehealth, mobile phone applications (apps), and even biometrically secure pharmaceutical vending machines. A great deal of good has come from advances made within the last several decades, but this progress is not without risks and consequences. How do we decide who can collect or access health data? How can they use it? What obligations exist to inform and educate the public about where their information ends up? Who has the authority to decide?

In her address to the World Congress of Bioethics, Dr. Ho spoke about the potential of big data and artificial intelligence to exacerbate existing inequalities, including systemic racism. In a follow-up conversation, she explains that the way data is used is often a reflection of society. “When we think about health monitoring, whether it is at the individual level or at the public health level, we have to think about the broader social structure and social culture it exists within,” Dr. Ho explains. “There are different impacts on diverse populations, particularly on marginalized and racialized populations.” Because our society is a product of discriminatory systems and institutions, the use of data is difficult to separate from these structures.

Often, the harms of passive surveillance result from good intentions that are oblivious to their consequences. For example, some video doorbell systems encourage users to upload footage of suspicious activity to a neighbourhood watch platform, promising to improve safety and security. Who gets identified as suspicious, however, is shrouded in layers of racism and systemic discrimination.

This is where ethicists come in. By identifying potential areas of concern, they can guide implementation, prevent new technology from exacerbating marginalization, and work to address inequality. Another SPPH faculty member and one of Dr. Ho’s collaborators, Dr. LianPing (Mint) Ti, is working on a project to ensure that passive surveillance to detect and prevent overdoses is done with the input and informed consent of affected communities, and to avoid further criminalizing those who are already marginalized.

Inequities within society are reflected in health outcomes, and we see that in the data being collected. Without accounting for these existing inequities, race-based data can be misleading or used to support nefarious arguments. For example, in places that collect data on race, like the United States, COVID-19 research is showing that Latinx and African Americans have been affected at a much higher rate. This is not due to an intrinsic susceptibility but rather to discrimination that has contributed to poorer baseline health and reduced access to care, creating overall health inequity.
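To make that reasoning concrete, here is a small, purely hypothetical calculation; every number in it is invented for illustration and comes from no real dataset. It shows how two groups with identical risk within each access-to-care stratum can still show very different crude infection rates when their access to care differs.

```python
# Hypothetical illustration (all numbers invented): a crude rate gap
# between two groups produced entirely by a structural factor,
# access to care, with no difference in intrinsic susceptibility.

# Infection rate within each access-to-care stratum, identical for both groups.
rate_by_access = {"adequate": 0.02, "limited": 0.10}

# Share of each hypothetical group in each stratum, the only thing
# that differs between the groups.
access_mix = {
    "Group A": {"adequate": 0.8, "limited": 0.2},
    "Group B": {"adequate": 0.3, "limited": 0.7},
}

for group, mix in access_mix.items():
    crude = sum(share * rate_by_access[stratum] for stratum, share in mix.items())
    print(f"{group}: crude infection rate = {crude:.1%}")

# Output:
#   Group A: crude infection rate = 3.6%
#   Group B: crude infection rate = 7.6%
# The crude rates differ more than twofold even though both groups face
# identical risk within each stratum; reading that gap as intrinsic
# susceptibility is precisely the kind of misuse described above.
```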

There are ethical considerations that play out in the use of this data, such as in contact tracing. Dr. Ho argues that surveillance can be problematic if it is accompanied by an enforcement approach towards those who are unable to follow advice or isolate due to factors like precarious employment or under-housing, the very factors that contributed to the unfair disease burden in the first place. “We may think contact tracing applies to everyone. But when racialized populations end up being tracked and restricted more often, they may bear an unfair burden of surveillance.”

As governments roll out contact tracing phone applications, questions have been raised about the types of data being collected and how it will be used. In the race to manage the pandemic, some governments are placing privacy concerns second to their public health strategy. “There are some countries that have noticed once they roll out their app, they have violated their self-imposed rules,” Dr. Ho adds.
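To illustrate what “types of data being collected” can mean in practice, the sketch below loosely follows the decentralized, rotating-identifier approach that some exposure-notification apps adopted. It is a minimal sketch under assumed parameters, not the protocol of any particular government app; the function names, key size, and rotation schedule are all illustrative choices.

```python
import os
import hmac
import hashlib

ROTATION_SLOTS_PER_DAY = 144  # assumed: a fresh identifier every 10 minutes

def new_daily_key() -> bytes:
    """Each phone generates a random key per day; it stays on the device
    unless the user tests positive and consents to publish it."""
    return os.urandom(16)

def ephemeral_id(daily_key: bytes, slot: int) -> bytes:
    """Derive the short-lived identifier broadcast over Bluetooth.
    Without the daily key, observers cannot link two identifiers
    to the same person."""
    return hmac.new(daily_key, slot.to_bytes(2, "big"), hashlib.sha256).digest()[:16]

# Phones record only the ephemeral IDs they hear, not names or locations.
heard_ids: set[bytes] = set()

def on_bluetooth_sighting(eid: bytes) -> None:
    heard_ids.add(eid)

def check_exposure(published_keys: list[bytes]) -> bool:
    """When a diagnosed user consents to publish their daily keys, every
    phone re-derives the identifiers locally and checks for matches, so
    the health authority never learns who was near whom."""
    return any(
        ephemeral_id(key, slot) in heard_ids
        for key in published_keys
        for slot in range(ROTATION_SLOTS_PER_DAY)
    )
```

The design choice the sketch highlights is the one the privacy debate turns on: a centralized alternative would upload everyone’s contact history to a server, collecting far more personal data for the same public health goal.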

These concerns have made some people hesitant to provide access to such data, even to governments and health authorities. “Some people feel they don’t want contact tracing apps because they worry about their privacy,” explains Dr. Ho. “The contact tracing procedures may be very invasive or they collect highly personal and private data.”

Dr. Ho finds it especially interesting that many individuals are active on a range of social media platforms, which collect and sell large amounts of personal data, yet are hesitant or unwilling to download a government contact-tracing app for public health. “People are ready and willing to share photos and personal information about who they were with or where they have been with private companies – maybe because they are not aware about what they do with it – but when it comes to public health agencies wanting to collect and use that data for population health, people feel that they don’t trust the government with that same information that they may be posting on Facebook,” she explains.

While governments and organizations, especially those involved in health care or health research, are held to strict privacy regulations and must ensure participants understand how and where their information will be used, data collected by private companies through recreational apps is held to a different – and lower – standard.

When it comes to these apps, whether social media or direct-to-consumer genetic testing, most users will agree to the terms without ever reading them. “It’s not clear that they know the risks. Companies market their products so well and make it so difficult to find out what kind of information they have shared, so most people don’t really know,” Dr. Ho explains. “Shoshana Zuboff’s book The Age of Surveillance Capitalism argues that when a service is free it is because our data are the product to be sold.”

Dr. Ho believes that if we want to be able to collect and use public health data for purposes like contact tracing, people need to trust the government to handle our data with care. “If we want public health surveillance at a broad level, we need to clean up how we are allowing other forms of surveillance to happen at the societal level,” she argues. “We need to reassess how we think about surveillance culture. Otherwise, when people don’t trust certain [private] entities, they assume even public health entities are illegitimate in collecting data. How we think about data – even by direct-to-consumer products – can affect how people think about public health data collection.”

As countries around the world grapple with the pandemic, they need accurate data to plan health resources and control the spread of the virus. If they fail to heed the advice of ethicists in making big data decisions, however, they will struggle to convince their citizens that the risks of being tracked and surveilled are worth the public health payoff.

By Elizabeth Samuels