York College professor launches study into ethical AI practices related to COVID-19
Purchase patterns. Facial recognition. Nationwide surveillance. When Tamara Schwartz learned some of the practices that China had adopted to conduct contact tracing for COVID-19, she was both amazed and concerned.
The Assistant Professor of Business Administration and Cybersecurity at York College of Pennsylvania watched how China used artificial intelligence as a tool, but she worried about the loss of privacy and protection for those it exposed. “The ethical implications of that kind of surveillance really struck me,” she says.
An Air Force veteran, Schwartz has studied artificial intelligence capabilities for global situational awareness since 2007. She reconnected with some of her colleagues—technologists from the Mass High Tech community in Boston—to share what she’d learned about China’s practices, and together they’re now taking a closer look at how AI can be used more ethically to track the coronavirus.
A small community
Schwartz was walking out of her dentist’s office in Gettysburg when she wondered if entrepreneurs like the dentist would benefit from technology that looked at how to manage business operations in a pandemic. She reached out to her former active duty colleagues and started building the framework for a study.
She wanted to look at the technology capabilities of companies such as Facebook and Amazon, which have the ability to track many of the same things China was tracking related to COVID-19. She wondered whether that data could be collected ethically.
“It was troubling because I saw the potential for people signing away their privacy out of fear and frustration with the pandemic, but once you sign away that privacy you can’t ever get it back,” she says. “I wondered if it was possible to build the kind of capability that could protect people’s privacy and respect their civil rights and liberties.”
It led Schwartz to ask: how can we do virus tracing rather than contact tracing?
Contact tracing is about tracing people, Schwartz says. Instead, she wanted to create case-based risk maps that show where the virus accumulates in a community, based on patterns of life and other tools that can predict symptoms. All of it could be tracked using tools on a phone, but she wanted to find a way to do it ethically.
She decided to break the study into two parts. The first part would involve interviewing area businesses, schools, and churches to determine what information would help them navigate the pandemic, especially if new case spikes come in the future, she says. The second part of the study would take that information and build an app grounded in ethical privacy practices while still helping communities navigate outbreaks.
Developing ethical AI
Schwartz considers two elements essential to ethical AI. The first is making sure the user always has the chance to opt in; most apps instead require the user to opt out of tracking. The second is not exploiting that data for purposes the user doesn’t know about, she says. Users often don’t realize that technology companies track their data and then sell it to third parties.
The application her team hopes to build later this year would use GPS purely for location data, collecting no personal information from the user. That shift lets researchers change their thinking from tracking people to tracking the virus.
“The culture of Silicon Valley is to move fast and fail fast, and you cannot make ethical decisions and go as fast as possible,” Schwartz says. “This whole ethical piece is where we’re trying to focus our learning. I’ve told my tech team even if we built nothing, we’ve already learned a lot about what it takes to build ethically.”