
Health IT: Exploring the Role of Technology in Healthcare - Episode 4

The Acquisition of Data and the Issue of Privacy


Simon D. Murray, MD: Some people think that MRI scans can be used like genetics, to learn about people. I’m not so sure, because MRIs change over time; genes don’t change all that much. But that brings me to the issue of genomics, the huge impact of AI on genomics, and how that’s going to affect medicine.

Eric Daimler, PhD, MS: I use that as yet another opportunity to invite people into the systems view. We are making decisions today about how we use or restrict those innovations, and it’s important that we’re part of that conversation. When we as a society started introducing cars into our civilization, the original inventors of the car certainly thought about what it could do for transportation, but they would have had no idea that we would end up with strip malls, suburbs, and urban sprawl.

SM: They were thinking about horse manure.

ED: That’s the canonical example of the problem it was going to solve. But it introduced new problems. I think people need to be engaged in this conversation today, to ask “what if?” so we can put guidelines and guardrails around these technologies and introduce them in ways we consider safe. The scary part of some of these propositions is that we’re going to have companies that know more about us than we do. We already have companies that know more about my physiology than I do: they know I crave sugar and fat, so they market Doritos to me. They’re almost addictive, you know, “I can’t resist Doritos.” We will have those sorts of behavioral triggers, around which companies will have more information than we do, both the data and the interpretation of the data. More than we should even be expected to know.

SM: Ten years ago we had no idea about Google, about Facebook, about data mining. They gave away all this cool stuff for free, and we thought it was really neat. But they were gathering data, perfectly legally. Do you remember free 411? When you first had a cell phone, you had to pay for 411 calls. Then there was a service called free 411, which let you make 411 calls for free. That was Google testing out their voice activation. They ran it for 2 or 3 years and then stopped.

ED: These sorts of issues permeate our world. I don’t care so much if I am manipulated into buying dark chocolate, and Google doesn’t care so much if their ad placement to me is flawed. But there are many higher-consequence contexts for the application of AI. Drug databases are one; airplanes are another. The context in which we think about AI is really important. With Google, I care about my privacy, my safety less so. With Boeing, the opposite. I also care about context within a domain: I care about my password to a news site not a lot, and the password to my bank quite a bit.

And then we have the timing of our understanding. One reason regulation doesn’t work so well is that our understanding of internet data in 2016 was very different than it is today, and we can expect it to be different again a few years from now. So regulation isn’t really the mechanism for having a dynamic conversation around the guardrails for these technologies.

SM: The way I see it, AI is going to be a bit like the arms race. There are going to be a few countries racing to develop AI, and not just for medical applications; there are military applications, transportation, engineering, and manufacturing. It’s going to come down to China, the United States, maybe Russia, using this technology all along. It’s going to be like the nuclear arms race. Do you think so?

ED: This idea was popularized by Kai-Fu Lee. We shared a doctoral advisor, and I think he’s fantastic: his writing is fantastic, and as an executive and an investor he’s demonstrably effective. On that issue I can agree in a narrow domain. If you’re talking specifically about learning algorithms, that’s fair enough. Data matters; data’s really the whole thing. As we were talking about earlier with Google, their experiments in acquiring data, and all the many ways data can be acquired without us knowing, are a big deal.

I think there are more stumbling blocks than we may appreciate for a society to embrace AI, and we don’t want resistance to AI. I’ll give you an example. The end user agreements that pop up on our devices multiple times a day: I doubt you’ve read one. I’ve glanced at them, but I’ve never read one end to end. They’re tough going for non-lawyers, or for anyone who gets bored. Here’s the scary proposition: those agreements can be violated without you knowing. They’re particularly pernicious with regard to children. It’s actually illegal to advertise to children on their mobile devices, but how would we know if a company is violating that? You have no idea, for your kids’ safety or for your own. This is a societal conversation. If a society gets sufficiently burned, or sufficiently upset, about the possibility of abusing the privacy of children, there may be resistance to the adoption of AI. And who wins in that circumstance is not a black and white scenario. We need to think of this as a total system; we need a degree of social-systems intelligence. Where did the data come from? How are we using it? What’s the output? This manifests itself in many, many ways.

Transcript edited for clarity.

