Ethics for digital health companies

Shubs Upadhyay (00:00)
To a lot of people, ethics is kind of a nebulous, fuzzy concept. So, hey, we've got this course, the WHO have a course, you do it and you're like, tick, ethics is done. And actually, people have really good intentions around it. Yes, this is our mindset, this is how we're going to do it, we're going to create these ethical principles. Let's go for it, this is great.

And then what happens, I think, which is the key thing, is that these intentions bump up against business realities. So product prioritization conversations:

hey, this is not a business priority right now.

How can people with good intentions negotiate these things in a really good way?

Jess (00:38)
Yeah, so I think for me, ethics is primarily a way of thinking. And the clearest way to understand it is actually a metaphor that my professor, Luciano Floridi, uses all the time; I think it's probably the best one. Think of law as being the rules of the game and ethics as being how you win. So ethics is really that sprinkling on top: how you make sure that something doesn't massively backfire in a way that would be damaging to society, but also damaging to your business model.

Within ethics, the main way in which people get into it, at least in this domain, in the space of digital health or technology anyway, is by talking through different principles, the most common of which are beneficence, non-maleficence, autonomy, justice, and explainability. Very briefly: beneficence is essentially do good. Non-maleficence is effectively do no harm. Autonomy is protect the individual's right to determine their own life. Justice is a complicated one, and people get really controversial about what it means: sometimes they think it means diversity, sometimes they think it means fairness, and there are upwards of 33 different definitions of fairness, so it's very complicated. But normally in the space of health, and bearing in mind where we want to go in this chat, the two pillars of justice tend to be equality and equity. Equality basically means everybody gets the same; we make sure everybody gets the same sort of outcomes, or really we're talking about the same access. Equity means everybody gets what they need in order to achieve the same outcome. Those are two different sides of it, but that's really what you want in that pillar of justice. And then explainability, the fifth one, is really about: can you understand what an algorithm is doing? Again, there are different ways of interpreting that, but that is essentially what it means.

Then there are some subsidiary terms that often get bandied around; accountability and transparency are probably two of the main ones. Accountability is essentially: who do I point the finger at if something goes wrong? And transparency is really the mechanism that enables that accountability: how do I know who was the decision maker at different points in time?

And the last ones that I think often come up in these types of conversations are validation and calibration, and sometimes evaluation. Validation is essentially: does the thing do what it says it does? Have we tested an algorithm outside of a lab (and bearing in mind, a lab is essentially a computer) in a different environment, and seen that it actually works? Calibration, and normally we're talking about local calibration, is: what are the tweaks that need to be made in order to make something work in a specific local context? And evaluation is: what has happened as a result of that algorithm being used in real time? So those are very quick-fire definitions of the most common concepts that come up in conversations about AI ethics and digital health.
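(As an illustrative aside: here is a minimal sketch of what those validation and local-calibration questions might look like in code, assuming a fitted scikit-learn binary classifier; `model`, `X_local`, and `y_local` are hypothetical placeholders, not anything from the conversation.)

```python
# Illustrative only: validation and local calibration of a risk model.
# `model` is assumed to be a fitted scikit-learn binary classifier;
# `X_local`, `y_local` are hypothetical data from the deployment site.
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

def check_local_calibration(model, X_local, y_local, n_bins=10):
    # Validation-style question: do predicted risks match observed
    # outcomes in this environment, not just in the "lab"?
    p = model.predict_proba(X_local)[:, 1]
    obs_rate, mean_pred = calibration_curve(y_local, p, n_bins=n_bins)
    print(f"Brier score on local data: {brier_score_loss(y_local, p):.3f}")
    for o, m in zip(obs_rate, mean_pred):
        print(f"  predicted risk ~{m:.2f} -> observed rate {o:.2f}")

def platt_recalibrate(model, X_local, y_local):
    # Calibration-style fix: Platt scaling, i.e. refit a logistic mapping
    # from the model's raw scores to locally observed outcomes.
    scores = model.predict_proba(X_local)[:, 1].reshape(-1, 1)
    platt = LogisticRegression().fit(scores, y_local)
    def predict_local(X):
        s = model.predict_proba(X)[:, 1].reshape(-1, 1)
        return platt.predict_proba(s)[:, 1]
    return predict_local
```

(Evaluation, in these terms, would then be monitoring what actually happens once the recalibrated model is used in practice.)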

Shubs Upadhyay (04:07)
Thanks, Jess. That's really useful, and I think important to just level-set everyone.

So what's your take on how we can get ethical principles from being a nice-to-have to being an actual business priority for a vendor?

Jess (04:22)
So I think there are a few different ways of talking about it. The first one is the reason why people, I think, especially vendors and especially SMEs, are sometimes reticent. And I know this because a few years ago we went out and surveyed people, spoke to them, and did interviews. And I was basically asking: what is it that prevents you from doing these things?

And the number one comment was that it's expensive and that it eats into the bottom line. To a certain extent, I have empathy for that. And from the perspective we mentioned briefly before, with everything being about ROI, I do think there is this underlying fear that we are in the middle of an AI bubble that is at some point going to burst, and all of the investment is going to drop out of the bottom. I think people are therefore keen to avoid that by turning around big and fairly rapid returns on investment. And that return on investment tends to be interpreted as economic return, not a wider, more social definition of value. So I do understand that.

I'm not coming in and trying to bang people over the head and be like, you must do this, and you can't operate like a normal business. That would not be good. I should also say that I don't believe in role-based ethical probity. As much as I am a massive champion for the public sector and civil society, and have spent the vast majority of the last 12 years championing the NHS, I don't believe that the public sector is automatically good and the private sector automatically bad. I think the lines between those two actually blur, and we shouldn't immediately be rapping people on the wrist just because they want to make money for their investors. Where I think we can change the conversation is in a couple of ways.

The first is that I think we have to get away from this narrative that ethics is just an extra thing. You can build ethics into your everyday practices without it necessarily being seen as extra. For example, there are lots of practices that are just good, pragmatic software design that you can see as ways of supporting ethical principles.

If we are talking about things like making sure you have representative data sets and making sure that things perform equally for everyone, there is a way of seeing that as really annoying: gosh, this is extra expensive, I have to go and collect more data. But there is a different way of seeing it: well, this means I get to sell my product to more people. Yes, maybe it's annoying to have to jump through those extra hoops, but these are principles that just come from best practice in software design. All we are doing is expanding them to say that we also want them to have a degree of social element.
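(As an illustrative aside: here is a minimal sketch of the kind of check "performs equally for everyone" implies. The arrays `y_true`, `y_pred`, and `group` are hypothetical placeholders, and sensitivity is just one of the many competing definitions of fairness mentioned earlier.)

```python
# Illustrative only: does the model "perform equally for everyone"?
# `y_true`, `y_pred`, `group` are hypothetical NumPy arrays, with `group`
# holding a demographic label for each record.
import numpy as np
from sklearn.metrics import recall_score

def subgroup_sensitivity(y_true, y_pred, group):
    rates = {}
    for g in np.unique(group):
        mask = group == g
        # Sensitivity (recall) per subgroup; one of many possible checks,
        # since "fairness" has dozens of competing definitions.
        rates[g] = recall_score(y_true[mask], y_pred[mask])
    gap = max(rates.values()) - min(rates.values())
    for g, r in rates.items():
        print(f"  {g}: sensitivity {r:.2f}")
    print(f"  worst-to-best gap: {gap:.2f}")
    return rates
```

(A large gap across subgroups is exactly the kind of signal that turns "collect more representative data" from an ethical nicety into an ordinary product defect.)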

So that's number one. And often what I talk about is: how do you get to the MVEP, the minimum viable ethical product? Then we can level up; then we can go and jump through hoops. But actually, what are the things that really just mean we don't treat this with a sort of AI exceptionalism? We don't let things get away with it just because it's AI. And so you build those concepts into your everyday business as usual, rather than seeing them as an extra category of thing that you need to do.

The second thing is, yes, I do think policy has an enormous role to play in this space. And this is the point I made briefly at the end of my post on LinkedIn about the AI Opportunities Action Plan: yes, obviously the proof is ultimately in the pudding. Most of the time, what's going to matter is what the government actually does. What does this translate into in terms of where money flows? Where do we see the development of standards or guidelines or procurement practices? All of these things are signals, or what you might more formally call levers, that enable a way of changing the behavior of people operating in the environment. But it's not just about that. Rhetoric actually has an enormous influence. Sometimes people roll their eyes when I say things like that: Jess, you're sounding too much like an annoying social scientist, what are you talking about? But rhetoric has an enormous amount of shaping power. Because if you think about it, that is how people go in and pitch.

Jess (08:56)
That is how people try and win contracts with the government. That is how you get grants as an academic researcher. That is how you sell your product in terms of advertising spiel. All of that is rhetoric. And I think we have lost good shaping rhetoric from the government with regards to saying that these things are what we want to see.

And the third one, if we want to be super skeptical: at the moment, we've framed this entire conversation as being about a nice thing. Go and have ethics, go and have good principles and practices, in order to achieve better outcomes. And that's a very positive way of framing it.

The more skeptical, cynical way of framing it is that this also stops you from getting egg on your face. And this comes back to the point I made at the beginning about the difference between the rules of the game and winning. The clearest example of this I can give is the government's repeated mistakes, first of all with care.data and then GPDPR. So for those listening who don't know, very, very briefly:

Shubs Upadhyay (09:50)
Winning,

Jess (10:05)
There was a project back in 2013 in which the government wanted to pool everybody's electronic health records and make them accessible to innovators. There was massive public backlash, and they paused the whole thing. Then in 2021, they decided to try and do the exact same thing, and the exact same result occurred. And now we are seeing the exact same conversation coming up in this idea of: let's create a national data library. Now, let's think about ethics versus the law. All of those projects were entirely legal.

There is nothing in there that is illegal; they can 100% do all of that. Where they failed is in what social scientists like myself call the social license. They failed to understand that there was actually a reticence, from a social perspective, to that idea. And understanding that social reticence is where ethical thinking comes in, and where ethical foresight analysis can help play a role.

So if you really just want to strip it right back to the cynical framing, it is also about making sure that companies and governments understand that if you do these types of thinking, the types of activities that I'm describing, and you build them into your everyday practices, you don't see them as extra costs. You ultimately see them as risk mitigation. It's a risk-mitigating strategy to prevent you getting egg on your face, and chilling effects, and people then disinvesting in you because they think your product is a failure and nobody wants to buy it.

Shubs Upadhyay (11:39)
I really like that. And actually, if I relate that to my own experience, the times when it really worked were when we used it in, you know, pre-mortems, risk-storming sessions, where clinicians were talking with designers and engineers about: hey, we're going to be deploying in this context; from an equity-as-a-quality-pillar perspective, how does this apply? What could go wrong here? And therefore, what do we need to do?

I see a line towards being able to operationalize and justify it internally; you want to justify it as ROI. Why do we need to spend effort on this? Because within the machinations of a company that might be leaning into building good-quality products through a QMS and through regulation, etc., it makes sense, right? And I take away that the reason it's a business priority is that we're trying to create a great product that users really like using, and that clinicians really like and trust.

Say I'm a product manager, I'm a designer, I'm a clinical safety officer, a founder. I'm like, you know, this really resonates, but what can I take away in action? What can I take to my everyday work and decision making? Do you have a few key nuggets for them?

Jess (12:48)
Yeah, actually it is probably the same for everyone. The biggest summary takeaway is: don't fall foul of AI exceptionalism. So anything that you would apply to any other form of technology, apply it to AI, and don't believe that it's not necessary. There is no one on earth I know who would be a clinical safety officer who would buy a blood pressure cuff without knowing

Shubs Upadhyay (12:52)
OK, great.

Jess (13:13)
it was calibrated and knowing it was tested. So why would I then do that if I was buying an AI or a digital health solution? And the same then applies to the policy folk.

On the regulation thing, the point really is: frame regulation as an enabler. Regulation at the moment has a really bad rep.

This is like Taylor Swift in 2017: there is nothing inherently evil in reputation, in the reputation of regulation, right?

Shubs Upadhyay (13:38)
Hehehehe

regulation.

Jess (13:43)
We can think of regulation as an inherent good. What we can design is regulation-friendly innovation rather than innovation-friendly regulation, and make sure that regulation is designed in a way that makes it an enabler, not a stifler. But that is all in the design of the regulation itself, not in the concept of regulation existing.

Shubs Upadhyay (14:04)
How can people find you, to see the work that you're doing and potentially reach out to collaborate with you, if that's what you're looking for?

Jess (14:11)
Always, always looking for collaborations. I exist in a lot of places on the internet. I no longer exist on the website formerly known as Twitter, but I am on Bluesky, I am on LinkedIn, and my email is public: jessica.morley@yale.edu. You can contact me through any of those channels.

Shubs Upadhyay (14:31)
And you post articles regularly; you've got your health data nerd stuff that you post as well, which is very informative. So yeah, I've learned a lot from you over the years from the things that you write about and speak about, and I really look forward to the stuff that you're going to be doing at Yale. I wanted to quickly run through my key takeaways. First, we've generally gone backwards in terms of the state of the industry,

Jess (14:53)
Mm-hmm.

Shubs Upadhyay (14:57)
where three to five years ago we were really pro-ethics and this was seen as a strategic advantage, both from a government perspective and also from a vendor perspective. You see a shift now because of the macroeconomic climate: the focus, with the AI Opportunities Action Plan being a good case study, is very much on: we need this to support economic growth.

Second, for vendors: build ethics into your everyday product thinking. Build it into your everyday decision making about how we're building a good product that can delight our customers and users.

Third, think critically, especially if you're a decision maker. The one I took away the most was: what are the bigger health outcomes we want to achieve? Is AI really the right thing for this? Who decided this? And also, what resources might be used instead? What's the opportunity cost? I think that was an important one.

And the fourth one, which is a good takeaway for mindset generally: be sceptically optimistic. Is that a good summary of the key takeaways?

Jess (16:08)
Yes.

Definitely. Thanks so much for the conversation.

Shubs Upadhyay (16:13)
Jess, it's been really, really insightful. I look forward to seeing the work that you're doing, and to following more of your hot takes. Thank you for your insights and for helping us bring more critical thinking into our everyday work. Thank you so much.
