Regulatory strategy for founders and policy makers
Hugh Harvey (00:00)
But literally everything we do in our life is regulated. You get in a car, you get on a train, you get on a plane, you open a bank account, you buy food, you wear clothes. They're all regulated to some extent.
And healthcare is one of the most highly regulated sectors in the entire world. And pretty much anything you want to do in medicine is regulated.
The main aspect of it is the cybersecurity risk. Medical health data sells on the black market for more than financial data.
The attack surface, especially with these generative AI models, is huge, vast and, frankly, completely unknown. And it won't take long before the bad actors come pressing against these systems, trying to get access to that data.
So you need to be very honest, when you're going out for investment or grant funding, about how much all of this is going to cost. I see too often that founders raise somewhere between one and five million, and their regulatory budget is like 10,000 pounds, and you're just like, well, you're not going to get there.
It's like saying you're going to learn to drive but not wanting to pay for your lessons. It's not going to happen.
Shubhanan Upadhyay (01:09)
Welcome to the Global Perspectives on Digital Health podcast. Today we have the inimitable Hugh Harvey of Hardian Health on the podcast, a global regulatory aficionado, someone who can give us a really
broad picture of where regulation's at. There's so much going on right now in this space: in the US with everything that's happening with the FDA, lots and lots of layoffs.
What does this mean for regulation? How are things shifting? How do things differ across the world in terms of risk appetites and approaches to regulation?
And whilst he has a lot of expertise on what's going on in the FDA
and the EU and the MHRA, I think he has a lot to say about other contexts as well. So I'll definitely be grilling him on this. It's gonna be a stonker of an episode, you do not wanna miss it. I'm looking forward to hearing what Hugh is gonna be able to share with us. Let's get into it.
Shubhanan Upadhyay (02:06)
Hugh Harvey of Hardian Health. Welcome to the Global Perspectives on Digital Health podcast. I've been looking forward to this conversation for a while. So thank you for taking the time to join me.
Hugh Harvey (02:15)
No worries, Shubs, good to catch up. And we should probably tell the listeners that we've known each other for a very long time, haven't we?
Shubhanan Upadhyay (02:23)
I had it on my list to talk about how we know each other, so why don't we talk about that? We went to med school together.
Hugh Harvey (02:29)
We did. How many years ago was that now? Twenty...
Shubhanan Upadhyay (02:32)
I was thinking about
it, it was like 20 years ago, 20-plus years ago. So yeah, Hugh and I were at med school together. We did a lot of music together, we did a few open mic kind of band-type nights. As you can see behind him, Hugh is very talented musically: plays the guitar, plays the piano, sings a bit, right?
Hugh Harvey (02:49)
Not as well as you, Shubs. You have the voice of an angel.
Shubhanan Upadhyay (02:52)
It was a long time ago, but yeah, I've tried to dabble a bit. So yeah, great memories of doing that with you. And that's a good segue into: since then, tell us a bit about yourself, and how you've got into this position of being a regulatory connoisseur.
Hugh Harvey (03:10)
Yeah,
so yeah, we went to med school, we were at Imperial together. I then did my foundation years down on the South Coast, and then I became a radiologist and did the specialist training. I CCT'd in 2014 and then went into academia for two years, where I was studying image manipulation techniques and used a bit of machine learning at the time to do segmentation on MRI scans.
I then went out into industry. I worked at Babylon Health for a year. I then became a consultant radiologist, only for three months, before I joined another company called Kheiron Medical, where we got Europe's first CE mark for a deep learning-based device, finding breast cancer on mammograms. So I was clinical director there for two and a half years. And then I set up Hardian Health, where I have been helping people with
AI medical devices, originally just sort of the radiology lot, because there are hundreds, if not at least a thousand by now, radiology AI devices regulatory-cleared on the market. But Hardian has now been helping people in all sorts of areas. We're doing a lot of digital mental health tools now, cardiology, pathology, respiratory,
and even a couple of large language models, which are proving to test the boundaries of regulation, who would have thought. But it's exciting.
Shubhanan Upadhyay (04:26)
No.
100%.
And if you don't follow Hugh, you should. Hugh pops up everywhere, especially on LinkedIn, particularly reminding people: hey, that's probably a medical device, people. I saw you even commented on something from Yann LeCun as well. And yeah, absolutely, Hugh is very
outspoken on some of the attitudes towards regulation, particularly pushing back on the automatic jump to the conclusion that regulation stifles innovation. So I'm looking forward to hearing your take on that as well. I mean, there's so much going on in the world in terms of the approach to regulation: you mentioned LLMs,
geopolitical shifts that are happening, which are also kind of turning the tide, and also things going on with big tech, et cetera. So I think it would be good to get, from someone with your extensive knowledge, the big picture. What's Hugh's state of where the world is at from a regulatory perspective?
Hugh Harvey (05:25)
Yeah, I mean, it's a big question. Regulation covers everything we do in our lives and people forget this, which is why, as you say, I'm on LinkedIn reminding people the whole time. But literally everything we do in our life is regulated. You get in a car, you get on a train, you get on a plane, you open a bank account, you buy food, you wear clothes. They're all regulated to some extent. Like it's almost impossible in modern Western society.
to not have anything that's regulated. The reason you know the food you eat is safe to eat is because it's regulated, right? And healthcare is one of the most highly regulated sectors in the entire world. And pretty much anything you want to do in medicine is regulated. From doctors and nurses to physiotherapists to allied healthcare professionals, to the software that they use and the objects that they use, to the drugs that we prescribe, everything is regulated.
I'm a massive advocate for innovation.
and very excited about what AI in particular can offer to medicine. But we can't forget that it's all regulated. And I've set myself up in a niche of just trying to help people get through that sometimes painful process of getting the evidence to the right level, so that people can then say: yes, we trust this, this is clinical grade.
It can be used safely, effectively. It's cyber secure, which is an increasing problem. And get this innovation to market. So it's not a question of me sitting here being grumpy that people are innovating. I love it. I think that's absolutely fantastic. But you have to innovate responsibly and meet that safety bar that everything else in life has to meet.
Shubhanan Upadhyay (06:56)
Yeah, absolutely. In particular, there are extraordinary claims being made, especially with LLMs. And, you know, as with any medical intervention, it essentially comes down to balancing benefits and risks. And with great claims, you need great evidence, right? So tell us a bit about that.
Hugh Harvey (07:12)
Yeah.
Well, didn't Uncle Peter say that in Spider-Man? Or something?
Shubhanan Upadhyay (07:16)
I think so,
A version of that, right?
Hugh Harvey (07:20)
Yeah, exactly. So look, LLMs are incredibly exciting. They've taken the world by storm. They are the first sort of generalist software that humanity has created. But if you are going to claim that it should be used in a clinical setting, unfortunately for you, it is also regulated, just as any other piece of even simple logical software would be regulated.
LLMs even more so, because of their great power. And in fact, the greater the power and the greater the benefit you're claiming, the flip side is the greater the risks as well, and the greater the evidence you must generate. And they all just go around in that...
Shubhanan Upadhyay (07:55)
And you have to flip that ratio,
right? Like with any medicine: okay, there is great risk, but the upside, which you can demonstrably show, outweighs the risks, which you've also shown you can mitigate, right?
Hugh Harvey (08:12)
Absolutely. And regulation, when you boil it down to its absolute core principle, is a benefit-risk ratio. And if you think about anything that we do in medicine, prescribe a drug, do an operation, inject somebody with something, I don't know, put a hip prosthesis in, or even order an X-ray, they all have benefits, obviously, and everyone gets excited about the benefits, as we should, but they also all have risks.
Shubhanan Upadhyay (08:28)
even like order an X-ray, right?
Hugh Harvey (08:38)
A medicine can cause side effects or even death. A hip prosthesis can get infected. An X-ray gives you radiation, which gives you a small lifetime risk of cancer. Using a piece of AI software has great benefits, obviously, and those are the things that everyone gets excited about, but it also has risks. These risks are primarily cybersecurity risks, like data leakage, but also inaccuracies, which could lead to inaccurate medical decisions being made about your care. So
those benefits have to be shown to outweigh the risks. And you cannot show that without measuring both the benefits and the risks. And so marketing in AI, especially in healthcare, tends to be quite hyperbolic. Everyone talks about how fantastic something is.
And I really wish that people would have more mature conversations around: well, actually, what we've shown is that the benefits outweigh the risks, and be open and transparent about what those risks are and how you've measured them. Because that engenders trust, and ultimately that will lead to more sales, if that's what you're after, in the medical sector. Doctors will not use something based on the sales pitch alone.
Shubhanan Upadhyay (09:45)
Absolutely. And doctors get risk and know how to manage risk, right? Or health systems do. I think one of the things that's hard within that is that, because the pace of change has been so rapid, it's hard to get a handle on the risks. With X-rays, with other technologies, historically, we've had a lot of time to think about how risks interact with the environment or with the system, and then build up mitigations as a system.
What's hard in this is that everything's changing so much, the whole risk landscape is therefore also changing, and the applications are so wide-ranging and in different contexts. And this to me is part of why regulation isn't one thing either, right? Different regions, different settings, different places have different approaches to regulation.
I guess there's a spectrum. How do you see that, in terms of different organizations' approaches? Big picture, you've got the FDA, you've got the MHRA, you've got the EU MDR, which each have their own place on that spectrum, and are also the
most vocal or best known globally, maybe aside from the IMDRF, which tries to harmonize some of this. Do you have a take on how these different places think about or approach regulation?
Hugh Harvey (11:01)
Yeah, I do. It's a balance. It's a really tough balance, because if you under-regulate, then people come to harm, and if you over-regulate, then innovation is very, very slow to get to market. And if you think about it as a kind of pressure gauge, it often swings from one side to the other slowly, but it does swing. And I think,
with the introduction of things like the European AI Act and the European Medical Device Regulations, the MDR, the pressure is very much higher in Europe than it is in America, which, since the Trump administration came in, is currently much more on a deregulatory path, trying to release that pressure. But with that release of pressure, the counter-problem is that you're increasing the potential risk of harm.
Now, obviously, advocates for deregulation will say, you know, we'll deal with that when it happens, or that the harms are less likely to happen. Whereas the Europeans are much more cautious, which is why it potentially feels a bit safer in the medical AI market in Europe than it does in America. And, you know, we're seeing reports even today, as we speak, of FDA staff being laid off at extremely short notice, which is incredibly frustrating.
Not only for those staff members, obviously, for losing their jobs, but also for the companies who are in the middle of dealing with them and trying to get their submissions through. There's literally nobody there now to support them and get them to market. And it's unknown how far the Americans will currently go in that deregulatory
regime. I don't think they will completely disband the FDA and just let drug makers and medical device manufacturers run completely rampant. If they did, that would be an absolute travesty and a step back into the medieval ages and the time of the snake oil salesman.
But they may well go far enough that, you know, particularly with medical AI systems, they may turn a blind eye to a lot of these things. And then it will be the American people who feel the brunt of that. The main aspect of it is the cybersecurity risk. Medical health data sells on the black market for more than financial data.
The attack surface, especially with these generative AI models, is huge, vast and, frankly, completely unknown. And it won't take long before the bad actors come pressing against these systems, trying to get access to that data.
And without the regulators setting the bar, it could become quite scary.
Shubhanan Upadhyay (13:28)
I mean, I'm imagining parallels like: you defund public health and you get the effects of something like COVID, a big black swan event. And I know you've talked about this in the past, that it's going to take something really terrible happening to make people put the brakes on, maybe, right?
Hugh Harvey (13:41)
Hmm.
Well, yeah, there's
a saying that regulations are written in blood. I mean, the reason the FDA exists in its modern form is because of thalidomide. We all know the story there. For those who don't, it's a drug that was given to pregnant women, mainly for morning sickness, but it was never properly tested in pregnancy. And unfortunately, if you took it while pregnant, your baby had a very high chance of very severe limb abnormalities. Obviously, AI can't cause physical defects like that, but it can cause
errors in your medical treatment. It can cause errors in your medical record. It can cause errors in clinical decision-making. And it poses a huge cybersecurity risk, so your data isn't necessarily 100% safe unless, you know, people have put effort and time into paying attention to their cybersecurity. And yeah, so regulations tend to be written in blood. When thalidomide happened, that's when the FDA as we know it was basically set up,
and it became the world's first centralized regulatory authority for medicines. And obviously its scope has expanded over the years to include devices, and now software as well. It also includes lab-developed tests; these are tests that people build in their own laboratories, not at home, but within hospital laboratories. Now that is currently being challenged in the US courts as well.
And there's even been a bill I've seen, I believe it was in Texas, to allow AI to prescribe drugs if it has the appropriate authorization. And lawyers are arguing: well, the FDA won't have legal oversight of AI prescribing drugs, because that's not a medical device, that's just a prescription service. So you could get to a point in America where AI is almost in this kind of Wild West, doing all of these tasks that a registered, regulated human would do,
but for some reason the software is not regulated, which is kind of a cognitive dissonance in a sense. But you know, they could get away with it. It could be that AI is perfect and works better than humans and we don't ever have any errors, but I highly, highly doubt it.
Shubhanan Upadhyay (15:52)
You've given us a really good overview of the FDA, and we talked about the EU. In terms of a global picture, you've got the International Medical Device Regulators Forum, which is a more international harmonizer of standards. But I'm trying to think
about kind of low- and middle-income country contexts, underserved populations, et cetera. I think Africa has a medical device regulators forum as well. Do you have any visibility on their approaches, the way they're thinking about it, the way they're looking at what's going on in the EU and the FDA?
Hugh Harvey (16:28)
Yeah, so in Africa it's a very interesting situation. In 2017, I believe, the WHO did some research to look at the levels of medical device regulation across the continent. And I'm going to guesstimate the facts, or partially remember them off the top of my head, sorry if they're not 100% accurate. But around 40% of African countries have actual medical device regulations. The rest have nothing at all.
And I think the WHO split the African countries into sort of three categories, one, two, and three, where really there's only one sort of level one country which has proper medical device regulations, a medical device authority, notified bodies, et cetera. And that's South Africa. And then you have a group in the middle who have sort of some kind of medical device regulation.
Essentially, they have a government agency which is normally in charge of medical devices. They tend to lean on European or American authorizations and have some kind of reliance program or procedure. And then you have the third category, which have literally no medical device regulation whatsoever. It's kind of a free-for-all. And those tiers, unsurprisingly, are connected to the economic wealth and the stability of the country.
Shubhanan Upadhyay (17:26)
Yep. Yep.
Hugh Harvey (17:41)
So the longer the country's been independent and the wealthier it is, the more likely it is to have solid medical device regulations. Which just goes to show that having regulation, good regulation that works, is actually a bit of a luxury. And we kind of forget that here in Europe, and I suppose in America as well. Having regulation is obviously a bureaucratic overhead, there's a cost to set it up, but it improves the health and safety of the
people in your country. And so it is a little bit of a luxury to have, and we shouldn't just look at regulations as stifling innovation. Actually, they're demonstrably saving lives. And we should be thankful that they're there, so that we know that everything we use is safe, which elsewhere is not necessarily the case.
Shubhanan Upadhyay (18:28)
Yeah, thanks for that overview. Of course it's unsurprising that a country like South Africa would be in that first group. I was going to link that to the International Medical Device Regulators Forum, which is,
at least in my mind, quite underrepresentative of low- and middle-income country contexts. I wanted to try and explore why. I mean, the reasons I could think of are actually very much related to what you just talked about. To be at the table there, do you have the right expertise in the country? That for me seems like one barrier.
Do you see anything else in terms of why? I think maybe it's only Brazil in there that's a middle-income country.
Hugh Harvey (19:14)
Yeah, there's also criteria around the levels of government corruption and things like that as well. But yeah, in general, it's related to the maturity of your own central medical device regulations within the country. And America plays a massive part in the IMDRF and
They actually left for a bit after the Trump administration came in. I believe they're back at the table now; whether they're back in their full capacity or more in an advisory one, I'm not entirely clear. But it certainly has caused shifts, and people do rely on the expertise of the FDA in helping to set the bar internationally. And it's interesting, you see the geopolitics play out within the regulatory environment.
Shubhanan Upadhyay (19:34)
Yeah.
Hugh Harvey (19:56)
Brexit had a massive effect here in the UK on the medical device regulations. We went from being a member of the European Union, where we relied on the European Medicines Agency to kind of set the standards, to, suddenly, overnight, the MHRA becoming the sole regulator in the UK. And they didn't have the capacity
or the staffing, really, to do that overnight. And now we're in this kind of gray regulatory area in the UK, where we still accept international or CE marking from Europe, but we have our own system, the UKCA mark, which very few people have taken up, because we're such a small market. So I'm a big fan of globalization in terms of medical device regulations, with the one caveat that obviously devices should be shown to be appropriate for different populations, because
not all humans are the same. But if you have the evidence to show that, I think streamlining international regulations is by far the best thing we could do for the health of the planet. But this is a problem in many other sectors that people forget. In food, we have different regulations across the world. You can eat chlorinated chicken in America, which you can't eat over here. And so if we were going to harmonize that...
Shubhanan Upadhyay (21:06)
the UK stifling chicken chlorination! So, there's this balance: universal core principles definitely need to be harmonized and globalized, and whether they already are would be a question as well.
And then how much of it needs to be very much local. You've talked about disease patterns in global populations in different contexts. And there's so much more, right, to do with how clinical workflows might work in those settings, or other cultural needs, accessibility, et cetera, all those types of things around it. Do you have a model of thinking around what the
balance is between what needs to be core and universal and how much of it needs to be locally nuanced?
Hugh Harvey (21:47)
So I think there's three things, one of which is becoming quite well globally harmonized, and that's quality management systems. For those who aren't involved in medical device regulation, it sounds incredibly boring, but actually it's the core of how to bring a device to market. And quality management systems exist in every regulated sector. Again, this is not unique to medicine, but there are specific rules around quality management systems for medical devices.
And this ensures that companies who are building medical devices do things in a quality-assured way which is traceable and transparent, and that all their suppliers are vetted and secured and things like that. And quality management systems rely on an ISO standard, from the International Organization for Standardization, called ISO 13485.
And this is now accepted, or at least incorporated, by the FDA, as well as accepted widely across the rest of the world. So pretty much all the IMDRF member states accept ISO 13485 quality management system certification. So quality management is essentially globally harmonized. And in fact there is, and has been for a decade now, a process called the Medical Device Single Audit Program, or MDSAP if you want the uncomfortable
acronym. And the MDSAP pathway essentially means that you can get your QMS certified for the majority of the large member states. So it's accepted in America, Brazil, Japan, Australia, and I believe Canada as well. So that is one thing. The second thing is the level of clinical evidence required. Now, this varies depending on the risk classification, but risk classifications are different around the world.
Shubhanan Upadhyay (23:23)
Mm-hmm.
So this is, yeah, this is what we were talking about earlier.
Hugh Harvey (23:29)
So the IMDRF has its four-point risk classification. The FDA only uses three. Europe kind of has three and a half, because class II is split into IIa and IIb, same here in the UK. And then others have four
risk classifications, like the IMDRF. So I'd like to see a bit more harmonization on that, because that helps people decide what level of clinical evaluation they need to do as well. It'd be beautiful if you could do one study, maybe across different jurisdictions, with people in England, Europe, America, maybe Africa, and then you get one risk classification and one certification. That'd be absolutely brilliant.
The third thing is the enforcement of the regulations. This isn't at all globally harmonized. You know, the punishment for getting things wrong is entirely different in different countries, so you can get away with a lot in some areas and not a lot in others. The European AI Act has the most extreme punishments. I believe you can be fined up to 35 million euros
if you have a breach of the EU AI Act. Whereas in America, you can go to jail if you defy the medical device regulations. And that has happened with some traditional medical device manufacturers.
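To make that classification comparison concrete, here is a minimal sketch of how the tiers roughly line up across the schemes mentioned above. The mapping is an illustrative assumption only, not regulatory guidance: the actual class of any given device depends on its intended use and claims in each jurisdiction.

```python
# Rough cross-walk between risk classification schemes, purely illustrative.
# A real device's class must be determined against each jurisdiction's own
# classification rules; this just shows how the tiers approximately align.

IMDRF_CROSSWALK = {
    # IMDRF tier: (EU MDR / UK class, FDA class)
    "IMDRF A (low risk)":           ("Class I",   "Class I"),
    "IMDRF B (low-moderate risk)":  ("Class IIa", "Class II"),
    "IMDRF C (moderate-high risk)": ("Class IIb", "Class II"),
    "IMDRF D (high risk)":          ("Class III", "Class III"),
}

def equivalents(imdrf_tier: str) -> tuple[str, str]:
    """Return the approximate (EU/UK, FDA) classes for an IMDRF tier."""
    return IMDRF_CROSSWALK[imdrf_tier]

if __name__ == "__main__":
    for tier, (eu, fda) in IMDRF_CROSSWALK.items():
        print(f"{tier:30s} EU/UK: {eu:10s} FDA: {fda}")
```

Note how the FDA's three classes force the two middle tiers into one bucket, which is exactly the kind of mismatch the harmonization point above is about.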
Shubhanan Upadhyay (24:42)
And you've talked about something there that's vendor, founder, manufacturer facing, which would be great for people who are approaching this. So I'm just going to bookmark that, because I want to come back to it. But before we do, while we're staying on the policy and regulator side of things: everyone talks about what vendors and manufacturers could do better. From your view, having supported
people who are building in this space and been that bridge that gets them over the line in terms of regulation, is there anything that regulators or notified bodies could do better, from what you've observed?
Hugh Harvey (25:14)
Yes. Where...
Shubhanan Upadhyay (25:15)
Is this a whole other episode?
Hugh Harvey (25:17)
I could write a book on this. What could regulators do better? I mean, just be more proactive and more transparent. It took about
two years after the invention of LLMs for any regulator to make any public statement about whether or not they would be considered medical devices. We finally had that from the MHRA, but they just hid it on page 27 of a 52-page document about digital mental health technologies. I'd read the document originally, but I kind of skimmed over that one paragraph and then suddenly noticed it again, and I was like, wow, they actually have now said something. So yeah, to my point, regulators I think could be a bit more proactive. The FDA are the best at this. They
actually have an enforcement discretion list, a list of devices where they say: look, these are medical devices, but they're low-risk enough that we're not going to do anything about them. As long as you do the right moral thing, we're not going to chase you. The EU has a borderline manual, which gets updated, I don't know how often, every three years or something. That should be updated more often. And the other thing, I think, is that
Shubhanan Upadhyay (26:03)
Yeah, yeah.
Hugh Harvey (26:18)
often medical companies approach the medical device regulators with novel ideas for some kind of structured dialogue, and no one ever knows what the outcomes of those are. And I think if there could be a mechanism to provide anonymized public information on
Shubhanan Upadhyay (26:31)
Mmm.
Hugh Harvey (26:39)
regulators' thinking on devices, then the whole industry could move a lot more in sync with the regulators, because they'd know what other decisions had been made. But unfortunately, these don't get made public. I'm in a very privileged position where I join these calls with the FDA or with notified bodies and I hear firsthand what their feedback is, but I can't say anything publicly.
What I can do is, if another client comes up with the same idea, I can say, well, I do know XYZ, but I can't say anything publicly. And I think it'd be really, really useful to have that. The second thing I think the regulators need to do, again on the transparency angle, is be much more transparent, or demand transparency, from companies who've made it through the process.
So there is a concept in regulations called the intended use. So what your device does, who it's meant to be used for, in what environment, and what are the contraindications. Think of it like the leaflet that you get in a packet of pills. When you open a packet of paracetamol, you know what it's for, but there's still that leaflet with all the information, should you require it. And I really think that manufacturers should be encouraged and incentivized, in fact, probably mandated to share that level of information on their websites.
The EU AI Act has gone one step towards that by mandating that there is a permanent URL with digital information on the intended use and the performance of a high-risk AI system. But we still don't have that for medical devices. And in fact, you know, here in the UK, we don't know what the intended use is for any registered medical device on the market, because it's not publicly available information.
The FDA, again, are very good at this. You get an actual summary letter for the device, written to the company, saying what the intended use is and what the performance was that they shared with the FDA. And I think that level of information is really, really useful. Again, so the market can look at that information and figure out what other people have done, what they're claiming, and where there might be a gap in the market where they can then solve a problem.
So I think regulatory transparency is something that the regulators could definitely do better on. Absolutely.
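As a concrete illustration of the kind of intended-use information being described, here is a minimal sketch of what a machine-readable record at a permanent URL might contain. Every field name and value below is a hypothetical assumption for illustration, not any regulator's actual schema or a real device.

```python
from dataclasses import dataclass

# A hypothetical machine-readable intended-use statement, sketching the
# elements described above: what the device does, who uses it, where,
# the contraindications, and the performance shared with the regulator.

@dataclass
class IntendedUse:
    device_name: str
    purpose: str                  # what the device does
    intended_users: list[str]     # who it is meant to be used by
    environment: str              # the setting it is meant to be used in
    contraindications: list[str]  # when it must not be used
    performance_summary: str      # headline results shared with the regulator

example = IntendedUse(
    device_name="ExampleTriage AI",  # hypothetical device
    purpose="Flag suspected wrist fractures on X-rays for clinician review",
    intended_users=["radiologists", "emergency physicians"],
    environment="Hospital emergency departments with PACS integration",
    contraindications=["paediatric patients", "non-wrist anatomy"],
    performance_summary="Sensitivity 0.95, specificity 0.89 in pivotal study",
)
print(example.purpose)
```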
Shubhanan Upadhyay (28:46)
Just on that, before you move to the next one: is this also linked to, I know there have been organizations like CHAI in the US who've developed, I mean, they were developed a few years ago, but the model facts labels, which are maybe one level deeper, with much more information that goes into more specific details about performance, where this should be used, what the training data is, what it hasn't been used on, et cetera. Is that a good example of a form of this?
Hugh Harvey (29:11)
Yeah, so you could think of it like that: a model facts card has no regulatory standing, but it's still really useful information that I think people should be putting out there, absolutely. It was originally... No, it's okay. So the third thing I think regulators could do better, well, it's not really the regulators, it's the governments who fund regulators: they should provide more funding to the regulators. If, say, we are super bullish on AI,
Shubhanan Upadhyay (29:21)
Carry on, I interrupted you.
Hugh Harvey (29:40)
then we should be putting in more funding to the regulators so they have capacity to review and approve more of these devices. We're in the unfortunate position in Europe and indeed in England that we don't have enough notified bodies to do the reviews. The backlogs are up to a year to even just get an audit. And so this slows things down.
And that's not a problem with the regulation per se, that's a problem with regulatory capacity. So I'd like to see more funding go towards regulators in general. And that's not to say that the regulations get any harder or easier, it just means that there are more regulators to talk to, so more devices can get to market quicker.
Shubhanan Upadhyay (30:18)
Yeah, definitely, that makes sense. Thanks, Hugh. That's a really good set of recommendations, and an area that's not talked about enough, because everyone talks about what vendors could do better. And on that, let's turn the attention to: okay, I'm a founder, I'm a product manager, I'm a clinician or clinical safety officer, I'm working at a company, and we're looking at building
something that maybe serves multiple geographies, potentially. And you have a tiny bit of funding, right? Everything is pulling at your attention. We've got a very limited budget, everything is important, we have to use it for this, we have to make sure we do this bit early. And I've heard that regulation and getting evidence is this massive boulder I have to push up a hill.
How do you think about this without thinking of it as this massive thing I have to invest in that's going to be really expensive? Because I think that's one big thing lots of companies or startups think: oh my God, how am I going to do this? It's a massive barrier to entry.
Hugh Harvey (31:28)
Yeah, I think of it like learning to drive. You wouldn't drive a car on the road without a driving license, right? So you shouldn't be using your medical software in hospital without a regulatory certification. So if we use that analogy, how do you learn to drive? You pay for lessons and you practice.
and you go away, you read the highway code, you practice the sample exam questions, and you sit two stages of exams. That takes time and it costs money. But what happens at the end? You are safe and you feel safe. You feel like you know how to drive and providing that you stay safe, you will maintain your license for your whole life, right?
So I think it surprises me when people come into the medical sector with a piece of software or AI and then A, don't realize there's any regulations and B, when they do know there's regulations, they try and figure out ways to kind of avoid it. And I always say to people, it's like, you just have to learn to drive the car. You have to learn to drive. And here's the highway code. So here's the quality management system.
here's a clinical trial design that you have to do, and here's the level of documentation you need to provide. And what I do, what my team does is teach people how to do that. I don't know any other way to make it cheaper or faster other than to educate people.
Because the more educated you are about this stuff, the easier it becomes. And I remember the first time I did it, I thought, just like most people: it's a mountain, it's insane, there's so much. I've done it 150 times now, and it's not that hard if you actually just embrace the process and realise that the end goal is you'll have your driving licence, to use the analogy, to go and drive on the highways of healthcare. And...
get a good driving instructor, because that will obviously help you get there quicker. You don't just have to use me, by the way, there are plenty of people out there. So I think, yeah, just be aware that yes, it does cost money. Get a good idea of the upfront costs. There are fees involved, and you should hire someone,
Shubhanan Upadhyay (33:31)
I was gonna say, good little plug there.
Hugh Harvey (33:48)
ideally within your company, who's responsible for quality management. You should get quotes from notified bodies and be prepared to pay 50 to 100,000 pounds for audits. You should get good consultants in, and you should get some software to help manage your documents and your cybersecurity and things like that.
You may well have to pay for third-party penetration testing if your device interfaces with the electronic health record or other sensitive systems. So you need to budget all of these things upfront, and then be very honest, when you're going out for investment or grant funding, about how much all of this is going to cost. I see too often that founders raise somewhere between one and five million, and their regulatory budget is like 10,000 pounds, and you're just like, well, you're not going to get there.
It's like saying you're going to learn to drive but not wanting to pay for your lessons. It's not going to happen. So education is absolutely paramount. And you've got to remember that you're entering into a very highly regulated sector, and it is expensive and it does take time.
But you know, I've seen people do it on a decent budget within about 12 to 18 months. It is entirely possible, but it's about embracing the complexity of the challenge and getting the right people around you to help do it.
Shubhanan Upadhyay (35:00)
Yeah, I like the driving analogy. There's something to me about mindset as well. You can do driving lessons to pass the test, or you can do driving lessons so you can be a good driver for the rest of your life, right? I don't work for Ada anymore, but the guys I worked with embraced it in that way: actually, let's lean into this as a way of
us creating a product that people want, that is of high quality, and that proves it does what it says. That's good, right? So hopefully, with the right ecosystem pressures in place, that helps you actually be competitive and survive in the market.
Hugh Harvey (35:39)
Well, talking about survival in the market: you literally aren't allowed to sell your medical device until you have that certification. So, as I say a lot to founders: what's the cost of not being compliant? Well, it's everything, it's your entire business model. So surely you're going to invest something towards it. So yeah, the budget should be roughly five to ten percent of what your overall
budget is, and you should be factoring in the timelines as early as possible. It's a day-one problem. The minute you know you're going to be a medical device, get that support, get that advice, do that budget and do your timelines. Because
time and time again you see people come up against it, and the later you leave it, the larger the problem is. So there's a concept of regulatory debt, by analogy with technical debt: you know, if you hack something together and don't really pay attention to your code base, you build up technical debt and you have to refactor your code.
Shubhanan Upadhyay (36:22)
Mm-hmm.
Hugh Harvey (36:31)
It's the same with regulations. If you ignore them till the last minute, they become more and more insurmountable, and the level of regulatory debt sometimes is impassable.
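A back-of-the-envelope check of the rule of thumb above, the five to ten percent figure, against the mismatch described earlier; the numbers are illustrative assumptions, not quotes from any real raise.

```python
# Sanity-check a regulatory budget against the 5-10% rule of thumb above.

def regulatory_budget_range(raise_gbp: float) -> tuple[float, float]:
    """Return the (low, high) regulatory budget implied by the 5-10% rule."""
    return 0.05 * raise_gbp, 0.10 * raise_gbp

for raised in (1_000_000, 5_000_000):
    low, high = regulatory_budget_range(raised)
    print(f"Raise £{raised:,}: regulatory budget £{low:,.0f} to £{high:,.0f}")

# A £10,000 regulatory line against even a £1M raise is a fifth of the
# lower bound (£50,000), which is exactly the mismatch described above.
```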
Shubhanan Upadhyay (36:41)
So it's a foundational thing, right at the beginning; it's part of the foundations of your company. I think you touched on something earlier which kind of answered a question I was going to ask, which was: if you're thinking of going into multiple markets, how do you think about which ones to go for? And there are already some key components that are relatively universal, right? You mentioned the QMS and ISO 13485. And I guess the principles of
Hugh Harvey (37:08)
So.
Shubhanan Upadhyay (37:11)
safety and surveillance and getting evidence, et cetera. Any other more universal things to think about if you're looking at serving multiple markets?
Hugh Harvey (37:21)
So risk management is a core concept, because at the start we talked about benefits and risks. The clinical evaluation is about measuring the benefits and also the clinical risks, but risk management is about documenting those risks, acknowledging they exist, and then putting in place things that actually mitigate those risks. So that's very universalized across
Shubhanan Upadhyay (37:26)
Yep.
Mm-hmm.
Hugh Harvey (37:41)
regulations as well. Again, not just in medical devices, but in all sectors. If you look at the risk management documentation for automobile manufacturers, it is 10 times more complex than it is for medical devices.
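To make the risk management concept concrete, here is a minimal sketch of a single risk-register entry in the spirit of ISO 14971, the medical device risk management standard: document the hazard, estimate the risk, record the mitigation, then re-estimate the residual risk. The field names and the 1-to-5 scales are illustrative assumptions, not the standard's own text.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    hazard: str                 # what could go wrong
    harm: str                   # the clinical consequence
    severity: int               # 1 (negligible) to 5 (catastrophic)
    probability: int            # 1 (rare) to 5 (frequent), before mitigation
    mitigation: str             # the control put in place
    residual_probability: int   # probability after mitigation

    def risk_score(self, after_mitigation: bool = False) -> int:
        """Severity x probability, before or after the mitigation."""
        p = self.residual_probability if after_mitigation else self.probability
        return self.severity * p

entry = RiskEntry(
    hazard="Model misses a fracture on a wrist X-ray",  # hypothetical example
    harm="Delayed treatment of the injury",
    severity=4,
    probability=3,
    mitigation="Clinician reviews every case the model flags as negative",
    residual_probability=1,
)
print(entry.risk_score(), "->", entry.risk_score(after_mitigation=True))  # 12 -> 4
```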
Shubhanan Upadhyay (37:52)
And I mean, in terms
of risk as well, there might be specific work you need to do for a specific context that has unique risks within it, right? But yeah, absolutely, the principles of how you approach risk apply universally. So yeah, definitely.
Hugh Harvey (38:07)
Absolutely.
The other thing to say is that, you know, not only do you become a good, safe driver, you're confident in yourself, but actually it becomes a defensive moat as well. In a sea of millions of people trying to make AI systems for healthcare, the ones that are regulated are the only ones that can sell. And it's a huge moat. And what I'm...
Shubhanan Upadhyay (38:15)
Mm-hmm.
But on the
moat: if you see what's going on with the FDA, people might say, well, it's not a moat anymore with the FDA, we could just try and skirt it like everyone else. What's your take on that?
Hugh Harvey (38:37)
Yeah, so the FDA have one specific regulatory pathway called the 510(k) pathway. And this is the one that people always cite, saying, well, America's regulations are much easier, much quicker. But that's only true if you can show that you are substantially equivalent to a device that has already received FDA authorization.
So you are limited to saying my device does the same thing as someone else's device. So there's a question there about really how innovative is that device? And actually, the other problem with it is that you are telling a federal government agency, I'm the same as someone else. And that poses massive problems for intellectual property down the line.
And there have been court cases where people have claimed they've got the intellectual property on something and lost, because the opposing lawyers have turned up with their FDA submission, going: well, you told the federal agency that you're the same as someone else, and now you're claiming you've got the intellectual property on it. Those two things just do not chime together. And the other thing is that 510(k) devices are generally relatively low risk. They're what I would say are Class IIa
Shubhanan Upadhyay (39:27)
on that, citing that.
Hugh Harvey (39:48)
equivalent over here in Europe.
The hype around AI is really focused on things being automated, so machines doing things instead of humans, not necessarily alongside humans. And to do that, you need to go for higher levels of risk and higher levels of regulatory scrutiny. So where people should be going, if they really want to move the needle in healthcare, is to create systems that are higher risk, that
own a piece of a care pathway. And a really good example is Skin Analytics' Class III device. Exactly. And everyone looks at that and goes, wow, that must have been so hard. It's not actually that much harder than a Class II, and I'm really surprised it's taken this long, I think about eight years since AI medical devices were first authorized, to get the first Class III device. It's Class III because it is doing a task without a human in the loop,
Shubhanan Upadhyay (40:18)
They got Class III, right? Superb.
Hugh Harvey (40:41)
and they have proven that it's safe and effective. And you can do that for any device, if that's what you want to claim. But of course, people want the quickest, easiest route, which is the 510(k) route. But that means you're limited in the claims you can make, and you're going to be disappointed that you're not moving the needle at the end of it, because all you're doing is decision support, or something with a human in the loop that is essentially the same as something else already on the market. So you haven't really done anything incredibly exciting.
I'm not poo-pooing everything that has gone through the 510(k) pathway. There are some interesting devices, for sure. But then there's this quote going around that there are a thousand-plus FDA-cleared devices. Actually, if you break it down, about 20% of those are people recertifying their original 510(k) with a small update, and about 80 to 90% are devices that are essentially the same as other devices already on the market. If you actually look at the number of De Novos,
which are the stricter, more equivalent process to CE marking in America, with AI devices, it's very, very few. And to my knowledge, I don't believe there are any PMAs, premarket approvals, for Class III AI devices in America.
So, you know, the hype is all around these quite quick and relatively easy 510(k) clearances. But I would love to see more support for people going for these much higher-risk things, because with great power comes great responsibility, as we said. So if you want to be able to do that, you have to take on that challenge.
Shubhanan Upadhyay (42:11)
So regulation is currently a stick, but it can be a carrot, i.e. your competitive edge and your protection. Another angle to the question, then: what if I'm a founder looking at one of those countries within Africa that the WHO has classified as level two or level three? There's not really much regulatory
enforcement capability, and there aren't really many guidelines there. If I'm a founder looking to get into that space, how should I lean in? You know, I've taken on what Hugh said, I'm going to lean into this now, but there's nothing really giving me any guidance. How do I approach those contexts?
Hugh Harvey (42:51)
I think you'd have to take the morally correct path, which is to take your device and the claims you want to make, put it into the International Medical Device Regulators Forum risk classification matrix, figure out what your risk class is, and get a certification
in a country or state that has a robust authorization pathway. Then approach those African governments and say: look, I've done everything I need to do in those countries. I know you don't have a robust medical device process here, but I'd love to be able to work with you on this. And offer them that insight, that knowledge and the learnings you've taken from bringing that to market, and teach them how to do things safely.
The main issue with AI devices in those, I guess, tier two and tier three African countries is that it's very hard to plug AI into an infrastructure that pretty much doesn't exist in the format we have over here, like electronic health records and digitalization of services and pathways. So it's not as attractive a market. Plus, they don't have the resources and wealth to necessarily pay huge sums of money for these things.
There have been great successes, you know, that I know of. Across India, there's a lot of AI now being put directly into X-ray scanners in rural communities and things like that, where, exactly, exactly. And so they've got all their relevant clearances, and then they can go and, you know, work with the Indian government to get some kind of regulatory authorization reliance based on their
Shubhanan Upadhyay (44:05)
Qure AI has done a lot of work there, right?
Hugh Harvey (44:20)
robust current status, and then work with the governments to put it in ethically and responsibly, and do that morally correct thing. I think the days are over of pharma companies using the African population as guinea pigs, per se, and I really would not want to work with tech companies who approached it that way. That would feel wrong to me.
Shubhanan Upadhyay (44:38)
Yeah, yeah. And just the final bit on this topic: if you are a founder, read Hugh's article on the five stages of regulatory grief. Do you want to quickly tell us about that?
Hugh Harvey (44:53)
Yes, there are five stages of regulatory grief. So you'll remember from your med school days, from psychiatry, that when people experience grief, usually from a bereavement, they go through five quite well-defined stages. And they're so applicable to founders in the medical device space that I wrote a blog on it. Essentially, the first stage is denial, which
Shubhanan Upadhyay (44:55)
Of course, yeah.
Hugh Harvey (45:19)
can be active or passive. So passive denial is you just don't know there are regulations. And we touched on this earlier. It kind of shocks me that people don't realize there's any regulation whatsoever in healthcare. Or there's active denial where you know there's regulation, but you just kind of go, well, I'm going to ignore it until someone tells me off. And you can't really help people with that. Yeah.
Shubhanan Upadhyay (45:37)
Or Hugh comes in and writes on your LinkedIn.
Hugh Harvey (45:42)
Yeah,
exactly. I mean, I have had people respond to my LinkedIn comments saying you're wrong. I'm like, well, I'm pretty sure I'm not wrong. But anyway, there's active denial and you can't really help people in denial. If they're in denial, they don't want to be helped. Then there's anger as they react both to the fact that the regulations are complex and deep and tough, but also to the fact that their worldview, their denial has been challenged and proven to be wrong. So they get angry.
Then there's a period of bargaining. This is the period where they go: okay, fine, what's the cheapest way? What's the quickest way? Can I just outsource this? I don't want to have to think about it. And that's when you see people kind of looking around. They do a bit of diligence and they go, oh yeah, the 510(k) process in America is the quickest and cheapest, let's do that. They bargain their way out of it.
Shubhanan Upadhyay (46:18)
Hehehehe
Hugh Harvey (46:35)
Then there's a bit of depression, when they realise how long it takes, the level of evidence they need to generate, the level of documentation they need to produce. And then the last bit is the most beautiful thing: acceptance. And they go: do you know what? I feel safe. I understand quality management. I can now
sell to customers, and when they ask me for my risk management documentation, I've got excellent documentation to show them, and they love it. And you know, I can now sell this. I know what my intended use is, I've got instructions for use, I've got clear contraindications, and people trust my product.
And it's that acceptance phase that I love getting people to. And I save all the emails I get from founders and I show them the first email that they sent me and I show them that last acceptance email. It's been a journey, but you know what? I feel...
safe, I feel proud, I've learnt a lot and I'm going to make people's lives better now because I've brought a product to market that demonstrably works and is demonstrably safe and effective. And the acceptance phase is a wonderful, wonderful thing but unfortunately there is that grief process to go through in order to get there.
Shubhanan Upadhyay (47:40)
Yeah, very good. I really like that. I want to go to a section I'll call Hugh's Spicy Takes, let's do that if that's all right. Thanks for that overview. And yeah, definitely read Hardian Health's blogs; that's a particularly good one, but there's lots of good stuff in there, really practical bits of advice wherever you are on your regulatory grief journey.
What's your most controversial opinion right now, Hugh, that you get the most pushback on? Maybe you've mentioned it already, but what's your most prickly opinion, the one that gets the most pushback?
Hugh Harvey (48:13)
That's a really good question. I get pushback on a lot, to be honest. I think my spicy opinion is that large language models are massively, massively overrated for what I call reasoning, or what is known in the sector as reasoning. I think they are fantastic tools for
semantic search and for association of concepts. I'm not convinced they are reasoning engines and I think that applying them in the medical device realm is going to require huge amounts of evidence. I am not saying it's impossible.
I think with appropriate guardrails, appropriate risk mitigation, appropriate post-market surveillance and very well-conducted studies, there is potential for large language models to assist in medical tasks such as triage, and potentially preliminary diagnosis and treatment planning. But there are huge caveats that go along with that.
The problem with large language models is that they are generalist by nature. They can do anything. You can ask ChatGPT a medical question or a space question or a cooking question, it can do anything, right? Limiting large language models to specific functions, and then measuring the benefits and risks of those specific functions, is an incredibly complex and difficult task. I am working with a few people who are tackling this challenge, and they're fully accepting of the regulations and want to do it properly.
But it is tricky and difficult, and to be fair, no one really knows how to do it, not least the regulators, who I don't believe have approved any in that regard. So that's my spicy take, I suppose: large language models in medicine are exciting, but probably massively overhyped. I think when the first ones get through, it will be a watershed moment. But I think we'll find that when you actually look at what they're limited to doing, you'll be like,
well, I thought they could do more than that. And I think the first few that get regulated will appear limited. But then over time you'll see the scope increase, as people get to grips with how to demonstrate safety and effectiveness.
Shubhanan Upadhyay (50:22)
That's a warm level of spice. Yeah, I like that, that resonates. You've obviously been outspoken for a few years on lots of parts of this, and people read a lot of what you write and what you say. Is there anything you've changed your mind on that you might have fervently believed a few years ago?
Hugh Harvey (50:41)
That's a good question, no one's ever asked me that before. But something I've changed my mind on...
Shubhanan Upadhyay (50:45)
Something that you were like: never, never this. But actually, maybe now.
Hugh Harvey (50:47)
I
am a scientist, I will change my mind if the facts prove me wrong. What have I changed my mind on? I used to think that the EU AI Act was probably overreaching, and I think I've changed my mind on that. Having read it, and actually started to incorporate it into some of our clients' processes, it's actually not that bad.
Shubhanan Upadhyay (51:11)
Was there anything in it that you
thought, yeah, anything specific?
Hugh Harvey (51:14)
There was one bit, again, I can't quote it verbatim, I'm not that inclined, but to paraphrase,
something on data transparency in the EU AI Act. It said something to the effect that the AI manufacturer has to demonstrate that the data used in training and validation is representative and complete. And I thought: how the hell are people going to demonstrate that the data they've used is complete, especially when you think about generative AI, where you don't even know what data is being used to train on? And that, I thought, was kind of overreaching.
Shubhanan Upadhyay (51:43)
Mm-hmm.
Hugh Harvey (51:47)
But I think actually what you're trying to show is that it's complete for the task at hand, to the best of your ability and knowledge. You don't actually have to prove that it's 100% complete and watertight. Other bits within the EU AI Act were rolled back during some of the policy discussions and the final ratification of the legislation, which was good. The other thing: the European medical device regulations. They were
first proposed in 2017 and then came into force, essentially, like seven years later. People had time to prepare. And one thing I got wrong was thinking that people would be ready for them. No one was ready, even the big, big manufacturers. And I'm like,
Shubhanan Upadhyay (52:30)
It
still caused a lot of issues when it came around.
Hugh Harvey (52:35)
Yeah, and at
this point there are big backlogs, because everyone who had a certification pre-MDR had to go and get recertified, as well as everyone new coming to market. It's like double the volume that the regulators expected. What they thought was that people would go: there are these new regulations coming into force in seven years' time, let's get in early. And really, no one did. And so I was wrong about that. So yeah, wrong on a couple of things.
Shubhanan Upadhyay (52:44)
Mmm. Mmm.
Hugh Harvey (52:59)
And I will always admit when I'm wrong if science can prove it.
Shubhanan Upadhyay (52:59)
Very good.
Hugh, I'm so grateful for your time. I'm going to do a quick recap of what I've taken away. You've given us a really, really great overview of the regulatory picture, and some of the nuances of the FDA, where you think, oh, I can just get in through the 510. Is it a 510(k)? Right. Yeah. But actually, there's a lot more to this.
Hugh Harvey (53:22)
Okay.
Shubhanan Upadhyay (53:28)
Thinking about regulation as a way to create good products, and it actually being a defensive moat. I really liked what you said as well about things that regulators can do better, particularly around being more transparent. And a good learning was that enforcement varies around the world as well.
I think it was really great that you gave us an overview of what you've observed in Africa and India, et cetera, and how regulation is thought about and approached there. And some really, really great ways to think about regulation if you're an early-stage vendor: think about it like a driving test, and invest in being a good driver.
I think those are the big takeaways.
Hugh, it's been really, really great talking to you. Thank you for your valuable insights. You've clearly got such great breadth and depth of knowledge, so it's really, really valuable. And of course, you give a lot of insight to people who are building in the so-called West. But I think there's a lot of people who are building with underserved communities and
kind of within LMICs will be able to take a lot from what you've talked about here. And a lot of the principles, even though there might not be strict enforcement, can be applied. So thank you, thank you very much. How can people reach you, Hugh?
Hugh Harvey (54:43)
The website, hardianhealth.com. Everything you need to know about AI medical device regulation. And there's a contact form; you can just fill that in and I'll get the email.
Shubhanan Upadhyay (54:52)
Awesome. Thank you. And yeah, look forward to kind of hearing more about what you're doing. Thank you so much, Hugh.
Hugh Harvey (54:59)
Thanks, Shubs.
