Artificial Intelligence In Healthcare: What Medtech Companies Need to Know

20-12-2023

Artificial Intelligence (AI) is impacting every industry right now, and healthcare is no exception. Although it’s currently under-regulated, new regulations – like the EU’s AI Act – are on the horizon. That means medtech companies must strike the right balance between innovation and regulation. 
Over the next couple of years, medtech companies that use AI and operate in the EU market will need to adopt an entirely new regulatory framework. For now, their main point of reference is the Medical Device Regulation (MDR).

How is AI used in healthcare?

In the healthcare sector, AI can be used to perform and enhance all kinds of routine tasks. That’s because, at a software level, it can mimic a range of human cognitive processes such as learning and decision-making.

This has a number of key advantages for medical professionals and their patients. The most obvious benefit is that it can take repetitive, routine tasks out of the hands of overbooked clinicians, providing a faster and more streamlined healthcare service.

Whilst there are lingering questions and concerns around the safe and ethical use of AI in healthcare – and an obvious need for clear guidance and regulation around its implementation – there is also a growing willingness within the industry to adopt AI.

Here are two key pillars of AI implementation in healthcare (illustrated in the code sketch after this list):

  1. Decision-making AI: An AI algorithm is trained on a specific dataset, using a variety of techniques. Its purpose is to support human decision-making, but it’s static – meaning it doesn’t adapt and learn from new data.
  2. Autonomous decision-making AI: This type of AI absorbs and learns from new information – perhaps new patient data – and uses it to improve over time. For example, it might become better and better at predicting the status of a disease.
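
To make the distinction concrete, here’s a minimal, illustrative Python sketch using scikit-learn and synthetic data (not any real product): the first model is trained once and then frozen, while the second keeps updating itself as new labelled data arrives, which is exactly what complicates change management later on.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for historical patient data (features + binary outcome).
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# 1. Decision-making AI: trained once on a fixed dataset, then frozen.
#    In use it only produces predictions; its behaviour never changes.
static_model = SGDClassifier(loss="log_loss", random_state=0)
static_model.fit(X_train, y_train)
new_patient = rng.normal(size=(1, 5))
print("static risk score:", static_model.predict_proba(new_patient)[0, 1])

# 2. Autonomous decision-making AI: keeps learning from new labelled data,
#    so its behaviour (and therefore its risk profile) changes over time.
adaptive_model = SGDClassifier(loss="log_loss", random_state=0)
adaptive_model.fit(X_train, y_train)
for _ in range(3):  # stand-in for a stream of new patient data
    X_batch = rng.normal(size=(20, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    adaptive_model.partial_fit(X_batch, y_batch)  # weights updated in place
print("adaptive risk score:", adaptive_model.predict_proba(new_patient)[0, 1])
```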

There’s a breadth and depth of applications for AI in healthcare. It can be used in everything from image analysis across multiple scanning methods to robotic surgery. The outcome is faster and more personalized healthcare, and increased efficiency. 

At the moment, most regulatory information around AI in healthcare is a work in progress, with standards, guidance documents and regulations still being drafted or updated. As such, it’s important to stay up to date on changes within the field.

Right now, what medical device companies need to know is that any software that provides decision support to medical professionals needs to be compliant with the MDR.

Is AI covered by the MDR?

The short answer is ‘yes’. The MDR defines ‘software’ as a set of instructions that processes input data and creates output data. Software in healthcare is categorized in different ways.

Whether software qualifies as Software as a Medical Device (SaMD) largely depends on the intended use of the product. Software can be considered a medical device in its own right; it might drive or influence a medical device – such as software that operates a robotic arm; or it could even be defined as an accessory that’s used in conjunction with a medical device.

Software that performs an action on patient data doesn’t necessarily come under the MDR. Take an Electronic Patient Record System (EPRS) as an example: it stores or transfers patient data. Even if AI helps it store or transfer that data more efficiently, the intended use is not medical.

Broadly speaking, any software, including AI, that has a direct impact on an individual patient’s care is likely to be covered by the MDR and needs to be fully validated. That means determining the risk class – usually Class IIa or higher – depending on the potential risk to patients.

Have any AI products been approved for the EU market?

While it is challenging to get an AI-enabled medical device on the market, it is far from impossible. For example, an AI-enabled product from the US has recently been approved for the EU market. The product enables patients to test their blood glucose levels, predicts future levels, and suggests actions to bring them down.

Another product that provides insights into cell anomalies, such as cancer, has also been approved in both the US and the EU. Classed as an in vitro diagnostic device, it’s an AI-driven tool for diagnosing patients.

One key hurdle for AI-driven medtech is change management. In theory, any change needs to be documented and reported. If AI constantly learns and improves – say on a minute-by-minute basis – change management is unfeasible.

This restriction stifles innovation and the speed of development. But there is a workaround. Improvements to an algorithm can be tested, formally documented, and then – with the correct approval – released in an update. 
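
As a rough illustration of that workaround, here’s a short, hypothetical Python sketch in which an algorithm update can only go live once validation evidence and formal approval have been recorded. The names and fields (ModelRelease, release_update, and so on) are illustrative assumptions, not part of any real tool or standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRelease:
    version: str
    change_description: str          # what was improved and why
    validation_report: str           # reference to test evidence in the technical file
    approved_by: str | None = None   # e.g. quality / regulatory sign-off
    release_date: date | None = None

def release_update(release: ModelRelease) -> ModelRelease:
    """Only allow a documented, formally approved model version to be released."""
    if not release.validation_report:
        raise ValueError("Update blocked: no validation evidence documented.")
    if release.approved_by is None:
        raise ValueError("Update blocked: missing formal approval.")
    release.release_date = date.today()
    return release

candidate = ModelRelease(
    version="2.1.0",
    change_description="Algorithm retrained on additional labelled data",
    validation_report="VAL-0001",  # placeholder document reference
    approved_by="Quality Manager",
)
release_update(candidate)  # passes; an undocumented update would raise an error
```

The point is not the code itself but the pattern: every improvement is tested, documented and approved before it reaches patients.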

The impact of this is slower but safer, iterative development of AI within medical devices. It also means that fully autonomous learning AI is not yet possible on the medical device market – at least for now. A likely future approach to balancing innovation with patient safety is to place autonomous AI within tightly controlled, predefined parameters, so that a certain level of known risk is allowed and mitigated.

What’s the future of AI in medical devices? 

Right now, the US is leading the way in regulating autonomous AI in medical devices. The FDA is working on a predetermined change control plan that enables companies to document, in advance, all the possible modifications an AI model could undergo. This should lead to a robust system for validating and implementing modifications in a way that ensures continued patient safety.

Alongside that, it will also be necessary to document a thorough risk assessment of the impact of any changes made by the AI. With the risks and benefits carefully documented in the technical file, it should be possible to release a device with autonomous learning AI. For medical device companies, it would be good practice to start the groundwork now, ahead of the AI Act, which is due to arrive in 2024.
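
As a loose illustration of what predefined, documented modifications might look like in practice, the hypothetical sketch below only accepts an automatically retrained model if it stays inside a pre-agreed performance envelope on a locked validation set. The thresholds and names are assumptions for illustration, not values defined by the FDA or any standard.

```python
from sklearn.metrics import roc_auc_score

# Bounds that would be documented up front, e.g. in a change control plan.
CHANGE_ENVELOPE = {"min_auc": 0.85, "max_auc_drop": 0.02}

def accept_retrained_model(old_model, new_model, X_val, y_val) -> bool:
    """Accept an automatic update only if it stays within the documented envelope."""
    old_auc = roc_auc_score(y_val, old_model.predict_proba(X_val)[:, 1])
    new_auc = roc_auc_score(y_val, new_model.predict_proba(X_val)[:, 1])
    return (
        new_auc >= CHANGE_ENVELOPE["min_auc"]
        and (old_auc - new_auc) <= CHANGE_ENVELOPE["max_auc_drop"]
    )  # if False, the update is rejected and escalated for review
```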

The EU’s incoming AI Act will have its own risk-based system for classifying AI systems, with risk levels based on the impact an algorithm could have on users. Companies will also be expected to mitigate unintentional bias in Machine Learning (ML).

For example, if an AI model is trained on a dataset of male patients, it may learn to unintentionally discriminate against female patients. That’s why it’s important for any device that uses AI to be validated for safe commercial use through a comprehensive clinical evaluation.
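
To show the kind of check a clinical evaluation or post-market plan might include, here’s a small, illustrative Python sketch on synthetic data: the model is deliberately trained on male patients only, and performance is then reported per subgroup rather than as one overall number, which is where this sort of bias shows up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Synthetic patient data: two clinical features plus a sex attribute (0 = male, 1 = female).
X = rng.normal(size=(1000, 2))
sex = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * sex + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Train only on male patients, mimicking a skewed development dataset.
model = LogisticRegression().fit(X[sex == 0], y[sex == 0])

# Evaluate per subgroup: a large gap here is a red flag for unintended bias.
for label, mask in [("male", sex == 0), ("female", sex == 1)]:
    accuracy = accuracy_score(y[mask], model.predict(X[mask]))
    print(f"accuracy ({label} patients): {accuracy:.2f}")
```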

Advice on incorporating AI in medical devices

The precedent has already been set for using AI in medical devices. What’s clear is that getting AI approved for the EU market requires extensive documentation and validation.

A robust strategy for Post Market Surveillance (PMS) also needs to be in place to ensure autonomous AI does not develop harmful bias. With that in mind, here’s our advice for innovative medtech companies looking to incorporate AI into their devices:

  • Prepare a detailed technical file
  • Verify and validate your model 
  • Ensure you have robust risk management in place
  • Perform a thorough clinical investigation or evaluation
  • Create an upfront change management strategy for your AI product
  • Describe a comprehensive Post Market Surveillance plan
  • Ensure compliance with applicable regulations and standards, such as GDPR
  • Ask your Notified Body how they plan to take AI into account

 

At Peercode Regulatory Consultancy, we help medical device companies – including those seeking to bring AI into their products – to successfully navigate the MDR. To discover how, talk to a specialist.