Artificial intelligence (AI) is constantly bringing improvements to the field of medical devices, with AI technology being embedded in software used as a medical device or constituting a medical device by itself. According to the IMDRF, “Software as a Medical Device” (SaMD) is defined as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device”. Conversely, the concept of artificial intelligence in medical devices is not yet well defined. Moreover, the lack of a clear regulatory framework complicates the commercialization of devices incorporating this technology, posing different challenges for designers, manufacturers, users, and regulators.

In this article, we cover the definition of artificial intelligence and its applicability in medical devices, together with the current and future challenges posed by the use of AI in medical devices.

What do we understand as Artificial intelligence (AI)?

According to the World Health Organization (WHO), “Artificial intelligence” usually refers to the performance by computer programs of tasks that are commonly associated with intelligent beings. The basis of AI is algorithms, which are translated into computer code carrying instructions for rapid analysis and transformation of data into conclusions, information, or other outputs.
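
To make this idea concrete, the sketch below shows, in deliberately toy form, an algorithm turning data into a conclusion: a small classifier trained on invented measurements. The feature names, data, and labels are all hypothetical and purely illustrative.

```python
# Toy illustration only: an algorithm that transforms input data into an output.
# The features, data, and labels below are invented for this example.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [resting_heart_rate, systolic_bp] per patient
X_train = [[62, 110], [70, 125], [95, 150], [88, 145], [60, 105], [92, 160]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = low risk, 1 = elevated risk (made-up labels)

model = LogisticRegression().fit(X_train, y_train)

# The "conclusion" the algorithm draws from new data
new_patient = [[85, 140]]
print(model.predict(new_patient))        # predicted class
print(model.predict_proba(new_patient))  # associated probabilities
```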

AI has already “invaded” many fields, including MedTech, where it offers many possibilities: improving processes, enabling novel approaches, and transforming data in ways that were not possible before.

What is the applicability of AI in medical devices?

Medical device (MD) manufacturers have also incorporated AI to better assist healthcare. Nowadays, it is common to find AI algorithms within Software as a Medical Device, with varying degrees of sophistication and applicability.

The applications of AI in medical devices include, among others:

  • Diagnosis and prediction-based diagnosis (e.g., radiological, dermatological, and pathological diagnosis)
  • Clinical care (e.g., identifying patients at risk and vulnerable groups)
  • Research and drug development (e.g., genomics, models of genetic targets, pharmacokinetics, safety and efficacy)
  • Public health surveillance (e.g., identifying disease outbreaks and responses to them, health promotion through micro-targeting, and disease prevention by making inferences between the physical environment and healthy behavior)

What are the current and future challenges to be faced by the use of AI in medical devices?

AI is a fast-evolving technology that already poses known challenges to MD manufacturers, users, and other stakeholders, while new challenges remain to be identified. For now, the main battlegrounds include:

  • Categorization

AI is a dynamic and innovative tool categorized according to its function or application (WHO, 2021; Hayes & Curran, 2021). Currently, MDs with AI lack this kind of guidance, leaving a gap to be addressed in future guidance that helps manufacturers identify the requirements their AI models must meet to comply with the existing legislation.

  • Ethics

AI is a powerful tool that can help advance scientific knowledge and improve patient care. Still, it also raises several potential ethical concerns that require addressing before it is “too late to say sorry”. Some of the main considerations include the dehumanization of patient care, the undermining of individuals’ autonomy, and the vulnerability of data privacy.

Data privacy is becoming more critical as technology evolves rapidly. Automated processing of personal data must comply with the General Data Protection Regulation (GDPR). In addition, manufacturers have to provide the patient with ‘meaningful information about the logic involved’ in any automated decision taken by an AI algorithm relating to patient care. This may ultimately require informed consent for the processing of health data, supported by appropriate legal frameworks. Moreover, data processing may change over time, forcing manufacturers to anticipate these dynamic changes in order to provide accurate information to the patient.
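
One way a manufacturer might surface ‘meaningful information about the logic involved’ is to return, alongside each automated output, a plain-language summary of the factors that drove it. The minimal sketch below pairs a linear model’s prediction with its per-feature contributions; the feature names, data, and model are assumptions for illustration, not a prescribed approach.

```python
# Hypothetical sketch: pairing an automated decision with a human-readable
# summary of the logic involved. All feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["age", "bmi", "hba1c"]  # invented feature names

X = np.array([[45, 24.0, 5.2], [60, 31.5, 7.8], [52, 28.0, 6.9], [38, 22.5, 5.0]])
y = np.array([0, 1, 1, 0])  # made-up outcome labels
model = LogisticRegression().fit(X, y)

def explain(sample):
    """Return the prediction plus each feature's contribution to it."""
    contributions = model.coef_[0] * sample  # per-feature linear contribution
    ranked = sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1]))
    return {
        "prediction": int(model.predict([sample])[0]),
        "main_factors": [f"{name} (weight {value:+.2f})" for name, value in ranked],
    }

print(explain(np.array([55, 30.0, 7.1])))
```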

In cases of “autonomous decision-making” in the medical field, the power granted to AI could translate into life-critical decisions, such as diagnosis, treatment, or intervention options and their effectiveness, among others. The principle of autonomy requires that AI does not undermine human autonomy: humans must remain in control of medical decisions and healthcare systems.

Last but not least, AI for healthcare should be designed to be equitable, ensuring inclusiveness in its use and access and avoiding bias and discrimination. To this end, the Council of Europe’s Committee of Ministers issued draft recommendations to its member states on the impact of algorithmic systems on human rights, along with several other recommendations drafted by the Council. At the moment, according to the WHO, the key ethical principles for the application of AI in healthcare are:

  • Avoid harming others (sometimes called “Do not harm” or nonmaleficence).
  • Promote the well-being of others when possible (sometimes called “beneficence”). Risks of harm should be minimized while maximizing benefits, and expected risks should be balanced against expected benefits.
  • Ensure that all persons are treated fairly, which includes the requirement that no person or group be subject to discrimination, neglect, manipulation, domination, or abuse (sometimes called “justice” or “fairness”).
  • Deal with persons in ways that respect their interests in making decisions about their lives and their person, including healthcare decisions, according to an informed understanding of the nature of the choice to be made, its significance, the person’s interests, and the likely consequences of the alternatives (sometimes called “respect for persons” or “autonomy”).

To take control of the situation, manufacturers and healthcare providers must understand the role that AI systems play in patient care. Manufacturers must also ensure that patients have all the information necessary to understand the impact AI-driven decisions may have on their health status. Finally, since the use of AI in healthcare involves managing and processing personal data, data privacy and confidentiality must be ensured through appropriate legal frameworks.

Yet, it seems that not all fronts are covered. No specific ethical principles for the use of AI in healthcare have yet been proposed for worldwide adoption. A clearer definition of the relationship between safeguards and human rights, and of how to apply it to AI and its regulation, is still required.

  • Development

According to the Medical Device Regulation (MDR), “Devices that incorporate electronic programmable systems, including software, or software that are devices in themselves, shall be designed to ensure repeatability, reliability and performance in line with their intended use”. Currently, there are no AI-specific standards to ensure these principles are applied from the design and development (D&D) phases through the verification and validation (V&V) activities.

AI should be understandable to developers, manufacturers, healthcare professionals, users, and regulators. Since it generally relies on large datasets, its V&V can become a complex task. In addition, quality control measures must be applied during the development of AI devices to ensure that the applicable safety, accuracy, and efficacy requirements are met according to the intended use. Three important considerations during D&D and V&V will be:

  1. Transparency and explicability: Given the complexity of the algorithms used by this technology, understanding how a decision was made and tracing back errors or incorrect outputs can be difficult, making V&V more challenging. Manufacturers must have a clear, transparent specification of the tasks that AI systems can perform and the conditions under which these can be successfully achieved.
  2. Controllability and instability: In some cases, as in machine learning, it is not clear how AI algorithms reach an output from a given input, and this opacity can be exploited for manipulation by other individuals. Establishing safety measures at every phase can help prevent undesired and/or adverse outcomes. Depending on the device, one of the most common approaches will be having a “human-in-the-loop” to supervise and intervene if required (a minimal illustration is sketched after this list). Continuous oversight and retrospective reviews of the device’s performance can be pursued through post-market clinical follow-up (PMCF).
  3. Risk management: AI devices will introduce new challenges for failure mode and hazard analysis within the risk management process. The risk assessment for AI should consider several additional factors that may lead to unforeseeable hazards, such as the degree of autonomy, transparency, or human intervention; these would have to be quantified and mitigated.
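
As referenced in item 2 above, a minimal sketch of the “human-in-the-loop” pattern is shown below: low-confidence model outputs are routed to a clinician, and every decision is logged for later traceability (e.g., during PMCF). The confidence threshold and logging scheme are assumptions for illustration, not values prescribed by any standard.

```python
# Minimal human-in-the-loop sketch: route low-confidence outputs to a human
# reviewer and log every decision for later traceability. The threshold is
# a hypothetical acceptance criterion that would be set during V&V.
import logging

logging.basicConfig(level=logging.INFO)
CONFIDENCE_THRESHOLD = 0.90  # assumed, for illustration only

def triage(case_id: str, probability: float) -> str:
    """Accept the model's output only when it is confident either way."""
    if probability >= CONFIDENCE_THRESHOLD or probability <= 1 - CONFIDENCE_THRESHOLD:
        decision = "automated"
    else:
        decision = "human_review"  # borderline case: a clinician decides
    logging.info("case=%s p=%.3f route=%s", case_id, probability, decision)
    return decision

print(triage("A-001", 0.97))  # confident positive -> automated
print(triage("A-002", 0.55))  # borderline -> human_review
```
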
  • Data and bias

The quality, distribution, and integrity of the input data are key determinants of an AI algorithm’s output and, hence, of the device’s performance. According to the WHO, the development of a successful AI system for use in healthcare relies on high-quality data, both for training the algorithm and for validating the algorithmic model. Careful consideration is required here: data should be representative of the actual environment in which the model is designed to operate, and bias should be minimized where it is considered harmful.
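
One practical way to probe for harmful bias is to compare model performance across patient subgroups before release. The sketch below is a minimal audit of this kind; the subgroup labels, predictions, and the 5-percentage-point tolerance are all invented for illustration.

```python
# Hypothetical bias audit: compare accuracy across patient subgroups.
# Subgroup names, records, and the tolerance below are invented.
from collections import defaultdict

records = [  # (subgroup, true_label, predicted_label)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += (truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)  # e.g., {'group_a': 0.75, 'group_b': 0.5}

# Flag the model if subgroup performance diverges beyond an assumed tolerance
if max(accuracy.values()) - min(accuracy.values()) > 0.05:
    print("WARNING: performance gap between subgroups; investigate before release")
```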

  • Post-market phase

When is a new software release considered a significant change? For instance, machine learning can lead to changes in the intended purpose or medical indication, implying that the manufacturer must perform a new conformity assessment. Nonetheless, there are cases where determining whether a change is significant is more difficult. To avoid this, the Quality Management System should address and foresee changes in design and characteristics with regard to regulatory compliance after the conformity assessment has been performed. Finally, MDCG 2020-3, the guidance on significant changes for medical devices certified under the MDD transitioning to the MDR, and MDCG 2022-6, the guidance on significant changes regarding the transitional provisions, provide insights on the significance of software changes.
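
As a simplified, illustrative reading of this kind of screening logic (a coarse approximation, not a substitute for the MDCG 2020-3 flowcharts), a manufacturer might triage releases as sketched below; the fields and rules are assumptions for illustration.

```python
# Illustrative screen for "significant change"; the fields and rules here are
# a deliberately coarse approximation of the kind of questions MDCG 2020-3
# asks, not the actual guidance logic.
from dataclasses import dataclass

@dataclass
class Release:
    version: str
    intended_purpose_changed: bool
    new_algorithm: bool  # e.g., a retrained or replaced ML model
    bug_fix_only: bool

def is_significant(release: Release) -> bool:
    """Flag releases that would likely trigger a new conformity assessment."""
    if release.bug_fix_only and not (release.intended_purpose_changed or release.new_algorithm):
        return False  # corrective maintenance is generally non-significant
    return release.intended_purpose_changed or release.new_algorithm

print(is_significant(Release("2.1.0", False, False, True)))  # False
print(is_significant(Release("3.0.0", True, True, False)))   # True
```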

  • Governance and regulation

Since there is no scientific consensus on the meaning of “intelligence”, the very definition of AI poses interpretation issues from a legal perspective. Similarly, the growing use of AI and its applications are not completely covered by the current regulations, since the associated risks and opportunities are not yet well understood or are likely to change over time.

Regarding regulations, work is still required on AI-related definitions, on harmonizing medical device regulations and regulatory pathways, on software qualification, and on update processes. There is currently no guidance from the EU Medical Device Coordination Group (MDCG); the Borderline & Classification (B&C) Working Group has been assigned this task, with no specific timeline proposed. In the meantime, ISO/IEC 23053:2022 is available for generic AI systems using machine learning and can serve to identify core requirements and a baseline for AI solutions in healthcare to be deemed trustworthy.

Particularly for medical devices, the IMDRF “risk categorization principles for software medical devices” is available; however, this guidance document only covers classification rules for SaMD. Similarly, MDCG 2021-24, MDCG 2019-11, and IEC 62304 introduce classification systems for software, and MDCG 2020-1 provides guidance on clinical and performance evaluations of medical device software. Nevertheless, specific guidance walking through all phases of the lifecycle of a SaMD with AI is still needed. Such guidance should cover the technology’s complexity and diversity, strike a balance between promoting innovation and ensuring safe use, and be supported by common specifications and/or harmonized standards underpinning the legislation of AI in healthcare.

AI in medical devices, friend or foe?

There are still many unknown challenges for stakeholders involved in AI from the “bench to the bedside”. Nevertheless, cautionary measures can already be taken for those identified. Firstly, the regulatory authorities are working to guide manufacturers of SaMD with AI and to establish a legal framework that protects human rights. Secondly, manufacturers should ensure the use of high-quality, well-integrated data, maintain strong quality controls over model processing, and keep ethical concerns and associated risks in mind from the early stages. Finally, users and/or healthcare providers should get involved in understanding the model used by the SaMD and the implications it can have on the patient’s health, so that they can explain these to the patient in sufficient detail.

In conclusion, although AI in medical devices is already a reality, we are still in a learning phase as to the implications it will have in the future. Joint efforts by all stakeholders are required to ensure that only safe and performing devices are available in the market.

How can AKRN support manufacturers and clients?

AKRN’s team has thorough expertise in the regulatory framework for software as a medical device. As part of our regulatory services, AKRN can assist with all requirements needed to achieve the CE mark under the IVDR or MDR and perform a complete transition from IVDD to IVDR or MDD to MDR. Our team comprises:

  • Quality Experts: development and implementation of processes in accordance with EN ISO 13485 to comply with MDR and IVDR requirements on post-market surveillance, vigilance, market surveillance, and communication with economic operators, among many others.
  • Regulatory Experts: strategic guidance, evaluation and preparation of regulatory documentation, including technical documentation and clinical/performance evaluation documentation.
  • Clinical Experts: support manufacturers in designing, executing and monitoring clinical investigations and clinical performance studies for SaMD.