Artificial intelligence (AI) and related technologies are becoming increasingly prominent in healthcare, and with this comes the need to efficiently regulate and evaluate them to ensure that all innovations implemented and adopted within the NHS are safe, effective and provide value for money.
The importance of regulation was a focus of the recent Intelligent Health UK conference, and is examined in more detail here, along with the formation of the AI and Digital Regulations Service, the challenges of regulation, and hopes for the future.
Speakers told a keynote session at Digital Health Rewired 2023 in March that clarifying the regulatory framework and embedding training for health staff are critical to accelerating the use of AI in the UK health system.
Adopters also “lack the clarity and direction to confidently deploy”, according to Clíodhna Ní Ghuidhir, lead scientific adviser for AI at the National Institute for Health and Care Excellence (NICE), who spoke to Digital Health News.
This can cause them to miss important steps or focus their time and resources in the wrong areas, she said, making it difficult for them to navigate the way to market.
“Regulation and evidence production standards are essential to maintain trust between technology developers and health and social care professionals using these technologies.
“They also create a level-playing field so that developers must meet relevant standards on safety, effectiveness and fairness.”
Hatim Abdulhussain, national clinical lead for AI and the digital workforce at NHS England and medical director at the Kent Surrey Sussex Academic Health Science Network, told Digital Health News that he believes regulation is important, specifically in developing healthcare workers’ trust in AI.
“There is no doubt that regulation is critical to the adoption of these technologies among healthcare professionals and our work to understand and develop healthcare workers’ trust in AI is clear on this,” he said.
“We want to enable the use of technology that is safe, effective and reliable, and so we must have clear standards and guardrails.”
AI and Digital Regulations Service
Formerly known as the Multi-Agency Advisory Service (MAAS), the AI and Digital Regulations Service is a cross-regulatory advisory service that supports developers and adopters of AI and digital technologies.
The service provides guidance in regulatory, assessment and data governance pathways, thereby benefiting the entire health and social care landscape through safer and more effective use of technology.
Abdulhussain explained that NHS England launched the service at the beginning of the year with the aim of “bringing together the major domestic regulators to create a central source of regulatory and best practice guidance”.
The service, which was introduced at the Intelligent Health session, is supported by four organisations – NICE, the Medicines and Healthcare products Regulatory Agency (MHRA), the Health Research Authority (HRA) and the Care Quality Commission (CQC) – allowing it to provide comprehensive guidance at each stage of the regulatory, evidence-building and market access pathway.
The service is now live for NHS and social care adopters
Ní Ghuidhir confirmed that the service would be “quickly updated for new developments”. The service for digital product developers has been running in beta since autumn.
She also revealed that the service for NHS and social care adopters is live as of today (12 June). The joint web-based service aims to accelerate the development and deployment of safe, innovative, value-adding technologies in health and social care.
NHS clinicians have welcomed the new service, including Dr Suraj Menon, consultant radiologist and clinical director at Dartford and Gravesham NHS Trust.
“As an end user of these technologies, the AI and Digital Regulations Service gives me confidence that the products I use meet high safety standards, are effective and provide value for money,” he said.
Haris Shuaib, consultant physicist and head of clinical scientific computing at Guy’s and St Thomas’ NHS Foundation Trust, called the service a valuable resource, adding that his role of facilitating AI adoption at the trust had in the past been a challenging and risky responsibility.
“This new service provides a common and consistent adoption path. It will help people like me implement such technologies safely and at speed for the betterment of patients,” he explained.
Speed of innovation, level of evidence key challenges
Abdulhussain told Digital Health News that despite the essential need for new innovations to be properly regulated and evaluated before they enter the healthcare market, the speed at which technologies are being developed and implemented is a significant challenge.
“The second challenge is defining what the most appropriate level of evidence should be, and how efficiently and appropriately the right threshold can be reached. The NICE Evidence Standards Framework is a helpful starting point.”
Ní Ghuidhir believes that some of the key challenges relate to the wide range of regulations and their complexity.
She explained that “it can be difficult for innovators to understand the breadth of regulations that apply to their product, what actions they need to take to ensure compliance, and how each regulation must be considered throughout the product’s life cycle”.
“It’s important to plan ahead and get it right,” she said.
Regulatory hopes and aspirations
As the AI boom continues and the technology is increasingly implemented, having a regulatory landscape within health and care that is reviewed on a regular basis will become increasingly important.
Abdulhussain believes that the safe and ethical use of AI can help improve care delivery, patient experience, and system intelligence.
“However, we need to regularly review the landscape, collaborate with cross-industry partners domestically and internationally to understand the right approach to safe, ethical AI, and build the right infrastructure, skills and training. A pro-innovation regulatory approach will allow AI to achieve its potential to transform health and care for our citizens,” he said.
In March, the Secretary of State for Science, Innovation and Technology presented ‘A pro-innovation approach to AI regulation’ to Parliament, a document focused on the AI regulation landscape in a more general sense, but much of which can be applied to health technology.
The document states: “Government intervention is needed to improve the regulatory landscape. We intend to leverage and build on existing regimes, maximising the benefits of what we already have, while intervening in a proportionate way to address regulatory uncertainty and gaps.”
It promises that its regulatory framework has been designed to be “adaptable and future-proof”.
Ní Ghuidhir has an optimistic view of the future in terms of AI technology, its place within the care pathway and how it is regulated, adding that part of her job is helping companies developing solutions to devise appropriate evidence-generation strategies for their use cases.
“It often leads me to ask about the mechanisms they have developed for safety assurance and model improvement, but it also gives insight into how young this field is, because it can still be quite difficult to identify where AI technology should intervene in the care pathway to add the most value,” she said, adding that within a decade developers would likely have a better understanding of regulatory standards.
“The fact that the regulations and assessment guidance will evolve is one of the main reasons we have created a dynamic website, so we can quickly update it as new regulations come in.”