What impact will AI have on the healthcare system and doctors’ working lives? Read the comprehensive report “Artificial Intelligence in Healthcare” by the Academy of Medical Royal Colleges.
Artificial Intelligence has already arrived in healthcare. Few doubt, though, that we are only at the beginning of seeing how it will impact patient care. Unsurprisingly, the pace of development in the commercial sector has outstripped progress by traditional healthcare providers – in large part because of the great financial rewards to be had.
While AI in healthcare promises great benefits to patients, it equally presents risks to patient safety, health equity and data security. The only reasonable way to ensure that the benefits are maximised and the risks minimised is for doctors, and those from across the wider health and care landscape, to take an active role in the development of this technology today. It is not too late.
That is not to say doctors should give up medicine and take up computational science – far from it. Their medical and clinical knowledge is vital to shaping what is developed, what standards need to be created and met, and what limitations, if any, should be imposed on AI.
The Academy of Medical Royal Colleges has identified seven key recommendations for politicians, policymakers and service providers:
- Politicians and policymakers should avoid thinking that AI is going to solve all the problems the health and care systems are facing. Artificial intelligence in everyday life is still in its infancy. In health and care, it has hardly started – despite the claims of some high-profile players
- As with traditional clinical activity, patient safety must remain paramount and AI must be developed in a regulated way in partnership between clinicians and computer scientists. However, regulation cannot be allowed to stifle innovation
- Clinicians can and must be part of the change that will accompany the development and use of AI. This will require changes in behaviour and attitude including rethinking many aspects of doctors’ education and careers. More doctors will be needed who are as well versed in data science as they are in medicine
- For those who meet information handling and governance standards, data should be made more easily available across the private and public sectors. It should be certified for accuracy and quality. It is for the government to decide how widely that data is shared with non-domestic users
- Joined-up regulation is key to making sure that AI is introduced safely, as currently there is too much uncertainty about accountability, responsibility and the wider legal implications of the use of this technology
- External critical appraisal and transparency of tech companies are necessary for clinicians to be confident that the tools being provided are safe to use. In many respects, AI developers in healthcare are no different from pharmaceutical companies, who have a similar arms-length relationship with care providers. This is a useful parallel and could serve as a template. As with the pharmaceutical industry, licensing and post-market surveillance are critical, and methods should be developed to remove unsafe systems
- Artificial intelligence should be used to reduce, not increase, health inequality – geographically, economically and socially.
Chapters in the report:
— Patient safety
— The doctor and patient relationship
— Public acceptance and trust
— Accountability for decisions
— Bias, inequality and unfairness
— Data quality, consent and information governance
— Training and education
— Medical research
— The regulatory environment
— Intellectual property and the financial impact on the healthcare system
— Impact on doctors’ working lives
— Impact on the wider healthcare system.