
The Future of Artificial Intelligence in Healthcare

  • Writer: jordanelaineb
  • Feb 13, 2024
  • 3 min read

When discussing Artificial Intelligence (AI), most people think of tools like ChatGPT, which gained popularity in 2023. However, AI has been around for a while, and the healthcare industry is already using AI to automate processes that assist in the diagnosis and treatment of patients. Still, medical professionals should be wary: just like humans, AI is prone to errors that could affect patient care and outcomes. Essentially, AI can be a useful tool when used in conjunction with human analysis and judgment.


An example of AI in healthcare could be a diagnostic tool that reviews the labs of a patient to determine whether the patient is suffering from a certain condition. While the AI may be helpful in some cases, it may also make a diagnosis that is inconsistent with the physician's final findings. Thus, physician training and experience may ultimately override an AI's findings.


AI will likely affect the standard-of-care analysis in medical malpractice lawsuits. Going back to my first post, a plaintiff must establish that the physician owed a duty to the plaintiff and that the physician breached that duty. This is known as the standard of care, which is evaluated in the context of a physician in a similar situation with the same training. The standard of care often must be proven through expert testimony, as a layperson is not familiar with what a physician must or must not do. Currently, a physician is not typically expected to use AI when diagnosing and treating patients. However, this could change as AI develops, and using an AI tool could become the standard of care in many cases.


So, when a physician uses AI, who is liable when an error occurs? There may be three main culprits: (1) the healthcare provider who used the AI to diagnose or treat the patient, (2) the hospital or healthcare facility where the patient was treated, and (3) the AI developers and manufacturers.


A healthcare provider may be liable when using AI if they misinterpret the AI-generated data, rely on the AI when they should rely on their own professional judgment, or improperly use AI programs meant for different specialties or conditions. These errors could be the result of negligence by the physician or of insufficient training.


A healthcare facility may be liable through vicarious liability if their physicians engage in the above behaviors. Further, healthcare facilities may be liable under negligent training if the facility failed to properly train its healthcare providers and other employees on the AI programs.


Finally, an AI developer/manufacturer may be liable under products liability. If the manufacturer creates a defective medical AI that causes injury to the patient, then the manufacturer may be held liable. However, if the healthcare facility or provider knew the AI was defective but continued to use it, then the healthcare provider or entity may also be liable.


Clearly, as much as AI can benefit healthcare providers, it can also spell future medical malpractice claims.


So how can healthcare providers and facilities avoid claims regarding AI? In my opinion, the responsibility falls mainly to the healthcare facility rather than to the individual providers or the AI developers.


Healthcare facilities should properly vet AI programs before incorporating them into hospital procedures. This includes reviewing peer-reviewed studies of the AI tools and clinician testing in actual hospital settings. Further, healthcare facilities should include indemnification provisions regarding AI technology within informed consent agreements. Finally, healthcare facilities should ensure all staff are adequately trained on the AI programs the facility intends to implement.


Healthcare facilities should also employ task forces to oversee the use of AI by healthcare providers. These task forces should conduct routine audits on AI tools to determine whether they are effective in patient care.


Finally, healthcare facilities should stay current on AI developments, both to implement new technology and to recognize when an AI tool is no longer effective in patient care.


I'm interested to know your thoughts on AI in healthcare. Do you think it's a positive development, or do you think it will cause more trouble than it's worth?
