AI Bill of Rights aims to create trusted algorithms in health care
AI Bill of Rights blueprint calls for monitoring algorithms to address ethical and legal issues of AI in healthcare. The healthcare industry is examining how to audit and test complex algorithms.
“These are quite complex models, so I think it could be massively beneficial to have some framework for rigorously evaluating and monitoring the performance of algorithms,” Xue says.
Xue points out that software developers sometimes have to go back and re-train models on additional data to improve model performance.
He suggests that clinicians evaluate the models to make sure they work well for different demographic groups.
“A clinician’s expertise is useful in helping generate labels and annotations for the data, which are then used to train or evaluate algorithms,” says Xue.
Xue says that after testing the model in different subgroups of patients, data scientists can narrow down and improve the algorithms.
“For example, thresholds can be set to determine the trade-off between how many false positives or false negatives the model can report,” explains Xue. “It depends on the medical application. Sometimes false positives or false negatives can have different costs.”
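Xue's point about thresholds can be illustrated with a short sketch. The probabilities and labels below are invented for illustration only; the idea is that raising the decision threshold trades false positives for false negatives:

```python
# Minimal sketch: counting false positives and false negatives at
# different decision thresholds for a hypothetical binary classifier.
# The probabilities and labels are illustrative, not real patient data.

def confusion_counts(probs, labels, threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    return fp, fn

# Illustrative model outputs: predicted probability of disease, true label.
probs = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]

for threshold in (0.25, 0.50, 0.75):
    fp, fn = confusion_counts(probs, labels, threshold)
    print(f"threshold={threshold:.2f}  false positives={fp}  false negatives={fn}")
```

A lower threshold flags more patients (more false positives, fewer missed diagnoses); a higher one does the opposite. Where to set it depends, as Xue says, on the relative cost of each kind of error for the medical application at hand.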
Chris Marmigas, head of legal at RSA Security, agrees that it’s important to have a physician double-check the work of AI tools.
“Medicine is not an exact science,” says Marmigas. “That’s why you should always have a doctor or any health care professional review the findings made by AI.”
Marmigas compared the review process in health care to the way a human reviewer double-checks red-light camera findings to decide whether a ticket should be issued.
Marmigas says, “There’s actually a computer program that logs all the cars that go through a red light, but then there’s someone who reviews all those results to decide whether or not to issue a ticket.”
The blueprint warns that automated systems should be designed to protect people from “unintended but predictable uses or effects of automated systems.”
Xue points out that an algorithm could lead to a misdiagnosis or the wrong treatment.
He continued, “I think the main concern is that algorithms make mistakes, and that can hurt patient health outcomes. If the patient has a disease and the algorithm misses the diagnosis, it can certainly result in a bad outcome for the patient, just as a human doctor can make a mistake.”
Maintaining health equity using AI tools
AI is known to introduce biases that make it more difficult to maintain health equity.
The AI Blueprint states, “You should not face discrimination by algorithms and systems should be used and designed in an equitable manner.”
However, Marmigas says that the bias may be more in the people programming the algorithms than in the algorithms themselves.
“The algorithm is not inherently discriminatory,” Marmigas says. “It is the person who programs it who may be practicing active or passive discrimination, or who has discriminatory tendencies and knowingly or unknowingly programs them into the algorithm.”
The AI Bill of Rights blueprint calls on designers and software developers to protect communities from algorithmic discrimination by building in accessibility for people with disabilities, conducting disparity testing, and making the results of that testing public.
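The disparity testing the blueprint calls for can be as simple as comparing an error rate across demographic groups. A minimal sketch in Python, with hypothetical group names and invented data:

```python
# Minimal sketch of disparity testing: comparing a model's false
# negative rate (missed diagnoses) across demographic groups.
# Group names, predictions, and labels are hypothetical.

def false_negative_rate(preds, labels):
    """Fraction of true positive cases the model missed."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(1 for p in positives if p == 0) / len(positives)

# Hypothetical binary predictions and true labels, split by group.
groups = {
    "group_a": ([1, 1, 0, 1], [1, 1, 1, 1]),
    "group_b": ([1, 0, 0, 1], [1, 1, 1, 0]),
}

rates = {name: false_negative_rate(p, y) for name, (p, y) in groups.items()}
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")
```

A large gap between groups is the kind of result the blueprint suggests developers surface and publish, rather than discover after deployment.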