There’s new advice on how to handle tampering that fools algorithms and enables healthcare fraud
Last June, a team at Harvard Medical School and MIT showed that it’s pretty darn easy to fool an artificial intelligence system analyzing medical images. Researchers modified a few pixels in eye images, skin photos and chest X-rays to trick deep learning systems into confidently classifying perfectly benign images as malignant.
These so-called “adversarial attacks” introduce small, carefully designed changes to data—in this case, pixel changes imperceptible to human vision—that nudge an algorithm into making a mistake.
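The mechanics can be sketched in a few lines. The toy linear “classifier,” the weights, and the perturbation budget below are all illustrative assumptions, not the model or method from the study; the sketch follows the widely known fast gradient sign method (FGSM), where each pixel is nudged by a tiny, fixed amount in whichever direction pushes the model’s score toward the wrong class.

```python
import numpy as np

# Hypothetical toy setup: a fixed linear classifier over a 100-pixel image.
# A positive score means "malignant", a negative score means "benign".
rng = np.random.default_rng(0)
w = rng.normal(size=100)            # illustrative classifier weights


def score(x):
    return w @ x                    # linear model: gradient w.r.t. x is just w


# An input the model confidently scores as "benign" (negative score).
x = -0.01 * np.sign(w)

# FGSM-style perturbation: shift every pixel by a tiny epsilon in the
# direction of the gradient's sign, raising the score toward "malignant".
eps = 0.02                          # far below what a human viewer would notice
x_adv = x + eps * np.sign(w)

print(score(x) < 0, score(x_adv) > 0)       # benign before, "malignant" after
print(np.max(np.abs(x_adv - x)))            # per-pixel change never exceeds eps
```

No pixel moves by more than 0.02, yet the score flips sign, which is exactly the asymmetry attackers exploit: changes negligible to human vision can be decisive to a model that reads every pixel.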
That’s not great news at a time when medical AI systems are just reaching the clinic: the first AI-based medical device was approved in April, and AI systems are besting doctors at diagnosis across healthcare sectors.
Now, in collaboration with a Harvard lawyer and ethicist, the same team is out with an article in the journal Science to offer suggestions about when and how the medical industry might intervene against adversarial attacks.