
Hey Siri, scan me!

by Paul Ingraham

“Hey, Siri, please scan my whole body and interpret the results with the skill and vigilance of a million brilliant, well-rested radiologists.”

Machine learning (ML) is the most rapidly advancing facet of artificial intelligence, and it is spooky-as-hell good at pattern recognition. Old-school AI played chess by brute force, calculating the effect of vast numbers of possible moves, which made it roughly as good as the greatest human chess players. But ML-powered AI “learns” how to win at chess by rapidly ramping up its ability to recognize patterns of play that are linked to winning — when the board looks like this, a win is more likely than when it looks like that. This produces much more adaptive and creative tactics, plays that have literally never been seen before. And no human can beat it.
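For the technically curious, the contrast between the two approaches can be sketched in a few lines of code. This is a deliberately silly toy game, not real chess, and every name and number in it is invented for illustration: the "state" is just a number, a "move" adds one or doubles it, and "winning" means reaching ten or more.

```python
# Toy sketch of two ways to evaluate a game position (invented example,
# not real chess). State = a number; moves = add 1 or double; win = reach 10+.

def moves(state):
    """The legal moves from a position: add one, or double."""
    return [lambda s: s + 1, lambda s: s * 2]

def brute_force_score(state, depth):
    """Old-school AI: exhaustively try every move sequence to a fixed
    depth and keep the best outcome found."""
    if depth == 0:
        return 1.0 if state >= 10 else 0.0
    return max(brute_force_score(m(state), depth - 1) for m in moves(state))

def pattern_score(state, weights=(0.1,)):
    """ML-style evaluation: no lookahead at all; just score features of
    the current position, with weights that training would have tuned.
    'When the board looks like this, a win is more likely.'"""
    features = (min(state, 10),)  # one crude feature: progress toward 10
    return sum(w * f for w, f in zip(weights, features))
```

The brute-force scorer gets its answer by grinding through every possibility; the pattern scorer gets a similar answer instantly, from the shape of the position alone. Real ML systems learn millions of such weights from millions of games, which is where the "creative, never-before-seen" play comes from.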

There’s no doubt ML will transform radiology, and probably develop an uncanny ability to notice things that would escape the notice of radiologists, to “know” that when the body looks like this, a disease is more likely than when it looks like that. But the point of my snarky fictional Siri scenario is that the context is missing, and we routinely see our virtual assistants fail for similar reasons with much simpler challenges. What we really need to be able to ask is, “Scan me and interpret the results in the context of my symptoms,” but Siri doesn’t know what I had for breakfast, let alone the details of which positions make my shoulder hurt, and how badly, and whether the pain has a lancinating quality or more of a burning one… and so on and on. That kind of detail about the subjective experience is routinely out of reach of human intelligence, let alone artificial intelligence.

ML doesn’t have access to critical data it needs to learn the meaning of imaging results. Not only are the machines clueless about how we feel; there’s also no easy way to tell them. Someday, machines will probably learn to integrate clinical pattern recognition with imaging results, but we’re still a long way from that. While major symptoms will eventually be relatively easy to encode, the devil is routinely in the medical details and subtleties that will never be easy to feed to the ML beast.
