PainScience.com • Good advice for aches, pains & injuries

microblog

Hey Siri, scan me!

Paul Ingraham

ARCHIVED: Microblog posts are archived and rarely updated. In contrast, most long-form articles on PainScience.com are updated regularly over the years (see updates page).

“Hey, Siri, please scan my whole body and interpret the results with the skill and vigilance of a million brilliant, well-rested radiologists.”

Machine learning (ML) is the most rapidly advancing facet of artificial intelligence, and it is spooky-as-hell good at pattern recognition. Old-school AI played chess by brute force, calculating the effect of vast numbers of possible moves, which made it roughly as good as the greatest human chess players. But ML-powered AI “learns” how to win at chess by rapidly ramping up its ability to recognize patterns of play that are linked to winning — when the board looks like this, a win is more likely than when it looks like that. This produces much more adaptive and creative tactics, plays that have literally never been seen before. And no human can beat it.
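To make the contrast concrete, here is a toy sketch in Python, using the trivial game of Nim rather than chess. Everything in it (the game, the "learning" step) is my own illustrative simplification, not how a real chess engine works:

```python
# Toy contrast between brute-force search and pattern lookup, using a
# one-pile game of Nim: players alternately take 1 or 2 stones, and
# whoever takes the last stone wins. Purely illustrative.

def can_win(pile):
    """Old-school approach: exhaustively search every line of play."""
    if pile == 0:
        return False  # the previous player took the last stone and won
    return any(not can_win(pile - take) for take in (1, 2) if take <= pile)

def learn_position_values(max_pile):
    """ML-flavoured analogy: precompute which positions "look like" wins,
    so that at play time a cheap pattern lookup replaces a deep search.
    (Real ML generalizes from data; this table is just an analogy.)"""
    value = {0: False}
    for pile in range(1, max_pile + 1):
        value[pile] = any(not value[pile - take] for take in (1, 2) if take <= pile)
    return value

values = learn_position_values(10)
# Both approaches agree: piles divisible by 3 are losses for the player to move.
print(can_win(3), values[3])  # False False
print(can_win(4), values[4])  # True True
```

The point of the toy: the first function re-derives everything from scratch by searching, while the second just recognizes that a position "looks like" a win, which is closer in spirit to how ML-powered engines play.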

There’s no doubt ML will transform radiology, and it will probably develop an uncanny ability to notice things that human radiologists would miss, to “know” that when the body looks like this, a disease is more likely than when it looks like that. But the point of my snarky fictional Siri scenario is that the context is missing, and we routinely see our virtual assistants fail for similar reasons on much simpler challenges. What we really need to be able to ask is, “Scan me and interpret the results in the context of my symptoms,” but Siri doesn’t know what I had for breakfast, let alone which positions make my shoulder hurt, how badly, and whether the pain has a lancinating quality or is more of a burning… and so on and on. That kind of detail about subjective experience is routinely out of reach of human intelligence, let alone artificial intelligence.

ML doesn’t have access to the critical data it needs to learn the meaning of imaging results. Not only are the machines clueless about how we feel; there’s also no easy way to tell them. Someday, machines will probably learn to integrate clinical pattern recognition with imaging results, but we’re still a long way from that. Major symptoms will eventually be relatively easy to encode, but the devil is routinely in the medical details and subtleties that will never be easy to feed to the ML beast.
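As a back-of-the-envelope illustration of why symptom context matters so much, here’s a toy Bayesian calculation in Python. All of the numbers (base rates, sensitivity, false-positive rate) are invented for the sake of the example, not real clinical data:

```python
# Toy Bayes' rule sketch: the same imaging finding means very different
# things depending on clinical context. All numbers are invented.

def posterior(prior, sensitivity, false_positive_rate):
    """P(finding is clinically relevant | positive scan), via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# The same hypothetical scan finding, in two different contexts:
no_symptoms = posterior(prior=0.02, sensitivity=0.9, false_positive_rate=0.3)
clear_symptoms = posterior(prior=0.30, sensitivity=0.9, false_positive_rate=0.3)

print(round(no_symptoms, 2))     # 0.06: probably an incidental finding
print(round(clear_symptoms, 2))  # 0.56: now worth taking seriously
```

The scan result is identical in both cases; only the prior (driven by symptoms the machine doesn’t know about) changes, and the interpretation flips from “probably noise” to “probably meaningful.”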

 End of post. 
This is the MICROBLOG: small posts about interesting stuff that comes up while I’m updating & upgrading dozens of featured articles on PainScience.com. Follow along on Twitter, Facebook, or RSS. Sorry, no email subscription option at this time, but it’s in the works.