Fake Cancerous Nodes in CT Scans, Created By Malware, Trick Radiologists (washingtonpost.com)

Researchers in Israel created malware to draw attention to serious security weaknesses in medical imaging equipment and networks. An anonymous reader shares a report: Researchers in Israel say they have developed malware to draw attention to serious security weaknesses in critical medical imaging equipment used for diagnosing conditions and the networks that transmit those images -- vulnerabilities that could have potentially life-altering consequences if unaddressed. The malware they created would let attackers automatically add realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them. Or it could remove real cancerous nodules and lesions without detection, leading to misdiagnosis and possibly a failure to treat patients who need critical and timely care.

Yisroel Mirsky, Yuval Elovici and two others at the Ben-Gurion University Cyber Security Research Center in Israel who created the malware say that attackers could target a presidential candidate or other politicians to trick them into believing they have a serious illness and cause them to withdraw from a race to seek treatment. The research isn't theoretical. In a blind study the researchers conducted involving real CT lung scans, 70 of which were altered by their malware, they were able to trick three skilled radiologists into misdiagnosing conditions nearly every time. In the case of scans with fabricated cancerous nodules, the radiologists diagnosed cancer 99 percent of the time. In cases where the malware removed real cancerous nodules from scans, the radiologists said those patients were healthy 94 percent of the time.
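What "tampering with a scan" means in practice: CT studies are normally stored and transferred as DICOM files, and nothing in a plain DICOM file prevents its pixel data from being rewritten in place once an attacker has access to the storage or the transfer path. The sketch below is purely illustrative and is not the researchers' tool: the inject_fake_nodule helper and the file name are hypothetical stand-ins for the far more convincing, machine-learned edits the study describes, and it assumes an uncompressed, single-slice file readable by the pydicom library.

    import numpy as np
    import pydicom

    def inject_fake_nodule(pixels: np.ndarray) -> np.ndarray:
        # Hypothetical placeholder for realistic nodule synthesis; here we
        # merely brighten a small circular region near the center of the slice.
        tampered = pixels.copy()
        rows, cols = tampered.shape
        cy, cx, radius = rows // 2, cols // 2, 10
        y, x = np.ogrid[:rows, :cols]
        mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
        tampered[mask] = tampered.max()
        return tampered

    # Read one slice, overwrite its pixel data, and save it back in place.
    ds = pydicom.dcmread("slice_042.dcm")          # hypothetical file name
    tampered = inject_fake_nodule(ds.pixel_array)
    ds.PixelData = tampered.tobytes()
    ds.save_as("slice_042.dcm")

The crude image edit is beside the point; what matters is that, absent encryption, signing, or some other integrity check on the archive and the network path, an altered slice looks no different from the original to the viewing workstation.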

  • If you give a doctor faked imaging scans, he might diagnose wrong?
    • by dlleigh ( 313922 )

      This team has also been fooling proctologists with rubber poop.

    • by EndlessNameless ( 673105 ) on Wednesday April 03, 2019 @05:31PM (#58381088)

      I'm pretty sure the studies are trying to demonstrate that their modifications are plausible and undetectable. The idea that you get bad conclusions from bad data... that's not really up for debate.

      Basically, you can fool anyone with good fakes, but not everyone can make good fakes. These guys proved they can. And they have an automated tool that can do it.

    • If you give a doctor faked imaging scans, he might diagnose wrong?

      No, that is not the issue. The problem is that the images are handled insecurely, and it is not difficult for a black hat to tamper with them.

      It is easy to imagine how this could be abused. For instance, the Russians could modify Joe Biden's brain scan to make it look like a tumor was causing his weird behavior with women. Soon he could be elected president by people who believe he can be turned back into a normal person with just a bit of brain surgery.

  • But who in their right mind would connect an MRI machine to the internet? At my work we didn't even have the scanning electron microscope connected to it because of this.

    • Hospitals with document management systems to store electronic patient records. If the IT department is any good, dedicated VLANs should restrict the flow of data over the network. Too often everything is on the General VLAN.
    • by Anonymous Coward

      A lot of times the people doing the analysis aren't in the hospital... it's farmed out to places like India in many circumstances. Which is hard to do if the machine isn't on the internet.

    • At my work we didn't even have the scanning electron microscope connected to it because of this.

      Because.... ???

      Oh! That's right! Because most SEMs are run by Windows XP or older...

      - It's Windows
      - It's OLD Windows
      - It's maintained by some retired IT guy
      - It's got an inch of dust inside
      - There's a serial port with a badly-hand-soldered connection involved somewhere, I'm sure
      - The retired IT guy still blames everything on the "one stop bit or two" conundrum.

    • Not MRIs, but for some reason the major genome sequencing instrument vendors generally require remote access to their instruments (Illumina and PacBio both do this - PacBio was just bought by Illumina, but they've been doing it since the beginning). Heck, someone once made a map that found Illumina instruments on the internet and plotted their physical locations based on IP address (it doesn't seem to exist anymore). They also tend to run unpatched versions of Windows.

      Sequencing data is

    • by AHuxley ( 892839 )
      The "digital" file is sent to any outside expert. By some network.
      The expert can sit at their desk and see the file on a computer. ie totally not part of the same network that did the scan.
    • But who in their right mind would connect an MRI machine to the internet?

      No one. But I've seen it done. In fact, 12+ years ago I was at a hospital that had to reimage the console on a magnet because the techs were using it to surf the internet and got all kinds of malware and/or viruses on it. I think the scanner was down for close to a week because of it.

    • But who in their right mind would connect an MRI machine to the internet?

      In my experience almost all CT and MRI scanners are connected to the internet, although usually on separate VLANs that aren't directly connected to the open internet, but some are only protected by a firewall and perhaps some port mapping.

      Many hospitals also rely on radiologists-for-hire outside the hospital to read and diagnose the images, so there's access from the outside to the DICOM databases holding the images, although usually through a VPN tunnel.

      Now, if a malware infects the external radiologists co

  • Buried the lead (Score:5, Insightful)

    by pr0t0 ( 216378 ) on Wednesday April 03, 2019 @02:36PM (#58379990)

    The real story here is that the researchers developed an AI capable of detecting cancer nodules in CT and MRI scans with 94% accuracy. I mean, if it can find them to remove them...it can find them. That seems like pretty high accuracy for computer aided diagnostics.

    • Re: (Score:2, Interesting)

      by scamper_22 ( 1073470 )

      Not really. Computer aided detection of anomalies in medical imaging has been pretty solid for a while.

      I worked in the field over 15 years ago. Already for breast cancer anomaly detection, it was easily over 80% depending on the system when compared to even the best radiologists.

      I obviously wouldn't book a surgery strictly on an automated analysis, but as a first screening step, automated detection has been comparable to real radiologists for a while now (a quick sensitivity/specificity illustration follows below).

      It is used, but generally people don't trust the automated
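      A quick illustration of what a single detection percentage hides in screening: sensitivity (cancers caught) and specificity (healthy scans correctly cleared) can both be high while the positive predictive value stays modest when the disease is rare. The numbers below are invented purely for the arithmetic; they are not from the study or from any real system.

        # Hypothetical confusion matrix for a screening tool on 1000 scans,
        # 100 of which actually contain a malignant nodule (numbers invented).
        true_positives = 94    # cancers flagged
        false_negatives = 6    # cancers missed
        true_negatives = 810   # healthy scans correctly cleared
        false_positives = 90   # healthy scans incorrectly flagged

        sensitivity = true_positives / (true_positives + false_negatives)  # 0.94
        specificity = true_negatives / (true_negatives + false_positives)  # 0.90
        ppv = true_positives / (true_positives + false_positives)          # ~0.51

        print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  PPV={ppv:.2f}")

      This asymmetry is part of why automated reads are used to prioritize cases for a human rather than as a diagnosis.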

  • by presidenteloco ( 659168 ) on Wednesday April 03, 2019 @02:45PM (#58380054)
    That we have to protect all technology against psychopathic super-assholes.
    • Well, sure, but it's a double-edged sword; in this case you can see how technology can protect us against psychopathic super-assholes. All the hackers need to do is fake scans on a bunch of sitting politicians and get more discussion on socialized medicine into the news cycle.

    • by Baki ( 72515 )

      Yes, I know it is incredible, but people who regard life as a game, and need to "win" without any regard for the common good, really exist. Some are even rulers or presidents.

  • I have bone spurs, honest!

  • I appreciate the stunning and scary significance of the advanced malware that is able to "realistically" modify medical imagery in a way that coerces doctors into misdiagnosis. However, I do not see any description of the attack vector. I only read the free version of the article, so I could be completely missing it. Sorry if so!

    Will

    • by AHuxley ( 892839 )
      Re "any description of the attack vector"
      Hours in the ER due to an "accident" then result in other medical issues? Hours waiting for further tests and digital results.
      The result is the way the person responds to the unexpected event.
      Dissidents who are trusted and well connected in a protest movement at an important time in history can have an induced medical issue stop all their protesting.
      Finding an expert. Making an appointment. Waiting. Calling friends and family.
      Do they go with private heal
  • by az-saguaro ( 1231754 ) on Wednesday April 03, 2019 @04:33PM (#58380728)

    Look at the demo video at: https://www.youtube.com/watch?... [youtube.com].

    As someone who looks at such things for a living, I find this interesting but not so compelling. For the example of just a single injected nodule, I thought it looked unnatural. But how it is perceived depends on how it is presented. If they had presented the images to real radiologists this way, "You will be looking at films that might be real or might be faked; guess which is which," then I think most radiologists would have known that the single nodule was not natural. If instead they presented it this way, "Look at these films and see if there is anything abnormal," then many would have fallen for it. But likewise, many would have been thinking, "It is probably cancer, because it is a solid nodule, but it looks rather odd."

    In comparison, the 472 nodule example was obviously fake. The nodules were all far too similar, too round, too uniform, too dense. I doubt many radiologists would have fallen for that.

    If the authors' intent was to show that fake imagery can be made that could be used for nefarious deception, then I think we already knew of that concern. I would say that I have seen far more credible and persuasive false CGI than what was seen here. If Pixar, for example, decided to make fake x-rays, I suspect they could do a much better job of it.

    This brings up a question that seems far more interesting to me. If an AI agent can make a fake image that can fool some experts under certain conditions, but the fakery can also be recognized, then can there be a second AI agent that can spot the fakery created by the first AI?

    What do you think?

    • I am not an expert, and what you say seems feasible, but the defensive AI would need ground-truthed datasets for training. How would these be obtained? i.e., where would you get known fakes? This process would have to be repeated if newer versions of the adversarial AI produced sufficiently different fakes (a rough sketch of such a real-vs-fake classifier follows after this thread).
      • Thanks for the thoughtful remark, makes sense. I too am no expert, but your remarks make me wonder if there is a different way to make the defensive AI. Rather than training the defensive AI to recognize each style of forgery, can the defensive AI be better trained than the adversarial one, even on the original dataset or an expanded superset of it, such that it knows that the forgery simply isn't very realistic, that the first AI was simply too amateurish?
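The straightforward version of the "second AI" idea discussed above is a binary classifier trained on image patches known to be genuine and patches produced by the forgery tool. The sketch below is only illustrative and assumes things not in the article: the load_patches helper, the .npy file names, and the choice of a random forest over flattened patches are all hypothetical stand-ins; a real detector would more likely be a deep network over full-resolution volumes.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    def load_patches(path: str) -> np.ndarray:
        # Hypothetical loader: returns an (n_patches, 32*32) array of CT patches.
        return np.load(path)

    # Label 0 = genuine patch, label 1 = patch produced by the forgery tool.
    real = load_patches("real_patches.npy")
    fake = load_patches("fake_patches.npy")
    X = np.vstack([real, fake])
    y = np.concatenate([np.zeros(len(real)), np.ones(len(fake))])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

The catch is exactly the one raised in the replies: a detector like this learns the artifacts of the forgeries it was trained on, so it has to be refreshed whenever the adversarial model changes, the familiar arms-race dynamic of fake-image detection.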
