AI | The Military

Pentagon Believes Its Precognitive AI Can Predict Events 'Days In Advance' (engadget.com) 93

The Drive reports that US Northern Command recently completed a string of tests for Global Information Dominance Experiments (GIDE), a combination of AI, cloud computing, and sensors that could give the Pentagon the ability to predict events "days in advance," according to the command's leader, General Glen VanHerck. Engadget reports: The machine learning-based system observes changes in raw, real-time data that hint at possible trouble. If satellite imagery shows signs that a rival nation's submarine is preparing to leave port, for instance, the AI could flag that mobilization, knowing the vessel will likely leave soon. Military analysts can take hours or even days to comb through this information -- GIDE technology could send an alert within "seconds," VanHerck said.

The most recent dry run, GIDE 3, was the most expansive yet. It saw all 11 US commands and the broader Defense Department use a mix of military and civilian sensors to address scenarios where "contested logistics" (such as communications in the Panama Canal) might pose a problem. The technology involved wasn't strictly new, the General said, but the military "stitched everything together." The platform could be put into real-world use relatively soon. VanHerck believed the military was "ready to field" the software, and could validate it at the next Globally Integrated Exercise in spring 2022.
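
What the "prediction" boils down to, per the summary above, is automated change detection: reduce each imagery feed to simple features (say, how many vessels or support vehicles a classifier counts at a monitored site), compare today's numbers against that site's historical baseline, and alert an analyst when the deviation is large. Below is a minimal sketch of that idea in Python, with hypothetical site names and counts rather than anything from the actual GIDE system:

    # Toy change detection against a rolling historical baseline.
    # All site names and numbers are made up for illustration only.
    from statistics import mean, stdev

    def flag_anomalies(history, today, z_threshold=3.0):
        # history: site -> list of past daily activity counts (e.g. vessels or
        #          support vehicles an image classifier detected at that site)
        # today:   site -> today's count from the same classifier
        alerts = []
        for site, counts in history.items():
            if len(counts) < 2:
                continue  # not enough data to establish a baseline
            mu, sigma = mean(counts), stdev(counts)
            current = today.get(site, 0)
            deviation = (current - mu) / (sigma or 1.0)  # guard against a flat baseline
            if deviation >= z_threshold:
                alerts.append((site, current, round(deviation, 1)))
        return alerts

    # Hypothetical counts: tenders and trucks seen near a sub pen each day.
    history = {"sub_pen_A": [3, 2, 4, 3, 2, 3, 4], "airfield_B": [10, 12, 11, 9, 10, 11, 10]}
    today = {"sub_pen_A": 14, "airfield_B": 11}
    for site, count, z in flag_anomalies(history, today):
        print(f"ALERT: {site} activity {count} is {z} sigma above its baseline")

The point, as several commenters note below, is that something like this scales to feeds no human analyst was ever going to look at; the open question is whether the false-alarm rate stays tolerable.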



Comments:
  • Years ahead, I can tell you that this will be the headline some day.

    • by arglebargle_xiv ( 2212710 ) on Monday August 02, 2021 @08:34PM (#61648833)
      I predict "Pentagon's AI was completely useless at predicting anything" headline some day. The previous one, "Covid19 AI was completely useless at predicting anything", ran here just yesterday.
      • by NagrothAgain ( 4130865 ) on Monday August 02, 2021 @09:38PM (#61648983)
        There's a big difference between a bunch of venture capital tech bros with no medical education trying to use ML to spot unknown markers of a novel disease, and military experts with decades of experience using it to spot well-known markers of military activity.
        • Re: (Score:2, Informative)

          military experts with decades of experience using it to spot well-known markers of military activity.

          Like, for example, the way the collapse of the Soviet Union was predicted? If there's one thing intelligence analysts have shown us over the years, it's that they're about as good at predicting significant events as the Psychic Friends Network, and at $3.99(?) a minute the psychics are much better value for money.

          There's also a difference between someone predicting something and it being actioned. To take a currently-being-played-out tragedy, it's like the Pentagon said "In Vietnam when we withdrew our troops the country

          • by Wycliffe ( 116160 ) on Tuesday August 03, 2021 @09:43AM (#61650627) Homepage

            Like, for example, the way the collapse of the Soviet Union was predicted? If there's one thing intelligence analysts have shown us over the years, it's that they're about as good at predicting significant events as the Psychic Friends Network, and at $3.99(?) a minute the psychics are much better value for money.

            This AI isn't going to be able to predict large political trends or unpredictable events, but what it can do is flag mobilization before it is complete. Basically, instead of humans having to spend hours looking at images, the AI can flag new activity for humans to verify. This should work for common things like a single submarine launch, but it should also be able to spot a major attack in the making, like 25 submarines getting prepared to launch.

      • by gtall ( 79522 )

        Yup, comparing apples and oranges leads to devastating insights.

  • The example is perfect: a submarine moving out.

    Pre-AI, the submarine is noted by a low-level analyst, who immediately sees the danger and wants to tell the Secretary of the Navy. But he can't. He is only allowed to tell his boss, whom he must spend hours convincing to push it up the chain to HIS boss, whom we will call a Division Chief.

    Then, the Division Chief has to be convinced to tell the Secretary. Who spends a day coming up with a response and asks the President to do it. But by this time, that

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Blah blah blah, AI misinterprets information even a low-level analyst would have caught, president acts on false information, ruining his reputation on a global scale.

    • It seems a little more like the analyst is being replaced because the actual analysis is a bottleneck. Instead of waiting for the analyst to count the number of cars in the parking lot at the shipyard and checking that against the last month's worth of photos, the computer does it. Well, the actual analyst is probably still there, but now has a tool to count the cars, estimate the time the sub will leave, and spit out a list of possible tasks it may have been assigned which the analyst can narrow down and
    • by Junta ( 36770 )

      I think it's more likely that they can actually process data that before was infeasible.

      They don't have an analyst watching every square meter of the earth's surface; they have to pick and choose based on available manpower. For a given input stream, if no human was available, there wasn't much that traditional algorithms could do with that imagery, so at best they archive it and reference it later if they realize it's a point of interest.

      With more algorithmic reach, they can provide some level of analysis to all that collecte

  • Oh christ (Score:5, Interesting)

    by RightwingNutjob ( 1302813 ) on Monday August 02, 2021 @08:37PM (#61648839)

    No it can't predict the future.

    No it isn't magic.

    No it won't flag something it's never been trained on.

    Yes there will be a shit ton of false alarms.

    Is it overall useful? Maybe. But just as one of many tools in the toolbox. It won't be treated that way.

    Remember folks. Radar operators at Pearl Harbor saw the Japanese bombers coming in advance. They didn't understand their readings and ignored them.

    • by Anubis IV ( 1279820 ) on Monday August 02, 2021 @08:47PM (#61648881)

      No it can't predict the future.

      It’s actually really easy to predict the future. The hard part is getting it right.

    • Unless it predicts that it will be renamed "SkyNet" we should be ok.
    • When it comes to false alarms for the military, they are just training exercises. You can accept a relatively high false-positive rate as long as it comes with a low false-negative rate. The limiting factor is how many resources are expended on the former, but ultimately every one of those is added value for the experience of your war fighters.

    • Re:Oh christ (Score:5, Insightful)

      by quantaman ( 517394 ) on Monday August 02, 2021 @09:41PM (#61648987)

      The quote is clearly coming from someone who doesn't quite understand the tech (or he's trying to sell it to people who don't understand the tech).

      What I suspect is happening is they have a bunch of stuff like satellite footage of subs leaving port, satellite footage that usually doesn't get looked at until days/weeks later when someone says "why didn't we know that sub was about to leave port!".

      So they've trained some image recognition on the satellite footage (and other pattern recognition on other sources), and now the analysts have tools telling them which pieces of intel they should analyze.

      And if the analyst is told to look at images of that sub getting loaded several days before it sets sail... well, now you just predicted that "days in advance."

      • by gtall ( 79522 )

        What part of "there is too much data for human analysts to eyeball" do you not understand?

      • What I suspect is happening is they have a bunch of stuff like satellite footage of subs leaving port, satellite footage that usually doesn't get looked at until days/weeks later when someone says "why didn't we know that sub was about to leave port!".

        If true, and done correctly, this is a really good use for AI. However, like most AI programs, its abilities are greatly exaggerated and oversold.

        It sounds like the military's intelligence gathering is drowning them in data, and they don't know where to focus their attention. Training an AI to sort through the normal data and spot abnormalities should greatly reduce the mental strain on analysts. The humans can then focus their attention on data that has already had a massive reduction in signal to noi

    • by jrumney ( 197329 )

      My AI predicts another Cuban missile crisis coming up.

    • Remember folks. Radar operators at Pearl Harbor saw the Japanese bombers coming in advance. They didn't understand their readings and ignored them.

      Actually, the radar operators reported the sighting to the officer in charge of the radar center. It was Lt. Tyler who said, "Don't worry about it."

      The first link tells the story pretty much as most people know it. The second link gives a better understanding of what happened.

      Background info: https://www.dispatch.com/artic... [dispatch.com] https://www.thedailybeast.com/... [thedailybeast.com]

    • It seems it is rather an AI meant to analyze the huge number of images that satellites take all the time, so events get flagged days earlier than they would have been noticed before.
      It gets really interesting when it flags not just one sub leaving, but several, and some planes too.
      And I guess it is more about warning humans that something looks fishy, say that a large army could soon assemble in one place. Then the humans can investigate, gather more data, and make preparations.
  • by nhbond ( 6522794 ) on Monday August 02, 2021 @08:40PM (#61648843)
    Watch them launch a "preemptive strike" during the exercise claiming the opponent was probably going to attack them because the software said so.
  • by The Evil Atheist ( 2484676 ) on Monday August 02, 2021 @08:41PM (#61648847)

    GIDE technology could send an alert within "seconds,"

    Good. Nothing like being able to start a war faster based on bad intel and bad, impenetrable analysis by AI.

    • Re: (Score:3, Funny)

      by chispito ( 1870390 )

      GIDE technology could send an alert within "seconds,"

      Good. Nothing like being able to start a war faster based on bad intel and bad, impenetrable analysis by AI.

      Shall we play a game?

    • by Joe_Dragon ( 2206452 ) on Monday August 02, 2021 @09:19PM (#61648953)

      Turn your key, sir!

    • The policy makers just ignored it when they started the last two major wars. They just made up whatever intel they wanted and we just ran with it, because we had more than a little bloodlust following September 11th.
      • just made up whatever [...] they wanted

        That's all part of bad intel. And then feeding it into AI, which I doubt has been adequately tested on disinformation, is just a way of washing their hands of responsibility. Much like how AI is used in policing these days.

  • The past is known. The present happens. The future? I wait for the results. Lol.
  • I'm all for technology, but if you believe this software can do what they claim then I have a bridge to sell you... and another to sell you tomorrow that just so happens to look the same and be in the same location. Seriously, it's a bunch of simple pattern recognition systems that they have given certain tasks to and chained together, in the hope that nowhere along the line did one mistake innocent but unusual activity for an unfolding plot. Also, I have a feeling someone is going to buy this bridge in a few minutes so I can sell it to you righ

  • by gmuslera ( 3436 ) on Monday August 02, 2021 @08:45PM (#61648871) Homepage Journal
    but they will be a minority.
  • by awwshit ( 6214476 ) on Monday August 02, 2021 @09:10PM (#61648939)

    > The technology involved wasn't strictly new, the General said, but the military "stitched everything together."

    Sure thing, General. Garbage in, garbage out.

  • by Joe_Dragon ( 2206452 ) on Monday August 02, 2021 @09:18PM (#61648951)

    W.O.P.R says go to DEFCON 1

  • Was it able to predict what happened on January 6?

  • I'll be impressed if the AI can successfully predict events that happened several days in the past. The cost is sure to be exorbitant, though, and perhaps they could just read a newspaper instead. Or Slashdot, if they want news that happened even earlier.

    • Re: (Score:3, Informative)

      by iggymanz ( 596061 )

      ...but this is military AI which is totally different. We can make new enemies with accusations and self-fulfill predictions of war with belligerent actions. It's a self-correcting system!

  • we've had too many close calls already

  • 100% chance the contractors running the program will be using it for stock picks in their spare time
  • by wakeboarder ( 2695839 ) on Tuesday August 03, 2021 @12:12AM (#61649263)

    Has nobody there seen Minority Report? If they had, we could skip the waste of money and not try to predict the future.

    • by diaz ( 816483 )

      The higher you get in any organization the more you believe that if you want something bad enough, it will happen, regardless of whether it is possible.

    • The DoD has $700 billion and change to burn. They have to do some buzzword stuff with it to prove they are really using it all in a clever way. We know by now that AI is not some magical thing. People will still misunderstand, overlook or ignore whatever comes out of the AI like they always did with information that was available and could have prevented some disaster.
  • What a complete load of shit. Alerting someone to what is happening right now, faster than a human can, is NOT predicting the future.
  • They need one just to move all the little pieces on their war-game board; that's hardly "predicting," that's doing your fucking job.

  • "Precognitive AI" is not a thing. Instead, it's what happens when Morty bashes two sci-fi words together and Rick calls him on it.

  • by kamakazi ( 74641 ) on Tuesday August 03, 2021 @08:28AM (#61650337)

    Now to find the Prime Radiant...

  • So Tony Stark is intimately involved with the data availability to the people making decisions on who to proactively kill. That can't be a problem.

  • Well, this leads to all sorts of scenarios.
    During the Reagan administration, the US would buy up blood supplies all over the world to see how the Soviets would react, and to start a war but be blameless. I think that Caesar came up with that concept first: draw your enemies into a war, knowing that they might be ill-equipped to fight it. You might add in a false flag operation to get things heated. Leading up to the American Revolutionary War, the Boston Massacre was an attempt to draw young British
  • Pentagon is currently on lockdown following report of shooting on bus platform outside
    By Barbara Starr and Ellie Kaufman, CNN

    Updated 11:56 AM ET, Tue August 3, 2021

  • Good! They can start with the weather.
  • “It's tough to make predictions, especially about the future.” -- Yogi Berra
  • Sounds like the movie Minority Report, without precog psychics.
