
A New Way To Predict Ship-Killing Rogue Waves (economist.com) 46

AI models can find patterns and make predictions, but their reasoning is often inscrutable. This "black box" issue makes AI less reliable and less scientifically useful. However, a team led by Dion Hafner (a computer scientist at the University of Copenhagen) devised a clever neural network to predict rogue waves. By restricting inputs to meaningful wave measurements and tracing how they flowed through the network, the team extracted a simple five-part equation encapsulating the AI's logic. The Economist adds:

To generate a human-comprehensible equation, the researchers used a method inspired by natural selection in biology. They told a separate algorithm to come up with a slew of different equations using those five variables, with the aim of matching the neural network's output as closely as possible. The best equations were mixed and combined, and the process was repeated. The result, eventually, was an equation that was simple and almost as accurate as the neural network. Both predicted rogue waves better than existing models.

The first part of the equation rediscovered a bit of existing theory: it is an approximation of a well-known equation in wave dynamics. Other parts included some terms that the researchers suspected might be involved in rogue-wave formation but are not in standard models. There were some puzzlers, too: the final bit of the equation includes a term that is inversely proportional to how spread out the energy of the waves is. Current human theories include a second variable that the machine did not replicate. One explanation is that the network was not trained on a wide enough selection of examples. Another is that the machine is right, and the second variable is not actually necessary.
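The evolutionary distillation described in the summary can be sketched in miniature. Everything below is illustrative: the "black box" is a stand-in formula rather than a real neural network, and the wave-feature names and coefficients are invented for the sketch, not taken from the paper.

```python
# Minimal sketch of symbolic regression by evolution: a population of
# candidate equations (here, coefficient vectors over fixed feature terms)
# is scored against a "black box" model's output; the best are kept,
# mixed (crossover), and perturbed (mutation) each generation.
import random

random.seed(0)

# Stand-in for the trained neural network we want to approximate.
def black_box(steepness, depth, spread):
    return 2.0 * steepness - 0.5 * depth + 1.5 / spread

# Random "wave measurement" samples and the black box's predictions on them.
samples = [(random.uniform(0.1, 2.0),
            random.uniform(0.1, 2.0),
            random.uniform(0.5, 2.0)) for _ in range(200)]
targets = [black_box(*s) for s in samples]

# A candidate "equation" is a coefficient triple (a, b, c) in
#   a*steepness + b*depth + c/spread.
def predict(coeffs, s):
    a, b, c = coeffs
    return a * s[0] + b * s[1] + c / s[2]

def fitness(coeffs):  # mean squared error vs. the black box (lower is better)
    return sum((predict(coeffs, s) - t) ** 2
               for s, t in zip(samples, targets)) / len(samples)

population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(50)]
for generation in range(150):
    population.sort(key=fitness)
    survivors = population[:10]            # keep the best equations
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(survivors, 2)
        child = [random.choice(pair) for pair in zip(p1, p2)]  # crossover
        if random.random() < 0.5:                              # mutation
            i = random.randrange(3)
            child[i] += random.gauss(0, 0.2)
        children.append(child)
    population = survivors + children

best = min(population, key=fitness)
print("best coefficients:", [round(c, 2) for c in best],
      "mse:", round(fitness(best), 4))
```

The candidates are scored against the black box's output, the fittest survive, and new candidates are bred by mixing and nudging coefficients, which is the "natural selection" loop the summary describes. The real work uses richer expression trees rather than a fixed linear form, but the selection mechanics are the same.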



  • Wait wut? (Score:4, Insightful)

    by hdyoung ( 5182939 ) on Friday November 24, 2023 @01:08PM (#64029151)
    The first description does not match the second.

    The first paragraph claims that they devised some new way to trace exactly what the neural network is doing inside the black box. Which would be very cool.

    The second paragraph explains that they fed wave data into a neural network pattern recognition algorithm, and then fit a 5-variable function to the output. Which is ABSOLUTELY NOT the same. They then observed that the result closely matches a known wave equation (booorrinng) but with an unexplained extra term (hm. Could be interesting) which is likely an artifact of the training data they used (oh. Boring again).

    Worth publishing, but meh. Maybe some ML/CS or physicist can school me as to why this is cool. To me, this is like asking a ML algorithm to analyze movies of objects falling, and the algorithm spits out a quadratic equation. Not exactly earth-shattering research?
  • Read the actual paper instead of the article, which was written by a moron - https://arxiv.org/abs/2311.125... [arxiv.org]

    • Abstract

      Big data and large-scale machine learning have had a profound impact on science and engineering, particularly in fields focused on forecasting and prediction. Yet, it is still not clear how we can use the superior pattern-matching abilities of machine learning models for scientific discovery. This is because the goals of machine learning and science are generally not aligned. In addition to being accurate, scientific theories must also be causally consistent with the underlying physical process and

  • "AI models can find patterns and make predictions, but their reasoning is often inscrutable. This "black box" issue makes AI less reliable and less scientifically useful."

    I posit that the #1 reason AI predictions are not useful is mostly because of the "convincing bullshitter" problem* that all LLM-driven AIs seem to exhibit. If that wasn't an issue, the lack of insight into their reasoning would not be nearly as problematic.

    * Let's face it, "convincing bullshitter" is a more accurate description than the A

      • by martin-boundary ( 547041 ) on Saturday November 25, 2023 @02:03AM (#64030163)
        No, he's right. The term "hallucinations" is a deliberate attempt to control the narrative and minimize the commercial impact of the blatant untruths being generated on a daily basis by such models and the people who promote them as solutions to everything. AI researchers collude in using the term because if they used a more precise one, such as "flaw" or "inevitable," then their employment prospects and hopes for IPO money would drop.

        A "hallucination" in human terms is a temporary aberration, something mostly harmless that cannot be explained but will soon go away. That is the connotation which the AI industry wants you to believe, that those mistakes and alternative facts in their gigantic models are just temporary. Throw a few people at the problem and it will go away, nothing to worry about. And for those who ask for more details: ok just build other AIs whose job it will be to fix the flaws in the previous AIs. They'll figure it out. It's AIs all the way down.

        • "the term "hallucinations" is a deliberate attempt to control the narrative and minimize the commercial impact of the blatant untruths being generated on a daily basis by such models and the people who promote them as solutions to everything."

          Did you just describe mainstream economics?

  • Scientists replace one method of producing results by means they don't understand, with another.
  • ...until there is a rogue wave.

    Laugh all you want, but I got paid six figures by the government for a rogue wave study to produce that insight.
  • by evil_aaronm ( 671521 ) on Saturday November 25, 2023 @12:07AM (#64030045)
    Rogue waves are bad, m'kay? But I'd like them to apply AI to look at past weather patterns to predict future weather patterns so I can avoid driving in snowy conditions. We have enough weather data over the last 20 years, at least, that meteorologists can "grep" what's coming up based on what happened the last few days. Maybe they do have this. In which case, (a) they should say so; and (b) it needs some tweaking, because they're still messing up forecasts for just the next day, let alone the next week.
  • by deepthought90 ( 937992 ) on Saturday November 25, 2023 @03:54AM (#64030245)
    NNs are essentially complicated equations whose parameters have been determined through training. So they decided they didn't want the neural net and used it to make a simpler equation. Why not just start with the simpler equation and fit the data using a more traditional regression analysis?
  • I have a feeling that everyone is just going crazy about these neural networks. It's like they're being shoved everywhere. Hey, folks, have you actually used them? In half the cases, they write complete nonsense, unsupported by anything, and even fabricate "facts" that don't exist. I tried using them for my research and got complete nonsense. Living writers like https://edubirdie.com/research-proposal-writing-service [edubirdie.com] write without plagiarism and coherently, based on facts and experience. Maybe over time, they'll
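One comment above asks why not skip the neural network and fit the simpler equation directly. When the candidate equation is linear in its parameters and its form is already known, ordinary least squares does exactly that. This stdlib-only sketch, with an invented two-feature "wave relation" (not the paper's), recovers the coefficients in one shot via the normal equations:

```python
# Direct fit of a known-form linear model y = w1*x1 + w2*x2 by solving the
# 2x2 normal equations (X^T X) w = X^T y with Cramer's rule. Illustrative
# features and coefficients; no neural network involved.
import random

random.seed(1)

def target(x1, x2):  # stand-in "true" relation the data follows
    return 3.0 * x1 - 1.2 * x2

data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(100)]
ys = [target(*d) for d in data]

# Entries of X^T X and X^T y.
a11 = sum(x1 * x1 for x1, _ in data)
a12 = sum(x1 * x2 for x1, x2 in data)
a22 = sum(x2 * x2 for _, x2 in data)
b1 = sum(x1 * y for (x1, _), y in zip(data, ys))
b2 = sum(x2 * y for (_, x2), y in zip(data, ys))

# Solve the 2x2 system with Cramer's rule.
det = a11 * a22 - a12 * a12
w1 = (b1 * a22 - b2 * a12) / det
w2 = (a11 * b2 - a12 * b1) / det
print(round(w1, 2), round(w2, 2))  # recovers 3.0 and -1.2 exactly (no noise)
```

The catch, which is the likely answer to the comment: least squares presupposes the functional form. The evolutionary search over the network's behavior is what proposes candidate forms in the first place; once a form is fixed, fitting it is the easy part.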

