AI Technology

AI Has Cracked a Key Mathematical Puzzle For Understanding Our World (technologyreview.com) 97

An anonymous reader shares a report: Unless you're a physicist or an engineer, there really isn't much reason for you to know about partial differential equations. I know. After years of poring over them in undergrad while studying mechanical engineering, I've never used them since in the real world. But partial differential equations, or PDEs, are also kind of magical. They're a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes. The catch is PDEs are notoriously hard to solve. And here, the meaning of "solve" is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. "Solving" Navier-Stokes allows you to take a snapshot of the air's motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It's also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering. Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It's also much more generalizable, capable of solving entire families of PDEs -- such as the Navier-Stokes equation for any type of fluid -- without needing retraining. Finally, it is 1,000 times faster than traditional mathematical formulas, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That's right. Bring it on.
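
For readers who have never met a numerical PDE solver, here is a toy illustration (not from the article) of what "solving" a PDE on a grid classically means, using the 1D heat equation rather than Navier-Stokes; it is purely a sketch:

    import numpy as np

    # Toy example: 1D heat equation u_t = alpha * u_xx, solved with explicit
    # finite differences. This is the grid-based, step-by-step sense of
    # "solving" a PDE that the deep-learning approach is compared against.
    N, alpha = 100, 0.01
    dx = 1.0 / N
    dt = 0.4 * dx**2 / alpha                      # respect the explicit-scheme stability limit
    u = np.sin(np.pi * np.linspace(0.0, 1.0, N))  # initial temperature profile

    for _ in range(1000):                         # march forward in time, cell by cell
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])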

  • The answer: (Score:4, Funny)

    by Chewbacon ( 797801 ) on Friday October 30, 2020 @01:35PM (#60666314)

    42.

  • by GlobalEcho ( 26240 ) on Friday October 30, 2020 @01:58PM (#60666388)

    Personally, I consider this a computational innovation, not a mathematical one. The authors' main innovation is training and running their neural network in the Fourier-transformed (frequency) domain, giving them much better results than previous attempts at using neural networks for fluid dynamics PDEs.

    Overall, if I understand correctly, the way this category of schemes tends to work is that the neural network learns more or less "what fluids behave like". It is given some set of initial conditions and then generates what it "thinks" happens next.

    Traditional PDE solvers (which I worked with professionally for decades) can do the same, basically by doing repeated arithmetic computations simulating the physics within lots of little cells (or finite elements). This generates a solution that is "exact" and repeatable within the physical assumptions and discretization techniques involved. But...that's usually way more information than a researcher really needs.

    For example, someone looking at turbulence around an airframe doesn't really care about the exact shape of any given simulated vortex, but rather about where vortices tend to form. So much of that detail from the PDE solver is ignored and wasteful.

    These neural networks are asked to get the gestalt right, much more quickly. They are therefore useful even though no particular vortex position they happen to have "guessed" will correspond precisely to the exact simulated physics.

    Here is the paper: https://arxiv.org/pdf/2010.088... [arxiv.org]
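
    For the curious, the heart of the paper is a "Fourier layer": FFT the input, apply a learned linear transform to a handful of low-frequency modes, and inverse-FFT back. A minimal PyTorch sketch of the idea (not the authors' actual code; sizes and names are illustrative):

        import torch
        import torch.nn as nn

        class SpectralConv1d(nn.Module):
            """Sketch of a Fourier layer: FFT -> learned mixing of low modes -> inverse FFT."""
            def __init__(self, channels, modes):
                super().__init__()
                self.modes = modes  # low-frequency modes kept; must be <= grid//2 + 1
                scale = 1.0 / (channels * channels)
                self.weights = nn.Parameter(
                    scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

            def forward(self, x):              # x: (batch, channels, grid)
                x_hat = torch.fft.rfft(x)      # to the frequency domain
                out_hat = torch.zeros_like(x_hat)
                # learned linear transform, applied independently to each kept mode
                out_hat[..., :self.modes] = torch.einsum(
                    "bim,iom->bom", x_hat[..., :self.modes], self.weights)
                return torch.fft.irfft(out_hat, n=x.size(-1))  # back to physical space

    Because the learned weights live on Fourier modes rather than grid points, the same trained layer can be evaluated on different grid resolutions, which is part of what makes the approach generalizable.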

    • Can they speed up the traditional full computation by using these low-cost estimates as a starting point to get the same answer as before?
      • by sjames ( 1099 )

        It seems unlikely, but if they can weed out the runs that will later be discarded, they can still save a great deal of computation.

    • by ljw1004 ( 764174 )

      The authors' main innovation is training and running their neural network in the Fourier-transformed (frequency) domain, giving them much better results than previous attempts at using neural networks for fluid dynamics PDEs.

      It reminds me of how our retinas translate vision into the log-polar domain prior to feeding it to our brain's neural networks. The log-polar domain (1) gives better data compression and (2) increases the size range of objects that can be tracked using a simple translational model.
      http://users.isr.ist.utl.pt/~a... [ist.utl.pt]
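
      A rough sketch of that resampling (illustrative only; nearest-neighbor, no interpolation). The payoff is that scaling an object about the center becomes a simple shift along the log-r axis:

          import numpy as np

          def to_log_polar(img, out_r=64, out_theta=64):
              """Resample a square grayscale image onto a (log r, theta) grid."""
              h, w = img.shape
              cy, cx = h / 2.0, w / 2.0
              radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), out_r))  # log-spaced radii
              thetas = np.linspace(0.0, 2.0 * np.pi, out_theta, endpoint=False)
              R, T = np.meshgrid(radii, thetas, indexing="ij")
              ys = np.clip((cy + R * np.sin(T)).astype(int), 0, h - 1)
              xs = np.clip((cx + R * np.cos(T)).astype(int), 0, w - 1)
              return img[ys, xs]   # scaling the input ~ shifting rows of this output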

    • by gweihir ( 88907 )

      Personally, I consider this a computational innovation, not a mathematical one.

      Not only you. This clearly is numerical approximation, done in a somewhat novel way, nothing else.

      • by sabian2008 ( 6338768 ) on Friday October 30, 2020 @09:07PM (#60667618)
        Exactly this. My PhD is in numerical solvers for scientific fluid simulations, the ones that have to be as precise as possible because you want to study turbulence statistics for theoretical reasons, usually known as DNS (Direct Numerical Simulation). The paper offers zero improvement on these requirements, as we already have the minimal description necessary to capture all the details of a turbulent flow (down to very, very small scales), and that is called the Navier-Stokes equations (and their siblings).
        This is something else, more comparable to LES (Large Eddy Simulation) models, which try to capture the essential statistics of turbulence so that a coarser-resolution simulation (less computationally expensive) looks similar to the real deal, although we have known since the work of Edward Lorenz that the solution will become exponentially wrong the longer we integrate, and the general statistics of the solution might be OK or might be complete bullshit.
        LES models are situation dependent. Models that work well in a certain setup (for example, isotropic turbulence) can lead to nonsense in other contexts. I don't think it is stated very clearly in the article, but I think this is the same thing. You train a NN on certain examples for which you know the "exact" solution and use it to kind of interpolate in that region of parameter space, developing a turbulence model that you don't have the slightest physical idea of what it is assuming, but which works OK.
        Even more, I don't really care for the example they use to illustrate the paper. They use it on the 2D equation with an initial condition that resembles a vorticity dipole. Two things:
        a) The 2D equations are MUCH easier to deal with than the 3D equations. 2D flow is much easier to organize and displays considerably less chaotic behavior. A direct consequence of this is that "we" (as if I ever could) have proven the equivalent of the Millennium Navier-Stokes problem for the 2D case, whereas for the 3D case there hasn't been much progress in the last couple of years.
        b) Somewhat related to a), freely evolving isotropic 2D flows TEND to form a vorticity dipole, due to the inverse cascade that sends energy to the largest scales. When you are at that stage it is far easier to "guess" the dynamics, as you have most of your energy contained in the largest scales (which require far fewer degrees of freedom to represent).
        I like the idea in the paper. My PhD is basically trying to use Fourier methods for turbulence simulations in situations they aren't especially suited for. Anywhere you add Fourier I'll be happy, but the report reads to me as incredibly overhyped (this being the AI field, I'm not surprised). The paper is OK, although they could be a little more honest about its shortcomings and the fact that they chose the easiest examples of all.
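        For anyone curious what the "real deal" looks like, here is a bare-bones pseudospectral step for the 2D vorticity equation (purely illustrative: forward Euler, no dealiasing, no stability control):

            import numpy as np

            # Minimal pseudospectral step for 2D incompressible flow in vorticity form:
            # dw/dt = -u . grad(w) + nu * laplacian(w), on a 2*pi-periodic box.
            N, nu, dt = 64, 1e-3, 1e-3
            k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
            KX, KY = np.meshgrid(k, k, indexing="ij")
            K2 = KX**2 + KY**2
            K2[0, 0] = 1.0                            # avoid dividing by zero at the mean mode

            def euler_step(w):
                w_hat = np.fft.fft2(w)
                psi_hat = w_hat / K2                  # streamfunction: laplacian(psi) = -w
                u = np.real(np.fft.ifft2(1j * KY * psi_hat))    # u =  d(psi)/dy
                v = np.real(np.fft.ifft2(-1j * KX * psi_hat))   # v = -d(psi)/dx
                wx = np.real(np.fft.ifft2(1j * KX * w_hat))
                wy = np.real(np.fft.ifft2(1j * KY * w_hat))
                adv_hat = np.fft.fft2(u * wx + v * wy)          # nonlinear term, computed pseudospectrally
                return np.real(np.fft.ifft2(w_hat + dt * (-adv_hat - nu * K2 * w_hat)))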
        • by gweihir ( 88907 )

          Interesting. So this is very much another case where an ANN can be completely off. That does not surprise me: interpolation of statistics (the training data is always a statistical sample of something) can be exceptionally far off, because these models have no concept of hard state changes at some places.

          As a CS PhD, I am deeply offended by the overhyping the AI field engages in. They are constantly trying to sell something they cannot deliver. As such, they are competing with other CS research in a completely dishonest and dishonorable way...

        • Nailed it. Just because you train a NN on a set of transformed data does not mean it will pick up examples of turbulence or non-Newtonian behavior without training examples. Also, the fact that the authors are resorting to deep learning rather than sophisticated feature engineering that incorporates theory and domain knowledge into the process just looks like snake oil to me. This strongly looks like cherry-picking the data to show a spectacular-seeming result instead of any real breakthrough. My ba...
    • These neural networks are asked to get the gestalt right, much more quickly. They are therefore useful even though no particular vortex position they happen to have "guessed" will correspond precisely to the exact simulated physics.

      That is all very well, but how do you know the AI got the right answer? I guess you would have to run a slow exact physics simulation as a check. There does seem to be a risk with pattern recognition algorithms that they do silly things, like mistaking a cat for an elephant.

      I am a bit out of my area of expertise again, but I gather that one of the basic problems with the study of turbulent flow is that you cannot make the grid size small enough to be confident of an accurate simulation. The smaller the grid, the higher the computational cost...

  • I keep going to technical conferences where I see people throwing machine learning or AI at the solution of Hamiltonian systems like celestial mechanics and getting garbage out. Systems with well-defined and compactly written Hamiltonians are probably the most amenable to AI methods, and yet... no dice.

    All the same, it may be possible for AI methods to discover useful low-order approximations to fluid mechanics solutions. But whether you can then use them for harder problems is kind of a squishy question that...
  • Not really AI (Score:3, Interesting)

    by Tough Love ( 215404 ) on Friday October 30, 2020 @02:10PM (#60666442)

    It's a bit misleading to call this category of approximation "AI", or even learning. A neural net, particularly a deep net, can be viewed as a universal approximator, in this case mapping from a high-dimensional space of local states of the Navier-Stokes equations to the next predicted state. The mapping is an even higher-dimensional continuous surface, which is approximated by iteratively fitting neural net weights to it, using standard Navier-Stokes numerical methods as a guide. In other words, exactly the same mathematical tools as for AI, but it's not AI. It is function approximation; there is nothing intelligent about it. Well, except the intelligence of the researchers; they are bloody intelligent. Who else even has a clue what's going on here?
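
    As a concrete (made-up) illustration of "function approximation, nothing else": a network trained to map a discretized flow state to the next one, with training pairs generated by a conventional solver. Everything here is hypothetical sketch code:

        import torch
        import torch.nn as nn

        # Hypothetical surrogate time-stepper: state_t -> state_{t+1}. The training
        # pairs would come from an ordinary numerical Navier-Stokes solver.
        N = 64 * 64                                   # flattened grid
        net = nn.Sequential(nn.Linear(N, 512), nn.ReLU(),
                            nn.Linear(512, 512), nn.ReLU(),
                            nn.Linear(512, N))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        def train_step(state_t, state_next):          # both: (batch, N) tensors
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(state_t), state_next)
            loss.backward()                           # fit weights to the solver's data
            opt.step()
            return loss.item()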

    Too bad everybody quits math just before they get to partial differentials; that's where it gets fun.

    • A neural net, particularly a deep net, can be viewed as a universal approximator, in this case mapping from a high dimensional space of the local state of Navier Stokes equations to the next predicted state.

      Also known as a PDE integrator, then?

    • It is function approximation, there is nothing intelligent about it.

      That's really what AI is. Deep learning is just more complex functions.

      • Well, if you mean to say that AI is actually a bunch of cheap tricks, then agreed.

        Deep learning is nothing more than the process of training a deep network, and a deep network is nothing more than a neural net with internal layers. But it sounds so much more _intelligent_ when you say deep.

    • A lot of people seem to have trouble with this simple semantic issue. "Artificial" means "fake." So, if something is "Artificially Intelligent" that means that it is NOT intelligent. It is a non-intelligent algorithm doing something that seems intelligent, even though it's not.

      In computer science, the phrase "Artificial Intelligence" refers to a broad set of algorithms that are used to do this kind of imitation of intelligent behavior (without actually being intelligent behavior). Something like "a machine...

      • '"Artificial" means "fake."'

        It can, but it doesn't here. Here it is used in its original meaning, made intentionally, by artifice, as opposed to something that just naturally grew. Words do indeed have meanings. You should learn what they are before basing arguments around them.

      • At some point in the last decade or so "machine learning" (which really should be called "machine training") got renamed "AI". People wanted to distinguish "new" machine learning (i.e. "deep learning") from "old" machine learning. That's fine, but many consider renaming the whole field "AI" to be both jumping the gun a bit and not actually a useful description of what is being worked on, which is typically improvements in optimizing and structuring arbitrary algorithms (i.e. trained machines). Trained machines...
        • At some point in the last decade or so "machine learning" (which really should be called "machine training") got renamed "AI".

          Really, no. It's been called AI since at least the 60's, arguably much earlier (can you spell Turing?)

      • Sounds to me like you have not heard of the Turing test.

    • I like to call them "application-optimized arbitrary algorithms". That's what current machine learning = deep learning = "AI" is. Take an algorithmic structure (number of network layers, connections, convolution and weighting operations, etc.) that has a bunch of arbitrary parameters, then take a sample of data for which you know the input and the desired output, then optimize those parameters to give an algorithm that gives the desired output for the desired input under some defined measure of correctness...
    • by gweihir ( 88907 )

      That there is nothing intelligent about it is exactly right. And I agree, this is not even the misnamed "AI" everybody without a clue is so hyped about.

      I found the one course I had to take on this subject very boring. I did quite a bit more math, though, but discrete stuff, up to and including some abstract algebra and quite a bit of formal logic, mostly non-classical. Now that stuff is interesting!

  • Why does this matter? Because it’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job.
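
    The practical reason is easy to demonstrate: in Fourier space, differentiation becomes multiplication by ik, so the hardest parts of a PDE turn into cheap, pointwise operations. A quick numpy check (illustrative only):

        import numpy as np

        # Spectral differentiation: d/dx becomes multiplication by i*k in Fourier space.
        N = 128
        x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
        k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers
        f = np.sin(3.0 * x)
        df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
        print(np.max(np.abs(df - 3.0 * np.cos(3.0 * x))))  # ~1e-13: machine precision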
  • My first-year courses for engineering had a special math class. Most classes were 3 hours a week, either 1 hr MWF or 1.5 hrs TTh; this one was somehow four hours a week, plus two tutorials. It was basically there to cram a full course of differential and integral calculus into the first semester, so that you could comprehend the second. We got it by the end of January: we couldn't have followed the chemistry course, the physics course, or the optics course without differentials and integrals. They would have...

    • Yes, understanding is important to many disciplines, and calculus models our understanding of the underlying processes beyond churning out a numerical answer.
    • by PPH ( 736903 )

      So did mine. And my doing rather well in H.S. calculus really helped give me a head start on my college engineering curriculum. But that might be due to the fact that my H.S. calculus teacher was also the basketball coach.

  • I feel like crediting Texas Instruments for manufacturing the calculators used to solve quite a few world problems and earn Nobel prizes.

  • I actually read the article, and it appears they used NN training to transform a network of interconnected nodes into a network of finite elements, and then ran the simulation. I would be surprised, though, if the authors realized they were transforming a NN into an FEA machine.

  • This doesn't solve any of these equations; it's only an approximation. Good approximations are useful, but we should be realistic about what this is actually doing. It is not solving these equations, and it does not produce 100% accurate results. It only produces results that are similar to correct results. And since AI is a black box, we don't really know how accurate the results are, or what edge cases will produce inaccurate results, or anything like that. This makes the algorithm useless for situations where...
    • by gweihir ( 88907 )

      To be fair, most actual "solving" involves approximations in the classical approach as well. As far as I remember, a closed-form solution does not actually exist in most cases. I might be wrong; it was 30 years ago, and I never needed them since.

      That said, the utterly stupid claim that AI has "cracked" anything is completely wrong, as usual. It has no insight, method, or understanding here. It can just be trained very well on data and then interpolate. Incidentally, that is all it can do. Because the problem...

  • Unless you're a physicist or an engineer, there really isn't much reason for you to know about partial differential equations

    Tell it to mathematicians, finance folks, biologists, ...

  • It can just be trained on data, as usual. No insight, understanding, or method involved.

    Also, note that these problems usually get solved using approximations, and Artificial Stupidity is good at that. In fact, it is the only thing it can really do.

  • I always groaned at SIGGRAPH paper presentations on fluid simulation when the author said "We implemented the Navier-Stokes equations..." Yeah, sure you did. Show me the source code.

  • The subject of this post is both the title of a book by Herman Wouk and a quote of what Richard Feynman said to Herman Wouk. Wouk interviewed Feynman while doing research for his novels on World War II. At their first meeting Feynman asked Wouk, "Do you know calculus?" to which Wouk replied that he did not. Feynman replied, "You had better learn it; it's the language God talks."

    The language begins with the Calculus, and quickly moves to Differential Equations and from there into Quantum Mechanics...
