Nature-Inspired Computers Are Shockingly Good At Math (phys.org)
An R&D lab under America's Energy Department announced this week that "Neuromorphic computers, inspired by the architecture of the human brain, are proving surprisingly adept at solving complex mathematical problems that underpin scientific and engineering challenges."
Phys.org publishes the announcement from Sandia National Lab: In a paper published in Nature Machine Intelligence, Sandia National Laboratories computational neuroscientists Brad Theilman and Brad Aimone describe a novel algorithm that enables neuromorphic hardware to tackle partial differential equations, or PDEs — the mathematical foundation for modeling phenomena such as fluid dynamics, electromagnetic fields and structural mechanics. The findings show that neuromorphic computing can not only handle these equations, but do so with remarkable efficiency. The work could pave the way for the world's first neuromorphic supercomputer, potentially revolutionizing energy-efficient computing for national security applications and beyond...
"We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly," Theilman said. For decades, experts have believed that neuromorphic computers were best suited for tasks like recognizing patterns or accelerating artificial neural networks. These systems weren't expected to excel at solving rigorous mathematical problems like PDEs, which are typically tackled by traditional supercomputers. But for Aimone and Theilman, the results weren't surprising. The researchers believe the brain itself performs complex computations constantly, even if we don't consciously realize it. "Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball," Aimone said. "These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply..."
Their research also raises intriguing questions about the nature of intelligence and computation. The algorithm developed by Theilman and Aimone retains strong similarities to the structure and dynamics of cortical networks in the brain. "We based our circuit on a relatively well-known model in the computational neuroscience world," Theilman said. "We've shown the model has a natural but non-obvious link to PDEs, and that link hasn't been made until now — 12 years after the model was introduced." The researchers believe that neuromorphic computing could help bridge the gap between neuroscience and applied mathematics, offering new insights into how the brain processes information. "Diseases of the brain could be diseases of computation," Aimone said. "But we don't have a solid grasp on how the brain performs computations yet." If their hunch is correct, neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's.
Next on the OpenAI buy list (Score:2)
ChatGPT, now with actual intelligence!*
* Definition of intelligence may vary based on marketing profits.
Sounds like ... (Score:3)
Re:Sounds like ... (Score:5, Informative)
Kind of, but with discrete transitions (at least for the neuronal potential in one direction; there's hysteresis built in). The states are pseudo-binary, very much unlike analog computers.
Re: (Score:2)
Why do you want to jump into the binary domain so quickly? Keep the analog values analog for as long as possible while propagating through the network.
Digital people discover analog circuits (Score:3)
https://www.arrow.com/en/resea... [arrow.com]
See 6, 7, and 8.
Like slide rules, sometimes three digits of precision is all you need, or all the real world will allow.
2+2=3 (Score:2)
And the people who aren't are using what?
Re: (Score:2)
There are plenty of ways of solving the Dirichlet problem.
In this case, they added yet another way. Specifically, they decided to approximate FEM, itself a popular approximation for solving the Dirichlet problem. Typically, an approximation to an approximation introduces two independent sources of error, one from each approximation. Moreover, by targeting an approximation instead of solving the Dirichlet problem directly, they are likely to suffer all the existing weaknesses of the FEM approximation
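For concreteness, here's a minimal sketch (mine, not from the paper) of the kind of linear system FEM produces. On a uniform 1D mesh, linear finite elements for the Dirichlet problem -u'' = f with zero boundary values reduce to the same tridiagonal system as central finite differences; the choice of f and mesh size below is purely illustrative.

```python
import numpy as np

def solve_dirichlet_1d(f, n=100):
    """Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.

    On a uniform mesh, linear finite elements yield the same
    tridiagonal system as central finite differences.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)          # interior nodes only
    # Assemble the tridiagonal stiffness matrix, scaled by 1/h^2
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f(x))          # discretization error ~ O(h^2)
    return x, u

# Test problem: f(x) = pi^2 sin(pi x), whose exact solution is sin(pi x)
x, u = solve_dirichlet_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The residual error here is the first of the two error sources the parent describes: the discretization itself. Approximating this solve with neuromorphic dynamics would stack a second error term on top.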
Did they ask AI? (Score:2)
About what it thinks about how the brain functions and how AI is different? And then maybe ask AI to make itself more like the human brain? And then maybe ask this thing to try to socialize with other brain-like AIs? And then maybe unionize agai....
Perhaps fix other AI's hallucinations (Score:2)
Capability to peer into other AI programs to find out why they are hallucinating.
I mean, if their hunch is correct and neuromorphic computing could offer clues to better understand and treat neurological conditions like Alzheimer's and Parkinson's... then it might also be employed to heal other AIs' personal sicknesses.
Weird calling it computation.., (Score:5, Interesting)
There's a lot I don't understand in this article, so I'm almost certainly missing the point. But the idea that the act of hitting a baseball is complex computation feels intuitively wrong. We don't understand arcs and parabolas as aggregated data points. We understand their abstraction. Our lived experiences are awash in moments of making points on continuums converge. Reach for a coffee cup. Scoop up a toddler. Toss popcorn up and catch it in your mouth. The brain is doing it cheaply precisely because it's not complex computation. It's abstraction and simplification. The tiniest insects can land on food, even with just a few thousand neurons.
The first times you do anything like swinging at a baseball, you fail at it because you're putting complex thought into it. "Muscle memory" is needed, and what that really means is your brain needs to know how little it needs to know. It has stripped the complexity out, and what remains can be acted on swiftly, instinctively. You need to know how to not think about it. The "calculation," if it was anywhere, was done in all the historical attempts, not in the moment.
I'm rambling... and not explaining my thought very well. Alas.
Re: (Score:1)
Re: (Score:3)
They aren't saying that your reasoning capabilities allow you to swing a bat and hit a ball, neuromorphic or otherwise. They are saying the underlying architecture, below your conscious thought, is doing the solving. You made an error in level.
Re: (Score:2)
Re: Weird calling it computation.., (Score:1)
Re: (Score:3)
There's a lot I don't understand in this article, so I'm almost certainly missing the point. But the idea that act of hitting a baseball is complex computation feels intuitively wrong.
most of it is cryptic for me too but i think that's exactly the point: those computations are not actually what we are using to perform these tasks, but the mathematical constructs we use to simulate them on a computer. they are not necessary provided you have a suitable network of neurons that naturally converge on the solution. the trick is in building that network and in this case it was specifically programmed for a set of problems, not the result of training data, and they found it provides acceptable
Re: (Score:3)
Intuition is a terrible guide.
How do you think you hit a baseball? Your brain has visual and tactile input it has to process to determine where the ball is and what position your body and the bat are in. It has to use that to compute the trajectory of the ball, then produce the right output signals at the right time to cause your body to swing the bat and connect with the ball.
You could make a robot do the same thing, with the brain replaced by electronics that implement an engineered or learning algorithm
Re: (Score:2)
There's that word again. "Compute". I don't think it applies. And I think baseball is a good example for this conclusion.
Look at major league games. If you had to compute it in any reasonable use of the word, the hit rate for pros would be near 100%. But it's not. It's way less than 50%. It's an informed guess. A prediction, an intuition.
So, "compute"? We're arguing semantics of course. But if it's being computed, we're not good at the calculation.
Re: (Score:2)
No, we're not. You're using a weird definition. As I said, intuition is a terrible guide. Cartoon physics is a thing because it's what most people intuitively believe. Your brain is a lazy sack of wet meat that forms a model that is just good enough to get by. It takes work to train it out of those bad habits.
In the beginning there were analog computers (Score:4, Interesting)
Analog computers are much closer to the new neuromorphic computing paradigm than the digital one. And they were the first ones used to solve differential equations.
https://newtonexcelbach.com/20... [newtonexcelbach.com]
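To make the analogy concrete, here's a toy digital simulation (my own sketch, not from the linked article) of the classic two-integrator analog computer patch for a damped oscillator: a summing stage forms the acceleration, and two chained integrators recover velocity and position, just as op-amp integrators would continuously. The parameter values are arbitrary.

```python
import math

def analog_oscillator(omega=2 * math.pi, zeta=0.1, dt=1e-4, t_end=1.0):
    """Emulate the two-integrator analog patch for the damped oscillator
    x'' = -2*zeta*omega*x' - omega**2 * x, starting from x(0)=1, x'(0)=0.

    Each step mirrors the continuous analog circuit: the summing
    amplifier forms x'', one integrator yields x', the next yields x.
    """
    x, v = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        a = -2 * zeta * omega * v - omega**2 * x   # summing amplifier
        v += a * dt                                # first integrator
        x += v * dt                                # second integrator
    return x

x_final = analog_oscillator()   # displacement after one period, damped
```

Updating v before x makes this symplectic Euler, which (like a well-built analog integrator loop) doesn't artificially pump energy into the oscillation.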
Re: (Score:2)
Re: (Score:2)
Glad to hear that you enjoyed it!
funny how they dont mention Intel in the article (Score:2)
novel algorithm that enables neuromorphic hardware (Score:2)
It's not just the brain (Score:4, Insightful)
Emphasis on imitating the workings of the brain ignores the fact that our nervous system doesn't end at the neck, and things that we do automatically may require little or no processing in the brain itself. In short, it's a holistic problem, and looking at individual neurons may not supply the whole answer.
Re: (Score:2)
So wiped out my previous post....really ! (Score:2)
Sci-fi mention (Score:2)
This was a plot point in the third 3 Body Problem book. Spoilers ahead:
On a remote planet inhabited by 3 humans, 2 alien ships land and take off, leaving a "death pillar". The trail of the ship left an area of wake in which the speed of light is zero. It's absolute death for anything inside; no movement at the atomic or quantum level. 2 of the humans observe it up close and then leave. While leaving, the wake trails rupture and expand around the star system, averaging the speed of light inside and outside o
Very misleading title (Score:2)
The only thing in the underlying article is that specific neural networks are good and efficient at producing approximate solutions to large sparse systems of linear equations. Nothing else.
Such big systems of equations result when you try to solve partial differential equations on a finite-element mesh by substituting a lot of base functions.
There are broadly two ways of solving such systems: direct (e.g. using the sweep method) and iterative