Supercomputing Science

Why We Should Build a Supercomputer Replica of the Human Brain 393

An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Wednesday May 15, 2013 @04:42PM (#43735507)
    Comment removed based on user account deletion
    • by account_deleted ( 4530225 ) on Wednesday May 15, 2013 @04:47PM (#43735567)
      Comment removed based on user account deletion
    • by X0563511 ( 793323 ) on Wednesday May 15, 2013 @04:49PM (#43735583) Homepage Journal

      It doesn't need to be a mirror image, but it needs to "develop" in the same manner.

      The brain is plastic.

      • by Synerg1y ( 2169962 ) on Wednesday May 15, 2013 @04:57PM (#43735659)

        The brain "develops" in humans for a very long time though, to work around /with that the mechanical brain would either need to be able to develop itself or start off in an adult state.

        I have my doubts about the success of this project, but we've got to start somewhere, and we'd learn a lot along the way. It's not as if we don't already spend our country's money on wars, or on policing and giving aid to people who hate us instead.

        • Wouldn't a major feature of the design be emulating how a wetware brain can reconfigure its own neural connections? I'm assuming here that we're talking about creating a "blank slate" brain and allowing it to learn and develop its own personality.
          • by Rich0 ( 548339 )

            This is assuming that there is such a thing as a blank slate brain, or that any brain can be shaped in arbitrary ways.

            Brains grow. In fact, learning to play an instrument at an early age can actually cause changes to the folds of the brain visible to the naked eye. That is a dramatic example, but I'm sure there are a bazillion subtle ways the physical wiring of the brain gets set in near-permanent ways as it is forming. Some of that might be the result of experience, but some is likely the result of genetics.

      • by gl4ss ( 559668 )

        It doesn't need to be a mirror image, but it needs to "develop" in the same manner.

        The brain is plastic.

        Simulating that requires simulating quite a lot more than what the guy is proposing, which is why people label his project as impossible.

        We have trouble modelling the interactions of a few hundred atoms, never mind all the atoms the brain has. However, that is already under constant research, without dumping cash on this guy and "following his lead".

    • by Gabrosin ( 1688194 ) on Wednesday May 15, 2013 @04:53PM (#43735633)

      I think the stakeholders need to think about that simple question. The last thing we need is some sentient silicon running around like a pestilent child lobbing nukes between hemispheres for fun.

      Pestilent children are the worst, with all their plagues and their boils and their oozing pustules.

    • by Alsee ( 515537 ) on Wednesday May 15, 2013 @06:35PM (#43736435) Homepage

      The last thing we need is some sentient silicon running around like a pestilent child lobbing nukes between hemispheres for fun.

      If scientists persist in trying to play God with projects like this, they are going to unleash the Four Horsemen of the Apocalypse:
      War, Famine, Death, and Petulance.

      -

    • The last thing we need is some sentient silicon running around

      As long as they don't give the supercomputer legs, it won't be running around.

  • One teensy detail (Score:5, Insightful)

    by maugle ( 1369813 ) on Wednesday May 15, 2013 @04:43PM (#43735515)
    Simulating how the neurons and connections function won't be enough. You also need an initial state for each of them. Get even a tiny percentage of them wrong, and the result would probably be a virtual seizure.
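    A toy illustration of that sensitivity claim (nothing close to a real brain model): two copies of the same random leaky integrate-and-fire network, identical except for a 0.1% perturbation of the initial membrane potentials, soon stop producing the same spike trains. Every size and constant below is an arbitrary assumption chosen for illustration, not a brain parameter.

    ```python
    # Toy sketch: two identical leaky integrate-and-fire (LIF) networks whose
    # initial membrane potentials differ slightly typically diverge in their
    # spike trains. All parameters are arbitrary assumptions, not brain data.
    import numpy as np

    rng = np.random.default_rng(0)
    N, steps, dt = 200, 500, 1e-3            # neurons, time steps, step size (s)
    tau, v_thresh, v_reset = 0.02, 1.0, 0.0  # leak time constant, threshold, reset
    I_ext = 1.2                              # constant external drive
    W = rng.normal(0.0, 0.15, (N, N))        # random synaptic weights

    def run(v0):
        v = v0.copy()
        spikes = np.zeros((steps, N), dtype=bool)
        for t in range(steps):
            fired = v >= v_thresh
            spikes[t] = fired
            v[fired] = v_reset
            v += dt / tau * (I_ext - v) + (W @ fired) * dt  # leak + drive + input
        return spikes

    v0 = rng.uniform(0.0, 1.0, N)
    a, b = run(v0), run(v0 * 1.001)          # second run: 0.1% "wrong" initial state
    print("fraction of neuron/time-step spike mismatches:", (a != b).mean())
    ```
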
    • by X0563511 ( 793323 ) on Wednesday May 15, 2013 @04:50PM (#43735595) Homepage Journal

      Let "neurons" power themselves up, simulating mitosis. Your neurons didn't just appear one day, they grew from a single gamete.

    • Not to mention that we have no clear understanding of which cells do what. We now know that human glia cells -- well, some of them, anyhow -- when injected into mouse brains, make those mice measurably smarter.

      So obviously those glia cells do something. What?

      Now, glia cells weren't mentioned in the simulation. But let's be generous, and say that when this guy discovers that something is amiss, and researches more, and decides to put in glia cells, he'll be sure to make them do ... .... something.

      Yes. I w

    • Re:One teensy detail (Score:5, Interesting)

      by rwa2 ( 4391 ) * on Wednesday May 15, 2013 @04:55PM (#43735655) Homepage Journal

      Well, supposedly they have enough CPU power to do a pretty reasonable simulation of insect and even small mammal brains, like rats and cats.

      But supposedly there might be more going on in there than just interactions between connected neurons...
      http://discovermagazine.com/2009/feb/13-is-quantum-mechanics-controlling-your-thoughts#.UZQDe7VeZ30 [discovermagazine.com]

    • Simulating how the neurons and connections function won't be enough. You also need an initial state for each of them. Get even a tiny percentage of them wrong, and the result would probably be a virtual seizure.

      And what about the moral implications of subjecting a sentient artificial entity to this kind of torment over and over until you get it right?

    • Please mod parent up.

      It is not a question of computing power, but whether the feedback loops down at the cellular level are correct. And even if those are correct, there are intermediate structures that must be tuned or the "brain" is a useless jumble. And even if those are very close, it would still take only tiny errors in initial conditions for the "brain" to be insane or otherwise crippled.

    It'll still be a virtual seizure unless you're simulating all the signals a human body is sending to it. Otherwise, it'd just freak out because it has no body. You'd also need to pretty much simulate an entire world for it, as it would wonder why it couldn't see, couldn't walk, couldn't talk, etc. It would be an extremely depressed mind.
    • Presumably this 'brain' would be able to be restored from a backup to a known good state, and the simulation tweaked in some other direction. That's something human brains aren't capable of.
  • Yeah! (Score:5, Funny)

    by Arkh89 ( 2870391 ) on Wednesday May 15, 2013 @04:43PM (#43735527)
    sudo cat /dev/me > /dev/you
    You are not in the sudoers file. This incident will be reported to God.
    • by gweihir ( 88907 )

      To God? Naa, the sys-admin may object to that and then God may get shoddy system administration for a long time.

  • by Bogtha ( 906264 ) on Wednesday May 15, 2013 @04:48PM (#43735575)

    As a developer, I think initiatives like this are important.

    As a person, I can't help but think that being the person trapped inside the computer would be absolutely horrifying.

    • How do you know you're not trapped inside a computer right now?
    • by tftp ( 111690 )

      As a person, I can't help but think that being the person trapped inside the computer would be absolutely horrifying.

      You are already trapped inside the computer. To make matters worse, that computer is not very reliable, and cannot be repaired.

    • http://xkcd.com/876/ [xkcd.com]

      Apart from the obligatory xkcd, the only way to simulate the brain is at a relatively high level.

      You can use really detailed simulations of tiny parts - neurons and their components, synapses, and the various signalling molecules - to derive a higher-level model.
      This higher level model does not need to be perfect - it only has to be as accurate and repeatable as the natural variation between neurons under various conditions.
      For example, we accept that both 7 year

    • by gweihir ( 88907 )

      Put the 100 trillion connections (and they all have to be made in a way that forms a whole) into any software project complexity model, and you see how ridiculous this is. Even assuming one line of code would be enough for one connection, you end up with something like 5000 years and 900 million people (by the COCOMO model). So just forget it. This is plain old fraud to get money for a project that can only fail.

  • To show politicians what they are not.
  • Maybe they can remake They Saved Hitler's Brain.

  • by dmomo ( 256005 ) on Wednesday May 15, 2013 @04:55PM (#43735651)

    Robots will be so good at complex tasks that they will find it overkill to use one for simple tasks. They'll simply say, why waste a robot on this task when we have all of these stupid humans who are willing to do it for basically nothing. Half the quality at an eighth the price. Can't beat that.

    • I know you're being sarcastic, but I'd still like to respond seriously by pointing out that that's only true until the price to build new robots drops. The price to create new humans has remained roughly the same for as long as humans have been around, and it isn't getting any cheaper (if anything, the cost has gone up as new forms of fetal care have been put forward).

    • Or not. The following short story presents a picture where, instead of being slaves to robots, we may enslave them instead.
      http://marshallbrain.com/manna1.htm [marshallbrain.com]

    • by Kjella ( 173770 ) on Wednesday May 15, 2013 @05:42PM (#43736043) Homepage

      Robots will be so good at complex tasks that they will find it overkill to use one for simple tasks. They'll simply say, why waste a robot on this task when we have all of these stupid humans who are willing to do it for basically nothing. Half the quality at an eighth the price. Can't beat that.

      Yeah right, a robot that smart at complex tasks will use lesser computers and robots as tools, the way we use them as tools. You think companies will deal with hiring and training employees, with all their quirks and unreliability, when they can put in a purchase order for a $10 sensor and a $2 micro-controller and have the complex robot tell it how to do the job? Not bloody likely. Most of the reason computers suck at what they do is that we suck at telling them what to do. I expect a robot to be equally bad at telling a human what to do, while it should be excellent at simulating what a cheap piece of hardware could do, and it could transfer that control software with perfect accuracy in no time. Even the Matrix plot that we'll be living potato batteries is more plausible than the idea that they'll need us for simple tasks. We have a baseline for living; computers don't.

  • Moral objection (Score:4, Interesting)

    by girlintraining ( 1395911 ) on Wednesday May 15, 2013 @04:56PM (#43735657)

    We've long established that the source of the human "soul" is in the brain. Those interconnections give rise to consciousness and self-awareness -- and sentience. If you build something that precisely models the brain, you will be creating sentience. I have to question how we can create a sentient creature simply to experiment upon it and still claim to have a shred of humanity left in us.

    I know that this is not as dazzling and interesting as building the device to geeks like us, but we cannot simply ignore the ethical consequences of our actions. All vocations, all manner of human endeavor, must move forward with an eye towards a respect for life. This may not be human life we're creating, or even organic life, but it is no less deserving.

    Someday we're going to have cybernetic life walking about. And I have to wonder -- how well will they treat us, when they find out how ethical we were in creating them?

    • Re:Moral objection (Score:5, Interesting)

      by Intropy ( 2009018 ) on Wednesday May 15, 2013 @05:11PM (#43735783)

      When you create a child you're on the hook for raising it. You don't start out knowing everything about it, so you have to learn about it at the same time you teach it. That's moral. A new form of life is necessarily going to require more learning on our part in order to raise it well. We will make mistakes. We will hurt it. But that's life. The only realistic other option is not to create it to begin with. Better to exist imperfectly than not at all.

      • When you create a child you're on the hook for raising it. You don't start out knowing everything about it, so you have to learn about it at the same time you teach it. That's moral. A new form of life is necessarily going to require more learning on our part in order to raise it well. We will make mistakes. We will hurt it. But that's life. The only realistic other option is not to create it to begin with. Better to exist imperfectly than not at all.

        Yes, but we don't dissect our children to figure out how to better parent them.

    • by gweihir ( 88907 )

      We have established no such thing. We have established that the interface is the brain, but whether it creates anything or merely interfaces with something is completely unknown. Even the seemingly complex observations possible today are interface observations only, and very, very crude compared to the object observed. And remember that there are a lot of quantum effects going on in the synapses, and these are not well understood at all, even if the rest were completely deterministic.

    • by ViXX0r ( 188100 )

      Obligatory reference to this TNG episode (one of my favorites) that deals with this very subject: http://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Trek:_The_Next_Generation) [wikipedia.org]

    • We've long established that the source of the human "soul" is in the brain. Those interconnections give rise to consciousness and self-awareness -- and sentience. If you build something that precisely models the brain, you will be creating sentience. I have to question how we can create a sentient creature simply to experiment upon it and still claim to have a shred of humanity left in us.

      I know that this is not as dazzling and interesting as building the device to geeks like us, but we cannot simply ignore the ethical consequences of our actions. All vocations, all manner of human endeavor, must move forward with an eye towards a respect for life. This may not be human life we're creating, or even organic life, but it is no less deserving.

      Someday we're going to have cybernetic life walking about. And I have to wonder -- how well will they treat us, when they find out how ethical we were in creating them?

      1) The brain being the soul is far from established... the problem of consciousness is still with us.

      2) You probably had something that was sentient at one time for lunch.

      3) Taking things apart in a biological situation is a one-way process... typically it can't be repaired or "rebooted". That is a big difference from what is being described in the OP.

      All vocations, all manner of human endeavor, must move forward with an eye towards a respect for life. This may not be human life we're creating, or even organic life, but it is no less deserving

      To this I agree.

  • Experience a brain other than my own? Me think better with VR goggles and fake brain? I'm not sure I understand what that sentence meant. Perhaps I need some VR goggles.
  • by BlackSabbath ( 118110 ) on Wednesday May 15, 2013 @05:03PM (#43735713)

    Say this actually works. We create a brain and start down the long path of "teaching" it just like with new-born humans.
    What happens when we detect that the brain is "experiencing pain" (we already know that pain has a detectable neurological basis right?)
    What happens when we detect the brain is experiencing depression?
    What are our responsibilities then? Is this thing a human, a lab-rat, or a machine?

  • by wherrera ( 235520 ) on Wednesday May 15, 2013 @05:03PM (#43735715) Journal

    What exactly are "the functions of all 86 billion neurons"? I sense massive oversimplification here. Neurons have lots and lots of functions we have no idea how to simulate exactly, such as all the details of the thousands of networked internal metabolic mechanisms of any large mammalian cell, which most neural network simulations simply neglect.

    Furthermore, we have plenty of evidence that the non-neuronal components of the brain (glia and oligodendroglia) massively influence brain functioning, and may be required for adequate cognition. Moreover, we have no way of knowing whether a brain-in-a-vat will work the way a brain in the body, with all its connections, works. The above issues are just a start to the limitations of the scheme.

  • by FuzzNugget ( 2840687 ) on Wednesday May 15, 2013 @05:04PM (#43735719)
    They're going to build the matrix!
  • by Rob_Bryerton ( 606093 ) on Wednesday May 15, 2013 @05:04PM (#43735723) Homepage
    To put it in perspective, that 86 billion neurons would be 86 "giga-neurons"; huh, conceptually not too overwhelming. Then we have the 100 trillion connections between them, or 100 "tera-connections"? Forget it.

    Not to even mention (as someone already did) the initial state, then the learning process. To even form this structure in RAM would require, what? 40-50 more Moore's Law iterations? Which I doubt is even physically possible.

    I think this is the wrong approach, and even if possible, not in our lifetimes....
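    For what it's worth, the raw storage arithmetic above is easy to run yourself; the answer swings by orders of magnitude depending on how many bytes of state you assume per synapse and what machine you take as the baseline, both of which are pure assumptions in the sketch below.

    ```python
    # Back-of-envelope storage estimate for holding synapse-level state in memory.
    # bytes_per_synapse and the baseline machine are assumptions for illustration.
    import math

    synapses = 100e12          # ~100 "tera-connections"
    bytes_per_synapse = 64     # assumed: weight, delay, plasticity state, indices

    total_bytes = synapses * bytes_per_synapse
    print(f"~{total_bytes / 1e15:.1f} PB of synaptic state")

    baseline_bytes = 100e12    # assume a ~100 TB shared-memory machine as baseline
    doublings = math.log2(total_bytes / baseline_bytes)
    print(f"~{doublings:.0f} memory doublings from that baseline")
    ```
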
    • by gweihir ( 88907 )

      Indeed. And it is not just that you need to simulate each of these 100 trillion synapses (each of which is complex); you also need to figure out how to connect them in the first place. I think writing a piece of software with 100 trillion lines of code is probably a fair comparison. If I put that into the (not very good) COCOMO model, I get PM = 3.6 * KLOC^1.20, i.e. 5.7 * 10^13 person-months to do it. That leads to a project time of 63000 months, or 5300 years. Sounds about adequate. Incidentally, this time is only enoug
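      For anyone who wants to check that arithmetic, here are the standard embedded-mode COCOMO formulas under the one-line-of-code-per-connection assumption from the comments above; the whole exercise is of course a rhetorical Fermi estimate, not a real project plan.

      ```python
      # Basic COCOMO, "embedded" mode: PM = 3.6 * KLOC^1.20, TDEV = 2.5 * PM^0.32.
      # Assumes one line of code per connection, as in the parent comments.
      kloc = 100e12 / 1000          # 100 trillion "lines" expressed in KLOC

      pm = 3.6 * kloc ** 1.20       # effort, person-months (~5.7e13)
      tdev = 2.5 * pm ** 0.32       # schedule, calendar months (~63,000, ~5,300 years)
      staff = pm / tdev             # implied average head count (~900 million)

      print(f"effort   ~ {pm:.1e} person-months")
      print(f"schedule ~ {tdev:,.0f} months (~{tdev / 12:,.0f} years)")
      print(f"staffing ~ {staff:.1e} people")
      ```
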

    • by xtal ( 49134 ) on Wednesday May 15, 2013 @05:36PM (#43735983)

      Of course it's possible. It exists in your head right now.

      There is even a known process by which they are constructed in ~9 months.

    • I am 99.9999% certain this thing has no hope in hell of outthinking my dog, let alone a human.

      Dog: barks and points nose at the box of bones.
      Me: go away, I'm watching TV.
      Dog: barks at the door.
      Me: don't want the dog to pee, better let her out.
      Dog: when I am partway to the door, dashes back to the bones and points at them.

      Can computers think up shit like that?
  • by jcaplan ( 56979 ) on Wednesday May 15, 2013 @05:08PM (#43735757) Journal
    Please don't waste your time with this nonsense.

    1. It is not possible to simulate a system when you don't know the rules of the system. We don't know how neurons work. Sure, we know much about neurons and we can set up small networks that seem to give interesting results, but there is a vast amount about real neurons that is unknown. We don't even know what all the types of ion channels are, let alone the varied states of modulation (phosphorylation of proteins and binding of various neuromodulators). We know little about how the brain learns. We have some knowledge about how a neuron might maintain a mean firing rate over time or how certain connections may vary in fairly artificial stimulus regimes (pairs of spikes with varied timing) in slices of brain tissue (typically hippocampus) in vitro. We have only basic understanding of how the brain is wired up on a microscopic scale (e.g. cortical columns). At this point people are still making fundamental discoveries about how the retina works.

    2. A supercomputer thrown at the problem would still be orders of magnitude too weak, even given huge simplifying assumptions, such as using "integrate and fire" neurons.

    Anyone attempting to do whole brain simulations at this point is simply wasting their time and a lot of electricity. When they promote the idea they waste other people's time as well. A perfect example of this is the fool who claimed that he had simulated a cat visual cortex, which, though only a presentation at a conference and not a published paper, got attention here on Slashdot. He included one equation, randomly connected his network, and then simulated it on a large compute cluster. His "chief scientific conclusion" was that he could replicate the propagation speed of data through the layers of the network - a feat that could have been accomplished with paper and pencil in less time.
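    For a rough sense of why even the "integrate and fire" shortcut mentioned above is debated, here is a Fermi estimate of the update cost; every figure below is an assumption, and each layer of added biological realism (ion channels, neuromodulation, glia) multiplies the total many times over, so the answer moves across several orders of magnitude depending on what you assume.

    ```python
    # Fermi estimate of the compute cost of a whole-brain simulation using the
    # heavily simplified "integrate and fire" neuron model mentioned above.
    # All per-update costs and rates are assumptions for illustration only.
    neurons = 86e9
    synapses = 100e12
    dt = 1e-4                       # assumed 0.1 ms time step
    ops_per_neuron_update = 10      # assumed: leak, threshold test, reset
    ops_per_spike_per_synapse = 5   # assumed: weight fetch and accumulate
    mean_rate_hz = 5                # assumed average firing rate

    neuron_ops = neurons * ops_per_neuron_update / dt
    synapse_ops = synapses * mean_rate_hz * ops_per_spike_per_synapse
    total = neuron_ops + synapse_ops
    print(f"~{total:.1e} operations per simulated second")

    machine_flops = 20e15           # assumed sustained rate of a large machine
    # (in practice memory traffic, not arithmetic, is usually the bottleneck)
    print(f"fraction of that machine needed for real time: {total / machine_flops:.2f}")
    ```
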
    • by aXis100 ( 690904 )

      I agree.

      Also, we don't fully understand how a bird flies, and how the complex interactions between feathers create lift and thrust. We should never attempt to simulate flight using such crude models as fixed wings and propellers.

      And don't get me started on locomotion....

      Have you ever thought that maybe starting a simulation using our limited knowledge of how neurons work will help us to refine our understanding? Even a failed experiment provides useful data that we can use to improve our models.

  • IIRC, the idea that the human brain runs entirely on classical interactions between neurons is one that is not settled science.

    I suppose doing a simulation will lend some data to proving or disproving the theory, but to start out claiming that it will replicate the human brain makes some definitive a priori claims. Maybe it will, maybe it won't.

  • by paiute ( 550198 ) on Wednesday May 15, 2013 @05:11PM (#43735787)
    No, you cannot make a supercomputer which will be a replica of the human brain. First of all, we don't know enough about the biochemical workings of the brain to do that. Every day the literature contains papers in which the incredibly complex soup inside cells shows us some ridiculous interaction we could not have predicted.

    It would be the equivalent of building a lemonade stand, staffing it with a five-year-old, and claiming that you were replicating the US economy.
  • Use human DNA to program the simulation. If the DNA in a human zygote can develop into a brain, why can't a simulation of the DNA develop into a simulation of a human brain?
  • It is like claiming that throwing a lot of transistors together in the form of a CPU and memory makes a working computer. Ever heard of _software_? Ever heard that software is actually orders of magnitude more complex than hardware? And ever heard that there are quantum effects going on in synapses that cannot easily be simulated?

    But those stupid enough to give money when the claims are just grand enough will give money for this as well, no doubt.

  • by mandginguero ( 1435161 ) on Wednesday May 15, 2013 @05:17PM (#43735819)

    As a neuroscientist, this seems absurd. Not all neurons perform the same functions, some are very different in terms of structure and connections (pyramidal cell vs interneuron for example). We don't have a good sense for all the multitude of ways they can connect (via axon projections, or through retrograde signals at a given synapse). And we're just starting to appreciate the role that non neuron brain cells play in cellular communication - astrocytes release signaling molecules that modulate neuronal function (caffeine interferes with these) and they also regulate the amount of ions around neurons - in essence they enable neurons to change states.

    • Indeed. If we were actually simulating a brain at the molecular level those details wouldn't be terribly important to understand, so long as we recognized their existence, but to simulate it using only the total computational power of all the computers on the planet is going to require a lot of glossing over the details. Still, there could be much to be learned along the way - say, by simulating a complete rat brain as he intends to do. It will probably take a *lot* of work to get it to behave anything like the real thing.

  • by caywen ( 942955 ) on Wednesday May 15, 2013 @05:25PM (#43735909)

    In order to construct a virtual brain, doesn't that mean it has to be grown, virtually? What would be the environment in which it grows?

  • by Okian Warrior ( 537106 ) on Wednesday May 15, 2013 @05:30PM (#43735933) Homepage Journal

    ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition.

    This.

    Also, the lack of any sort of a roadmap as to how to do this.

    Also, the lack of any sort of definition for "consciousness", or any indication that it is an emergent property, or any way to measure when you've succeeded in making consciousness, or any theoretical evidence at all that it would arise from any specific plan.

    We could model as many neurons as we like and it *still* wouldn't be a human brain unless we figure out how those neurons connect with each other. With no detailed plan, it's like trying to build a house by tacking boards together.

    The "self-assured scientist" could start by telling us how a Cortical Column [wikipedia.org] is wired up, how the feedback and feed-forward between columns works, and why artificial neural nets have inputs on one side and outputs on the other, when the brain apparently has both inputs and outputs on one side (in the sense of a functional diagram; ie - the efferent and afferent neurons connect to the same level of layer), and what the distinction is between these models.

    If he can't solve these basic issues, how can he hope to succeed in such a complex and ambitious project?

  • by asmkm22 ( 1902712 ) on Wednesday May 15, 2013 @05:44PM (#43736063)

    If we were to be able to build an AI, what would we teach it? Stuff that's taught in school? Would we do anything to simulate social development? Would we let it read through 4chan?

    So much of what makes us intelligent, rather than simply smart, comes from experience. So how would we simulate those experiences?

  • In our development of AI and our understanding of the human mind.

    As to the rights of, or risk from, an artificial intelligence: I think if we're appropriately paranoid it will pose no risk of going Skynet on us. And as to any abuses of the little fellow... he's going to be a billion-dollar lab rat... he's going to exist at our sufferance and will be created by our will.

    We will be this thing's mother and this thing's God. I have no problem assuming a role we've earned. If we create a sentient artificial m

  • Locating lost car keys.
  • We discovered that eddy currents can excite nearby neurons that aren't even connected by synapses. This means the damn shape and size need to be the same. Same size because of the inverse-square relation of electromagnetic effects...

    I believe that any sufficiently complex interaction is indistinguishable from sentience, because that's what sentience is. This type of research may help us study our own ethics and theory of the mind, but unless it's built to scale, or the simulation simulates at the quan

  • The rational thing to do is wait until computer technology is fast enough that doing this kind of project is cheap. Spending billions of dollars on it right now is a waste of money, and may not even get us there any faster. After all, Markram has no control over when CPUs, switches, and storage devices actually are fast enough.

  • by Animats ( 122034 ) on Wednesday May 15, 2013 @05:58PM (#43736187) Homepage

    From the article:

    "There are too many things we don't yet know," says Caltech professor Christof Koch, chief scientific officer at one of neuroscience's biggest data producers, the Allen Institute for Brain Science in Seattle. "The roundworm has exactly 302 neurons, and we still have no frigging idea how this animal works."

    That's the problem. Just because we can extract the wiring diagram doesn't mean the components are well understood yet. Also, if we understood the components and how to wire them up, it would be cheaper to just build hardware. Simulating neurons is slow. It's like running SPICE instead of building circuits. Works, but there's about a 1000x or worse speed, power, and cost penalty. GPUs are often simulated at the gate level before making an IC; NVidia uses twenty or thirty racks of servers to simulate one GPU during development.

    What bothers me about claims of strong AI is that I've heard it all before. Ed Feigenbaum, the "expert systems" guy at Stanford, was running around in the 1980s promising Strong AI Real Soon Now, if only he could get funding for a giant national AI lab headed by him. He even testified before Congress on that. Expert systems were a dead end.

    Rod Brooks from MIT went down this road too. His COG project [wikipedia.org] had a robotic head and some arms, some facial expressions, and a lot of hype. Work ceased on that embarrassment in 2003. He'd done good artificial insect work, but the jump to human level was way too big.

    This is the hubris problem in AI. Too many people have approached this claiming their One Big Idea would lead to strong AI. So far, not even close.

    All the mammals have similar DNA and brain architecture. A mouse brain is about 1g; a human brain is about 1000g. So build a simulated mouse brain and demonstrate it works, or STFU.

  • by Ungrounded Lightning ( 62228 ) on Wednesday May 15, 2013 @07:46PM (#43736887) Journal

    ... once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies.

    You can watch it go immediately insane from sensory deprivation.

    Modeling the brain is not enough. You have to model enough of its supporting systems and environment to keep it functioning.
