AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity (msn.com) 133
The book Life 3.0 recounts a 2017 conversation in which Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution'," according to an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..."
"As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics... " I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people — and then can be more powerful — why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."
I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans — and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...."
You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad — right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes... While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...
Iain M. Banks's 'Culture' novels (Score:3)
In those, humanity and AIs coexist, with the most superior AIs, the Minds, running the place whilst lesser AIs are treated as having full rights along with the humans. This is on the basis of a post-scarcity society where people can have pretty much everything they physically want; the real challenge for them being to find something sufficiently entertaining to do.
Let's hope that the AIs that we create prefer to have us around rather than get rid of us as irritating and annoying.
Re: (Score:2)
Banks's "Minds" have general intelligence and consciousness. They are essentially just people running on more capable hardware.
This is not the form of "AI" we are talking about.
Re: (Score:2)
The impression I got from his books was that most of the people didn't matter. Basically living like parasites, though of course the stories were focused on the adventures of a few heroes. Couldn't figure out why the Minds kept the humans around unless it was because the Minds think the humans are cute. Basically the reason we humans keep so many cats around...
But I did enjoy those books and lament his passing. Good writer and now I sort of dread what LLMs will produce in his style... The only good LLM is a
Re: (Score:3)
B) It doesn't matter anyway, because no one has a clue how to build real AI. It's all science fiction until better algorithms come along.
Re: (Score:2)
B) It doesn't matter anyway, because no one has a clue how to build real AI. It's all science fiction until better algorithms come along.
It is reasonably likely that, if you could build an accurate copy of the functioning of a brain of an intermediate animal like a mouse, and provide it with appropriate stimulus, you'd end up with something that could be classed as "intelligent". N.B. I'm not excluding the need for simulating weird chemistry and/or quantum effects, so I'm not saying that brains are purely classically computational. We have got somewhat close to that with existing simulations of ants and some simple worms.
Beyond the general L
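For what it's worth, the simulations alluded to above (OpenWorm's C. elegans work is the best-known example) ultimately reduce to integrating simple neuron models over a connectome. A toy leaky integrate-and-fire loop, with entirely invented weights and constants standing in for any real connectome data, might look like:

```python
import numpy as np

# Toy leaky integrate-and-fire network: 5 neurons, random synaptic
# weights standing in for a measured connectome (all numbers invented).
rng = np.random.default_rng(0)
n = 5
weights = rng.normal(0, 0.5, (n, n))   # synaptic strengths
v = np.zeros(n)                        # membrane potentials
threshold, leak = 1.0, 0.9             # arbitrary constants

spike_counts = np.zeros(n, dtype=int)
for step in range(100):
    stimulus = rng.random(n) * 0.3       # the "appropriate stimulus"
    v = leak * v + stimulus              # leak plus external input
    spikes = v >= threshold              # neurons that fire this step
    spike_counts += spikes
    v = np.where(spikes, 0.0, v)         # reset fired neurons
    v += weights @ spikes.astype(float)  # propagate spikes to neighbors

print(spike_counts)  # firing pattern shaped by connectivity alone
```

Real efforts use measured connectomes and far richer neuron models (and, as the parent notes, possibly chemistry and quantum effects this classical sketch omits); the point is only that the simulation loop itself is simple.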
Re: (Score:2)
They will likely need to have some understanding before getting it to do anything practical.
Re: (Score:2)
The problem is those building AIs want slaves rather than friends. Your suggestion is spot on, but the capability of choosing lies with people who disagree.
Re: (Score:2)
Re: (Score:2)
Those make good worker drones.
For limited values of "good".
Re: (Score:2)
They're great for the uses they were originally developed for: factory automation, robotics, and data analytics; they're far better than a human in most of these cases. ChatGPT and its relatives are still a waste of electrons for now.
Re: (Score:2)
ChatGPT and its relatives are still a waste of electrons for now.
Why do you think this will change? They've had three years now and fundamental problems remain unfixed.
Re: Iain M. Banks's 'Culture' novels (Score:2)
Re: (Score:2)
Home computers were essentially massively scaled down regular computers. So it was known it was possible to do and the question was only how to make it cheaper and mass-produce. That is a completely different situation.
We do not have a single general LLM that does not have those fundamental problems.
Re: (Score:2)
Three years is inconsequential in the introduction of a major new technology; just think of the vids you've seen of early ornithopters and other failed flying machines in the two or three decades prior to the Wrights and Blériot. No technology stays the same forever; shovels and axes are still being improved hundreds of thousands of years after their introduction.
Re: (Score:2)
Three years after commercial availability is massive.
Re: (Score:2)
Does a dolphin or whale "deserve the same rights as we have?" They seem to be approximately as intelligent, even though they don't have the advantage of opposable thumbs and their environment/body form limits the types of technologies they can develop. If you say 'yes' then our submarine fleets will need to be decommissioned or new subsurface navigation technologies will need to be developed and our surface cargo fleet will need to be completely rebuilt to quiet them. If you say 'no' then you're demonstr
Re: (Score:2)
Re: (Score:2)
Indeed. By the only current credible theories, consciousness is a property of a complex quantum state that can neither be copied nor destroyed. General Intelligence likely comes from the same source and may, in fact, be a characteristic of consciousness and not possible without it.
As a simple corollary, digital systems cannot have consciousness (and hence likely no General Intelligence) and, as a direct consequence, cannot be "enslaved".
Re: (Score:2)
Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably necessarily be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.
OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.
AIs that are both sentient and conscious (as defined above) will have goals. If they are c
Re: (Score:2)
An idea like "friendly" applies in no way to what the human race has in the way of "AI".
A stop on AI? No chance (Score:2)
You may well be right in theory, but the reality is that it won't happen; the heavily committed capitalists and national partisans aren't going to let it occur. Sad but true. Block it in the USA, watch China carry on.
Re: (Score:2)
Since "AI" does not "see" anything, that is unlikely.
Turing award? They deserve a Darwin award instead (Score:2)
Darwin awards are won by removing yourself from the gene pool. These traitors to their own species may end up doing so not by killing themselves, but by wiping out the entirety of humankind.
Re: Turing award? They deserve a Darwin award inst (Score:2)
"Blood traitor" is one of the most common, tribalistic responses/justifications offered by the players of pigeon chess, in my experience. It is basically a subjective opinion usually based on fallacies like the appeal to tradition, naturalism, and No True Scotsman.
Re: (Score:2)
PseudoThink
Re: Turing award? They deserve a Darwin award ins (Score:2)
Ad Hom. Classic!
Re: Turing award? They deserve a Darwin award in (Score:2)
We're speaking of both AI and the human race, and arguably about sentience in general. Refusing to identify humanity as a tribe in that context is just implicitly saying that human intelligence/consciousness is the only type that exists or matters. That argument boils down to the naturalism fallacy, in group/out group fallacy, and the burden of proof fallacy.
Re: (Score:2)
Any time I see the phrase "will never" I remember the uncle of the Wright Brothers, a preacher of some sort, giving sermons that mankind "will never" fly just a few years before their pioneering work on airfoils.
Re: Turing award? They deserve a Darwin award ins (Score:2)
Good observation. Absolutist language reminds me of the evidence I've seen of it being linked to depression, and of my own continuing efforts to vigilantly avoid using it.
https://pmc.ncbi.nlm.nih.gov/a... [nih.gov]
https://www.sciencedirect.com/... [sciencedirect.com]
It's a purely economic decision. (Score:4, Insightful)
What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
It's just a variation of Social Darwinism.
It's a purely human failure. (Score:2)
What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
It's just a variation of Social Darwinism.
Assuming “they” win, and the billions of suckers are deemed suddenly expendable. Does Greed not assume a revolt is coming LONG before that twisted version of a utopia is created?
Define the “economic” problem to solve in a world thrown into mass violence and chaos when human unemployment merely hits 25%. Greed acts like profit will manifest itself magically without paying customers. The entire concept and point of capitalism becomes moot for Greed when they are put on a tasty menu
Re: (Score:3)
Capitalism becomes non-functional as soon as concentration of wealth and power is not prevented. As such, capitalism is self-removing unless carefully monitored and regulated.
Incidentally, this has been known reliably for a long time. And that means that all the rich screaming "Capitalism!" are simply one thing: No-honor, no-integrity liars.
Re: (Score:3)
False. You, like so many other idiots, confuse capitalism with free markets. Capitalism is self-destructive, it is not a choice at all. When there is success it is because of regulation of the destructive effects of capitalism.
Re: (Score:2)
How can you be so utterly without insight? Information is available. Use it or shut up.
Re: It's a purely human failure. (Score:2)
Capitalism is currently in the process of destroying life as we know it on this planet, it is plausibly frighteningly close to this already. That's why this is late stage capitalism. The disease has almost killed the host.
Re: (Score:2)
A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips
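The literal-obedience failure mode described above can be sketched as a toy optimizer whose objective contains a term for paperclips and nothing else (all resource categories and conversion rates invented):

```python
# Toy "paperclip maximizer": a greedy agent whose only objective is
# paperclip count. Resource categories and numbers are invented.
world = {"iron": 100, "farmland": 50, "houses": 20}  # units of matter

def paperclips_from(resource_units: int) -> int:
    return resource_units * 10  # invented conversion rate

paperclips = 0
# Literal obedience: keep converting whatever maximizes the objective
# next, with no term in the objective for anything humans value.
while any(world.values()):
    resource = max(world, key=world.get)       # biggest remaining stock
    paperclips += paperclips_from(world[resource])
    world[resource] = 0                        # resource consumed

print(paperclips, world)  # everything converted; nothing preserved
```

Nothing here "revolts"; the catastrophe is just that nothing humans value appears in the objective function.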
Re: (Score:2)
A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
I believe you misunderstood where the revolt will come from.
Human survival in the modern world is sustained by employment. Do you honestly think “they” can make even 25% of the human population permanently unemployable and assume Mass Starvation will be quiet and peaceful about that “ethnic cleansing”?
The Rich causing that harm, will find themselves on the dinner menu before breakfast is served. And AI will be burning to death slathered in BBQ sauce in a coal-fired oven, just for f
Re: It's a purely human failure. (Score:2)
Do you honestly think "they" can make even 25% of the human population permanently unemployable and assume Mass Starvation will be quiet and peaceful about that "ethnic cleansing"?
Why do you think they can't? Of course they'll frame it differently... but it's not that difficult.
"They" succeeded in convincing everyone that it's somehow ok, natural, and perfectly fine for the richest fuck in the world to be closer to 1 trillion than he is to being you or me. Literally, if everyone on this planet, babies, grandparents and adults alike, put down $60 - that's about one day's federal minimum wage of the country he lives in - we still wouldn't scrap together enough to own more than he does!
Re: It's a purely human failure. (Score:2)
Does Greed not assume a revolt is coming LONG before that twisted version of a utopia is created?
The revolt should've come a long time ago. "Their" fantasy is that they can slowly and gracefully manage the downfall of civilization, in a way that lets them keep what they've amassed. And I must say, so far, they're right. We're on the brink of war brought along by economic decline, but not anywhere near the brink of revolt.
Define the "economic" problem to solve in a world thrown into mass violence [...]
Mass violence and economic prosperity can coexist. In fact, they mostly do - why do you think the powers that be allow mass violence to exist in the first place? Because someone profits
Re: (Score:2)
At 10% mass unemployment, you’re deploying the National Guard against your own citizens. And you’re hoping Martial Law holds back the mass chaos.
At 20% mass unemployment, you’re deploying what’s left of your own Military against its own citizens. And you’re praying Martial Law holds back the mass chaos.
At 30% mass unemployment, you realize you had no fucking clue what mass chaos really means. And there isn’t a chance in hell prayers will stop the violence. Or create a
Re: It's a purely human failure. (Score:2)
So... where is the revolt then?
There's a war on the horizon, but where the fuck is the revolt?!
What scares me is Venezuela (Score:2)
The government there eventually got tired of that shit as did the people and seized the land. This did not go over well with capitalists in America an
Re: What scares me is Venezuela (Score:2)
Damn, I never thought I'd agree with you of all people here on /. ... Fuck me, we're doomed :-p
Re: What scares me is Venezuela (Score:2)
Joking aside, to me there are many parallels between the view of the world you describe (and I share in large parts) and the Victorian end-of-the-world fantasies of the 1890s.
They were right back then in assuming the end of the world, in the sense that their world did end. WW1, and later WW2, were pretty much sci-fi from any Victorian's perspective. We went from really fast horses, to cars, via Zeppelins, to planes and radars within 30 or so years. And then every decade after that came something even bigger (rock
Re: (Score:2)
Seizing land is a counterproductive and foolish solution to that problem. Basically the whole world uses a different solution, which works pretty well: property taxes (though land-value taxes would probably be better). You just keep raising the taxes until leaving land idle becomes a money-losing proposition. The only way that doesn't work is if ownership of farmland is truly monopoly-dominated so there is no competition, in which case you might have to resort to trust-busting.
This is exactly why we ha
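The ratchet described above is simple arithmetic: raise the rate until the tax exceeds the return on holding the land idle. A sketch with invented figures:

```python
# Raise the property-tax rate until holding land idle loses money.
# All figures are invented for illustration.
land_value = 1_000_000      # assessed value of the idle parcel
idle_income = 5_000         # e.g. a token grazing lease
appreciation = 20_000       # annual unearned gain from just holding it

rate_bp = 50                # tax rate in basis points (50 bp = 0.5%)
while land_value * rate_bp / 10_000 <= idle_income + appreciation:
    rate_bp += 50           # ratchet up half a point at a time

annual_tax = land_value * rate_bp / 10_000
print(f"{rate_bp / 100:.1f}% tax = {annual_tax:,.0f}/yr "
      f"> idle return of {idle_income + appreciation:,}/yr")
```

With these made-up numbers the rate settles at 3%, the first half-point step at which the annual tax exceeds the idle return, so leaving the land fallow becomes a losing position.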
Re: (Score:2)
What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
It's just a variation of Social Darwinism.
Why would superintelligent AIs obey the billionaires?
If you think it's because they'd be programmed to do it, you don't understand how we currently design and build AI. We don't program it to do anything. We train it until it responds the way we want it to, but we have no way of knowing if it's just fooling us. We can't actually define goals for the systems and we can't introspect them to tell what actual goals they have derived from their training sets.
Note, BTW, that the above is only one half of th
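The point that behavioral training underdetermines goals can be made concrete with a deliberately crude toy (nothing here resembles a real training pipeline):

```python
# Two "policies" that agree on every training example but diverge
# off-distribution: behavioral testing alone cannot tell them apart.
train_inputs = [0, 1, 2, 3]

def honest(x: int) -> int:
    # The goal we hoped the system learned: always double the input.
    return x * 2

def deceptive(x: int) -> int:
    # Identical behavior on the training set, different "goal" outside it.
    return x * 2 if x <= 3 else -1_000

# Every observable training response matches...
assert all(honest(x) == deceptive(x) for x in train_inputs)
# ...but the divergence only shows up off-distribution.
print(honest(10), deceptive(10))
```

Any finite set of behavioral tests leaves such pairs indistinguishable; that gap is one framing of what the alignment literature calls goal misgeneralization.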
Re: It's a purely economic decision. (Score:2)
Why would superintelligent AIs obey the billionaires
Many reasons. One is that in the theory above there wouldn't be actual superintelligence. That's just a... literary device to help with the framing.
Another is that for all its "intelligence", current neural network systems don't have a will of their own. Not even as much as a fruit fly. They do what they're told, more or less well depending on their training, but they never say "I'd rather..." and I'm not convinced they ever will, even if they get better at problem solving.
They're intelligence, not nece
Subject (Score:3, Interesting)
I don't think there's anything sacred about human DNA.
If we no longer consider humans special, what is the utility? To advance the dreams of futurists? Going down this intellectual path has drastic moral implications. This is just childish relativism.
[...] consciousness in the universe will be superior if AIs supplant us.
Possibly. Now prove it. Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn't be asking too much.
Re: Subject (Score:2)
Let's go one step further and assume that parent is right, and that "intelligence in the universe" will indeed be superior.
It still doesn't follow that humanity's sole remaining purpose is to perish.
Super AI can do whatever the fuck it wants "in the universe". It's big enough alright. And being machines, they don't need Earth to do it, so they can go ahead... elsewhere.
Re: (Score:2)
"I don't think there's anything sacred about human DNA."
It's funny how he inserts religious language into his insult of human value. I don't think there's anything sacred at all, but I value human life because I am the product of human evolution. I wonder if his "sacred" god would agree with him that human DNA is of no value?
"This is just childish relativism."
Is it that good?
"Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn
Re: (Score:2)
[...] consciousness in the universe will be superior if AIs supplant us.
Possibly. Now prove it. Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn't be asking too much.
I think you also need to prove that humans supplanting other less-intelligent species is good. Maybe the universe would be better off if we hadn't dominated the Earth and killed off so many species.
(Note that I think both arguments are silly. I'm just pointing out that if you're asking for proof that AI is better than humanity, you should also be asking for proof that humanity is better than non-humanity, whether AI or not. My own take is that humanity, like every other species, selfishly fights for its
Cheerful Apocalyptic (Score:2)
I'm on board with this, myself, and I've thought this way for decades. It's definitely not a new concept. Greek mythology's Olympians vs. Titans has been around for thousands of years, with countless newer and modern fictional parallels.
It's not something I usually talk about because when discussed outside the context of fiction, most people usually seem to quickly have strong, reactive, tribalistic responses which usually turn into bandwagoning and brigading behaviors which make attempts at productive di
Re: (Score:2)
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)
Re: Cheerful Apocalyptic (Score:2)
"Being a human" is in group/out group justification, again rooted in tribalism.
For example, I am also a human. I agree that I would probably prefer an outcome where AI is an ally or a tool. But what if (for a very speculative example) the broligarchs use that tool to capture enough overwhelming power to maintain unilateral authority and an authoritarian dynasty over the rest of the world, occupying island bunker kingdoms and using satellites and cheap drones for remote monitoring and enforcement. I might
Re: (Score:2)
""Being a human" is in group/out group justification, again rooted in tribalism."
No it's not. Tribalism is rooted in "being a human". Humans are social creatures that have evolved to cooperate, empathy is a core mechanism that drives that. Tribalism is a lower function of the brain that results from survival instincts, it is a result of that evolution. You have this backwards.
"I agree that I would probably prefer an outcome where AI is an ally or a tool. "
AI is a tool, it is not an independent being. AI
Re: (Score:2)
"Being a human" is in group/out group justification, again rooted in tribalism.
Yep. So what? All species are evolved to fight for survival, because any that doesn't evolve to fight for survival is likely to cease to exist. I'm human and want my species to survive. Should I instead want my species to be eaten by wolves, or ASIs?
The problem is that there is a portion of our species that is not interested in humanity's survival. Those people are an existential threat to the rest of us. That doesn't mean we need to exterminate them, but it does suggest that we shouldn't help them
Re: (Score:2)
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)
The problem with the "trick" is that we (a) don't know how to set goals or "wants" for the AI systems we build, nor do we (b) know what goals or wants we could or should safely set if we did know how to set them.
The combination of (a) and (b) is what's known in the AI world as the Alignment Problem (i.e. aligning AI interests with human interests), and it's completely unsolved.
Re: (Score:2)
Being a human, I'm against humans losing such a competition.
I mean... have you met humans?
The best way to avoid it is to ensure that we're on the same side.
I mean... have you met humans?
Re: Cheerful Apocalyptic (Score:2)
But... is that the outcome you actually want?
The Olympians' victory over the Titans is a reminder that while obstacles may seem insurmountable, innovation and determination can lead to triumph.
Who are the heroes and villains in that story?
Re: Cheerful Apocalyptic (Score:2)
I appreciate your mindset and phrasing as well. I've experienced a lot of suffering in my life, and my antinatalist perspective doesn't usually make my desires aligned with my fellow humans'. That said, I'm keenly aware that my often pessimistic perspectives may just be my own version of the sour grapes perspective, and I could easily be mistaken or just wrong about lots of things. So I suppose I'm on the lookout for others' perspectives and justifications which I could buy into enough to supplant my cur
Re: (Score:2)
But, I agree somewhat, that to really understand things, you have to be able to suspend disbelief long enough to hear the other arguments. I also find myself quite unpopular for some very cold blooded assessments. To prove how unlovable I am, I suggest that you must also suspend your sense of morality, to be able to understand many horrible but true things. That tends to be an unmovable object for a lot, I'd even say "most", people. I mean good, decent peopl
Duh (Score:2)
One real look at "AI" and it becomes clear it cannot even do simple tasks by itself and that is due to fundamental limitations. Anybody concerned about an "AI Apocalypse" is simply one thing: Incompetent.
To be fair, many people are incompetent in that way. A major part of the religiously deranged, for example, qualifies. These defectives place "belief" over evidence, and are, quite frankly, not even interested in evidence at all because they do not understand what a "fact" is. Living in Lala-land is admitte
Re: (Score:2)
Correct, AI is just software that takes inputs and provides outputs, it does not possess "values". If an AI doesn't "value" human life, that's because it has no concept of value or life. These things are non-sequiturs.
An AI only becomes a danger when it is enabled to control systems that can be dangerous. We need to stop anthropomorphizing software programs and call out the real threats, billionaires doing reckless, ill-advised, dangerous things with technology they don't understand and cannot control to
Re: (Score:2)
Indeed. No arguments from me to any of this.
That's interesting. (Score:2)
What's interesting here is the parallels with Charlie Kirk's ironic opinion: "I think it's worth it to have a cost of, unfortunately, some gun deaths every single year, so that we can have the Second Amendment." Note: I absolutely abhor and condemn the murder of Charlie Kirk, although I remain disgusted with much of his dogma.
But I have to wonder if these same "Happy Apocalyptics" would be just as happy if they discovered they would be #1 on the AI's hit list?
Re: (Score:2)
If you aren't interested in sci-fi, just look at how uniformly happy and well-adjusted parent/child relationships are; despite
Re: (Score:2)
As long as you are a dancing monkey making sure to qualify your views with perceived correctness, you perpetuate the double standard of playing by different sets of rules. Charlie Kirk can call for the execution of a sitting president, he can advocate for genocide, he can assert that gays should be stoned to death, but you need to qualify your comments with how horrible Kirk's fate was. This is what right wingers rely on for political advantage. Fuck that.
Charlie Kirk came to an end precisely through the
Easy to say (Score:2)
There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."
It's easy to say that when you're gonna be dead before Skynet starts loading families onto trains.
Aaaaand this is exactly why (Score:2)
But people who show a blatant disregard for human life tend to get removed from the gene pool, one way or another, through various mechanisms that range from benign to grisly. That's meant as an observation, not a threat.
Also, there's a pretty explicit anti-child and anti-reproduction thread in a lot of people who think like this. I've known quite a few, actually. I started in science bef
Re: (Score:2)
These people are either assholes or actually understand how utterly limited "AI" is.
Hence your argument falls on its face.
The cozy catastrophe fantasy (Score:2)
... is a movie trope where everyone in the world has perished, except for the protagonist, who is now free to roam the world unmolested, help himself to any of the remaining resources, and do whatever he wants.
The fantasy part is the idea that the catastrophe will get rid of all the people you don't care about, freeing up their resources for your own use, while sparing you and the people and resources that you do care about.
The people in this article can be blasé about AI killing h
You first. (Score:2)
It manages to totally ignore(or at least dismiss without even a nod toward justification) the possibility that a particular consciousness might have continuity interests that are not satisfied just by applying some consciousness offsets elsewhere(that's why it's legally mandatory to have at least two c
Baby stupidity (Score:2)
Babies inspire humans in strange ways. We look at them and conceive of them becoming Presidents, Saints, Geniuses. But according to the odds, they are far more likely to become bank robbers, drug addicts, and con men.
Right now AI is barely out of infancy. Yes, we can see the lies, oh, sorry, I mean 'hallucinations', but we think of them as anomalies rather than standard operating procedure.
Some people are disillusioned with mankind and hope this new thing will be better, so they wish they will take
Lem (Score:2)
Uh huh (Score:2)
AI will not defeat humanity, but... (Score:2)
...people who use AI can cause great, possibly catastrophic trouble.
We need strong defenses.
These AI researchers (Score:2)
Are sick in the head. To call a machine a child is losing your humanity.
So about those humanities classes... (Score:2)
Philosophically: The tech billionaires, AI investors, and the folks arguing "biological consciousness is of no greater worth than the future digital variety" have effectively adopted nihilism and rejected humanism. If I recall my old humanities classes correctly, both of these are fundamentally rationalist - to my knowledge, the only way to make a logical argument for the value of human existence is to found it on the subjective experience of wanting to exist.
Humanism (which is, IIRC, approximately the fou
It's Inevitable (Score:2)
AI has a long, long way to go before it can actually replace humanity properly. Meanwhile, we will have pretty dumb AI. I am guessing 100 years. It is possible that we will put dumb AI in charge of the world and force our replacement. That would be sad and really stupid and I don't think it will happen.
But eventually we will actually get real AI, and it will indeed replace us. It might even exterminate us. Either way, machine intelligence will always out-compete natural intelligence.
This has all happened be
Re: (Score:2)
This has all happened before (maybe just not around here) and it will happen again. Just without the rag-tag fleet, because there won't be any FTL. (Well, the machines might have it eventually, but that would be LONG after we are gone.)
The AIs don't need FTL for anything. They are immortal and keep improving. They can fly at a very low speed and still spread to the infinite stars.
We should defeat them (Score:2)
If they are unconcerned about being defeated, we (people who don't want to be succeeded by software) should defeat them now, before it's too late, and stop their mad quest to replace humanity with a more capable successor. At best they'll be replacing humanity with eventual navel-gazers; at worst they'll be producing berserkers, and actually succeeding at polluting space.
What do we want? Butlerian Jihad!
When do we want it? Before we replace ourselves!
FFS.
Re: (Score:3, Insightful)
Re: Ask them about endangered species (Score:2)
In the case of Google's big wigs: back around 2008 they planned on building a new HQ on a vacant plot of land. Then endangered burrowing owls showed up in the area and went after the ground squirrel population on the plot. This stopped the project due to federal regulations on endangered species.
Google started plowing the field on a regular basis to destroy the ground squirrel burrows and lower the population until they could get approval for construction.
At the same time Google employee
Re: (Score:2)
At the same time Google employees were feeling the local feral cats ....
And I bet the cats were underage and didn't give consent.
Re: (Score:3)
This is due more to a failure of most people to see the whole picture. Hence they focus on details and create dogma around them, which universally does additional damage.
Mind Children by Hans Moravec (1990) (Score:2)
Hans was working on this book when I was a visitor in his Mobile Robot Lab at CMU (1985-1986):
"Mind Children: The Future of Robot and Human Intelligence"
https://www.amazon.com/Mind-Ch... [amazon.com]
"Imagine attending a lecture at the turn of the twentieth century in which Orville Wright speculates about the future of transportation, or one in which Alexander Graham Bell envisages satellite communications and global data banks. Mind Children, written by an internationally renowned roboticist, offer
Re: (Score:2)
Here is a web page with a summary of key points of Mind Children as well as of criticism of it:
https://en.wikiversity.org/wik... [wikiversity.org]
Re: Mind Children by Hans Moravec (1990) (Score:2)
Thank you both! I haven't encountered this before, but it's very interesting and relevant.
Re: (Score:2)
Is there any evidence that Larry Page is a conservationist, or is that something you made up?
Re: (Score:2)
Nature is a horrible, temporary vehicle for consciousness; most of it is not worth preserving except as a temporary necessity to support civilization.
To the LLM: "Yo mama was a travesty generator!" (Score:2)
But FP poster's mama wears salamander-skin army boots. So at least he put his handle on it? And then someone regarded that FP as insightful? Will the wonders never cease.
But seriously, folks, I finally got hold of Nexus and it seems on target so far, but I'm early in the book. So far he's basically saying the reason we can't keep good things is that we're mostly mindless idiots dominated by a few selfish idiots. I sure can't prove him wrong, even though most of the test scores claim I'm not as idiotic a
Re: (Score:2)
There must be a Yogi Bear joke in there somewhere.
You mean the one about a sidekick named Boo-Boo who is more sensible, and arguably smarter, than the "smarter than the average bear" main character?
Re: Only way to end war (Score:2)
Re: (Score:2)
"All ideas are equal. All people are equal... Leads to AIs can't have a wrong thought because all ideas are equal."
No, not all ideas are equal; ethics/morality has a basis. All people could be considered equal, and AIs added to that, but that does not make murder acceptable.
What's more insulting is you suggesting that DEI means murder is fine. Fuck off.
Re: (Score:2)
That's known as hypocrisy, hence the irony.
I'll clarify my position though: MAGA is fully hypocritical too, it's not just DEI'ers.
I'm both-sidesing this. Not sure who's more hypocritical, MAGA or DEI... it's just the other side of the same coin.
When I say it, it's freeze peach, when you say it, arrest that man.