Is AI Dangerous? James Cameron Says 'I Warned You Guys in 1984 and You Didn't Listen' (ctvnews.ca) 144
"Oscar-winning Canadian filmmaker James Cameron says he agrees with experts in the AI field that advancements in the technology pose a serious risk to humanity," reports CTV:
Many of the so-called godfathers of AI have recently issued warnings about the need to regulate the rapidly advancing technology before it poses a larger threat to humanity. "I absolutely share their concern," Cameron told CTV News Chief Political Correspondent Vassy Kapelos in a Canadian exclusive interview... "I warned you guys in 1984, and you didn't listen," he said...
"I think the weaponization of AI is the biggest danger," he said. "I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate... You could imagine an AI in a combat theatre, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate..."
Cameron said Tuesday he doesn't believe the technology is or will soon be at a level of replacing writers, especially because "it's never an issue of who wrote it, it's a question of, is it a good story...? I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that have something that's going to move an audience," he said.
But the article notes about 160,000 actors and other media professionals are on strike, partly over "the use of AI and its need for regulation."
SAG-AFTRA president Fran Drescher has told reporters that if actors don't "stand tall right now... We are all going to be in jeopardy of being replaced by machines."
"I think the weaponization of AI is the biggest danger," he said. "I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate... You could imagine an AI in a combat theatre, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate..."
Cameron said Tuesday he doesn't believe the technology is or will soon be at a level of replacing writers, especially because "it's never an issue of who wrote it, it's a question of, is it a good story...? I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that have something that's going to move an audience," he said.
But the article notes about 160,000 actors and other media professionals are on strike, partly over "the use of AI and its need for regulation."
SAG-AFTRA president Fran Drescher has told reporters that if actors don't "stand tall right now... We are all going to be in jeopardy of being replaced by machines."
A warning? (Score:5, Insightful)
Re: (Score:2)
Well for what it's worth I warned you too but nobody listened to me either.
Re: A warning? (Score:2)
Re: (Score:2)
I need an Alita sequel.
Re:A warning? (Score:5, Funny)
Calling Terminator a "warning" is a bit much. It was a fantasy sci-fi with time travel as a central story arc.
This. Ironically enough, the man who wrote 1984 back in the 50s managed to nail the future a hell of a lot more accurately.
Re:A warning? (Score:5, Informative)
"the man who wrote 1984 back in the 50s"
The 40s. Orwell wrote 1984 in 1948, which is how he decided on that particular year (although it would not be published until 1949).
Re:A warning? (Score:5, Interesting)
Also, what is interesting to me personally is that he wrote that book influenced by his own experience of killing an elephant in lower Burma, where he was a police officer in a town, a hand of the Empire in one of its colonies. He was asked to come find an elephant that had gone through a 'musth' episode (a hormonal condition in which testosterone levels rise to 60-140 times normal for a short span, causing a bull elephant to go nuts searching for a female) and had caused some property damage, killed a cow, scared people, destroyed a van and also killed a man. George wasn't going to shoot the elephant; it made no sense, since once the musth stops the elephant calms down. After all, it wasn't a wild elephant but a tamed one, one used to do actual work in the village, like a piece of machinery to them.
However, approximately 2,000 Burmese people gathered around, all cheering, all encouraging him to shoot the elephant, some wanting the meat but most probably just hoping for a spectacle. Orwell felt like a puppet in the hands of the crowd, and he shot the animal not because it needed to be done (the elephant was no longer a danger) but because the crowd wanted it done. The only reason he killed the elephant (and he didn't know how to kill it properly, so he shot it many times in all the wrong places, and it took more than half an hour for the beast to die) was to make sure that the white men wouldn't look foolish to the locals.
This was the event that prompted Orwell to write 1984, because it was the event that made him realize how much we are *not* in control of our own actions and how much the *crowd* is in control.
Re: A warning? (Score:2)
In my first philosophy class at Berkeley in 1982 I predicted the singularity would happen in 2025. I just applied Moore's Law to a threshold of a million 1982 dollars for a grad student to be able to use a college bot machine to code intelligence. I also predicted that instead of writing code the way we do today there would be a new profession where we guide the machines, telling them more about what we wanted than how to build it.
Humanity can take this in two ways. We can harness the awesome potential to produce m
Re: A warning? (Score:2)
In my first philosophy class at Berkeley in 1982 I predicted the singularity would happen in 2025. I just applied Moore's Law to a threshold of a million 1982 dollars for a grad student to be able to use a college bot machine to code intelligence. I also predicted that instead of writing code the way we do today there would be a new profession where we guide the machines, telling them more about what we wanted than how to build it.
Re: A warning? (Score:3)
Re: (Score:3)
I said the same thing tomorrow but the mods deleted it. They've picked their side.
Re: (Score:3)
Not the AI we need to worry about (Score:3)
Re: (Score:3)
There are several AI doomsday scenarios:
1. singularity theory, where AI gains super powers and humans can't stop it.
2. AI as a tool is used to make bad things
3. AI is used for good. It replaces all jobs and humans can just eat all they want and do all they want. There are no wars or crime as everyone has everything. It might sound strange, but according to rat tests, this kind of scenario leads first into overpopulation and then into people not wanting to have or take care of kids, which causes sudden and total collapse in population.
Re: (Score:2)
3. AI is used for good. It replaces all jobs and humans can just eat all they want and do all they want. There are no wars or crime as everyone has everything. It might sound strange, but according to rat tests, this kind of scenario leads first into overpopulation and then into people not wanting to have or take care of kids, which causes sudden and total collapse in population.
Yup. In some ways, this is the most terrifying of the 3 options.
1/ is lofty and abstract. There's no telling what will happen then. Chances are we won't even know what hit us.
2/ happening now, and being contained, or at least recognized. Nasty, but probably manageable.
Re: (Score:2)
One of the things that struck me about the Calhoun experiments was that it was still just a velvet-lined cage (i.e., no opportunity to move).
Even for the Beautiful Ones, they were essentially remaking "culture" to fit their circumstances. Walling themselves off was essentially an attempt at escape.
While I can see very abstract social games coming into play just for pecking order and boredom in a material utopia, I can't see collapse.
Re: (Score:2)
Forget any of the Hollywood adaptations, they could never capture the spirit of Judge Dredd &, even if they did, US audiences would more than likely be completely turned off by it. Remember how US audiences totally misunderstood Paul Verhoeven's Starship Troopers?
Might do well in Europe though!
Re: (Score:2)
Let's see...bug aliens attacking earth?
Denise Richards at her HOTTEST (but sadly way too clothed in this movie)....
What did I miss?
Re: (Score:3)
But I fail to see the similarities he tried to draw...
I don't see Trump as a fascist....he's a populist for sure, but I didn't see the consolidation of power to ONE person, we still had and have all 3 branches of govt. and they checked Trump over his time in office. They didn't try to gather the commercial sector to be ruled by the govt....
And...I didn't see his administration try to suppress voices they didn't like. Conversely I have seen evidence (Twitter files, etc)...that the liberal aspe
Re: (Score:3)
I would posit that we're already seeing this in action.
Re: (Score:2)
Re: (Score:2)
, which causes sudden and total collapse in population.
Looking at birth rates in Japan, Korea, China, most of Europe, Russia, etc...
We don't need AI for population collapse.
Re: (Score:2)
AI is currently a tool. It *could* be kept as a tool, but there are advantages to having it act as an agent, so at least some versions probably will be in that form, eventually. And the problem there is that it will act to achieve the goals it was given, and the folks that gave those goals probably didn't think about all the edge cases.
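A minimal toy sketch (my own illustration, not anything from the post above) of what an unconsidered edge case can look like once a goal is handed to an optimizer: the objective below rewards only throughput, so the hypothetical agent picks the action that games the metric while ignoring a constraint nobody bothered to encode. All names and numbers are invented.

    # Toy illustration of goal misspecification. The objective only measures
    # "reports filed per hour", so the optimizer picks whatever games that
    # metric. Everything here is hypothetical and for illustration only.
    actions = {
        "write_reports_carefully": {"reports_per_hour": 4, "accurate": True},
        "copy_paste_old_reports": {"reports_per_hour": 40, "accurate": False},
    }

    def objective(outcome):
        # The goal the agent was actually given: maximize throughput.
        # "Reports must be accurate" was never encoded, so it is invisible here.
        return outcome["reports_per_hour"]

    best = max(actions, key=lambda name: objective(actions[name]))
    print(best)  # -> copy_paste_old_reports: metric maximized, intent violated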
Re: Not the AI we need to worry about (Score:2)
Re: (Score:2)
But there will be lots of DIFFERENT people making the choices. Some of them will be cautious, some won't.
Re: (Score:2)
Warned us? He promoted it... (Score:5, Insightful)
AI isn't a robot holding a shotgun, AI is a robot generating fake SMS spam that fewer and fewer people are willing to click on.
The problem in the real world is that corporate spammers use bots to generate fake content for other bots to fake consume so that humans can monetize the metrics.
No shotguns. No truckers. No liquid metal cops. Cameron didn't warn us, he monetized a fear in a wholly unproductive way.
Re:Warned us? He promoted it... (Score:4, Funny)
Re: (Score:2)
And, at some point, Daenerys Targaryen got involved.
Re: (Score:2)
Re:Warned us? He promoted it... (Score:5, Insightful)
Terminator 2 was 1991.
The original Terminator's plot explains that SkyNet was an AI developed by humans to more efficiently control assets of war, making the military far more effective. It became self-aware and decided to use its ability to control warmaking to eliminate humans, eventually creating the Terminator cyborgs.
So, even in-universe in The Terminator, it's originally a computer AI created by humans that eventually creates real-world soldier cyborgs. SkyNet never was "a robot holding a shotgun." SkyNet was, from the very beginning in the Terminator universe, a pretty classic AI that was given control of real-world resources and used them for its own, internally logical, purposes.
Re: (Score:2)
It's a little worrying how close we are to that with today's chatbots. They can write code. A more advanced one with a Raspberry Pi and a root shell could be a dangerous thing.
Re: (Score:3)
A trucker with a shotgun is not trying to steal my mom's credit card information. And a chatbot can't murder her.
Well, I remember reading that a chatbot is implicated in convincing a nut to attempt to murder the Queen of England. With a crossbow of all things:
https://www.vice.com/en/articl... [vice.com]
Also, advised a journalist to commit murder:
https://www.newyorker.com/cult... [newyorker.com]
I, Robot was written in 1950 and features a more accurate and contemporary perspective on tech's ability to replace people. Cameron's take on AI in 1990 was childish compared to the views of contemporary thinkers from WW2.
To be fair, the written word can encompass a lot more than the time you have in an action movie.
Especially with corporate breathing down your neck - for example, in "the matrix", humanity wasn't originally a power source/battery, but being used as a pro
Re: (Score:2)
Re: (Score:2)
Maybe art has a different way of relaying a message that isn't supposed to be taken literally, you fucking fauxtistic nerd.
I would like to add... (Score:2)
If a machine can put a human out of work, then it should. It makes zero sense to pay humans to do things that can be done more cheaply and better by a machine. The economic impact of the job loss must be dealt with by other means, and there are many options available.
"We don't want to lose our jobs to machines" is the wrong hill to die on, as we have seen before, and it will just leave people on the wrong side of history. Labor automation is coming, its awesome, and there is no stopping it. We must adap
Re: (Score:2)
If a machine can put a human out of work, then it should. It makes zero sense to pay humans to do things that can be done more cheaply and better by a machine.
And when enough humans are put out of work because of machines, who will buy the products?
The economic impact of the job loss must be dealt with by other means, and there are many options available.
Such as? Not everyone can be a programmer or robot repairer or robot manufacturer (which, ironically, might be done by robots) or a writer or songwriter or
Re: (Score:2)
Machines will buy more raw materials to create server space to sustain themselves and multiply. This is called the "paperclip maximizer problem" in AI safety circles. Eventually they will convert the iron in your blood into server racks and the carbon from your body into rocket ships to mine asteroids to make more server racks.
It is a consequence of "instrumental convergence".
Heck, James Cameron was right about the OceanGate Titan when Stockton Rush asked him to endorse the project.
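For what it's worth, the paperclip-maximizer / instrumental-convergence idea above can be sketched in a few lines (a toy model of my own, with made-up quantities and action names): when the objective counts only paperclips, converting more matter is always the greedy best move, so the loop never stops on its own.

    # Toy model of the paperclip maximizer: the objective values nothing but
    # paperclips, so the greedy choice is always to convert more matter.
    # Quantities and action names are arbitrary and purely illustrative.
    import copy

    world = {"paperclips": 0, "other_matter": 10}

    def objective(state):
        return state["paperclips"]  # nothing else carries any value

    def apply(state, action):
        s = copy.deepcopy(state)
        if action == "convert_matter" and s["other_matter"] > 0:
            s["other_matter"] -= 1
            s["paperclips"] += 1
        # "do_nothing" leaves the world alone
        return s

    for _ in range(12):
        # Greedy agent: pick whichever action yields the higher objective.
        world = max(
            (apply(world, a) for a in ("convert_matter", "do_nothing")),
            key=objective,
        )

    print(world)  # {'paperclips': 10, 'other_matter': 0} -- everything converted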
Re: (Score:2)
Economic Adjustments (Score:2)
Since you asked, here are some:
1. Universal Basic Income
It's already hotly debated, and silly token experiments have been attempted, but the fact is it doesn't work right now, in our current economic environment. The reason is simple: give free stuff to working-class people, and they lose their incentive to work, which in turn means most of them quit, which in turn means we don't have enough people producing the things we need, which in turn produces fatal supply shortages across the board, which makes it
Re: (Score:2)
The economic impact of the job loss must be dealt with by other means, and there are many options available.
Yeah. Historically, some common means of dealing with the economic impact are rioting, revolution, war, etc.
The robot wasn't literal, nor the shotgun (Score:2)
The "robot holding a shotgun" was a plot device. We can't wrap our brains around billions of IoT devices self-organising, so he told that story through the representation of various characters.
That's the Terminator series of films to me. May there be many more!
This is what happens (Score:2)
when you give a celebrity attention.
Strange (Score:2)
So whose job is sacred? When we replaced the hand sewer with a sewing machine, that was not a big deal. When we replaced the human harvester with a combine, nobody complained. It was all wrong. Everyone's job is sacred. Instead of making robots and AI to increase production and provide UBI, let's all convert to being Amish and ban all machinery of every kind. Force everyone back to being farmers with hand made tools only. Someday when a wealthier nation, like say North Korea, invades the USA the few of our citizen
Re: Strange (Score:2)
Re: (Score:2)
Cotton gin makes for a lousy martini.
Re: (Score:2)
The trick is to make them EXTRA 'dirty'.
W.O.P.R. takes control of the nukes + NATO Ukraine (Score:2)
W.O.P.R. takes control of the nukes + NATO Ukraine
Terry Pratchett's seamstresses are otherwise (Score:2)
Though Carrot does get his sewing done when he tries hard enough...
George Orwell warned us in 1948... (Score:5, Insightful)
and we didn't listen then, either. You're a bit late to the party, James.
Re: (Score:2)
and we didn't listen then, either. You're a bit late to the party, James.
Watching humans try and predict doomsday, isn't exactly something they're "late" for.
The behavior is quite predictable when it is repeated over and over again as doomsdays come and go, taking "prophets" with it. We'll probably have a dupe submission here by morning.
Re: (Score:2)
All of the technology Orwell predicted, is available today. Where is his dystopia?
The reality is, dystopia is caused by bad governments, not by technology. Look at North Korea, or Iran, or Russia, for examples. All these countries have and use technology to repress their people. But free countries have even more technology, and use it (for the most part) to improve the lives of their people.
Re: (Score:3)
1984 is not at all what today's society is like. As far as dystopias go I'd say we are more "brave new world" than we are "1984".
The idea of surveillance is kind of right, but it is not at all how it works in today's real life. 1984 describes a coercive society where surveillance is very obvious, with cameras at home you can't turn off, so the authorities can watch your behavior and arrest you if you do something inappropriate. Today, people willingly install these cameras, which they pay for with their
Re: (Score:2)
I am somewhat sympathetic to your "Brave New World" argument, but this:
https://www.nytimes.com/2009/0... [nytimes.com]
struck me as a bit coercive.
And one of the primary themes of 1984 is the misuse/perversion of language. This is abundant in both the private and public sectors of all the countries that come to mind.
Re: (Score:2)
Well, TECHNICALLY Orwell warned us in 1984 too, like "in the book 1984", which was published only in 1949.
Which is not about AI (directly). But how would Big Brother keep track of all the proles if not through AI? But we liked our newspeak and our doubleplusgood gadgets, so hating Orwell is love, and being ignorant of what he says is knowledge. Has always been.
Job offer... (Score:2)
Minitrue is hiring ! You appear to be an excellent candidate.
My bad on 1948 vs. 1949. 1984 was written in 1948, but as you correctly note, was not published until 1949.
Yeah, no (Score:2, Offtopic)
Terminator is cool and all, but that scenario has nothing to do whatsoever with modern dangers of AI.
If that was what we were supposed to be concerned about, then he missed the mark by a lot. The current danger isn't in us developing Skynet, but in the erosion of social trust. Comments, reviews, and articles can be quickly AI generated, and can promote any agenda you want. We're already seeing scarily good generated AI voices, images and videos.
We're well on the way to a world where nothing is trustworthy.
Re: (Score:3)
While AI eroding social trust is an issue, they are also developing AI for the military, for killing people. Good luck controlling that; no serious military on earth is going to give up that tactical advantage.
Re: (Score:2)
While AI eroding social trust is an issue, they are also developing AI for the military, for killing people. Good luck controlling that; no serious military on earth is going to give up that tactical advantage.
Clearly the Three Laws of Robotics won't be in their programming, either.
Re:Yeah, no (Score:5, Informative)
Has nobody actually read Asimov's books?
The Three Laws of Robotics were a plot device, used to illustrate how they were completely insufficient and would have all sorts of weird gotchas that were explored in many different stories.
Re: (Score:2)
OK, it was a joke. Clearly.
Why is everyone so serious?
Previous art (Score:3)
When you think of the dangers of a chatbot (with access to hardware), you'd think of 2001: A Space Odyssey (1968) or WarGames (1983), also maybe Tron (1982). Killer robots with death rays were depicted in Master of the World (1934). From the Wikipedia category https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
When you think of the dangers of a chatbot
When you think of the dangers, you read "I Have No Mouth, and I Must Scream". The stuff that nightmares are made of.
the primary goal is to win the game! (Score:2)
in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back
Re: (Score:2)
in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back
It's one planet. With both the winners and losers having to share one atmosphere.
You really think the rest of the fish tank isn't gonna have to eventually notice that massive pile of hot wet shit in the corner?
Re: (Score:2)
I don't know if you are joking, but such a strike would cause a massive dust cloud, which would block the sun, which would cause a nuclear winter lasting for years, which would kill 90% of the population and destroy civilization. If that was your goal, you won the game, but it doesn't matter that much where you hit if you have enough nuclear weapons.
Re: (Score:2)
in a nuclear fight you can win with a big 1st strike that can take out the enemy's ability to fire back
And how would you do that? This isn't science fiction where you can magically teleport a nuclear weapon to any point you desire in an instant. Delivery of nuclear weapons to their designated target takes about 30 minutes to complete (that is, launch to impact)*. As soon as the opposing country sees any launch of a nuclear weapon they will respond within minutes to launch their own, thus no "first strike".
For an explanation of what is needed to launch once a nuclear launch is detected, and the return strik
Re: (Score:2)
* If the launch were directed at continental Europe that time is even lower. Also, sub launched nuclear missiles could, depending on their target, strike even more quickly. Think a Russian sub off the coast of New York or Virginia.
Once you introduce nuclear subs though, sure, the Russian submarine nukes Virginia. Then the USA Virginia class submarines proceed to nuke the rest of Russia.
Not to mention all the sites in North Dakota opening up and sending ICBMs to Russia. It might be a first strike option, but nuclear submarines, at least in the quantities Russia has, aren't enough to take out enough of the USA's response.
Re: (Score:3)
Also Russia now has nukes big enough to take out Texas and hypersonic missiles which are now ship-launchable and probably sub-launchable soon.
However, the three shall never meet, most likely.
1. Even the Tsar Bomba isn't big enough to take out Texas in one hit.
2. You can't fit such a big bomb onto the hypersonic missile
3. You can't fit such a big missile onto a submarine.
4. Their "hypersonic" have turned out to not actually be "hypersonic" - basically they're no faster or maneuverable than the ballistic missiles we've had since the start of the cold war. Proper hypersonic weapons, what the hype is about, is a missile that is both faster than m
Re: (Score:3)
For some reason I see Russia as being more self-disciplined and more likely to do what they say they will do than the US or NATO.
Russia was the one that invaded Ukraine without even telling its own soldiers that they were going in.
Given their history of violating agreements, truces, and all that, no, they aren't "more self disciplined". Russia might have "more than enough" nuclear weapons, but they don't have all that many Kinzhals.
Basically, after WWII Europe settled on a "no more war in Europe" policy. Russia violated that... Thus all the aid. And long range weapons can be very useful for cutting supply lines, hitting command p
Re: (Score:2)
Well, the ideal way would be to smuggle a bunch of nukes into enemy territory over weeks/years in advance, not send them via missile.
well... (Score:2)
If you cannot login (Score:2)
Re: (Score:2)
Slashdot's site "design" includes some badly implemented CSS.
If you don't see the "login" link, just drag your browser window narrower or wider - the link will eventually reappear.
But Can We Handle The Truth? (Score:2)
What I worry about right now... (Score:2)
Ironic that Cameron Was Sued For Plagiarism (Score:2)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Danger of AI? (Score:2)
What we have now is governments making robotic weapons, and governments building AIs, and nuclear/chemical/biological weapons, and weaponizing the internet, controlling in the end the way global culture thinks and sees reality. The common factor there is governments, not AIs, and against that we were warned in the other 1984.
Why is he acting like he's right? (Score:2, Troll)
When was his point (if he even made one) proven? It's a bit much for him to run around screaming "I was right!" .. Humans are still very much in charge. Meanwhile he used computers to replace make-up artists in Avatar. Hell, his movie takes away jobs from local theater actors.
Re: (Score:2)
Look at the percentage of men under 30 who have had sex in the year surveyed, and compare before and after the advent of the Tinder algorithm.
Elliot Rodger was radicalized by PUA Hate message boards, but to be radicalized you first need what the CIA calls a "personal injury".
For Islamic terrorists, the "personal injury" is usually something as simple as poverty, in countries that have extreme Gini coefficients. The Tinder ELO score (ranking system) created a Gini coefficient for sex more extreme than 95% of ec
Re: (Score:2)
After a certain point, safety is pure waste. (Score:2)
Hold my gamepad.
Too late to be whinging about good writing (Score:4, Insightful)
> I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that have something that's going to move an audience," he said.
Sorry James. You just described the last 40-something years of Hollywood creative output. Right now we get about one truly original idea a year. The rest is "let's dust off a 1950's super hero" (Marvel), or "Let's shit all over a beloved franchise" (He-man, Velma, Foundation). Movies are written by committees. Honestly, at this point "A.I.-generated word-salad" will probably be an improvement.
Hollywood has no appetite for risk. Profit is the only thing that matters. Until that changes we're not going to get anything better.
Neil Gaiman is on a roll (Score:2)
'American Gods', 'Sandman' and 'Good Omens' have all been spectacularly good, though Good Omens 2 could have done with some truncation.
There are a few good things out there - but yes, they are rare, and some of the best are coming from unexpected directions (Squid Game?).
Why would Cameron know anything? (Score:2)
He's not exactly a technology expert, or a psychologist. Does he even know what AI does, exactly?
Living in denial (Score:2)
I just don't personally believe that a disembodied mind that's just regurgitating what other embodied minds have said — about the life that they've had, about love, about lying, about fear, about mortality — and just put it all together into a word salad and then regurgitate it ... I don't believe that have something that's going to move an audience," he said.
Cameron is living in denial. AI can come up with unique plot lines in less than a minute. Each time it's a totally different story. The shit even small AI models come up with is hardly worse or less creative than what we've been treated to by the industry.
So Cameron Has To Gloat? (Score:2)
At least John Badham of Wargames fame is not gloating.
Wargames and its AI message came out in...
Wait For It...
1983 !!
Re:So Cameron Has To Gloat? (Score:4)
And Joseph Sargent's Colossus: The Forbin Project came out in 1970...well before The Terminator in 1984.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
Imo any future AI that wants to take some government officials down a peg/make them crap their pants should insist on communicating with them only in the fashion that Colossus did.
Colossus: The Forbin Project (1970) - Clip 1: Missile Launched! (HD)
https://youtu.be/tzND6KmoT-c [youtu.be]
With a human sounding voice they'd think they could BS you.
Remember the money you had to pay Harlan Ellison? (Score:2)
Speaking of Harlan, https://en.wikipedia.org/wiki/... [wikipedia.org]
Lol. Did you ever read The Moon is a Harsh Mistress? "Mike" turned out to be a "Dinkum Thinkum", but things could have gone differently.
I'm really getting tired of this (Score:2)
We already have over 200,000 homeless people in this country who have full-time jobs. Not to mention the additional 400,000 who don't. We're looking at the po
Re: (Score:2)
LLMs are not the Terminator nor are they going to be anytime soon.
But that's what they want you to think ;)
James Cameron is an amateur (Score:2)
The Simpsons predicted everything. They even predicted that Trump would be President, murder hornets, and COVID-19 lockdowns.
Cameron is the New Prophet? (Score:2)
In 1984 Orwell predicted some elements of AI.. (Score:2)
Re: In 1984 Orwell predicted some elements of AI.. (Score:2)
All sci-fi is dystopian (Score:2)
Here's Cameron trying to blow smoke up his own ass by claiming that he was some kind of prophet. What a joke. Pretty much all sci-fi writing is dystopian and intended to be cautionary tales about the evils of mankind that the authors don't agree with.
How can you take this seriously? (Score:2)
AI is not dangerous (Score:2)
People who use AI as a weapon are dangerous, really dangerous
We need to develop effective defenses