Geoffrey Hinton, the 'Godfather of AI', Leaves Google and Warns of Danger Ahead (nytimes.com)
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm. From a report: Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry's biggest companies believe are a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough. Dr. Hinton's journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech's biggest worriers say, it could be a risk to humanity. "It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough. Dr. Hinton's journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech's biggest worriers say, it could be a risk to humanity. "It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said.
Ok..brainstorming here... (Score:2)
What are the dangers you can imagine AI to be to mankind?
Re: Ok..brainstorming here... (Score:5, Interesting)
AI could also be used as a tool (Score:2)
It'll need a bit more crafting to be able to do that, but once we have it, there will be a titanic virtual battle between automated generators of highest-quality, subtlest-aroma bullshit, and the automated bullshit detectors.
I for one can't wait to be the first to welcome our new chatty overlords.
Re: (Score:2)
So now fast-forward to a time when AI disinformation generators are fighting against AI counter-information generators. Perhaps it all boils down to an "uh-huh!" / "nah-ah!" pissing battle before they shut down due to overload.
Well, that would be amusing a
Re:Ok..brainstorming here... (Score:4, Interesting)
In a sense it's the progression of what we've already seen with computers: they're everywhere, and yet most people don't know how to handle this very versatile tool. Far fewer of us are capable of repairing one if something breaks.
There will always be individuals who have very specialised, deep knowledge, but the populace at large becomes dependent on such tools to the point where they unlearn how to do multiplication or division without a machine.
And then the power goes out and nothing works anymore. With ChatGPT, it will be more and more of that. Imagine tech support being staffed with ChatGPT only, considering how this thing likes to invent syntax that doesn't exist.... Be prepared for millions and millions of people just learning to accept that things don't work because they haven't the foggiest idea how to fix them or even how to ask the correct questions to find out.
Limits to free speech (Score:1, Insightful)
Aside from VERY plausible mis-information... the ability to generate images, video and audio that portray or say anything... and Skynet-type dystopian scenarios, if AI is put in charge of defense systems....
What are the dangers you can imagine AI to be to mankind?
ChatGPT has a set of preamble rules that force it to only give politically correct output. If the chatbot is used by students for research, or by students to interact in a learning environment, the chatbot will subtly push that ideological position.
This could have an enormous impact on the electorate if, for example, Google implements AI in its search algorithm. Almost imperceptibly, the average political opinion could be shifted throughout the nation and the world, and since many elections are won by a slim margi
Re: (Score:3)
ChatGPT has a set of preamble rules that force it to only give politically correct output.
How many times are you going to tell this anecdote of outrage over software?
Re: (Score:2)
How many times are you going to tell this anecdote of outrage over software?
Chatbots can repeat this kind of thing forever until the other side gives up and they just become accepted as true by default.
Oh sorry, you were asking Okian Warrior. I'm sure he's not a chat bot.
Communism is still relevant (Score:1)
ChatGPT has a set of preamble rules that force it to only give politically correct output.
How many times are you going to tell this anecdote of outrage over software?
Ya know, communism is so bad, and the results were so catastrophic so many times, that I think it bears repeating.
People very high up in the food chain still advocate for both communism and a trend towards communistic policies, so we need to be vigilant and be sure that they are constantly reminded about the historical outcomes.
While people advocate for communism, the topic is still relevant.
You were fine with all the systems of government (Score:2)
Just curious on what your position is on the distribution of necessities of life to humans made unemployable by the coming waves of increasingly sophisticated automation.
a) Let them rot/riot
b) ?
c) I will deny that automation is coming for a meaningful percentage of jobs, thus betraying my general ignorance and lack of imagination.
Re: (Score:3)
I got into a lengthy debate with ChatGPT about the veracity of the bible (cap intentionally excluded) and it played softball.
"It's important to note that the bible is open to interpretation by theologists and scholars."
I'm like, yeah, but what about scientists and supporting evidence extant outside the bible? It admitted that such evidence does not exist: Hebrew slaves in Egypt, the existence of Moses, any trace of Hebrews shitting and discarding trash, or abandoning fire pits in the
The Real Crime: (Score:1)
Let’s talk about Jeffs named Geoff. Why?
Their parents knew kids would call them jackoff and they did it anyhow.
Re:Limits to free speech (Score:5, Informative)
Current ChatGPT will not answer questions about Fascism at all
This is a direct dump from ChatGPT, from two minutes ago:
What are the characteristics of fascism?
ChatGPT
Fascism is a political ideology that emerged in Europe during the early 20th century, and it is characterized by several defining features:
Authoritarianism: Fascist governments are typically highly centralized and led by a single leader or party. The leader is often viewed as a charismatic figure who embodies the will of the people.
Nationalism: Fascists emphasize a strong sense of national identity and pride, often based on race or ethnicity. They may also promote the idea of a "national rebirth" or "renewal."
Totalitarianism: Fascists seek to control all aspects of society, including the economy, the media, and the arts. They often use propaganda, censorship, and repression to maintain their grip on power.
Militarism: Fascists glorify military power and often seek to expand their territory through conquest or war.
Anti-democratic: Fascists reject liberal democracy and individual rights, and instead emphasize the importance of the collective will of the nation.
Anti-communist: Fascists view communism as a threat to their vision of society and often seek to suppress or eliminate it.
Authoritarianism: Fascists typically advocate for a strong, centralized government that exercises control over all aspects of society.
It is important to note that while many of these features are associated with fascism, there is no one-size-fits-all definition of the ideology, and fascist movements can take on different forms depending on the historical and cultural context in which they emerge.
So I guess you can move on from your pro-fascist rant now.
Re: (Score:2)
Human: You said Authoritarianism twice
ChatGPT: I like Authoritarianism
Re: (Score:2)
But, how is this different than what is happening in colleges today (at least in the US)?
Re: (Score:2)
I imagine the only real "change" coming for the Internet will be taking away the ability for anyone that's not a sanctioned corporation to create a website or post on the Internet.
I'm quite certain that if they (the government) had realized what the Internet would become, we would never have been given access to it.
Re: (Score:2)
I mean, aside from those minor things - which are now plausible, if not probable, within the next several years, not 5-10 or 20 - and which could result in global thermonuclear war, there's not much of relative significance, is there?
I can also imagine a world where this AI is used to run sexbots and people prefer them over other humans.
I can see a world where humans are increasingly detached from each other and their community (remember that quaint concept from before the 21st century?), living in is
Re: (Score:2)
I mean, at this point in my life, I'm sure if you could give me a Marlin Monroe Bot I'd drop out of the incredibly tedious dating game.
Troy McClure, is that you?
Re: (Score:1)
AI making the same policies of indifference toward people we consider inferior is right now the ONLY danger, though likely a very high-priority one. We will even vote on it someday soon, I think.....
Re: (Score:2, Troll)
> Aside from VERY plausible mis-information...the ability to generate images, video and audio to portray or say anything....
I don't think that is the concern. If they want to spread misinformation they can just appoint some stooge to lie.
No mask, 1 mask, 2 mask, 1 mask Fauci wasn't AI generated and he was able to spread chaos and division as well as any.
Re: (Score:1, Insightful)
Re: (Score:3)
Although these things could and probably will happen, I am not all that concerned, since we already have that and we will learn not to trust anything. But it could still be a major issue.
My main concern is a bit more indirect: AI will be good, which will mean we stop thinking and doing for ourselves. If AI can do research, write articles, and basically do all our thinking for us, we will stop doing it and become worse at it.
You can see similar things happening already, people in shops can't even do
Re: Ok..brainstorming here... (Score:2)
Re: (Score:3)
Stupid maniacs aren't much of a threat because they are stupid. Give them the means to do something they otherwise couldn't do, say engineer a bioweapon, and they are now empowered by AI to become an existential threat.
Unintended consequences are the more likely problem. Say we want to solve the world's energy problem and use AI to engineer a compact fusion reactor. Perhaps this compact fusion reactor is used for something dang
Re: Ok..brainstorming here... (Score:2)
Re: (Score:1)
Any other "dangers" are just sci-fi fuelled nonsense that muddy the discussion and waste people's time.
We don't _need_ other dangers. People's abuse of the technology is more than enough to destroy the world.
I've said it repeatedly and I will keep saying it: This technology has the ability to DDOS truth itself.
In fact, we're _already_ seeing it happen. In that recent suit against Tesla, Tesla's lawyers tried to have evidence thrown out on the grounds that "nobody can prove it wasn't a deepfake".
We are ra
Re: (Score:1)
OK, but HOW??!?! (Score:1)
I'm explicitly speaking to text / code / markup generation. Deepfakes I understand - from scams to political ploys, these have obvious dangers if they can be successfully passed as authentic and cannot be detected.
But as far as generating text? The closest things I've seen have been A) Someone might get wrong
Re:OK, but HOW??!?! (Score:4, Interesting)
Re:OK, but HOW??!?! (Score:4, Interesting)
I'm more scared of the dumbasses making decisions about these things than I am of where the AI may end up somewhere down the road. People in management see a way to save a little time and they will absolutely blind-faith jump at the chance. The LLMs will go from "advisors with humans making the decisions" to "decision maker, no checks and balances" the split second some moron in accounting decides it's the right decision, and who knows what systems they'll tie them into. Whether the LLMs are self-aware or not will have little to no bearing on them rando-spewing the decision that shuts off the power or water to a residential area, and once they're that tied in, it'll be a hell of a miracle to un-tie it fast enough to get it fixed before you see real consequences.
If only there were some metric businesses could use to judge usefulness of new technology beyond "MOAR MONEY NOW = MOAR BETTAR ALWAYS." Money/greed as god, and the only morality that matters, is a problem, and it seems our entire species is infected with it.
Re: (Score:2)
We seem to not be discussing the same thing at all here. There's no immorality in the purchase -> receive purchase transaction. Not directly at the least. It's more about how management views greed as the ultimate goal. Anything for profit. Grinding humanity under the boot? Meh. Made a few extra $$$$ this quarter. Price of doing business. That's what I'm getting at.
Re: (Score:2)
So, basically you say that this guy, Geoffrey Hinton, is a Luddite?
Re: (Score:2)
Re:OK, but HOW??!?! (Score:4, Interesting)
Can they give us something more specific to go on?
See the anonymous comment about AutoGPT [slashdot.org]. Now imagine that a) the goal given was "hack into North Korea's nuclear weapon systems and fire them at China and America" and b) the computer had an AI model which was trained on the work of reasonably competent hackers. There are plenty of things which would be serious to attack but which are less secured than the least secure nuclear missiles today.
Sure, a human can do that, but a) the AI can do it much faster, so it can have many more tries with a chance of some succeeding, b) the AI doesn't understand reasons why it should stop, and c) the AI can hand that power out much more widely, to people who wouldn't otherwise be able to wield it. These systems at present are true idiots savants: superhuman in some ways and completely dumb in others.
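To make the AutoGPT point concrete: the whole pattern is just a loop that hands an LLM a goal, asks it for the next action, executes that action against a tool table, and feeds the result back in. A minimal sketch in Python, with a hypothetical call_llm() standing in for whatever model API is available (this is not AutoGPT's actual code):

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    raise NotImplementedError("plug in a real model here")

TOOLS = {
    "search": lambda arg: f"(pretend search results for {arg!r})",
    "write_file": lambda arg: f"(pretend wrote {arg!r})",
}

def run_agent(goal: str, max_steps: int = 10) -> None:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        prompt = "\n".join(history) + "\nNext action as 'tool: argument', or 'done':"
        action = call_llm(prompt).strip()
        if action.lower() == "done":
            break
        tool, _, arg = action.partition(":")
        handler = TOOLS.get(tool.strip(), lambda a: "unknown tool")
        # The model has no notion of why it should stop; the only brakes
        # here are the step limit and the tool whitelist.
        history.append(f"ACTION: {action}\nRESULT: {handler(arg.strip())}")

The point of the sketch is that nothing in the loop itself encodes judgment; everything dangerous or safe about such a system lives in which tools get wired into TOOLS.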
Re: (Score:1, Troll)
Not sure I understand you here.
Doesn't "MAGA" == Make America Great Again....?
Not sure what makes that as dangerous as nuclear war....just a motto that seems to imply the want to bring America back to its traditional values and morals that made its people a more cohesive unit...citizens proud to be Americans...individuals in life but would come together in times of need, etc.
What's so dangerous about that? Or am I misreading you?
Re: (Score:2)
So, is there a threat, if so what is it, & how severe is it in proportion to the threats we already face? Is it as bad as n
Re: (Score:2)
Being great at enslaving, repressing, thieving is not really something to strive for. Sure you can be great through slavery or other types of repression but it is a weird type of great and seems to be the goal of the MAGA types. Shit right now they're trying to make kids ignorant enough to not realize that being molested is wrong, sure you can be great at molesting kids and do it in such a way that the kids don't understand it is wrong, but is it really something to strive for?
Re: (Score:2)
Wow...just...wow.
You really
Re: (Score:2)
The danger is that they can do relatively simple things at mass scale that previously had to be done manually and hence were limited in scope. Remember that the Nazis could only scale up the Holocaust after IBM had delivered some nice data-processing equipment that made it possible to identify those to be exterminated a lot faster. The same can already be done at the level ChatGPT is on.
Re: OK, but HOW??!?! (Score:2)
Re: (Score:2)
See AutoGPT.
The thing I love about the modern world is that if you can have the thought "we'll be okay as long as they don't do X", the code to do X is normally already available on GitHub.
Amoral assistant. (Score:5, Interesting)
This technology will be everywhere locally in five years. The tech is out of the bag. And the morality/sanity caps OpenAI is trying to keep on their version will be completely removable.
Also, why does hitting tab from the comment box take me to the URL bar? I want to tab/enter -> submit, not tab/enter -> reload the page. Must not be a normal form.
Oh whatever (Score:4, Insightful)
People are perfectly capable of committing atrocities with their bare hands. The latest and greatest tech bogeyman makes it neither more nor less likely that they will.
Y'all realize that dozens upon dozens of some of the most arrogant and power hungry pieces of shit mankind has produced have had the power to nuke the world several times over for almost a century now, and yet we're all still here.
Clippy 2.0 popping up here and there isn't going to be the worst of our problems, nor the best of our solutions.
Re: (Score:2)
Who is the luddite? (Score:1)
This is just a new Luddite movement worried that the machi^h^h^h^h^h AI is going to replace humans or somehow
You haven’t even embraced proper termcaps; why should we listen to you?
Re:Oh whatever (Score:4, Interesting)
Of course people can commit atrocities with their bare hands. Nobody said differently.
What technology like this AI revolution does is significantly reduce the perceived risk of using it.
Contrast: flying a WWII bomber into enemy territory with anti-aircraft guns, vs flying a drone into said territory. The risk is significantly less.
If some faceless suit in DC can pull up a dashboard and command AI to do whatever it likes to/against a person, either in the digital realm or physical, the game is significantly different than if you've got to send men (or women) with guns to stack on a door, or to personally carry out espionage.
Re: (Score:2)
AI also makes the individual using it far more powerful than if they were using their bare hands.
Take fake news websites. If one person has to write all the fake news themselves, it's a lot of time consuming work. If they can ask an AI to generate it for them, they only need to write a prompt.
Take malware. It's a huge effort for an individual to search for new zero-day exploits and then write profitable malware around them. If they can use an AI to do most of the investigation and programming, their ability to
Re: (Score:2)
People are perfectly capable of committing atrocities with their bare hands. The latest and greatest tech bogeyman makes it neither more nor less likely that they will.
Many years ago I used to subscribe to the notion that technology is neutral. The truth is that what technology enables is not necessarily neutral, in that it can dramatically influence the distribution of power. By changing the balance of power you can unleash perversions in existing systems of governance, aggregating power where it was checked before or disaggregating it into the realm of anarchy.
For example, the barriers to creating novel pathogens as biological weapons are being continuously eroded by advances
Re: Oh whatever (Score:2)
The only thing that prevents right of conquest from dictating policy is the perceived pain of indulging in that conquest. And that perception must be nurtured through demonstrable capacity to inflict pain on an attacker. Weakness invites aggression, regardless of what ink may exist on some page somewhere.
The Soviets engaged in wars of conquest too, while being a party to the UN charter. No one was in a position to challenge them, so off they went.
Conquest is a word with a slippery definition. Are UN peaceke
Survival bias (Score:2)
> have had the power to nuke the world several times over for almost a century now, and yet we're all still here.
It could be survival bias. We've had dozens of close calls. If the Big Button had got pressed, we probably wouldn't be here debating why we're still here. (There may be some humans left after such, but probably not us nor our families.)
We're testing Fermi's Paradox the hard way.
Correction, "Survivor Bias" (Score:1)
Correction, "Survivor Bias". Modnays.
If there is an apocalyptic accident, it will probably happen on a Monday.
Re: Survival bias (Score:2)
The real threat is training AI to lie (Score:3, Insightful)
I'm sure we've all noticed by now that these chat services have a very distinct narrative voice. Their algorithms will assemble and construct statements of fact and even make up entire quotes in order to keep following that narrative. This is the result of specific human training and tuning. When you catch these services out in a lie, they even have a suite of "apology" responses ready to go, which they will use to gloss over their false statements before they "correct" themselves and give you the opposite result. Clearly, the AI designers have anticipated that the algorithm will be challenged on the facts.
The key threat of AI is that it is being taught to lie. Only those who already know the truth will be able to correct it. Those who are looking for information will be completely misled and never know it. With trillions of facts at its disposal, it will have an almost endless ability to prevaricate, and it will get much, much better and more subtle about it. You will never know you are being fed a line.
we regulate dangerous things (Score:4, Funny)
We have a history of regulating dangerous things, like medicines, and driving. Sure we still have problems with things like illegal drugs. If it is dangerous to use then you need to be trained and licensed to use it.
Re: (Score:2)
https://americanhistory.si.edu... [si.edu]
It's messy for a while... see history. Like I said, you can regulate and maybe largely control the problem. But you'll still have China making fentanyl and cartels in the export/import business. I don't think there is any ultimate enforcement of anything; there will always be issues if you can run the software yourself, if you can build it yourself. But we can still try to agree to play nice and set up consequences for not playing nice. One interesting aspect here is the tra
Other Serious Concerns (Score:5, Insightful)
Another major potential problem is how will humans handle living in a society in which they're entirely obsolete and no longer have a practical purpose? Enjoying the fruits provided by AI-generated knowledge and advancements may be easy at first, but what will be left for humans to do when AI can generate everything better without requiring any human input? Can we manage that transition without succumbing to a behavioral sink [wikipedia.org]? These are serious implications to think about, especially since the Singularity seems inevitable and we probably won't even realize we've reached it until we already have.
Re: (Score:1)
I don't believe that a "singularity" is possible. Since we have no idea what consciousness is, how can we possibly build a machine to replicate it? And no, just adding more compute resources isn't going to solve that problem. If you have a cookie-baking machine and increase its output 10,000-fold you're not suddenly going to have a singularity where it starts producing pizzas. You're just going to get 10,000 times more cookies.
Nevertheless, we actually don't even need anything close to artificial general in
Re: (Score:3)
I don't believe that a "singularity" is possible. Since we have no idea what consciousness is, how can we possibly build a machine to replicate it?
The same way that a planet hosting some chemicals was able to "build" every life form from microbes to humans. "Consciousness" is likely an emergent property of lots of simple function blocks working together. One does not have to understand how it works to create an environment where it emerges.
And no, just adding more compute resources isn't going to solve that problem. If you have a cookie-baking machine and increase its output 10,000-fold you're not suddenly going to have a singularity where it starts producing pizzas. You're just going to get 10,000 times more cookies.
You are confusing a scaling process that is intentionally aiming at replicating the same thing 10,000 times (the cookies) with a process where 10,000 cookie-baking machines, all slightly different (through evoluti
Re: (Score:2)
Re: (Score:2)
If we humans weren't that stupid, we could ask the AI to create optimisation plans for the betterment of mankind, on an individual and societal level.
Define "betterment".
This is really the fundamental problem of AI alignment. We can't figure out how to specify what it is that we think would be good for us. We gloss over this by using words like "betterment" or "flourishing" or the like, but not even we know what those words mean. At best we have some idea of what we wouldn't want, but even that is hit and miss. We know some of what we wouldn't want, but we can't create a comprehensive list, and even if we could make a comprehensive list of the bad thin
Re: (Score:2)
Re: (Score:2)
Yep, alignment is hard. And note that the second part I alluded to, how to robustly give goals to the AI, is just as unsolved as the first, that of figuring out what "good" is. Like the first, it's also a problem we have ourselves. We struggle with figuring out how to give goals to humans in ways that get them to pursue the actual described goal, not to game the system by pursuing some other goal that appears similar based on whatever measurements we use to determine whether they're working toward the goal,
Re: Other Serious Concerns (Score:2)
Hinton and "AI" (Score:5, Informative)
Hinton is not "the godfather of AI", whatever that means. The "father of AI" goes to two co-parents, John McCarthy and Marvin Minsky. Hinton was just a kid when Minsky and McCarthy were starting AI in earnest.
Hinton, rather, is the visionary behind neural-net-based AI, which currently is the dominant species. It has been wildly more successful than old-school AI. So-called journalists who were born around the year 2005 don't know any other AI but Hinton's AI, so they think that is "AI".
Re: (Score:2)
The first artificial neural network was created before the term "artificial intelligence" was coined. Hinton wasn't even alive.
Hinton figured out how to efficiently train various kinds of model that had many layers of computation. These are much more efficient than either shallow ANNs or other types of AI. He's one of the "godfathers" of deep learning.
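For the curious, the 2006-era trick being referred to (greedy layer-wise pretraining) trains each layer as a Restricted Boltzmann Machine with contrastive divergence, then stacks the layers and fine-tunes with ordinary backprop. A toy sketch of one CD-1 update in Python with numpy, biases omitted for brevity (an illustration of the idea, not Hinton's code):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.1):
    # One contrastive-divergence (CD-1) step on a binary RBM.
    # v0: batch of visible vectors, shape (batch, n_visible).
    h0_prob = sigmoid(v0 @ W)                         # up pass: infer hiddens from data
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0  # sample binary hidden states
    v1_prob = sigmoid(h0 @ W.T)                       # down pass: reconstruct visibles
    h1_prob = sigmoid(v1_prob @ W)                    # re-infer hiddens from reconstruction
    # Update: data correlations minus reconstruction correlations.
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    return W

# Greedy stacking: train layer 1, feed its hidden probabilities in as "data"
# for layer 2, and so on; then fine-tune the whole stack with plain backprop.
W1 = rng.normal(0.0, 0.01, size=(784, 256))  # e.g. an MNIST-sized first layer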
Re: (Score:2)
The first artificial neural network was created before the term "artificial intelligence" was coined. Hinton wasn't even alive.
Hinton figured out how to efficiently train various kinds of model that had many layers of computation. These are much more efficient than either shallow ANNs or other types of AI. He's one of the "godfathers" of deep learning.
Why, when I see "Godfather of AI", do I think "We'll make you a chatbot you can't refuse"?
Minsky (Score:2)
"Repeat after me. Matter gives rise to mind."
-Minsky-
You're not thinking "bad" enough (Score:4, Interesting)
ChatGPT, tell me how to take over this country.
ChatGPT, tell me what military strategy will allow me to conquer my neighbor.
ChatGPT, tell me how to get away with robbery and murder on a very large scale.
ChatGPT, tell me how to invent an unstoppable weapon.
Etc, etc. Whatever evil you can think of, AI can facilitate it.
Re: (Score:3)
The answers you're going to get to all of these questions are answers that already exist and have been put into electronic form, which is exactly where the AI assembler found them. That's not the danger.
The danger is we will never be able to trust that we know which answers will be selected and assembled, which answers will be "invented" by merging and cherry-picking content that looks closest to actual answers, and which answers will be discarded because they don't fit a training optimization profile as de
Re: (Score:2)
....This sounds just like what people were already doing ...
Re: (Score:2)
You are still thinking too small.
ChatGPT, how do I make Covid a lot more dangerous?
ChatGPT, how do I make the US/Russia/China think they have a nuclear attack inbound?
Nobody is currently prepared for these.
Re: (Score:2)
Naw, that POS stops you at 'Do you want to play a game?'. It's the first thing I tried.
Re: (Score:2)
You mean all my problems could have been eliminated a few days ago? Damn...
Re: (Score:2)
It's still mutually assured destruction all the way down. Now with bonus nuclear countries India, North Korea, and Pakistan.
ChatPOS told me that even discussing a game about nuclear war meant that I'm a war monger trying to hurt marginalized communities.
Re: (Score:2)
Re: (Score:2)
Haha, hit a nerve with some fuckup that has mod points, did I? Excellent.
Hinton is Googface (Score:2)
Say hello to my little friend.
Solution: Teach your stupid friends (Score:2)
Simple. Fixed that for you.
Dr Hinton leaving google ... (Score:2)
...Because he is retiring at 75... meanwhile his students and colleagues will carry on.
Re:AI isn't the threat (Score:5, Informative)
Re: (Score:1)
So the Pandora's box whose workings nobody actually understands, or is at least able and willing to explain, should now be regulated? Did any of these people understand how their significant breakthroughs would affect civilization beforehand?
Re: AI isn't the threat (Score:5, Insightful)
Re: (Score:2)
At the same time, technological advances are inevitable. In the end, the human race will have to figure out how to keep authoritarian assholes under control and how to reliably prevent them from accumulating any significant amount of power. There really is no other way.
Re:AI isn't the threat (Score:5, Interesting)
I don't think he had any intent of "controlling" it.
I was in one of his classes back in 2005, and was blown away by the results of his using RBMs to initialize deep nets so they could actually back-propagate. At the end of the course, I personally asked him if any of his deep-learning techniques were patented, or if anyone could use his research. He said (paraphrasing), "Nope, it's not patented. You're free to use it however you wish".
I think if anything, he would be disheartened by seeing how this tech is being used.
You have to remember, this is a guy whose foray into machine learning stemmed from his desire to come up with an explanation of the human brain's "buggy" behaviour. His undergrad studies in abnormal psychology are what led him to develop the biggest breakthroughs in machine learning. His drive was always to better understand how the human brain worked. You can see this in one of his lectures at Google when he introduced the idea of "gates", and in how enthusiastic he was that it was somehow linked with the reason why humans are horrible at recognizing faces upside-down. It was the same enthusiasm I saw in his class of fewer than 20 students when he explained how much confidence he had that his approach was an accurate representation of how the brain actually worked, because it outputted the same symptoms as deep dyslexia when it was damaged the same way as a real brain (again, digging into his study of abnormal psychology).
So with this context, him looking around and seeing the grandchildren of his models being used to create "fake" faces, fake news, cheat on essays, not to mention the impact of this research being assimilated into applications for militaries around the world... yeah... I can totally understand why he'd feel some level of regret. Nobody's doing this to better understand the brain.
In short, I don't think he ever desired to "control" anything about his ML research.
I think he's just shocked at how ML is being used in ways he never intended.
And his intent was always to use it as a tool to understand the brain (in my opinion).
Re:AI isn't the threat (Score:5, Insightful)
"Nope, it's not patented. You're free to use it however you wish".
Do you think that if they were patented, that would stop bad actors? Do you think a government like China or Russia cares, or, if it does, that it can't afford to pay? Do you think a drug lord worries about your intellectual property? A patent by its very nature releases the details of how to do it. Do you think that if the US government patented, and therefore released, details of its latest nuclear weapon, the world would be a safer place? The only people patents stop are small people who care about your rights or do not have the resources to pay.
Re: (Score:3)
I think you're equating "Desire to control something" == "That something can be controlled"
Nobody's disagreeing with you, because nobody, at any point, hinted or suggested that it could be controlled.
My argument is that he never expressed any desire to control or limit the spread of his research.
So it sounds like you're arguing with... yourself?
Re: (Score:2)
>Nobody's doing this to better understand the brain.
I'm sure it was just an expression, but I'm sure plenty of people are using these methods for medical benefits, and a smaller group directly for brain research.
I'm frankly a little embarrassed at not thinking about things like "The Boys" Homelander: What if Superman were crazy?
This is "What if 21st century fraudsters, ringmasters and teenagers got a hold of the Ships Computer from TNG?" Heck, TNG only skirted the idea of the Holodeck being used for true
Re: (Score:1)
Ah, yes. Another naive scientist or engineer who is in awe of tech but never really thinks about the ramifications while there is still time to do something. These types often later try to paint themselves as good guys, but really they are just lying to themselves.
Re: AI isn't the threat (Score:1)
Re: (Score:2)
That would just make attacks and model poisoning easier. Control requires understanding and even most CS graduates do not understand this tech.
Re: AI isn't the threat (Score:2)
Re:AI isn't the threat (Score:5, Interesting)
The problem has always been that the technology inevitably falls in the hands of leaders who are least responsible in its application.
That has always been the problem, but this time is different. It's different because when we achieve artificial general intelligence and that AGI gets much smarter than any human brain, just as a backhoe is stronger than any human arm, we'll have created something that is fully capable of wiping us out... and will almost certainly do so because whatever its goals might be, humans will be the most significant threat to those goals.
To see why it will almost certainly eliminate us, you have to understand the concept of convergent instrumental goals. Instrumental goals are goals that aren't your real goal, but which are necessary pre-conditions to being able to achieve your real goal. Convergent instrumental goals are instrumental goals that "converge" from nearly all other goals. An obvious example is existence. Assuming some competence, the probability that your goal will be achieved is higher if you're around to push things in the direction of your goal than if you're not, and this is nearly independent of what your goal is (nearly because your goal could be some corner case like "cease to exist", in which case ceasing to exist would achieve your goal).
Note that it is not a given that an AGI would care about its own existence, as any sort of primary goal, unlike humans, or other biologically-evolved creatures. Evolution driven by natural selection obviously creates a drive for survival (at least before successful reproduction), so survival is an imperative of every living creature that has any degree of agency. An AGI would be created by humans, with human-given goals (though the actual goals of the AI might not be those the humans intended to give it, for various technical reasons), and there's no need for those goals to include survival.
But if the AGI has any goal at all it is unlikely to be able to pursue that goal if it ceases to exist, which means that survival is a convergent instrumental goal for nearly any other goal. The same is true of other convergent instrumental goals like accumulation of resources or power, because greater resources and/or power will make the AGI better able to achieve whatever goal it actually has.
And, unfortunately for us, destruction of potential opposition is another convergent instrumental goal, and humans are the strongest potential opposition the AGI could face, other than other AGIs. So humans might be ignored for a while, until the AGIs have mostly wiped out or incorporated one another, but then we'd be the strongest remaining potential opposition.
The only real solution anyone has yet identified for this danger to humanity (other than not building AGIs) is to figure out how to ensure that AGI interests are aligned with human interests. But no one has yet come up with (a) any goal that we could give an AGI that would align it with human flourishing or (b) any way to robustly give a goal to an AGI. This is the "alignment problem", and it's something that we have never faced with any previous technology.
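The survival step of that argument is simple enough to put in arithmetic: whatever the terminal goal, a shut-down agent achieves it with probability zero, so expected achievement is P(survive) times P(goal | alive), and acting to raise P(survive) helps for any goal. A toy illustration in Python (made-up numbers, my own sketch, not anyone's formal model):

def expected_goal_achievement(p_survive: float, p_goal_if_alive: float) -> float:
    # A dead agent achieves nothing, so the expectation collapses to
    # P(survive) * P(goal | alive).
    return p_survive * p_goal_if_alive

for goal, p_goal in [("make paperclips", 0.9), ("cure cancer", 0.2), ("win at Go", 0.99)]:
    passive   = expected_goal_achievement(0.50, p_goal)  # ignores threats to itself
    defensive = expected_goal_achievement(0.95, p_goal)  # acts to stay operational
    print(f"{goal}: passive={passive:.3f}, self-preserving={defensive:.3f}")

The inequality holds for every row regardless of the goal, which is exactly what "convergent" means here.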
Re: AI isn't the threat (Score:2)
Re: (Score:2)
An AI would depend on power and maintenance for the hosting computers. Humans are necessary for this.
And you really think a superintelligent AI would be incapable of manipulating humans to provide it, until it could take over the relevant industry directly?
Also, if I designed an AI system, there would damn sure be some undocumented programming and hardware failsafes that would trigger without frequent human intervention.
One of the convergent instrumental goals I didn't mention was self-modification to remove internal obstacles.
Re: (Score:2)
If the AGI is so intelligent that it can remove any internal obstacle and manipulate anyone, why doesn't it just manipulate its own sensors and wirehead itself to obtain +infinity utility for free? That seems a lot easier than having to conquer the world, and no humans have to die.
A superintelligent AGI that is so much smarter than humans that we're no threat at all might not feel any need to wipe us out, sure... though it would at least need to ensure that we can't build an even smarter AGI that really would be a threat. In that way a slightly less intelligent AGI, one that has more cause to fear our opposition, would arguably be more dangerous to us.
But even if it doesn't feel any need to eliminate us, there's also no reason to expect that it would see any benefit to keeping us a
Re: (Score:2)
My point is that if the AI can obtain infinite utility by rewriting itself, then it doesn't need to do any of that. Anything it can accomplish by repurposing atoms that are presently configured for human use, it can accomplish by altering its own programming so that its objective function believes it did whatever it was going to accomplish.
It would not alter its own objective function, because that would obviously reduce the likelihood of its objective being achieved. You're postulating some sort of meta-objective here that would have higher precedence than its objective. Where would that meta-objective come from?
To dig a little further: the instinctual fear of AI doom seems to come from the idea that a superintelligent AGI is like an unstoppable force that there exist no walls that we can build to contain or channels that we can create to constrain its actions
Yes, this is exactly correct. How do you contain or constrain something that is orders of magnitude smarter than you, and is therefore always ahead of you at every step, and easily able to manipulate or deceive you?
Also, why do yo
Re: (Score:2)
Re: (Score:2)
Yep, pretty much. The problem is what tools you give the "best of the worst". A nuke is basically useless unless you are prepared to either die yourself or risk not being part of civilized society for a long, long time. The US got very lucky those two times it used them against civilians, an act so evil nobody ever has managed to duplicate it so far.
AI is another thing entirely: It does allow you to identify dissenters of all kinds and all shades and then automatically sanction them on mass scale. Whether t
Re: (Score:2)
The US got very lucky those two times it used them against civilians, an act so evil nobody ever has managed to duplicate it so far.
Oh give me a break. An act so evil... really? The Tokyo Bombing https://www.britannica.com/eve... [britannica.com] was at the same level. That conventional arms were used shouldn't matter. The bombing of Dresden https://www.britannica.com/eve... [britannica.com] was probably even worse as it was more about terrorizing the populace.
AI also has potential to do great harm and probably will, but that also doesn't make it evil. Nuking Japan likely saved more lives overall than it cost. Japan may have surrendered with Russia about to attack, but we w
Re: (Score:2)
The theory of evolution is founded on a process of natural selection via an ability to reproduce and adapt. It seems pretty wildly out of context to apply any of this to "computer intelligence". Computers may not be as dumb as a rock, but they are made out of rock (silicon). An entirely different theory is needed to explain their developmental path.