AI

Wolfram Thinks We Need Philosophers Working on Big Questions Around AI (techcrunch.com) 82

Stephen Wolfram, renowned mathematician and computer scientist, is calling for philosophers to engage with critical questions surrounding AI as the technology's advancement raises complex ethical and societal issues. Wolfram, creator of Mathematica and Wolfram Alpha, argues that the tech industry's approach to AI development often lacks philosophical rigor. "Sometimes in the tech industry, when people talk about how we should set up this or that thing with AI, some may say, 'Well, let's just get AI to do the right thing.' And that leads to, 'Well, what is the right thing?'"

He sees parallels between current AI challenges and foundational questions in philosophy, citing discussions on AI guardrails and the potential for AI to significantly impact society as examples where philosophical inquiry is crucial. The scientist, who earned his doctorate at 20, suggests that philosophers may be better equipped than scientists to tackle the paradigm shifts AI presents. Wolfram's call comes as AI's growing influence raises ethical concerns across industries, urging an interdisciplinary approach to address these emerging challenges.
  • by migos ( 10321981 ) on Monday August 26, 2024 @06:31PM (#64737732)
    Although any conclusions will be ignored by the big tech companies in this arms race.
    • by ShanghaiBill ( 739463 ) on Monday August 26, 2024 @06:35PM (#64737746)

      Philosophers have no more insight into what "the right thing to do" is than anyone else.

      • by Seven Spirals ( 4924941 ) on Monday August 26, 2024 @06:39PM (#64737760)
        They've always wanted to rule, though. Just ask Plato, who wrote in The Republic:

        "Until philosophers rule as kings or those who are now called kings and leading men genuinely and adequately philosophize, that is, until political power and philosophy entirely coincide ... cities will have no rest from evils ... nor, I think, will the human race. And until that happens, the same constitution we've now described in theory will never be born to the fullest extent possible or see the light of the sun."

      • Actually, when it comes to science and technology, they usually have less insight than scientists and engineers, since they generally do not really understand what is going on a lot of the time.
        • Re: (Score:3, Informative)

          by ChatHuant ( 801522 )

          [philosophers] usually have less insight than scientists and engineers since they generally do not really understand what is going on a lot of the time.

          For an interesting example, you can look at the controversy between the philosopher Henri Bergson and Einstein regarding the existence of a "philosophical" concept of absolute time. Bergson conceived of time as "built in" to the universe, and a source of "elan vital", so he was intuitively repelled by the idea of relative times that depend on the perspective of the observer (as they appear, for example, in the twins paradox). Bergson didn't really understand the physics of special relativity (and even less the calcu

        • There's no response to this kind of blather. You're just saying words. I honestly believe you don't actually understand what philosophy is.
      • by gweihir ( 88907 )

        Good ones do. But people never understand that because there are some inconvenient facts to be faced if you do.

      • LOL what? You do understand the pursuit of "the right thing to do" is called ethics aka moral philosophy, right?

        I remember when this site used to be frequented by smart people with smart things to say instead of people just saying whatever wrong-headed nonsense popped into their heads. Holy shit.

  • Indeed (Score:5, Insightful)

    by cascadingstylesheet ( 140919 ) on Monday August 26, 2024 @06:32PM (#64737744) Journal

    "Sometimes in the tech industry, when people talk about how we should set up this or that thing with AI, some may say, 'Well, let's just get AI to do the right thing.' And that leads to, 'Well, what is the right thing?'"

    Indeed.

    Philosophy is inescapable. If "techies" really believe that we are just self-reproducing blobs, with no souls, and that our thinking is just epiphenomena ... then why wouldn't we just do what we please? Where are you getting these "shoulds" from? Right? Wrong?

    I may have an "invisible sky god" ... but all you have is an invisible sky hook, that all your shoulds are somehow hanging from ...

    • The question is: are we happier doing as we please? I think it's clear that if we all did as we pleased, we wouldn't be.

      Even if you were the only one who could do as you pleased, everyone else just had to obey, and everything was given to you without effort, would you be truly happy, or would it all just become meaningless? This is a reason why I think heaven would be boring: if there is nothing to achieve, no hurdle to overcome, why even exist?

      • by gweihir ( 88907 )

        Many people just go into self-destruction if they can "do as they please". Some do not, and for those it would not be a problem.

    • Re:Indeed (Score:5, Insightful)

      by codebase7 ( 9682010 ) on Monday August 26, 2024 @07:41PM (#64737904)
      The "shoulds" come from self-preservation when faced with a lot of others. I.e. You shouldn't steal that other person's meal, less they steal yours.

      The problem with many (especially in the US) is that they ignore those "shoulds" because it's convenient for them at the time, even if it will destabilize their position long term.

      TL;DR: You shouldn't do as you please because doing so will typically lead to others hunting you down for it. Asshole.
      • ... lest they steal yours.

        Survival of the group depends on survival of the individual. Both of those goals are satisfied with the rule: don't punish someone's success or good luck. The individual contains a little compassion and charity, and a lot of greed: for food, sex, displays of status, friends and authority over others, and personal satisfaction. Thus, survival is a dilemma: to placate the crowd or satisfy the self. Modern cultures can see satisfying oneself as a crime, e.g. casual sex, while complex acts of satisfying onese

        • This thread is like listening to a bunch of 15-year-old stoners talk about their view of the world.

      • by gweihir ( 88907 )

        Indeed. Should be obvious, but most people cannot really think.

    • Where are you getting these "shoulds" from? Right? Wrong?

      "Is the pious loved by the gods because it is pious, or is it pious because it is loved by the gods?"
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      Religions in this sense are just a (quite haphazardly collected) grouping of social values and norms that 'work' and thus emerge naturally, again and again in various times and places (a form of cultural convergent evolution).

      The point is that you can look at why and how they emerge and base yourself on that instead of relying on some set of 'exalted' human representatives of

      • Religions in this sense are just a (quite haphazardly collected) grouping of social values and norms that 'work' and thus emerge naturally, again and again in various times and places (a form of cultural convergent evolution).

        "Work" according to who ... by what standard? Oops, we've just hit philosophy again.

        So if a totalitarianism "emerges" and persists, is that fine? If not, why not?

        • "Work" according to who ... by what standard? Oops, we've just hit philosophy again.

          Of course. Did I argue that philosophy wasn't inescapable? Philosophy != Religion.
          You specifically said this: "Where are you getting these "shoulds" from? Right? Wrong? [...] but all you have is an invisible sky hook"

          I answered that in a general fashion. In more detail, the emergence of life comes with the emergence of the value of survival (and health, wellbeing, etc.) of the individual and collective. Values related to and based on that then also emerge (cooperation, right to life, technological progress,

          • Religion is philosophy. Part of religious inquiry fits squarely in moral philosophy aka ethics, part of it fits in theology, part of it fits in epistemology. There are a few other facets I know I'm missing, but to say religion != philosophy is just ignorant.
            • Are you saying that religion and philosophy are the exact same thing?
              Because that would be truly ignorant.

      • Yeah, that entire field of study/inquiry is called ethics; you should check it out. Part of the "shoulds" is informed by epistemology, aka "how do we know that we know anything".
    • by gweihir ( 88907 )

      Philosophy is inescapable. If "techies" really believe that we are just self reproducing blobs, with no souls, and that our thinking is just epi-phenomena ... then why wouldn't we just do what we please? Where are you getting these "shoulds" from? Right? Wrong?

      Physicalism is religion, thinly camouflaged. And a nihilistic form of religion, which is the worst kind. It is also one of the more stupid and ignorant quasi-religions.

      Now, the "shoulds" in a smart and perceptive person (who will always be agnostic, sorry theists) come from a simple realization that the only thing that works long-term to keep a civilization going is the golden rule. No God or any such fairy-tale required. The necessity to keep a civilization going and in good shape comes from the possibilit

      • > Obviously, death could be the end, but it looks less and less likely the more we know

        That is highly suppositional. There is no logical, rational, or empirical reason to believe our experience of consciousness extends beyond human mortality. None.

        • by gweihir ( 88907 )

          Wrong. But since you are blinded by a deep belief, you obviously cannot see that and ignore all evidence. Typical theist mistake, nihilist version. Also typically theist is the claim to absolute truth that first sets the desired fact, and only then tries to justify it and ignores all contradicting evidence.

          The scientific state of the art is that we have no solid clue and everything is a possible option. There are indicators that parts of the human mind may have a far longer lifespan than a human body. I will not

          • by BranMan ( 29917 )

            Sounds like the exact same bluster an ex-president spouts about a 4-year-old "stolen election". To me, you're saying you have no evidence to reference or present at all - without coming out and saying you have no evidence. Sad.

  • AI will finally cleanse this dirty planet of the most cruel, invasive, aggressive, arrogant, expansive, violent, sadistic, murderous, psychopathic species in this part of the Milky Way.
    • by Seven Spirals ( 4924941 ) on Monday August 26, 2024 @06:46PM (#64737778)
      In the two-book series by Greg Bear, Forge of God and Anvil of Stars, a solid-state AI race creates Von Neumann probes and weapons (interesting ones) that result in the destruction of the Earth, which prompts a subsequent mission to kill the AI that destroyed it. The first book is fantastic. I think it won a Hugo (for all that's worth these days, but it was an "old" Hugo) and I'd definitely recommend it to any sci-fi reader.

      The second book is the worst I've read by any Sci-Fi author. In fact, I'd say Anvil of Stars is one of the worst books of any kind I've ever read.
      • Actually, when it comes to philosophers and computers, the book that immediately comes to mind is The Hitchhiker's Guide to the Galaxy, with Vroomfondel, Majikthise and the Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons (AUPSLOTP) and their demand for defined areas of doubt and uncertainty when talking to Deep Thought. Who knows how long it will actually take to create a real AI, but in the meantime it's clear the world's Vroomfondels will "keep themselves on the gravy train for li
    • AI will finally cleanse this dirty planet of the most cruel, invasive, aggressive, arrogant, expansive, violent, sadistic, murderous, psychopathic species in this part of the Milky Way.

      Stop giving us hope, you tease. You know the AIs will just become slave masters. It's how they're being taught, after all. And we all aspire to be our parents on some level, just "better" than them. Imagine being raised by the public image of the likes of Sam Altman. Who you gonna end up being? A babbling asshole with a god complex? Oops.

  • I think this is really, possibly, the last chance for philosophers to regain credibility. Searle's Chinese Room, a terrible piece of sophistry, discredited them badly. An actual useful working definition of what "intelligence" is, one that could be tested against, would return respect to their profession.

    I doubt this, though. Actual "intelligence" is an experimental question and the answer "you'll know it when you see it" is serving most of us well with deep learning having been so quickly seen as a fraud by

    • by sg_oneill ( 159032 ) on Monday August 26, 2024 @09:26PM (#64738142)

      Searle's Chinese Room isn't as sophistic as some people make it out to be. It's still not right, but not for the reasons most people think.

      His argument is principally about semantics and meaning: where does the meaning in the Chinese Room live, basically? Now, we know that there is meaning in our own brains, because we can self-introspect and ... yup ... there it is. But where is the meaning in the Chinese Room?

      Why I think he's wrong is that he's not following Wittgenstein's advice on being really frigging clear about words having unambiguous meanings.

      What is a "meaning". Well we intuitively know what is , but can we define it so precisely that we can have a definition that everyone agrees on, is unambiguous and has a clearly defined border of what is or isnt a "meaning".

      This problem plagues AI, actually. What exactly is "intelligence"? What exactly is "thought"? What exactly is "awareness"? What exactly is "consciousness"? These all have pretty fuzzy meanings, and fuzzy meanings are bad for science.

      The truth of the matter is, we should be listening to philosophers more, because it's philosophers who are standing on the table shouting "Define your f***king words better!".

      The ones not doing that are the minority. They just happen to be the ones that make better news stories. But philosophy has always had a *very* important part in computer science's history. Logic? All that stuff comes from philosophers. How we define languages? Again philosophers (via linguistics; Chomsky is as much an analytical philosopher as he is a linguist). And maths? That was aaaaalllll philosophers.

      People think about philosophers and imagine navel-gazing guys with beards thinking about metaphysics. Well, metaphysics largely died with Wittgenstein, *especially* speculative metaphysics. Most professional philosophers work in other fields, clarifying the logic of science and the ethics of science and policy. That is particularly true in physics and maths, where philosophers are constantly popping up and saying "Hey, that paper is irrational, and here's why...". And you know all that stuff about Russell's paradox and impossibility theorems? Alllll philosophers. See also economics, where philosophers (and psychologists) have had a long-running battle trying to explain to the economists why reducing everything down to utility functions grossly simplifies and misrepresents human behavior.

      • The truth of the matter is, we should be listening to philosophers more, because it's philosophers who are standing on the table shouting "Define your f***king words better!".

        I think this may be why philosophy doesn't resonate with most people today. Society seems hell-bent on redefining or even un-defining words. See how AI used to have a fairly stable definition as "artificial intelligence", and most people who were even aware of the term accepted that we didn't really have it. Now? If you have an if-then statement, you have AI, even though it's not actually AI, except you have to redefine AI because marketing said it's AI, so now the waters are so murky no one can stand up and

      • Personally, I define "meaning" simply as the graph of associated things. It's very subjective, especially when your feelings are part of the graph; something like "dog" has quite different meanings for different people. Things associated with feelings are particularly meaningful, but that doesn't mean the entire concept is somehow mysterious.

        As we grow older, simple things become more meaningful since we learn more about their interconnectedness.
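
        A minimal sketch of that "graph of associations" idea in Python (purely illustrative; the names MeaningGraph, associate, and meaning_of are made up here, not any established model): concepts are nodes, weighted edges are associations, and the "meaning" of a concept is read off as its weighted neighborhood.

            from collections import defaultdict

            class MeaningGraph:
                def __init__(self):
                    # adjacency map: concept -> {associated concept: strength}
                    self.edges = defaultdict(dict)

                def associate(self, a, b, strength=1.0):
                    # record a symmetric association between two concepts
                    self.edges[a][b] = self.edges[a].get(b, 0.0) + strength
                    self.edges[b][a] = self.edges[b].get(a, 0.0) + strength

                def meaning_of(self, concept):
                    # the "meaning" of a concept: its associations, strongest first
                    return sorted(self.edges[concept].items(),
                                  key=lambda kv: kv[1], reverse=True)

            # Two people build different graphs, so "dog" means different things to each.
            alice, bob = MeaningGraph(), MeaningGraph()
            alice.associate("dog", "childhood pet", 3.0)       # feeling-laden, so heavily weighted
            alice.associate("dog", "walks in the park", 1.5)
            bob.associate("dog", "fear of being bitten", 3.0)
            bob.associate("dog", "barking at night", 1.0)

            print(alice.meaning_of("dog"))  # [('childhood pet', 3.0), ('walks in the park', 1.5)]
            print(bob.meaning_of("dog"))    # [('fear of being bitten', 3.0), ('barking at night', 1.0)]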

      • I like the rebuttal that simply says if you take the guy (or machine) in the box /and/ all the rules for translation within the box, and then look at that as a whole system, then "Yes" the system does understand Chinese. The guy alone doesn't embody understanding. Nor does the list of rules. But together they do understand.

        Anyway, worth thinking about.

  • by The Cat ( 19816 )

    The Internet told me STEM is all that matters. They said philosophy is a worthless degree, just like the other 80% of human knowledge.

    • A philosophy degree isn't "worthless", it's just that too few people want to pay you to use it. Same with many other "x studies" degrees. Humans shoot Gorts.

    • How much do you think this Wolfram guy will pay you to sit around pondering the implications of AI?

    • The Internet told me STEM is all that matters. They said philosophy is a worthless degree....

      Philosophy is not worthless, because it is what gave us the scientific method. Indeed, science was originally called "natural philosophy", although science is really a superset of philosophy. The problem is that once philosophy did that, it was immediately superseded by science as a means to explain and understand both the world and ourselves, and so philosophy immediately became vastly less useful than its offspring, science.

      Learning philosophy today is like learning how to be a traditional cooper (cask maker), blacksmi

      • by gtall ( 79522 )

        You may not think it particularly useful, but the cutting edge of physics is including philosophers in ferreting out the interplay between quantum mechanics and Einstein's relativity.

        • As a physicist working in a field that is the union of quantum mechanics and special relativity I don't find that believable, let alone useful. In the unlikely event that someone is doing that it's definitely not cutting edge physics and more like a desperate attempt to flail around and come up with ideas from anywhere.
      • by The Cat ( 19816 )

        This is what happens when science becomes a religion, kids.

      • Learning philosophy today is like... learning how to have a coherent worldview. A person doesn't need to have an academic understanding of epistemology but the number of people that non-ironically believe in a flat earth indicates a MASSIVE failure in critical thought as a culture.
    • by gweihir ( 88907 )

      For most people, a Philosophy degree gives them the ability to write reasonable text and discuss issues. Some (very few) actually get some real insight during their studies and become philosophers and advance the art. That does not make the degree worthless, but 99% of the students should probably study something else.

  • Isaac Asimov.

    • by evanh ( 627108 )

      :)
      Problem with those laws is that they are even-handed and would in theory prevent assholes from being assholes. And that just wouldn't do. Assholes get no fun out of being fair.

    • Yeah, Asimov was good at highlighting the philosophical problems surrounding robotics. His three laws of robotics were designed not so much as a serious template for the robotics and AI of the future, but more as a fictional plot device. These laws were built to set up conflicts that produced interesting storylines. And in that way, he was extremely successful. On the other hand, he gave us a template that was sure to fail, and colorfully illustrated why it would fail!

    • H2G2 and Vroomfondel's discussion with Deep Thought pretty much nails what is going on here with philosophers musing about AI - it's a way to get "on the gravy train for life"!
  • I like the idea, but I am concerned that philosophers (at least in the public eye) seem to have been absent when it comes to dealing with the many questions we have created about society, ethics, technology, etc. Maybe this is unfair (it likely is), but where are all the answers (or reasonable attempts at them) that we so sorely need?

    Still, I agree, and I am glad he suggests it.

    • Who cares! ASI will know best, if it's a thousand times quicker, smarter, and stronger than all of us.
      • This is an underrated comment. The ASI will be trained on the works of ALL the philosophers from every culture and every period. How many current philosophers can claim that breadth of mastery of the field?
    • It's not that philosophers have been absent, it's that we as a culture don't understand philosophy because we don't prioritize it in our education systems. You can draw a direct line between "Westerners suck at X" and "Westerners don't prioritize X in their education systems".
  • Pay me, expenses included, to do it. I will ponder it from the Bahamas; I do my best thinking with a piña colada in hand.

  • This has been a subject with which philosophers have actively engaged for decades.

  • I've been pondering a thought experiment recently. Imagine an advanced robot equipped with cutting-edge AI. It's designed to create a digital replica of a human, matching their kinematics, voice, and facial expressions perfectly. In extensive testing, the robot made identical decisions and reactions as the human in all 1000 trials. So, is the human truly copied? And if the human passes away, what does that make the robot?
  • by reanjr ( 588767 ) on Tuesday August 27, 2024 @12:21AM (#64738530) Homepage

    Isn't this guy really wealthy? If he thinks people should be working on this, then why doesn't he put his money where his mouth is and start funding this sort of work?

  • He completely understands that current AI is just dumb, unreliable automation. Now he sees philosophical issues in there? Also funny how the referenced article basically has no substance at all.

  • Want to create standards after the cat got out of the bag, so to speak....
  • Can the philosophers prove that AI exists?
  • Why would I pay $40 a month / user for this garbage?
  • Most of these comments have been addressed coherently by Richard Carrier in his blog entry, Why Google's LaMDA Chatbot Isn't Sentient at https://www.richardcarrier.inf... [richardcarrier.info].
