AI Technology

Public Trust In AI Is Sinking Across the Board

Trust in AI technology and the companies that develop it is dropping, both in the U.S. and around the world, according to new data from Edelman shared first with Axios. Axios reports: Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats' trust in AI companies is at 38%, independents are at 25%, and Republicans at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the leading industry in trust in 90% of the countries Edelman studies. Today, it is the most trusted in only half of them.

People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand.
"When it comes to AI regulation, the public's response is pretty clear: 'What regulation?'," said Edelman global technology chair Justin Westcott. "There's a clear and urgent call for regulators to meet the public's expectations head on."

Comments Filter:
  • Trust? (Score:5, Interesting)

    by LindleyF ( 9395567 ) on Wednesday March 06, 2024 @08:01PM (#64295996)
    Trust has nothing to do with it. These systems are not sources of information. They seem enough like one to fool people, but that's all. They do have real, powerful use cases, but discussing trust in relation to them is a non sequitur.
    • Re:Trust? (Score:4, Insightful)

      by StormReaver ( 59959 ) on Wednesday March 06, 2024 @08:11PM (#64296012)

      AI is like the jump from the abacus to the computer in the specific things it can be made to do. When used appropriately, it can be a powerful tool for specific tasks. However, those specific tasks need to be very well defined, much like the first computers that needed to be hardwired to perform specific calculations.

      Unlike the computers of old, there is very little room for AI to be turned into a general-purpose tool. It will always require a human to define the parameters and review the results.

    • They aren't sources of information? That's some crazy gaslighting you are attempting. They are in fact sources of information; that isn't even up for debate. You're either lying or don't understand what you are talking about.
      • I think it's either spin or wishful thinking. Clearly they shouldn't be considered trustworthy sources of information, but just as clearly they already too often are anyway. The best outcome for society in the long term depends directly on people widely recognizing that AI can't be trusted any more than Clippy.

        • Clearly they shouldn't be considered trustworthy sources of information

          LLMs are more likely to be correct than asking the guy in the next cubicle or sifting through search engine results.

          Even better, LLMs now quote their sources, so their results are easy to verify.

          The next generation of LLM-RAGs will be out soon, and they'll be even more reliable.

      • Re: Trust? (Score:5, Insightful)

        by LindleyF ( 9395567 ) on Wednesday March 06, 2024 @10:09PM (#64296206)
        You've heard of hallucinations? Here's the thing: AIs are always hallucinating. Every word. In some parts of the space it's close enough to a training input or user input that it sounds reasonable and you can't tell the difference. But there's nothing fundamentally different between that state and what we call hallucinations. So, think twice before relying on anything it says. The question isn't whether it's wrong; only how wrong.
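        To make that concrete, here is a minimal sketch (toy vocabulary, made-up scores, not any real model's code) of the one step an LLM repeats for every word it emits: softmax over scores, then sample. A "correct" token and a "hallucinated" one come out of the identical mechanism; nothing in the loop checks either against facts.

          import math
          import random

          random.seed(0)
          vocab = ["Paris", "Lyon", "Berlin", "pizza"]  # toy vocabulary
          logits = [3.2, 1.1, 0.4, -2.0]                # made-up next-token scores

          def sample_next_token(logits, temperature=1.0):
              # Softmax over the scores, then draw one token at random.
              scaled = [l / temperature for l in logits]
              m = max(scaled)                           # subtract max for numerical stability
              weights = [math.exp(s - m) for s in scaled]
              return random.choices(range(len(weights)), weights=weights)[0]

          # "The capital of France is..." -- usually "Paris", but only usually.
          for _ in range(5):
              print(vocab[sample_next_token(logits)])

        Lower the temperature and the most probable word wins nearly every time, true or not; raise it and improbable words surface more often. Either way, it is the same machinery.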
        • Here's the thing: you've gone from asserting that trust is irrelevant and that they aren't sources of information to stating a concern about their trustworthiness as information sources. Now you're trying to assert that they are always wrong, which is itself wrong, since they can be 100% accurate in many scenarios. It's comically ironic that you accuse them of getting everything wrong while managing to get everything wrong yourself. Hats off to you if this was purposeful satire.
          • Re: Trust? (Score:5, Insightful)

            by LindleyF ( 9395567 ) on Thursday March 07, 2024 @12:10AM (#64296382)
            You misunderstand. I'm not saying they are always wrong. I'm saying they have no concept of truth. They may happen to say something true, but not because it is true; because a lot of people say similar things. An LLM's guiding principle is basically "it sounds good." They're like the ultimate politician. So when I say they are always wrong, what I mean is they are always unreliable.
            • Of the billions of parameters, there would be representations of truth and validity present. Curating the dataset can also go a long way toward biasing it toward truth. And it understands truth from the ground up: it's built on math, which is a base-level truth for it.

              So in the end, "trust has nothing to do with it" has become "don't trust them because they are always unreliable," which is still not true, because they can already be relied on for a small number of things, and that number will continue to grow.
              • Eehm what? It's built on math, sure, the same way you're built on physics and biology. That doesn't mean you can do either.

              • It's built on language. It learns how language tends to fit together. It doesn't understand semantics, only linguistics. That inherently limits what it can do. Now, we may overcome this. But as of today, that's the situation.
                • No it's built FOR language WITH math.
                  • The neural network is math. The thing it's optimized to understand is language. Claiming that a neural net should be able to do math just because that's how it works under the hood makes no sense and shows a misunderstanding of the technology.
                    • No misunderstanding at all on my end. Optimized to understand language using mathematical truths. Truth and validity are at its core, so saying it has no understanding of truth at all is false. Pretty much everything you've written here is false. It's likely an LLM would even understand that XD

                      Are you even capable of acknowledging your original assertion that "trust has nothing to do with it" was wrong?
                    • Neural networks are based on numerical methods of approximation, not formal logic. You should take a class; it's interesting stuff.
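                      A toy illustration of that distinction (made-up data, a single weight): gradient descent numerically nudges a parameter until the error is small. The result lands near the true function, but nothing is derived or proven along the way.

                        # Fit y = 2x with a single weight by gradient descent on squared error.
                        xs = [1.0, 2.0, 3.0, 4.0]
                        ys = [2.0, 4.0, 6.0, 8.0]  # generated by the "true" rule y = 2x

                        w, lr = 0.0, 0.01
                        for _ in range(2000):
                            # Mean gradient of (w*x - y)^2 with respect to w.
                            grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
                            w -= lr * grad

                        print(w)  # ~2.0 to many decimals: an approximation, not a theorem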
                    • Well at least you've extended your perfect record of getting everything wrong.
                    • "Mathematics is a set of specific formal applications of logic, with each branch of mathematics starting with a different set of initial facts."

                      "While “logic” may simply refer to valid reasoning in everyday life, it is also one of the oldest and most foundational branches of mathematics"

                      "The most famous set of logical axioms for defining arithmatic operations is Peano's arithmetic."
                    • Granted, if you want to get pedantic, all math can be derived from a few axioms via logic. But that's not really relevant. To use a network analogy, axioms may be the link layer, and numerical approximation may be the transport layer, but none of that changes what's on the application layer.
                    • Try again: "Mathematics is a set of specific formal applications of logic, with each branch of mathematics starting with a different set of initial facts." "While “logic” may simply refer to valid reasoning in everyday life, it is also one of the oldest and most foundational branches of mathematics." "The most famous set of logical axioms for defining arithmetic operations is Peano arithmetic."
                    • I never know what to think when someone posts the same thing twice. Anyway, this has been fun, but it's drifting pretty far away from relevance at this point so I'm out. You believe what you want about all this, but you seem to have some basic misconceptions.
                    • They think you didn't read it the first time, based on your response indicating so.

                      You tried to say there is no formal logic in numerical approximation, which is funny because numerical approximation IS formal logic.

                      Your perfect streak continues.
                    • I'm going to give this one more try. Probably a bad idea, but what the heck. But I'm pulling it back up-level. You appear to be claiming that because math can be derived from formal logic, information from LLMs can therefore be trusted. That is, of course, absurd. But it seems to be your claim. So let's use an example. If you train a neural network to recognize shapes, it may be able to identify a certain shape as this thing called "the number 4." But being able to identify the shape as a 4 does not mean it understands anything about what the number 4 means.
                    • I'm not even going to bother reading that. It's a waste of time, since you appear to have no integrity and are incapable of honest discussion. You couldn't even acknowledge the 180 you made on trust being an issue. LLMs are more trustworthy and reliable than anything you've written here.
                    • Dude. Two things. First, remember that in any debate, the first one to resort to insults loses. Second, don't take on someone who knows what they're talking about with an aggressive tone. It doesn't make you look good. I'll grant some of what I said could have been worded better and I understand if you misunderstood my meaning. But offering clarifications is not the same as a 180.
            • You misunderstand. I'm not saying they are always wrong. I'm saying they have no concept of truth.

              This is why I love the description of them as "the platonic ideal of the bullshitter".

              An LLM's guiding principle is basically "it sounds good."

              Specifically, it sounds good to a non-expert, because it's basically too expensive and too tedious to hire experts for that part of the training step. Plus it's only ever whack-a-mole.

              One of my favourite examples is asking ChatGPT to compute

            • The funny thing is that this is also true of humans. We might call it misinterpreting data for a human, but for someone of sub-average to average intelligence (or someone who simply doesn't care) it can also be a problem. For humans (and procedural computer programming) we solve this by creating a process; for AI, it requires specialized training.

        • You've heard of hallucinations?

          I have, and the examples are usually from over a year ago, such as the lawyers citing made-up cases.

          LLMs have improved since then, and with new RAG tech, they are on the cusp of another big leap.

          • Yeah, services such as Google's Search Generative Experience do tend to avoid hallucinations by sticking close to the actual links they report. There are ways to minimize the problem. But the key is that the LLM itself isn't the source in that case; it's just summarizing a set of search results.
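            That grounding pattern (often called retrieval-augmented generation) works roughly as sketched below; `search` and `llm_complete` are hypothetical stand-ins for a search backend and a text-generation call, not any particular product's API.

              # RAG-style grounding: retrieve first, then have the model answer
              # ONLY from the retrieved snippets, citing them by number.
              def answer_with_sources(question, search, llm_complete, k=3):
                  snippets = search(question)[:k]  # top-k retrieved documents
                  context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
                  prompt = (
                      "Answer using ONLY the numbered sources below, citing them as [n]. "
                      "If they don't contain the answer, say you don't know.\n\n"
                      f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
                  )
                  return llm_complete(prompt)  # the LLM summarizes; the sources stay the source

              # Example with canned stand-ins:
              fake_search = lambda q: ["Edelman: global trust in AI companies fell to 53%."]
              fake_llm = lambda p: "(model output would go here)"
              print(answer_with_sources("How far has trust fallen?", fake_search, fake_llm))

            Even then, the model can mis-summarize or mis-cite what it was handed, so this reduces the problem rather than eliminating it.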
            • But the key is that the LLM itself isn't the source in that case; it's just summarizing a set of search results.

              Does that then imply that Wikipedia is not "a source of information"? I think the more appropriate term here is "primary source". Wikipedia is not a primary source. LLMs are not a primary source. You can still use them as the source for a lot of useful aggregated information.

              • Wikipedia is a decent analogy, in that you can read interesting stuff and it might* even be true. But there is a big asterisk there, and it's even bigger for LLMs than for Wikipedia.
  • by sinij ( 911942 ) on Wednesday March 06, 2024 @08:06PM (#64296004)
    Thanks to Google, everyone realized how easy it is to bias an AI to deliver absolutely insane outputs without any kind of end-user transparency about that manipulation. Before Gemini, Google was meddling with search and recommendation algorithms, but convincing people that this was an issue was a difficult task. What Gemini did was make these existing biases visible to the point that they became impossible to deny.
    • Thanks to Google, everyone realized how easy it is to bias an AI to deliver absolutely insane outputs without any kind of end-user transparency about that manipulation. Before Gemini, Google was meddling with search and recommendation algorithms, but convincing people that this was an issue was a difficult task. What Gemini did was make these existing biases visible to the point that they became impossible to deny.

      Lack of trust in the people BEHIND the AI leads to distrust of the AI itself, even if some of the people wouldn't immediately link the two.

  • by SeaFox ( 739806 ) on Wednesday March 06, 2024 @08:11PM (#64296010)

    Regular people have little reason to trust them, because the folks operating the systems (corporations) have little interest in using them for the benefit of people. They're just another way to convince people to separate themselves from their wages, game the stock market, replace people's jobs, mislead them with misinformation, or develop new ways to kill people.

    • Are stupid people regular? I've seen a great deal of stupid people blindly trusting anything ChatGPT asserts even when it gives them a fake citation with a link that doesn't work.
    • I was going to add, much better efficiency for scammers. But scamming is a part of all those things already.

      • How does it make things more efficient for scammers, but not for non-scammers? If it helps scammers, wouldn't the same things help non-scammers too? AI is just a tool, it doesn't know whether you're using it for good or for evil.

    • Regular people have little reason to use AI for things that require trust. For example, I used ChatGPT to create a job description for a specialized developer I was hiring. It did fine, all I had to do was revise a few points, and I was done. It was a huge time savings for me. Trust had nothing to do with it.

  • by vrhelmutt ( 9741742 ) on Wednesday March 06, 2024 @08:17PM (#64296022)
    We know better now and refuse to let tech firms take us on some euphoric trip to Nirvana like they did for a while with the Internet. We ended up with tracking devices in our pockets and our personal data on the market. Even if you don't understand the technology, you can agree that life has been irreparably damaged because of the internet. Sure, we have some conveniences, but in the long run we would have gotten there slower, with less economic instability and a bit more security. Silicon Valley blew it.
  • by Powercntrl ( 458442 ) on Wednesday March 06, 2024 @08:19PM (#64296026) Homepage

    Sinking trust implies that there was some trust in AI to begin with. Aside from investors in the technology itself, I'm racking my brain to come up with anybody else who actually believed AI would be a good thing. It probably doesn't help that AI has been a literal villain in so many sci-fi stories, too.

    • People loved the release of DALL-E and rejoiced that it was a good thing, looking forward to where it was heading, savvy graphic artists included.
    • I'm racking my brain to come up with anybody else who actually believed AI would be a good thing.

      You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.

      • You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.

        As a matter of fact I have. If I had a Dogecoin for every time someone said ChatGPT was too "woke", I'd have quite a few Dogecoins. Still no fucking clue what that'd be in real money, but I'd certainly have a lot of them.

        • You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.

          As a matter of fact I have. If I had a Dogecoin for every time someone said ChatGPT was too "woke", I'd have quite a few Dogecoins. Still no fucking clue what that'd be in real money, but I'd certainly have a lot of them.

          Contact Musk and Snoop. They'll help you pump and dump.

    • Sinking trust implies that there was some trust in AI to begin with. Aside from investors in the technology itself, I'm racking my brain to come up with anybody else who actually believed AI would be a good thing. It probably doesn't help that AI has been a literal villain in so many sci-fi stories, too.

      There is a certain contingent of folks, a small but somewhat vocal contingent, that seems convinced that AI will be the new god. These are the folks who decry humans as horrible at everything and claim the machines will save us. See every self-driving car story ever. Humans need to be eliminated from the workflow of all the things society needs, so that we can save it for the billionaires who will own the machines that make everything, that decide everything, and that will control the masses when they finally

  • by Gideon Fubar ( 833343 ) on Wednesday March 06, 2024 @08:19PM (#64296028) Journal

    You sell someone a product that you claim can do something, and then it fails to reliably do that thing and does completely different things... so you tell the customer that they're using it wrong but also it'll improve over time.

    That's gunna set an expectation. It sure would have been unwise to do this with a technology that is... provably limited in its ability to improve.

    No matter how much fluff you put on top of it, no matter how many times you tell them to suspend their disbelief and claim that your technology is beyond comprehension (as if that wasn't also unwise to begin with)... if the customer has expectations and those expectations are not met, excuses and promises are only going to take you so far.

    • by LindleyF ( 9395567 ) on Wednesday March 06, 2024 @08:25PM (#64296036)
      The executive-level mistake was to treat LLMs as sources of information. That just fundamentally isn't what they are. They are great at summarizing inputs like documents or search results, but as soon as you expect them to output information you didn't put in, you're doing it wrong. LLMs are not trained on facts. They do not understand the concept of facts. They are trained on language, which is inherently imprecise and inherently biased.
      • by az-saguaro ( 1231754 ) on Wednesday March 06, 2024 @08:53PM (#64296088)

        Your comments are insightful and well stated, much appreciated.
        But, I would take a nuanced exception to this statement:

        They are great at summarizing inputs like documents or search results, ...

        From what I have seen, "great at summarizing" is inaccurate, or at least relative. You are correct in stating that AI might be best suited for that purpose, given the technicalities of how AI works. But the "summaries" I see are not nuanced, insightful, or deep. They just accumulate words into technically correct sentences, but without a "concept of facts," as you stated. The summaries are juvenile and devoid of concept, subtlety, or contextual insight. When I see such summaries, they seem written by a third grader, or at best a sixth grader, but they do not come up to the level of what most average high schoolers are apt to write.

        Others may see it differently, but many of the things I've read over the past 6-12 months, such as "search" results, have a tone and tenor so different from true intelligence and the way text used to be written two years ago that these often worthless "results" are easy to spot. I rarely find them "great at summarizing," even when the results are factual and nominally correct.

      • I'd say they tried to use them as sources AND processing systems all in one... But to an extent, that's what they were told to expect by the sales pitch. It should have been obvious that they can't generate new facts from just strings of data, but the people who were trying to explain how they worked were shouted down in favor of a vague consensus about the potential.

        To me, this sort of exposes the do-what-I-mean mentality of some parts of the business world, and it seems likely it's businesses whose management class have outsourced comprehension...

        • "...businesses whose management class have outsourced comprehension..."

          That's a nice turn of phrase.

      • by Kiliani ( 816330 ) on Thursday March 07, 2024 @12:15AM (#64296394)

        I still like the characterization of LLMs as highly compressed JPEGs: they contain the main information, but the details have been lost (potentially even altering the main information).

        Use with care. But people (including smart people, "decision makers," business leaders, etc.) seem so incredibly gullible... It just scares me how much power is given to often shitty "tools".
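        The analogy is easy to demonstrate with any lossy scheme; here is a toy version that uses crude rounding as the "compression":

          # Aggressive quantization keeps the gist but destroys the detail,
          # and can even alter the main information: pi and e become equal.
          data = [3.14159, 2.71828, 1.41421]
          compressed = [round(x) for x in data]
          print(compressed)  # [3, 3, 1]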

    • This is a bit early in the two-year bell curve everything follows (everything follows the bell curve; two years is the average). The falling-out-of-love phase has begun and it's plummeting already? Guess the hype and fear are part of it? I don't know why people seem to expect results so quickly with this tech; then again, people are playing with a chatbot directly themselves, and a new fad usually isn't so widely accessible so quickly.

      • The bell curve is a map.

        What would it look like if several of the multipliers in the equation that generates it were drawn from the same float variable (let's call it 'trust', just for entertainment's sake), and that variable's value started to go down?

        People, I think, can tell you that playing with a chatbot on a webpage doesn't give them the insight to see how it could be used in business, except, by default, copying and pasting existing questions into the chatbot when they don't know the answer themselves. And..

  • by oldgraybeard ( 2939809 ) on Wednesday March 06, 2024 @08:20PM (#64296030)
    And it did not take long for most people to see there wasn't any I (Intelligence) in their AI. Now they are stuck with just Artificial.
  • by The Cat ( 19816 ) on Wednesday March 06, 2024 @08:39PM (#64296056)

    "So we developed this new thing--"

    HEATHEN! We need sweeping landmark federal legislation at once! Regulate! Seize! Ban! Destroy!

  • ...is hypemongering and fearmongering
    Just about nobody in the non-tech public has a clue
    Even among techies, the future is unclear
    I see great promise and great peril. No, I don't fear the tech, I fear people who use the tech as a weapon
    We need to develop effective defenses

    • "We need to develop effective defenses" Against current AI? No!
      Against things like Automated Gun/Weapons Platforms, Propaganda Platforms and other Here Already/Soon to be Corporation/Government/Military Automation tech? Absolutely Yes!
  • by Rosco P. Coltrane ( 209368 ) on Wednesday March 06, 2024 @09:01PM (#64296102)

    People can feel when something doesn't work in their interest.

    In the case of AI, the immediate effect they're seeing right now is fake news, distorted facts, political manipulation, invented case law, deepfake porn, degraded automated customer services... None of that is of any value to anybody, and that's what AI mostly hits the news about.

    And in the longer term, everybody knows AI is coming for their jobs. People are unlikely to trust a technology that will predictably make their lives more miserable soon.

    • And in the longer term, everybody knows AI is coming for their jobs.

      The rest of your post has merit, but let's get real about this one point: AI isn't coming for anyone's job; it's the companies that own these AI products that are coming for everyone's jobs. By anthropomorphizing the software product, you only contribute to the problem.

    • People can feel when something doesn't work in their interest.

      In the case of AI, the immediate effect they're seeing right now is fake news, distorted facts, political manipulation, invented case law, deepfake porn, degraded automated customer services... None of that is of any value to anybody, and that's what AI mostly hits the news about.

      And in the longer term, everybody knows AI is coming for their jobs. People are unlikely to trust a technology that will predictably make their lives more miserable soon.

      Just for the sake of fun, I'll argue that what you say is of no value to anybody is actually valuable to somebody, or it wouldn't be getting shoved down our throats so forcefully and repeatedly.

      Fake news

      Aside from Trump's tendency to pronounce anything he disagrees with, or anything that paints him in an ugly light, as fake news, real fake news is used all the time to push ideals that should be abhorrent on a gullible population. It's been that way since the beginning of time, but there are several indicators in America today

  • by Opportunist ( 166417 ) on Wednesday March 06, 2024 @09:02PM (#64296104)

    It's just a symptom of a far bigger loss of trust: the loss of trust in pretty much any and all organizations, with corporations merely at the top of the list. AIs by definition are tied to some corporation that designs, powers, and trains them, and by now I doubt that there are many people left who consider corporations anything but a blight on our existence.

  • Have the AIs informed these people that their distrust is just human fragility lashing out?
  • by oumuamua ( 6173784 ) on Wednesday March 06, 2024 @10:32PM (#64296246)
    There was no AI until a year and a half ago! At least to the average person. A few technophiles were testing out the early chat models and image generators, but not really using them for anything but curiosities. The average person only noticed ChatGPT 3.5, then DALL-E 2, which was soon upstaged by Imagen and Midjourney.
  • Public hates AI
    Skynet on everyone's mind
    Arnold coming soon

  • by Tony Isaac ( 1301187 ) on Thursday March 07, 2024 @12:53AM (#64296440) Homepage

    AI isn't something you would trust. You wouldn't trust a teenager to run your business, but that doesn't mean teenagers are useless to a business. They can perform many tasks--WITH SUPERVISION. AI is like a teenager. It can do a lot of time-saving stuff, but it does need supervision.
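    A minimal sketch of that supervision loop, where `llm_complete` is a hypothetical stand-in for any text-generation call:

      # Draft-then-review: the model produces, a human gates what gets used.
      def supervised_draft(task, llm_complete):
          draft = llm_complete(f"Write a first draft: {task}")
          print(draft)
          verdict = input("Use this draft as-is? [y/N] ")
          return draft if verdict.strip().lower() == "y" else None  # else the human edits

      # Example with a canned stand-in:
      supervised_draft("a job ad for a specialized developer", lambda p: "(draft would go here)")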

  • People with a brain: Woke lefty tyrants are going to use AI to enforce their delusional racism and socialist bullshit - everyone.
    Brainwashed lefties: noooooo, you're paranoid. Haha dumb conservatives.
    Google: hahahaha yeah, they're so nuts. Anyway, here's Gemini with black nazis.
    • by whitroth ( 9367 )

      *We're* brainwashed? Did you take too much ivermectin the last time you had COVID?

      And enough already: people are finally starting to get the message that chatbots are NOT Artificial Intelligence; they're nothing more than a very fancy and expensive version of the typeahead on your mobile. And they don't run their output through spellcheckers or grammar checkers.
