Public Trust In AI Is Sinking Across the Board
Trust in AI technology and the companies that develop it is dropping, both in the U.S. and around the world, according to new data from Edelman shared first with Axios. Axios reports: Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. Trust in AI is low across political lines: Democrats' trust in AI companies is at 38%, independents' at 25%, and Republicans' at 24%. Tech is losing its lead as the most trusted sector. Eight years ago, technology was the leading industry in trust in 90% of the countries Edelman studies. Today, it is the most trusted in only half of them.
People in developing countries are more likely to embrace AI than those in developed ones. Respondents in France, Canada, Ireland, UK, U.S., Germany, Australia, the Netherlands and Sweden reject the growing use of AI by a three-to-one margin, Edelman said. By contrast, acceptance outpaces resistance by a wide margin in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria and Thailand. "When it comes to AI regulation, the public's response is pretty clear: 'What regulation?'," said Edelman global technology chair Justin Westcott. "There's a clear and urgent call for regulators to meet the public's expectations head on."
Trust? (Score:5, Interesting)
Re:Trust? (Score:4, Insightful)
AI is like the jump from the abacus to the computer in the specific things it can be made to do. When used appropriately, it can be a powerful tool for specific tasks. However, those specific tasks need to be very well defined, much like the first computers that needed to be hardwired to perform specific calculations.
Unlike the computers of old, there is very little room for AI to be turned into a general purpose tool. It will aways require a human to define the parameters and review the results.
Re: Trust? (Score:2)
Oh, I think you're just a tad too optimistic about the future, my friend. Tell you what, I'll agree with you, so long as we define "always" as maybe a few more years.
Re: (Score:2)
I think you’ll have to continue to disagree, because the OP said “aways”, not “always”.
Re: (Score:2)
I meant to say "always".
Re: (Score:1)
Re: (Score:1)
I think it's either spin or wishful thinking. Clearly they shouldn't be considered trustworthy sources of information, but just as clearly they already too often are anyway. The best outcome for society in the long term depends directly on people widely recognizing that AI can't be trusted any more than Clippy.
Re: (Score:2)
Clearly they shouldn't be considered trustworthy sources of information
LLMs are more likely to be correct than asking the guy in the next cubicle or search engine results.
Even better, LLMs now quote their sources, so their results are easy to verify.
The next generation of LLM-RAGs will be out soon, and they'll be even more reliable.
Re: Trust? (Score:5, Insightful)
Re: (Score:1)
Re: Trust? (Score:5, Insightful)
Re: (Score:2)
So in the end, "trust has nothing to do with it" has become "don't trust them because they are always unreliable," which is still not true, because they can be relied on for a small number of things now, and that number will continue to grow.
Re: Trust? (Score:2)
Eehm what? It's built on math, sure, the same way you're built on physics and biology. That doesn't mean you can do either.
Re: (Score:2)
Re: Trust? (Score:2)
Re: (Score:2)
Re: Trust? (Score:2)
Re: (Score:2)
Re: Trust? (Score:2)
Re: (Score:2)
Are you even capable of acknowledging your original assertion that "trust has nothing to do with it" was wrong?
Re: Trust? (Score:2)
Re: (Score:2)
Re: (Score:2)
"While “logic” may simply refer to valid reasoning in everyday life, it is also one of the oldest and most foundational branches of mathematics"
"The most famous set of logical axioms for defining arithmetic operations is Peano's arithmetic."
Re: Trust? (Score:2)
Re: (Score:2)
Re: Trust? (Score:2)
Re: (Score:2)
You tried to say there is no formal logic in numerical approximation, which is funny because numerical approximation IS formal logic.
Your perfect streak continues.
Re: Trust? (Score:2)
Re: (Score:2)
Re: Trust? (Score:2)
Re: (Score:2)
You misunderstand. I'm not saying they are always wrong. I'm saying they have no concept of truth.
This is why I love the description of them as "the platonic ideal of the bullshitter".
An LLM's guiding principle is basically "it sounds good."
Specifically, sounds good to a non-expert, because it's basically too expensive and too tedious to hire experts to take part in that part of the training step. Plus it's only ever whac-a-mole.
One of my favourite examples is asking ChatGPT to comput
Re: (Score:2)
The funny thing is that this is also true of humans. We might call it misinterpreting data for a human, but for someone of sub-average to average intelligence (or someone who simply doesn't care) it can also be a problem. For humans (and procedural computer programming) we solve this by creating a process; for AI, it requires specialized training.
Re: (Score:2)
You've heard of hallucinations?
I have, and the examples are usually from over a year ago, such as the lawyers citing made-up cases.
LLMs have improved since then, and with new RAG tech, they are on the cusp of another big leap.
Re: Trust? (Score:2)
Re: (Score:2)
But the key is that the LLM itself isn't the source in that case; it's just summarizing a set of search results.
Does that then imply that Wikipedia is not "a source of information"? I think the more appropriate term here is "primary source". Wikipedia is not a primary source. LLMs are not a primary source. You can still use them as the source for a lot of useful aggregated information.
Re: Trust? (Score:2)
AI turns out to be a propaganda delivery method (Score:5, Insightful)
Re: (Score:1)
Re: (Score:1)
The issue was that they tried to erase whites altogether. The sympathetic media coverage is exactly like yours - 'google denies that white people exist: minorities most harmed'. Sergey was careful not to say *what* they did wrong, his statement was a vague "people have been offended and we're sorry", so it has not been "comprehensively oops'd" either.
Re: AI turns out to be a propaganda delivery meth (Score:2)
Re: (Score:2)
But don't blow it out of proportion.
If Gemini was refusing to generate images with black people, would you still say this?
Re: AI turns out to be a propaganda delivery meth (Score:2)
Re: (Score:2)
There were no gaps in testing, they released it as-is with "won't fix".
Re: AI turns out to be a propaganda delivery meth (Score:2)
Re: (Score:2)
Re: AI turns out to be a propaganda delivery meth (Score:2)
Re: (Score:1)
> the fact that you are upset about it was definitely not intentional.
That's true! The only problem with what Google did, from Google's perspective, is that they boiled the frog too fast. I am sure that they will endeavor to advance their agenda a bit slower from now on.
Re: (Score:1)
Thanks to Google, everyone realized how easy it is to bias an AI to deliver absolutely insane outputs without any kind of end-user transparency about that manipulation. Before Gemini, Google was meddling with search and recommendation algorithms, but convincing people that this was an issue was difficult. What AI/Gemini did was visualize these existing biases to the point that they became impossible to deny.
Lack of trust in the people BEHIND the AI leads to distrust of the AI itself, even if some of the people wouldn't immediately link the two.
nothing to gain for most. (Score:5, Insightful)
Regular people have little reason to trust them, because the folks operating the systems (corporations) have little interest in using them for the benefit of people. They're just another way to convince people to separate themselves from their wages, game the stock market, replace people's jobs, mislead them with misinformation, or develop new ways to kill people.
Re: (Score:2)
Re: (Score:2)
I was going to add, much better efficiency for scammers. But scamming is a part of all those things already.
Re: (Score:2)
How does it make things more efficient for scammers, but not for non-scammers? If it helps scammers, wouldn't the same things help non-scammers too? AI is just a tool, it doesn't know whether you're using it for good or for evil.
Re: (Score:2)
Regular people have little reason to use AI for things that require trust. For example, I used ChatGPT to create a job description for a specialized developer I was hiring. It did fine, all I had to do was revise a few points, and I was done. It was a huge time savings for me. Trust had nothing to do with it.
It's not like the Internet.. (Score:5, Insightful)
A bar even James Cameron can't raise (Score:5, Insightful)
Sinking trust implied that there was some trust in AI to begin with. Aside from investors in the technology itself, I'm racking my brain to come up with anybody else who actually believed AI would be a good thing. It probably also doesn't help that AI has been a literal villain in so many sci-fi stories, too.
Re: (Score:2)
Re: (Score:1)
I'm racking my brain to come up with anybody else who actually believed AI would be a good thing.
You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.
Re: (Score:2)
You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.
As a matter of fact I have. If I had a Dogecoin for every time someone said ChatGPT was too "woke", I'd have quite a few Dogecoins. Still no fucking clue what that'd be in real money, but I'd certainly have a lot of them.
Re: (Score:2)
You obviously haven't been reading anyone else's posts here on Slashdot in the threads about AI.
As a matter of fact I have. If I had a Dogecoin for every time someone said ChatGPT was too "woke", I'd have quite a few Dogecoins. Still no fucking clue what that'd be in real money, but I'd certainly have a lot of them.
Contact Musk and Snoop. They'll help you pump and dump.
Re: (Score:3)
> LLM AI like GPT can evolve into AGI
From whose ass did you pull this assumption?
LLM is a glorified table lookup. It has ZERO understanding. It predicts what should come next, that's it.
There is a HUGE jump to AGI.
Re: (Score:1)
Instruct a man, given machining/crafting tools and a pile of Legos, to create something resembling a tree. The results will be better if you give him an hour, a year, a millennium, etc., but the result will never be something that breathes or grows.
You want AGI, your best bet is to hatch it from a proper nursery, not to repeatedly refine a plastic facsimile. At this time we aren't even on the right ladder; no amount of throwing years/rungs at the situation will yield AGI. Crude sims will not substitute for the survival environ
Re: (Score:2)
LLM AI like GPT can evolve into AGI, which is basically AI with human-level intelligence. This is key to reaching the Singularity - a point where AI can improve itself rapidly without human help. The Singularity could massively boost our tech and problem-solving abilities, pushing humanity towards a Type I civilization on the Kardashev Scale, where we can use all energy on Earth efficiently. This progress means solving big issues like climate change, energy shortages, and more, making life better for everyone. Reaching AGI and the Singularity also sets the stage for humanity's expansion into space. With advanced AI, we could design spacecraft, habitats, and life support systems far beyond our current capabilities. This would make settling other planets and moons more feasible, acting as a safety net for humanity. Basically, developing advanced AI is a requisite for long-term human survival, despite what the technophobes in Hollywood would have you believe in their fearmongering FUD movies depicting AI as evil.
Jesus Fucked-up Christ. For anybody that doesn't buy my "humans bad / machines good" hyperbole in other comments? Here ya go.
Re: (Score:3)
Sinking trust implied that there was some trust in AI to begin with. Aside from investors in the technology itself, I'm racking my brain to come up with anybody else who actually believed AI would be a good thing. It probably also doesn't help that AI has been a literal villain in so many sci-fi stories, too.
There is a certain contingent of folks, a small but somewhat vocal contingent, that seem convinced that AI will be the new god. These are the folks who decry humans as horrible at everything and claim the machines will save us. See every self-driving car story ever. Humans need to be eliminated from the workflow of all things that society needs so that we can save it for the billionaires who will own the machines that make everything, that decide everything, and that will control the masses when they finall
This is predictable (Score:3)
You sell someone a product that you claim can do something, and then it fails to reliably do that thing and does completely different things... so you tell the customer that they're using it wrong but also it'll improve over time.
That's gunna set an expectation. It sure would have been unwise to do this with a technology that is... provably limited in its ability to improve.
No matter how much fluff you put on top of it, no matter how many times you tell them to suspend their disbelief and claim that your technology is beyond comprehension (as if that wasn't also unwise to begin with)... if the customer has expectations and those expectations are not met, excuses and promises are only going to take you so far.
Re: This is predictable (Score:5, Insightful)
Re: This is predictable (Score:5, Insightful)
Your comments are insightful and well stated, much appreciated.
But, I would take a nuanced exception to this statement:
They are great at summarizing inputs like documents or search results, ...
From what I have seen, "great at summarizing" is inaccurate, or at least relative. You are correct in stating that AI might be best suited for that purpose, given the technicalities of how AI works. But the "summaries" I see are not nuanced, insightful, or deep. They just accumulate words into technically correct sentences, but without a "concept of facts," as you stated. The summaries are juvenile and devoid of concept, subtlety, or contextual insight. When I see such summaries, they seem written by a third grader, or at best a sixth grader, but they do not come up to the level of what most average high schoolers are apt to write.
Others may see it differently, but many of the things I've read over the past 6-12 months, such as "search" results, have a tone and tenor so different from true intelligence and the way text used to be written two years ago that these often worthless "results" are easy to spot. I rarely find them "great at summarizing" even when the results are factual and nominally correct.
Re: This is predictable (Score:2)
Re: (Score:2)
How dare you insult my cats ! !
Yes, you are right.
Re: This is predictable (Score:4, Funny)
Re: (Score:3)
I'd say they tried to use them as sources AND processing systems all in one... But to an extent that's what they were told to expect by the sales pitch. It should have been obvious that they can't generate new facts from just strings of data, but also people who were trying to explain how they worked were shouted down in favor of a vague consensus about the potential.
To me, this sort of exposes the do-what-I-mean mentality of some parts of the business world, and it seems likely it's businesses whose manage
Re: (Score:2)
"...businesses whose management class have outsourced comprehension..."
That's a nice turn of phrase.
Re: This is predictable (Score:5, Insightful)
I still like the characterization of LLMs as highly compressed JPEGs: they contain the main information, but the details have been lost (potentially even altering the main information).
Use with care. But people (including smart people, "decision makers", business leaders etc etc.) seem so incredibly gullible ... It just scares me how much power is given to often shitty "tools".
Re: (Score:2)
This is a bit early in the two-year bell curve that everything follows (everything follows a bell curve; two years is just the average). The falling-out-of-love phase has begun and it's plummeting already? I guess the hype and fear are part of it. I don't know why people expect results so quickly with this tech; then again, people are playing with a chatbot directly themselves, and usually a new fad isn't so widely accessible so quickly.
Re: (Score:2)
The bell curve is a map.
What would it look like if several of the multipliers in the equation that generates it were from the same float variable (let's call it 'trust', just for entertainment's sake), and that variable's value started to go down?
People, I think, can tell you that playing with a chatbot on a webpage doesn't give them the insight to see how it could be used in business except, by default, copying and pasting existing questions into the chatbot when they don't know the answer themselves. And..
AI is turning out bad product! (Score:5, Insightful)
Re: (Score:2)
Quite frankly, the hit/miss ratio is by no means better than the average magic-8-ball.
Is This Going to be the Pattern Now? (Score:3)
"So we developed this new thing--"
HEATHEN! We need sweeping landmark federal legislation at once! Regulate! Seize! Ban! Destroy!
Re:Is This Going to be the Pattern Now? (Score:4, Informative)
Well, if every new thing that gets developed only means that everything's going further and further down the shitter, don't be surprised that this is people's knee-jerk reaction. It's like when MS announces a new feature: the first thing on everyone's mind is "can you turn it off, and if so, how?"
All the public knows... (Score:2)
...is hypemongering and fearmongering
Just about nobody in the non-tech public has a clue
Even among techies, the future is unclear
I see great promise and great peril. No, I don't fear the tech, I fear people who use the tech as a weapon
We need to develop effective defenses
Re: (Score:2)
Against things like Automated Gun/Weapons Platforms, Propaganda Platforms and other Here Already/Soon to be Corporation/Government/Military Automation tech? Absolutely Yes!
There's a good reason for that (Score:4, Insightful)
People can feel when something doesn't work in their interest.
In the case of AI, the immediate effect they're seeing right now is fake news, distorted facts, political manipulation, invented case law, deepfake porn, degraded automated customer services... None of that is of any value to anybody, and that's what AI mostly hits the news about.
And in the longer term, everybody knows AI is coming for their jobs. People are unlikely to trust a technology that will predictably make their lives more miserable soon.
Re: (Score:2)
And in the longer term, everybody knows AI is coming for their jobs.
The rest of your post has merit, but let's get real about this one point here: AI isn't coming for anyone's job, it's the companies that own these AI products who are coming for everyone's jobs. By anthropomorphizing the software product you only contribute to the problem.
Re: (Score:2)
People can feel when something doesn't work in their interest.
In the case of AI, the immediate effect they're seeing right now is fake news, distorted facts, political manipulation, invented case law, deepfake porn, degraded automated customer services... None of that is of any value to anybody, and that's what AI mostly hits the news about.
And in the longer term, everybody knows AI is coming for their jobs. People are unlikely to trust a technology that will predictably make their lives more miserable soon.
Just for the sake of fun, I'll argue that what you say is of no value to anybody actually is valuable to somebody or it wouldn't be getting shoved down our throats so forcefully and repeatedly.
Fake news
Aside from Trump's tendency to pronounce anything he disagrees with, or anything that paints him in an ugly light, fake news, real fake news is used all the time to push ideals that should be abhorrent on a gullible population. It's been that way since the beginning of time, but there are several indicators in America toda
It's not just trust in AI (Score:4, Insightful)
It's just a symptom of a far bigger loss of trust: the loss of trust in pretty much any and all organizations, with corporations at the top of the list. AIs by definition are tied to some corporation that designs, powers, and trains them, and by now I doubt there are many people left who consider corporations anything but a blight on our existence.
Your distrust is problematic (Score:2)
5 years?? WTF more like 1.5 (Score:5, Insightful)
More AI Haiku (Score:1)
Public hates AI
Skynet on everyone's mind
Arnold coming soon
What does trust have to do with AI? (Score:5, Insightful)
AI isn't something you would trust. You wouldn't trust a teenager to run your business, but that doesn't mean teenagers are useless to a business. They can perform many tasks--WITH SUPERVISION. AI is like a teenager. It can do a lot of time-saving stuff, but it does need supervision.
Let me summarize why (Score:2)
Brainwashed lefties: noooooo, you're paranoid. Haha dumb conservatives.
Google: hahahaha yeah, they're so nuts. Anyway, here's Gemini with black nazis.
Re: (Score:2)
*We're* brainwashed? Did you take too much ivermectin, the last time you had COVID?
And enough, already - people are starting to finally get the message that chatbots are NOT Artificial Intelligence, they're nothing more than a very fancy and expensive version of the typeahead on your mobile. And they don't run their output through spellcheckers or grammar checkers.