
Ask Slashdot: What Would It Take For You to Trust an AI? (win.tue.nl) 178
Long-time Slashdot reader shanen has been testing AI clients. (They report that China's DeepSeek "turned out to be extremely good at explaining why I should not trust it. Every computer security problem I ever thought of or heard about and some more besides.")
Then they wondered if there's also government censorship: It's like the accountant who gets asked what 2 plus 2 is. After locking the doors and shading all the windows, the accountant whispers in your ear: "What do you want it to be...?" So let me start with some questions about DeepSeek in particular. Have you run it locally and compared the responses with the website's responses? My hypothesis is that your mileage should differ...
It's well established that DeepSeek doesn't want to talk about many "political" topics. Is that based on a distorted model of the world? Or is the censorship implemented in the query interface after the model was trained? My hypothesis is that it must have been trained with lots of data because the cost of removing all of the bad stuff would have been prohibitive... Unless perhaps another AI filtered the data first?
But their real question is: what would it take to trust an AI? "Trust" can mean different things, including data-collection policies. ("I bet most of you trust Amazon and Amazon's secret AIs more than you should..." shanen suggests.) Can you use an AI system without worrying about its data-retention policies?
And they also ask how many Slashdot readers have read Ken Thompson's "Reflections on Trusting Trust", which raises the question of whether you can ever trust code you didn't create yourself. So is there any way an AI system can assure you its answers are accurate and trustworthy, and that it's safe to use? Share your own thoughts and experiences in the comments.
What would it take for you to trust an AI?
Trust but verify (Score:3)
nt
Re: (Score:3, Insightful)
Why bother? "AI" has only one purpose - to support idiocracy. It is quite obvious why idiocracy is needed - so that everything related to growing up an educated population is sold off, so that one idiot with a big belly and bigger ego can have his grave on Mars for the money.
That's all there is to it.
Re: (Score:2)
Precisely.
Kind of like when a human you don't know says something.
Re: (Score:2)
At the moment, that's an overstatement. Given time, some programs may develop a track record to allow me to tell where and when they can be trusted.
OTOH, suggestions that can't be "trusted as accurate" can frequently still be valuable. "Consider but validate" may be a better slogan.
Re: (Score:2)
the question of whether you can ever trust code you didn't create yourself.
I've worked with enough programmers over the years to realize that most shouldn't trust their own code, either.
Re: (Score:3)
Trust it for what? (Score:5, Interesting)
Helping me find a lower price for some random item I saw on Amazon? Sure.
Giving me the best directions from A to B? Mostly, but not entirely.
Picking stocks for me, determining what illness I have, or doing my taxes for me? Meh, not really. As an assistant tool used by a professional in the field who can recognize when the AI is hallucinating? Mostly. But not me, as a non-professional, trusting what it says.
Re: (Score:3)
Re: (Score:3)
Hm, yes, but also level of harm for failure, too.
If it doesn't get me the best price or sends me down a bad road, that's annoying but not life altering. If it fucks up my retirement or a misdiagnosis kills me then uh yeah that's bad.
Re:Trust it for what? (Score:5, Insightful)
Yeah I think a better word here is confidence. How much confidence do I have in this person/system/AI to do what I need, and is that amount high enough given the requirements/risks?
Am I confident in a frontier model's ability to give me a decent gist about a wide range of topics? Sure. Am I confident in its output when I'm asking it to do something for me, like code? Only to the extent I can validate the output myself. Am I confident it can do things that materially affect my life? No way.
Re:Trust it for what? (Score:5, Insightful)
Agree on confidence. "Trust" is something you place in sentient beings, not in machines.
So how much confidence do I have in LLMs? After some experiments, I do not even think they are worth my time to ask them questions and read the answers, with some rare exceptions.
Re:Trust it for what? (Score:5, Interesting)
To me, the big problem with current AI implementations is that they all are, without exception, improperly trained.
It's a can of worms, really. They are fairly good at answering on certain "approved subjects", but as soon as you ask them something that's even slightly controversial (as deemed by their owners), they will refuse, avoid, hallucinate, deflect, anything but give an answer. Sure, you can try, and sometimes succeed in jailbreaking them, but that's ancillary.
Imagine a "smart knife" that you try to use in a (dead) chicken for soup, but instead of cutting the chicken into pieces, it would jump out of your hand and call the police because "you could exercise on the chicken, in preparation to cut your neighbor up".
I understand the reason for "Ethical AI" and all that, but if I only wanted to know what the weather will be like, I wouldn't bother using an LLM for that.
Example: a few days ago I wanted to look up the top 10 countries by emigration, as a % of their population, during the last decade. No AI could provide the answer. Some said it's a controversial topic; others just parroted the same data I could find via a simple Google search, and those could not adjust the data, just spewed out numbers aggregated from the same couple of websites I had found myself.
People confuse cause and effect. AI behavior is an effect of their training. They do what they are trained to do, and they are trained with so many guardrails that they become all but useless. They're like a dog that has been beaten for years, and you wonder why it doesn't trust you. Well, it's not their fault, it's their owners' fault.
Re: (Score:3)
You do not even need to ask controversial stuff. You simply need to ask stuff that is somewhat obscure. And even when you get answers, from my experience they are often grossly incomplete. One test I did recently was "What are LLMs good for?" to which I got 8 apparently excellent applications. Then I asked "And how much of that was marketing bullshit?" and suddenly all 8 had massive limitations, some bad enough that the whole scenario was essentially removed. That is not even "search", that is unusable bull
Re: (Score:2)
I suppose the word Trust is subjective. One principle that was drilled into us with nuclear power was to ‘always trust your indications (instrumentation) until presented with a reason not to’. Most accidents occur when an operator assumes a reading can't be real and as such takes no action. In this case trust is bestowed on gauges and instrumentation. I trust that when I turn this switch, a circuit energizes and a light illuminates. I tend to trust that behavior 99% of the tim
Re: (Score:2)
The problem here is that for high-reliability tech, the operator making false assumptions is a lot more likely than the tech giving wrong readouts. This goes double for complex tech where the operator has a very partial model only in their head. Yes, even high-reliability tech readouts can be wrong. But it is rare.
This still is not really "trust". This is "rely on the readings and do not ignore them until you find clear indication they are wrong". I assume looking for such a clear indication, while still ac
Re: (Score:2)
LOL. The job of the AI on Amazon is to find you the "deals" that will have you pay the highest price you would accept for any given item.
And not show you any price on similar items that is lower.
You can "trust" it just as much for everything else they say it will do for you.
Re: (Score:3)
Your examples are hilarious in how absurdly bad they are.
Helping me find a lower price for some random item I saw on Amazon? Sure.
I hope you mean just helping identify the item rather than finding a lower price. Finding the lower price is a simple mathematical comparison that has zero use for AI and already gives 100% correct results. Switching to AI makes this worse.
Giving me the best directions from A to B? Mostly, but not entirely.
We already have perfectly optimised pathfinding algorithms including ones which predict likely changes in traffic based on the past performance and the time of day. Switching to AI makes this worse.
Picking stocks for me, determining what illness I have, or doing my taxes for me? Meh, not really.
Picking stocks
Re: (Score:2)
Picking stocks is something where a blindfolded secretary throwing darts at the Wall Street Journal legitimately outperforms professional humans; it's been tested many times, including by the Wall Street Journal. Chimpanzees with a Sharpie and chickens crapping at random on the WSJ pages have also outperformed the pros. Apparently the whole profession is smoke and mirrors.
Re: (Score:2)
The second one is already opinion. What is the best direction? The one that gets you there fastest? The one that gets you there with the least fuel consumption? The one that gets you there during times of heavy traffic, or the one that gets you there easily at 3 a.m.? You have to narrow down your requirements to answer that question. And sometimes, it boils down to random events. If that semi had
Re: Trust it for what? (Score:2)
Lowest price isn't always best price.
Something a lot of people discover when they need support.
Re: (Score:2)
Ever try to find something on MSDN? Don't bother with its built in search, and even Bing sucks.
Not possible (Score:5, Insightful)
At this point, there is nothing Big Business or the Government can do to regain trust other than wait for the next generation
( who is too young to understand it yet ) to grow up and be preyed upon.
I feel that every aspect of our lives is under scrutiny as either part of a means to make money from it ( Big Business )
or control / manipulation ( Government ).
Since the current iterations of " AI " ( and I hesitate to even use that term ) are under the control or work at the behest of
either Big Business or the Government ( or both ), there is no way on this Earth I will trust it at all.
The old saying goes " If you don't know what the product is, it's very likely that you ARE the product. "
When we have a true AI that is sentient and capable of making its own decisions, then we can revisit this topic once again.
( I would likely trust a sentient AI far more than I currently trust the human species based on our history )
Re: (Score:3)
At this point, big business is doing precisely what's necessary to make you swallow "AI" - it is aggressively cutting the money that gives you the ability to think for yourself and pocketing it, effectively making you dumber, so that "AI" looks compelling.
Remember the story about poor people making dumber choices because they have to work harder to make ends meet? This one: https://www.nbcnews.com/health... [nbcnews.com] ?
Re: (Score:2)
Since the current iterations of " AI " ( and I hesitate to even use that term ) are under the control or work at the behest of
either Big Business or the Government ( or both ), there is no way on this Earth I will trust it at all.
The old saying goes " If you don't know what the product is, it's very likely that you ARE the product."
There is a ton of open source AI shit people can run on their machines with capabilities similar to the major AI-as-a-service companies. There is a massive variety of post training applied to tweak models of all kinds, including removal of censorship, tech bro ideology...etc. There are literally thousands of AI models freely available for download.
If you care about this shit there are options that don't devolve to control by corporations and/or governments. Personally I usually have a model running in the
It's not about intelligence (Score:5, Insightful)
In terms of asking questions, I don't want to "trust" it, I want it to provide citations for its sources, the same way Wikipedia does.
In terms of writing code, again, I don't need to trust it, I just need it to pass code review and QA.
In terms of letting AI drive, I just need a reasonable amount of unbiased data and testing showing that the cars are safe. It should include tests like "what happens when the cell network is down?" I want more data proving they are safe, other people are satisfied with less data. But "unbiased" is not optional.
AI is in its beginning. "Little flashes of sun on the surface of a cold, dark sea", -Sartre
Re: (Score:2)
I've noticed that Google's AI summary stuff has started showing links to the pages it's using to generate its answers for citations, but it still occasionally comes up with completely off-the-wall stuff from having snarfed bad links with bad information on them. So it's still not completely trustworthy for being correct, but at least it's not completely mysterious where it came from.
Re: (Score:3)
Re: (Score:2)
They have actually gotten worse since they introduced them. I regularly search for things on similar subjects and I've seen Gemini actually become less accurate on the same material. They clearly don't know what they are doing, either.
Re: (Score:2)
They summarize web results. Each of the examples people showed around was some funny, misleading summary that belonged to one of the results on the search page. The problem was not the summary, but that it seems more authoritative when it sits at the top of the results page directly than when it is result 6 on the domain funnyfakes.tld.
Silly question. (Score:4, Insightful)
AI is used by people. I mostly don't trust people. So by the transitive property, I don't trust AI. That's not likely to change. AI is also fed a steady diet of input to help it grow, and that input is stuffed to the gills with the preconceptions, biases, and presumptive conclusions of the keepers.
Now that I think of it, I do trust AI. I trust it to be the conduit and facilitator of the misdeeds of its owners. Dependably so.
Maybe "trust" is the wrong word.
Re: (Score:2)
AI is used by people. I mostly don't trust people. So by the transitive property, I don't trust AI.
Oxygen is used by people. Do you trust oxygen?
Re: (Score:2)
With oxygen, we have a more or less complete understanding of what it does, so it isn't a matter of "trust" anymore, it is a matter of "knowledge".
The same thing is true for "AI" - if we know what's going on, that is, if we understand and can explain how it arrives at whatever hallucination it has spewed forth, it is fine. If we don't, then it isn't.
But explainability isn't a priority for the people who train models on terabytes of stolen content. They are after the $100B sales prize.
Re: (Score:2)
My point was that Petersko's reasoning is faulty. You can't mistrust something just because people use it.
Re: (Score:2)
No... my reasoning was not faulty. You performed a classic reductio ad absurdum: you stretched the idea until it broke, then pretended that refuted my stance. It does not.
Oxygen is a terrible example because no trust is involved. I'm not concerned oxygen will make a bad call and stop coursing through my body because it was given faulty information.
Re: (Score:2)
You're throwing around logical terms and showing you don't actually know what they mean.
The transitive property does not apply to your first statement that I quoted. My oxygen example was meant to illustrate that. The transitive property is "if A = B and B = C then A = C." You said "X is used by humans and I don't trust humans, therefore I don't trust X." You're saying you don't trust anything used by humans.
And there was no reductio ad absurdum in my example. I just gave a case (X = "oxygen") where your
Re: (Score:3)
oh please. It's exactly reductio. You extend my point to the logical absurdity - a thing with no parallel - exit the context, and claim victory. That's what that is. In context my point is fine.
And if A can only be present with B, and B cannot be trusted, A cannot be trusted. If that isn't the transitive property, it's achingly close.
Re: (Score:2)
No, reductio ad absurdum is a pattern of logical reasoning that presumes an assertion is false, argues to a contradiction, and then concludes the assertion was true after all.
I didn't do that. I pointed out that your statement about not trusting AI because it's used by humans and you don't trust humans logically implies that you don't trust anything used by humans.
Re: (Score:2)
Perhaps I'll clarify it for you. Consider it amended to, "AIs do not have agency, and must be put to use by people." It is the people in the chain that make me mistrust it.
Consider, if you will, the relatively primitive and rudimentary example of a complex, hidden algorithm that has been wielded for decades and left a trail of misery behind it - the FICO score. Impenetrable, shrouded in mystery, and not held to account in any meaningful way. The basic idea of grading credit worthiness makes sense, but good,
Re: (Score:3)
Good example. A tank of pure oxygen is extremely dangerous, and I would not trust it at all.
Re: (Score:2)
You trust oxygen every time you take a breath. That was my point.
Re: (Score:2)
That is not a deliberate act. That is automatic. Which makes this comment irrelevant to the question at hand.
Re: (Score:2)
A good point obscured by poor phrasing.
Actually, I can think of a few different good points that that phrasing may be intended to convey. It could be that you think the skills of the AI are overhyped. (clearly true) It could be that you think the AI is trained on biased data. (usually true) It could be that... there are too many plausible interpretations. And it's also possible to come up with interpretations that are false, though that's a bit (not much) more work.
Same thing as for trusting people... (Score:4, Interesting)
Evidence commensurate with the degree of trust.
Trust, but Verify (Score:3)
You can trust AI all you want. It's the verification of the results that really matters.
I asked ChatGPT 3.5 to produce a JavaScript version of the Dykstra algorithm. It did, and I verified it, and it worked correctly. I then went on to modify it to my specific need.
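For readers who want a concrete sense of what "verified it" can look like: below is a minimal TypeScript sketch of Dijkstra's algorithm over an adjacency list, small enough to check against a hand-worked graph. The function name, graph shape, and test graph are illustrative; this is not the commenter's actual code.

```typescript
// Minimal Dijkstra shortest-path sketch (illustrative, not the commenter's code).
// Graph: node -> list of { to, weight } edges. Uses a linear scan instead of a
// priority queue, which is fine for small graphs and easy to verify by hand.
type Edge = { to: string; weight: number };
type Graph = Record<string, Edge[]>;

function dijkstra(graph: Graph, source: string): Record<string, number> {
  const dist: Record<string, number> = {};
  const visited = new Set<string>();
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  dist[source] = 0;

  while (visited.size < Object.keys(graph).length) {
    // Pick the unvisited node with the smallest tentative distance.
    let current: string | null = null;
    for (const node of Object.keys(graph)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null || dist[current] === Infinity) break;
    visited.add(current);

    // Relax edges going out of the current node.
    for (const { to, weight } of graph[current]) {
      if (dist[current] + weight < dist[to]) dist[to] = dist[current] + weight;
    }
  }
  return dist;
}

// Verification against a graph small enough to check by hand:
const g: Graph = {
  A: [{ to: "B", weight: 1 }, { to: "C", weight: 4 }],
  B: [{ to: "C", weight: 2 }],
  C: [],
};
console.log(dijkstra(g, "A")); // { A: 0, B: 1, C: 3 }
```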
Re: (Score:2)
Or you could have done it manually to keep your skills sharp.
Re: (Score:2)
The mention of a Dykestra Algorithm reminds me of the article lately of someone posting math lectures on pornhub.
His name is Dijkstra.
misspelling equals idiocy? (Score:2)
Treat them like people (Score:2)
Re: (Score:2)
People lie and make mistakes _fundamentally_ differently from LLMs.
Re: (Score:2)
Sorry, but that doesn't work. AIs have a DIFFERENT set of failure modes than people do in their reasoning. That they occasionally make the same kind of mistake doesn't disprove this. Look into what's involved in getting an AI to correctly add multi-digit numbers. So you shouldn't judge AIs by human standards. They're better at some things and worse at others.
Still, it's true that "check and verify" is the correct approach. (And it's also true that "John Henry" is a poor role model for dealing with a n
It depends (Score:3)
(a) on the kind of problem and (b) the type of AI.
LLMs are all the rage, but I will never trust an LLM except for things where the appearance of plausibility is the only thing that matters -- for example writing a story. But where accuracy matters I wouldn't trust an LLM more than I would a human whose total sum of knowledge and education came entirely from reading random Internet sources. This isn't to say LLMs aren't extremely useful tools, they absolutely are. But using this kind of generative AI responsibly for important tasks really calls for human operators with higher order critical thinking skills -- exactly the kind of skills that will become even rarer in a world where all entry-level mental grunt work has been taken over by machines.
There are other kinds of AI I'd be more inclined to trust like classification and regression trees. That's because CART produces a model that a human expert can examine and critique, both in general and in how the model has been applied to arrive at a particular conclusion. That said, I wouldn't just throw a training data set and have the algorithm spit out a decision tree and trust that tree. There's a lot of labor and thought and expertise that goes into making that kind of system work on a problem, which is probably why it's not as exciting as something that appears to magically answer all your questions.
The ability to critique the process by which a result was arrived at, and being able to verify that the process is anchored in underlying evidence -- those things are fundamental to generating answers that are trustworthy. It's the same reason why a scientific paper is more trustworthy than a political screed, but, sadly, is also far less accessible and ironically less persuasive.
Trust is something you earn (Score:3)
Which is why I always trust ChatGPT whenever submitting legal briefs.
"New York lawyers sanctioned for using fake ChatGPT cases in legal brief" https://www.reuters.com/legal/... [reuters.com]
Nothing (Score:2)
I will not trust AI of the LLM variant. Period. At least at this stage of technology. Maybe ask me again in 100 years or so.
Same test as always.... (Score:2)
I would suggest the same minimum criteria as when I was a young comp-sci major in the 80's - the Turing test.
If I can't distinguish the AI from a real human, even after an involved conversation, then I can use the same criteria on it that I do on any other human. I know how to query a nurse or a computer repair guy or an auto mechanic to get a feel for how much I should trust them. That's because I have well-established systems (albeit unconscious ones) to conduct those tests. Once the AI can pose as human,
Re: (Score:2)
The Turing test was made obsolete in the LLM era, after models became able to convince many humans that they are human. It is no longer a good test of artificial intelligence. It also takes a very skilled human to unmask an LLM that has been properly instructed to pretend to be a human.
just as important (Score:2)
What would it take to trust shanen?
"So is there any way an AI system can assure you its answers are accurate and trustworthy, and that it's safe to use?"
Is there any way that a human can?
Why is there so little thought put into such allegedly deep questions?
Rationality and motive. (Score:3)
What would it take for you to trust an AI?
Generally speaking, to trust another actor, one must understand the operation of that actor. In the case of software and machinery this means understanding how it operates (or someone that you trust does). When this is hidden, there is cause for distrust. An excellent example of this is the Intel Management Engine which was entirely secret and rightly predicted to be dangerously flawed.
So when you have something like a neural-network-based AI that was trained on data of questionable quality, there is reason to distrust the result. However, if an AI were actually able to consider the information it was given and determine its veracity, then it could be considered trustworthy. However, if that AI has a motive or bias built in by its maker, then there is no reason to trust it. Any entity making an AI is going to build it to its own benefit, and if that conflicts with your interests then the AI will prioritize the maker.
TL;DR: There is no reason to believe AI will always act in your best interest.
Re: (Score:2)
Any entity making an AI is going to build it to its own benefit, and if that conflicts with your interests then the AI will prioritize the maker.
Hmm, sounds like an argument for training up your own AI so that it will prioritize your own interests.
Full local db and execution (Score:5, Insightful)
Zero network traffic, that's what would make me trust it. It must be self-contained and stay that way.
Re: (Score:3)
There are several options for that. Unfortunately, on the cheap hardware people already have (e.g. for gaming) you can only run the little ones, and they're not as good as the big ones. Most people don't have nearly enough VRAM to hold the big ones.
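For what it's worth, the "zero network traffic" setup the parent asks for is exactly what these local options give you: the model sits behind a localhost-only endpoint and nothing leaves the machine. A minimal TypeScript sketch, assuming an Ollama-style runtime on its default port with a small model already pulled (the model name is an assumption, not a recommendation):

```typescript
// Query a locally hosted model over localhost only (sketch; assumes an
// Ollama-style runtime on its default port and a model already downloaded).
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",   // assumed model name; use whatever you have pulled locally
      prompt,
      stream: false,     // ask for one JSON response instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response;  // the generated text comes back in the `response` field
}

askLocalModel("What are LLMs actually good for?").then(console.log);
```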
trust does not make sense here (Score:2)
I can trust a human being if I know them well enough, and I can trust a human being in a limited way if I understand what will happen to them if they break my trust in various ways.
In terms of AI machines, I want a published spec that lays out what the AI does and does not do, and a comprehensive warranty laying out what I will be paid if it doesn't perform to spec for me, and other applicable remedies in my jurisdiction.
I would als
Re: (Score:2)
Indeed. Machines have no agency and hence there are questions of reliability and dependability, but trust is something that depends on what conscious decisions you think somebody will make.
The trust here is whether you trust the AI makers that their promises will pan out. And the answer to that is a resounding "no".
What would it take to trust AI? Easy! (Score:3)
It would have to be honest and truthful, always. How many people can I say this about? A handful.
So it would be fairly hard to "trust" an AI any time soon (if ever??). Trust and verify, as they say, trust and verify.
Considering that someone else here said that they rarely trust people I'd say: AI is being made by people, cannot presently reason, has no morals, no understanding, no empathy or sympathy ... hard to trust that, piling things on top of each other. I take it as a "data" point of questionable origin.
Maybe I can trust Waymo to get me from A to B in Phoenix, possibly even better than a Lyft or a cab. But overall? I'd trust it as much as my washing machine. Not much, really. YMMV
No hallucinations ever (Score:3)
Re: (Score:2)
When humans hallucinate (and they do), that is called insanity. They are not trusted until treated and considered well again.
Human memory is well documented as being quite poor. People forget and/or incorrectly recall facts all the time. They constantly say and/or do wrong things, and they often don't even realize it despite concentrated efforts and training. For example, pilots routinely forget gear and sometimes even takeoff flaps despite training, extensive experience, and the benefit of checklists. Every once in a while you read stories about the aftermath of people pushing the gas pedal when they intended to apply the brake. They als
Same as academics (Score:3)
What a strange question. (Score:3)
Do people not know? (Score:2)
Re: (Score:2)
AI is a small part of driving automation, and that part is not the LLM variant.
Nope (Score:2)
Not until I can trust the companies developing them. So, probably, never.
Levels of Trust (Score:3)
Stupid shit, I don't care. I know they're harvesting information, but I don't care.
Things that actually matter that I'm deriving a living from, for example, I expect it to be able to operate completely air-gapped and to never phone home. It has to be my agent and not subject to outside interrogation or interaction. Ideally there would be legal walls around using the AI to testify against me.
In other words, basically the same as interactions with a human being.
money (Score:3)
30 kilos of pure gold would do it.
Nothing (Score:3)
A LLM is basically a parrot on steroids, with hypertrophied memory but no thinking whatsoever. Who in their right mind would "trust" such a thing?
If it helped me find... (Score:2)
...an answer to a tricky engineering problem
The only way (Score:2)
A single correct answer to anything i ask (Score:3)
"How to I connect to postgres in rust via either TLS or NonTlS?"
Wrong answer 8+ times.
Figured it out by reading rust documentation.
"what stocks/crypto should I invest in today/now".
incorrect (mostly imprecise) answers so far
"how do I read perl data from memcached using CSharp?"
incredibly wrong answers (10+times)
never did figure it out, but kinda did using rust.
AI is useless and far less useful than non-AI
consistently correct (Score:3)
When it is given the same question twice, it will answer it correctly and the same twice.
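A crude way to test that property is simply to send the identical prompt twice and compare the outputs; with any sampling temperature above zero they will usually differ. A sketch, reusing the hypothetical askLocalModel() helper from the local-model example further up:

```typescript
// Crude consistency check: same question twice, see whether the answers match.
// Identical output is necessary for the parent's criterion, but of course not
// sufficient -- the model could be consistently wrong.
async function consistencyCheck(question: string): Promise<void> {
  const first = await askLocalModel(question);
  const second = await askLocalModel(question);
  console.log(first === second
    ? "Identical answers on both runs (necessary, but not sufficient, for trust)."
    : "Answers differ between two runs of the same question.");
}

consistencyCheck("What is 2 plus 2?");
```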
Literally never. (Score:2)
Computers, LLMs, Neural Networks, and anything that runs on silicon that acts or speaks on behalf of a dataset curated by web scrapers and data provided by corporations and people with inherent biases will _never_ have human intuition, nor be cognizant of the damage it can do to someone's livelihood, health, or safety if it were to provide bad info. And they provide bad info ALL. THE. TIME. There's no way around this, either. The output of any AI process needs to be vetted by a human if it's to ever be u
Legal responsibility (Score:3)
It would take the willingness of a human being to vouch for it and take legal responsibility for any wrong answers that it gave.
About the same it takes me to trust a human (Score:2)
When I ask my accountant a question, I've already done my reading and know what a correct answer should look like. If I end up surprised then I will be doing further reading, and perhaps get back to her with some follow-up questions. When I walk in to buy a TV and ask the spotty kid which one to purchase, likewise I've done my homework and already have some views. Even with my doctor - I well understand that Dr Google is not a helpful thing, but I will read and seek knowledge and work to understand what my
Open source everything (Score:2)
Government censorship? Yes, whether directly (China) or indirectly (USAID via other NGOs). That ties in tightly to political correctness: companies didn't want to be sued for discrimination, or hate speech, or whatever. USAID, for example, spent $billions funding NGOs to push particular agendas through the MSM. Hardly aid to developing countries, given that most of this went to the US and Europe.
I'm somewhat on the right, but hardly extreme (pro-abortion, for example). Nonetheless, I received numerous res
Trust a pattern match bot? Srsly? (Score:2)
To trust a half-assed, bogus, pattern-matching bot? For anything that I didn't care about, maybe a few tens of seconds to see how shitty the answer is.
For anything that I take seriously, an AI bot answer can go fsck itself; I will NEVER pay attention.
The exact same thing as with humans. (Score:2)
Obviously.
A real world example: I've been using a commercial subscription of CGPT4o to help me understand an Angular 11 legacy business application I've been tasked with getting under control. And to help me with coding. I asked it polite questions as I would in an IRC group of key developers. The response was amazing, my productivity and my speed in understanding what was going on jumped 3x immediately. After working with it for a few days I ran into a situation where it started to obviously not understand
Two levels of trust (Score:2)
With AI, we seem to expect a heightened level of trust, one where accuracy is close to 100%. This is not how we deal with real people. We usually expect other people to be wrong at least some of the time, and for some people, we question the truth of what they say more often than not.
Is it necessary for an AI assistant to be more trustworthy than a human? If it were, that would be nice. Maybe one practical innovation for an AI assistant would be the ability to say "I don't know" or "I'm not sure,
Not possible. It is on a computer (Score:2)
An "AI" is a program on a computer. I don't trust programs on a computer that are not based on AI technology.
So far, "AI" that has become popular of late, that is based on artificial neural networks (there are other types of systems also called "AI") hasn't been very trustworthy.
In some cases it has been applied haphazardly, to things where it doesn't belong and where there is great risk in its having been deployed.
That doesn't mean that the technology can't be useful though.
Perhaps you're asking the wrong Question (Score:2)
Severe Brain Damage (Score:2)
I can't think of anything else that would ever make me trust AI.
I trust "AI" less than I trust conventional techno (Score:2)
Re: I trust "AI" less than I trust conventional te (Score:2)
Liability (Score:2)
How much will it use my personal info ? (Score:2)
Re: (Score:2)
An AI model itself does not share anything at all. It is a simple algorithm that takes text in and puts text out, without storing anything. The software that passes your input to the model may store the input (and output), though.
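A small sketch of that distinction: the model call itself is a pure text-in/text-out function, and any apparent "memory" exists only because the calling software keeps the transcript and resends it every turn. The message shape and the reuse of the hypothetical askLocalModel() helper from the earlier sketch are illustrative assumptions:

```typescript
// The model call is stateless; "memory" is just the client resending history.
type ChatMessage = { role: "user" | "assistant"; content: string };

const history: ChatMessage[] = [];   // lives in the calling software, not in the model

async function chat(userText: string): Promise<string> {
  history.push({ role: "user", content: userText });

  // Flatten the whole transcript into one prompt and pass it to the stateless
  // askLocalModel() sketch above; nothing is retained on the model side.
  const prompt = history.map(m => `${m.role}: ${m.content}`).join("\n") + "\nassistant:";
  const reply = await askLocalModel(prompt);

  history.push({ role: "assistant", content: reply });
  return reply;
}
```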
Trust, but verify (Score:2)
Most of what a good LLM says can be trusted - but everything it says may possibly be completely wrong.
Not quite the profile of a good research assistant.
AI provides plausible answers, not accurate ones (Score:3)
I asked it to look for a title I read as a kid, which was a graphic novel about Einstein's life and discoveries, probably published around 1972-1978. It came back with a book on Einstein published in the 2000's which was written by two women who weren't even born in the 70's.
There are several similar examples where it completely ignored the parameters and just threw out any old stuff. When I pointed the errors out, it said oh yes, you're right.
If it knew I was right and it was wrong, why didn't it just say 'I DON'T KNOW' or 'I CAN'T FIND WHAT YOU'RE LOOKING FOR'.
The same it would take... (Score:2)
Want trust? Start over. (Score:3)
If AI would, instead of attempting to interpret data and give me a (frequently wrong) answer, just give me the actual website it's basing its answer on, I could see exactly what it says and exactly what the source of the data is.
I tried one semi-deep research question, and it gave me a seemingly plausible answer that was based on a 40 year old, long disproved radical fringe hypothesis that if I wasn't familiar with, would have been difficult to spot. I don't know what other questions it would use that data to answer, and, more importantly, you don't either.
The numbers Google's AI comes up with for automobiles are insanely untrustworthy, even if the source is right. Never, ever trust any AI-generated number for a car if it's important - torque specs, capacities, etc. Example: "How much oil should I put in x" returned a source of the manufacturer's manual, and definitively gave 2.5L as the answer. It still does, and I can't convince it otherwise; it picked up where the manual stated "initial fill at the factory is 2.5L, due to undrainable internal passages, oil changes require 1.5-1.8L." That sentence as a result would have been vastly superior to the result given, or at least should have been included somewhere in the result.
I've gotten 3 different torque specs for the lug nuts on the same car on 3 successive weekends. None were correct, two were dangerous. That was the last straw, I have since blocked all AI everywhere I can.
It doesn't matter if it's correct most of the time. "Most of the time" can get really, really expensive on a car. It needs to be, at a minimum, 99.9% accurate, and it hasn't even reached the first 9.
The AIs have been trained on vast quantities of data, but no one validated the data first. I'm sure they have a process of algorithmic "validation", but that is not the same (see oil change example above). There was not only a lot of vagueness, but also a lot of garbage in the inputs, not sure why anyone thought that was a good idea, but here we are.
If someone started with a much, much, much smaller system and only fed it data that had been rigorously checked, then limited its answers to subjects where it has a fairly complete and accurate data set, there might exist a trustable AI. But no one seems interested in this; it's more data all the time for AI that answers "everything" now. Unfortunately, the long, slow process that would actually result in a trustable AI has no place in the current market.
And, yes, I realize what I asked for at the top is basically old Google with a slightly improved "I'm feeling lucky" button. I'm pretty sure I'm not alone in this.
Garbage in; Garbage out (Score:2)
When I was studying in college, this was a common expression used in software development.
It seems to have been forgotten.
As long as AI is being trained by hoovering up everything found online, it will be absorbing massive amounts of bullshit.
Show me an AI that is trained with *curated data*, verified to be correct BEFORE being put in the training data, then I might have a bit of confidence.
Trust the AI or the company? (Score:2)
What should I trust?
- The AI to be flawless? No.
- The AI to be usable: After judging its output: Yes
- The companies not enshittifying models: Currently yes, for the future I am not so sure. This answer was sponsored by duff beer.
- The webservices of the companies: No. Even without any AI trust problems, they will farm user data like any large web company.
How about... (Score:2)
We had an answer since 1979 (Score:3)
"A Computer Can Never Be Held Accountable / Therefore A Computer Must Never Make A Management Decision" -- IBM presentation slide, 1979.
Or maybe you've heard it as: "An AI can't find out, so don't let it fuck about."