Ask Slashdot: What Are Some Good AI Regulations? (slashdot.org) 225
Longtime Slashdot reader Okian Warrior writes: There's been a lot of discussion about regulating AI in the news recently, including Sam Altman going before a Senate committee begging for regulation. So far I've seen only calls for regulation, but no suggestions for what those regulations should be. Since Slashdot is largely populated with experts in various fields (software, medicine, law, etc.), maybe we should begin this discussion. And note that if we don't create the reasonable rules, Congress (mostly 80-year-old white men with conflicts of interest) will do it for us.
What are some good AI regulation suggestions?
I'll start: A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis. If an AI suggests a diagnosis or medical treatment, there must be buy-in from a human who believes the decision is correct, and who would be held responsible in the same manner as a doctor not using AI. The AI must be a tool used by, and not a substitute for, human decisions. This would avoid problems with humans ignoring their responsibility, relying on the software, and causing harm through negligence. Doctors can use AI to (for example) diagnose cancer, but it will be the doctor's diagnosis and not the AI's.
What other suggestions do people have?
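A minimal sketch of what the submitter's sign-off rule could look like in software; the Diagnosis record and its field names are illustrative assumptions, not anything from the submission:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Diagnosis:
    patient_id: str
    ai_suggestion: str
    responsible_physician: Optional[str] = None  # liability attaches here, never to the AI

    def sign_off(self, physician_id: str) -> None:
        # No diagnosis is actionable until a named human takes responsibility for it.
        self.responsible_physician = physician_id

dx = Diagnosis(patient_id="p-001", ai_suggestion="stage II melanoma")
dx.sign_off("dr_smith")  # it is now Dr. Smith's diagnosis, not the AI's
```

The point is structural: the record has no way to become final without a named human attached to it.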
No weapons (Score:2)
No AI system should be used to apply deadly force, EVER.
LK
Re:No weapons (Score:5, Insightful)
No AI system should be used to apply deadly force, EVER.
LK
Argument against: Bad faith actors absolutely *WILL* use AI systems for deadly-force weapons. How do good faith actors and/or our allies combat those bad faith actors without AI weapons of their own?
Not that I disagree in principle. It's just that we're hardly throwing out regulations for the entirety of mankind that will be followed by the entirety of mankind. We have to think of the fact that every regulation we put up will be broken by someone, somewhere. A hobbyist tossing some bucks at a better sex-bot? Meh. A terrorist with the resources developing an aimbot for his auto-fire rail-gun? A bit more concerning.
Re: (Score:2)
The best of these are "expert systems" that can accomplish certain tasks, but they have no will or motivation of their own. At most they are HAL 9000, which was simply following programming. HAL was killing the humans that were an impediment to completing its mission. They are not SkyNet, self-aware and deciding to kill all humans because: reasons.
We need to build the rules to fit the tools or we will create situations
Re:No weapons (Score:4, Interesting)
> TinyStories: How Small Can Language Models Be and Still Speak Coherent English?
https://arxiv.org/abs/2305.077... [arxiv.org]
Re: (Score:2)
Can we stop treating Artificial Intelligence like it's a movie protagonist/villain?
I'm curious where you got that from my response? Automation like current AI is faster and better than humans at the specific tasks it's given to automate. I'm assuming that will include weapons systems. How do slower humans combat faster automation on the battlefield? With their own automation. No need to treat the AI as a movie protagonist, and the knee-jerk scolding of any person pointing out the obvious is starting to get tedious in the extreme.
Re: (Score:2)
The problem we keep running into is people anthropomorphize AI and entrepreneurs consistently assign human-level responsibility to expert systems. For what it's worth, expert systems are just one facet of AI. And our treatment of the technology has been just as problematic as the flashier systems with natural language processing. More than likely expert systems will go the way of the Dodo, as prompting is the preferred interaction in complex systems. It turns out training and inference is really hard to do,
Re: (Score:2)
The problem we keep running into is people anthropomorphize AI and entrepreneurs consistently assign human level responsibility to expert systems
Yeah, don't anthropomorphize the AI. They hate it when you do that.
Re: (Score:2)
We already have all that today. It is only existing international treaties that prevent it from escalating.
That is, in systems from guided missiles to smart bullets, the only human intervention is to click a button or press a trigger.
Guided missiles can pick their targets based on battlefield imagery, such as targeting the source of fire or heat signatures. They can intercept missiles, including hypersonic missiles if fired in time. The computers do all the flight calculations, steer during flight, even o
Re: (Score:2)
How do good faith actors and/or our allies combat those bad faith actors without AI weapons of their own?
Nuclear weapons and EMP bombs.
Re:No weapons (Score:4, Insightful)
No AI system should be used to apply deadly force, EVER.
LK
What does this even mean? There are currently systems which use image and radar recognition to attempt to kill people. That is to say almost any modern surface-to-air missile, many smart shells which target armored vehicles. The US is already using AI techniques to identify targeting data in the mass of intelligence data they get. A number of anti-personnel weapons like drones and so on. All of these are "AI" techniques just as much as deep learning and modern large language models are.
There is no way that we are going back to eliminate "intelligent" weapons systems like those that exist now. I don't see a clear definition of what you mean by AI so that we can ban it unless you choose to limit it to "deep learning" or something similar in which case people will simply work around your ban with a slight technology shift.
N.B. it would be lovely to have generic, widespread arms and war reduction. I think we're a long way from it, and it may well be completely impossible, but it's definitely worth talking about, if nothing else so that we can say we tried and explain what's wrong with the idea. However, if that's what you want then let's be clear, because AI is the smallest part of achieving the elimination of war.
Re: (Score:2)
No AI system should be used to apply deadly force, EVER.
LK
Interesting idea, but I think it's hard to pin down. For example, if we use an object recognition system to put a targeting reticle on a drone display, is that using AI or not? It's not pulling the trigger, but it's strongly influencing where the weapon will go.
Suppose I use a robot dog to lug around an M2. The dog uses AI to walk and maintain stability. For that matter, it uses an AI model to keep the gun aimed on a target while firing. Is that using AI?
Humans Required (Score:2)
What if the friendly fire incident rate of AI weapons is lower than that of human troops?
Simple - use a system where the AI gives targets to the human and the human decides whether to pull the trigger. The rule should be that somewhere a human is in control to prevent the AI from going rogue due to error or deliberate intent. Even if you believe that an AI system were somehow unhackable and bugfree, do you trust whichever future politician is in control of your military to never issue illegal orders that human troops would refuse to obey but that an AI system would blindly follow? Giving a sing
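As a sketch of that human-in-the-loop rule (the function names and confidence threshold below are assumptions for illustration only), the release path might look like:

```python
def request_engagement(target_id: str, ai_confidence: float, human_confirm) -> bool:
    """AI may only nominate targets; a human must approve every weapons release."""
    if ai_confidence < 0.99:              # arbitrary nomination threshold, for illustration
        return False                      # weak matches never even reach the operator
    approved = human_confirm(target_id)   # the human decision, made and logged every time
    print(f"target={target_id} confidence={ai_confidence:.2f} approved={approved}")
    return approved

# Example: an operator callback that always refuses.
request_engagement("t-42", 0.995, human_confirm=lambda tid: False)
```

The design point is that there is no code path from nomination to release that bypasses the human callback.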
Re: (Score:2)
You're thinking of today's AI. At some point, and nobody knows when this will happen, AI will be more reliable at classifying targets than humans are.
Delusions of Grandeur (Score:2)
AI weapons will be superior to humans...AI will be better at target selection...AI will follow the rules of engagement...AI will not suffer confusion...AI will be configurable...
What planet do you live on? Because, here on Earth, AI can't even make out soldiers hiding in a cardboard box. [washingtonpost.com] You're delusional if you think it's going to be better than humans at warfare.
Re: No weapons (Score:5, Insightful)
Oh fuck off. There were zero security concerns for Russia. It's all in Putin's head. He used anything as an excuse to try and reconstruct the Soviet Union. Look at Georgia. An excuse to invade and now he occupies part of that country.
For Ukraine, he was pissed the people wanted democracy and freedom over tyranny and suppression. When his lap dog fled after the protests, he conveniently ignored the Budapest Memorandum [harvard.edu] which Russia signed and which explicitly stated no party would do anything to thwart Ukraine's sovereignty.
Here's a question to ask the dictator: if Russia is such a great friend to its neighbors, why are its neighbors rushing to join NATO, a _defensive_ organization? If Russia is so great, why does Russia have to invade its neighbors to show them what a great friend it is? Why is it that Russia recently threatened Finland [businessinsider.com], who joined NATO, by saying Finland's actions encroach on Russia's national interests? What national interest does Russia have in Finland? More dead bodies?
As for your bullshit about diplomatic policy not being one of avoidance, considering the bending over backwards the West has done since 1991 to try and bring Russia up to a second-world country, the huge amount of investments made by countries and companies, the reduction in nuclear weapons, the working together on projects, your crap doesn't float.
Re: No weapons (Score:3)
Do you people not know what loitering munitions are? AI has already been deployed in multiple weapon systems for decades. Cruise missile technology, which has existed for over 8 decades, has incorporated AI.
Fire-and-forget weapons use AI for guidance, identification, and flight patterns.
Militaries have already been using AI to simulate war games and plan scenarios for decades.
In any case, good luck convincing the military to let AI control nukes when they are still running them on Win95 OS. Militaries are b
Just another tool (Score:5, Insightful)
This regulation is already in place.
Define Harm (Score:2)
The same regulations that apply to other tools, such as wrenches and pocket calculators: do not use it to harm others.
The problem with your simple requirement is how you define "harm". Today there are some people who clearly believe that hearing an idea they disagree with is harmful, hence the rise of cancel culture. Even if we all agreed on a definition of harm with AI tools, there is the added complication of whether the user could reasonably foresee the harm, because these tools are much more complex than a simple wrench. If you tell an AI car to drive somewhere and it runs over and kills someone you have used an
Recognizing superior-to-human AI (Score:3)
-don't let an AI have final say on a medical diagnosis,
-don't let an AI have inventor/creator's rights to intellectual/artistic work product
seem to come from a position that is still in denial that within a short time, some AIs will be (quantifiably in some cases) superior to humans performing the same task.
We need to think, and legislate, further ahead than that, or current suggestions will be obsolete and harmful.
For example, some domains:
1) Preliminary researc
Only a few (Score:5, Insightful)
The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.
If you are interacting with a chatbot it should clearly represent itself as such.
If a chatbot causes injury resulting in a lawsuit, the entity deploying the chatbot should be held liable. If it was developed by a third party, they should not be held liable as a first party. The company deploying the chatbot can sue the developers separately if it didn't perform according to some contract.
Re: (Score:2)
Chatbots cannot cause injury, since the only thing they can do is chat. Chatting cannot cause injury.
Just like everything else in life, individuals are culpable for their actions. If we do not do that, then when someone loses their court case because the AI lawyer didn't make a strong enough argument to the judge, what will the user do? Sue the AI lawyer company! If we can get a chatbot to say "Shoot the president," the culpability must lie with whoever holds the gun. To say otherwise will make it compl
Re: (Score:3)
Chatbots cannot cause injury, since the only thing they can do is chat. Chatting cannot cause injury.
Of course chatting can cause injury. A doctor's office using a chatbot to diagnose an illness and prescribe medication could cause an injury. This is why context of the chatbot's use is important. Anyone using ChatGPT to perform a diagnosis should have no expectation that it represents valid medical advice. No different than using WebMD to self-diagnose. But if a hospital were to deploy a chatbot to perform diagnosis, they should be held to the same standards as if a human doctor was doing it.
then when someone loses their court case because the AI lawyer didn't make a strong enough argument to the judge, what will the user do? Sue the AI lawyer company!
We already hav
Re: (Score:2)
The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.
This is news to me. When was that decided?
Defaults (Score:3)
For a work to be protected by copyright, it has to be produced directly by a human. That is, a human has to be directly in charge of the content and composition of a work for it to be granted copyright protection. Images generated by algorithms are not copyrightable. At least, nobody has successfully sued someone else over reusing a purely algorithmically generated work.
Ditto with patents, though the patent office is soliciting advice from the public on possibly changing those rules.
Let Me Google That For You (Score:3)
https://www.copyright.gov/comp... [copyright.gov]
Chapter 300:21
"Similarly, the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author"
Here's a lawyer specializing in IP law explaining further.
https://www.youtube.com/watch?... [youtube.com]
Re: (Score:2)
The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.
Even though there is some recent precedent, this still must be made far more clear.
I personally believe an AI should never be able to copyright or patent material, but a human using AI as a tool during the creation process should still be able to copyright or patent the results. Whether or not those results are original enough to be copyrightable or novel and nonobvious enough to be patentable shouldn't depend on what tools were used to create the work in question.
Re: (Score:2)
If the chatbot represents itself as, for example, a resource for all sorts of information including medical opinion, then it could cause harm by providing a misdiagnosis, or providing an incorrect procedure for first aid.
Currently, the LLM services lean toward trying to represent their output as just a pass-through of information (if they might get in trouble for the content) or as a transformative solution with categorically new value, depending on whether they are afraid of liability or trying to sell th
Re: (Score:2)
First two okay, but if a chatbot presents itself as a chatbot then it can't cause injury (unless you hook it up to a robot). The moron who followed a chatbot's advice should be liable, not the developer.
One of the things which is clearly within range of current "AI"* is a system designed to work on and fool mentally ill and probably even just unaware people into doing bad things.
* and I really wish we wouldn't use the term - do we mean general artificial intelligence or do we mean deep learning? Large language models? Neural network based technologies? What specifically?
Everyone should be able to opt-out (Score:2)
Re: (Score:2)
Everybody should be able to choose to opt out: as long as AI technologies are not totally safe (and explaining why an AI produces a given result is still far from being clear), people should be able to say that they don't want to be considered as guinea pigs.
Great idea for all current deep learning things. Not enough, since they need to be allowed to opt in and out of parts of this, but definitely people should be allowed to stop their own data, and anything derived from it, being used for learning.
Re:Everyone should be able to opt-out (Score:4, Interesting)
Everybody should be able to choose to opt out: as long as AI technologies are not totally safe (and explaining why an AI produces a given result is still far from being clear), people should be able to say that they don't want to be considered as guinea pigs.
I'm unclear what you're thinking of. Do you mean something like you should be able to opt out of having an AI diagnose illnesses? Or have an AI process a loan application? I don't know about making that mandatory. For example, in the '70s, I had a choice of buying full-serve or self-serve gas. There might have even been a requirement that stations provide both. Drivers overwhelmingly voted with their wallets that they preferred self-serve, and now I don't know of any full-serve stations near me. I'm glad we didn
Re: (Score:2)
What you mean by opting out is unclear.
Do you simply mean someone should always be made aware when they are interacting with an AI instead of a human? That makes sense and is a common example of regulation suggested in the industry.
Do you mean you should be able to opt out of having your interaction with an AI used to further train future AI models? That is also something I have heard proposed and it seems reasonable enough.
Do you mean you should be able to opt out of having any information about yourself u
AI can't have privacy rights. (Score:2)
Or any "rights" in general.
Re: (Score:2)
If it ended up as an actual general artificial intelligence and we actually create something that has emotions, suffering and so on, that's not justifiable. We'd want to be very, very careful, since we don't really understand what that means and we certainly don't know how bound up emotions and consciousness are with "intelligence", but the first, fundamental thing we should definitely clearly put out there is the right not to be tortured, whatever that means.
Re: (Score:2)
At that point what is the difference between humans and AI if they have similar "rights"? If AI seeks to self-preserve, which presumably would be a fundamental "right", and that would mean they may potentially want to kill all humans that may be seeking to destroy it, are they given this right?
Should AI also be able to "secretly" communicate with other AI, train other AI in such a manner, that humans can't see what's happening?
AI should simply be seen always as a tool, just like term "robot" is etymologi
Re: (Score:2)
If AI seeks to self-preserve, which presumably would be a fundamental "right", and that would mean they may potentially want to kill all humans that may be seeking to destroy it, are they given this right?
Are humans given that right?
Re: (Score:2)
Most societies state that people have the right of self-defense. In many states in the US, the right to self-defense is so strong to even include protecting one's property, as if those pieces of property are an extension of the person seeking self-defense.
No black box (Score:3)
All AI must have the ability to be debugged. Require every output to include links to all data elements that went into the generation of the output, along with all values of all parameters that were weighed in the generation of the output.
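A rough sketch of what such a provenance requirement could mean for runtime inputs (the function and field names are hypothetical); note this only covers the inputs to a single query, not attribution over training data, which the replies below take up:

```python
import json

def answer_with_provenance(query: str, model: str, temperature: float,
                           source_docs: list, output: str) -> str:
    # Bundle the output with every runtime input that produced it,
    # so the generation can be audited after the fact.
    return json.dumps({
        "output": output,
        "provenance": {
            "query": query,
            "model": model,
            "temperature": temperature,
            "source_documents": source_docs,  # links to the data elements used
        },
    }, indent=2)
```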
Re: No black box (Score:2)
This is not really possible. Every input has the potential to affect the model in some way, and a decently trained model will adjust very slightly to new inputs. But it's not really feasible to determine how much a given input would change the model had the training data been presented in a different order. The only real way to determine that is to train the model using the training data in every possible order, recording its state after every input and comparing them all at the end.
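A quick illustration of why that exhaustive approach is intractable: the number of possible training orders is n!, which is astronomical even for tiny datasets.

```python
import math

# log10(n!) via lgamma avoids computing the astronomically large factorial itself.
for n in (10, 100, 1000):
    digits = int(math.lgamma(n + 1) / math.log(10)) + 1
    print(f"{n} training examples -> n! orderings, a number with about {digits} digits")
```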
Re: (Score:2)
All AI must have the ability to be debugged. Require every output to include links to all data elements that went into the generation of the output, along with all values of all parameters that were weighed in the generation of the output.
Of course any AI has the ability to be debugged. Do you mean the end-user should be able to debug it? That is pretty unreasonable for any application. Do you mean regulators should be allowed to debug/audit the models? Do you mean the results should be interpretable and/or explainable? It's unclear what you are asking for here.
self-driving cars must have source code available (Score:2)
self-driving cars must have source code and more (logs, map data, etc.) available to:
The courts
The DOT
any lawyer involved in a court case with a self-driving car (both civil and criminal court)
any renter, rider, owner, etc. of a self-driving car cannot be forced to sign a EULA that gives up any of the listed rights or waives the right to go to court.
Re: (Score:2)
I wouldn't support requiring source or model weight data as that would make industry fight the regulation on the not unreasonable assertion that unregulated overseas competitors would steal it.
I would support requiring all crash data including software version, model version, sensor logs including video, commanded behaviors, etc. be provided to the NTSB immediately after an injury crash where autonomous behaviors were active at or in the 60 seconds prior to the crash.
I'd like the NTSB to make that data avai
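A sketch of what that mandatory crash-report payload might contain; the record and field names are invented for illustration, not NTSB terminology:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutonomyCrashReport:
    software_version: str
    model_version: str
    sensor_log_uri: str                               # video and telemetry covering the prior 60 seconds
    commanded_behaviors: List[str] = field(default_factory=list)
    autonomy_active: bool = True                      # at, or within 60 s before, the crash
```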
Re: (Score:2)
I wouldn't support requiring source or model weight data as that would make industry fight the regulation on the not unreasonable assertion that unregulated overseas competitors would steal it.
Courts have lots of experience dealing with private data and not leaking it. The requirement suggested is not that the courts have to get it by default, just that it should "be available" to them, which could mean that the company would have to provably have an archive of the source code in the country and provide limited access to independent experts at their own site as required.
If you don't have full source code (and build infrastructure) access when needed then absolutely anything, including hidden inst
Re: (Score:2)
the airplane manufacturers do turn over the code / have it undergo BIG TIME testing / review.
self-driving cars should have something like that, and the manufacturers should not be able to cover up bad code in the case of an injury / death crash
Admit to censorship of input data (Score:3)
https://market-ticker.org/akcs... [market-ticker.org]
If the AI is going to exclude "socially unacceptable" data when formulating its response, simply admit it.
Also the AI should never pass itself off as human, even if it does look like #3, #6, or #8. We all know how much trouble that causes. ;-)
Re: (Score:2)
Input data needs to be all properly licensed. No skipping the FBI warnings for AI.
Fast and loose with term "censorship" (Score:2)
There will be something "socially unacceptable" to almost anybody you meet.
It's the very job of the AI trainer(s) to curate training data. GIGO. Do you want bots spewing variations of Adolf speeches (without asking)? That's silly.
Since there is no single Objective Truth of the Universe, the curators will have to make judgement calls, and that will probably reflect their personal perspective of the world.
Calling it "censorship" is a slant. Repent!
Re: (Score:2)
Y'all are sad and pathetic.
There are no good answers. (Score:5, Interesting)
OpenAI and the other commercial players simply want regulations for two reasons. First, because compliance will keep others and FOSS/community projects out of their space. That is the whole of it; whatever else they say, they are lying.
Second, they want to get some CDA-230-like loophole that will excuse them and their products from having to meet any prior social or legal standard of behavior. I.e., they want a liability shield for when their "black box" does something like libel/slander/discrimination against this week's favorite protected class/fraud/harassment/etc.
Why do we know this? Because we don't even know what AI is; today (as in the last few years) it means LLMs and stable diffusion, but tomorrow it could be something else. Second, their appeals for regulation are to vague ideas like "civil rights". Well gee, everyone likes civil rights and nobody supports violating civil rights, right up until you start getting into the details about whose rights, when, what rights, which actors, and suddenly there is very little agreement at all. Again, strongly indicative of trying to build support and momentum for action before the facts are in and really litigated in the court of public opinion.
The 4 laws of AI: (Score:2)
Re: (Score:3)
Does the Robot flip the lever to kill one person instead of twelve? Does it do nothing and let twelve die? What if the one person is a genius who will cure cancer and save millions of lives, or a terminally ill patient about to die? Are all humans equally valuable? Even if you could make that determination, could you do it fast enough?
My father developed a formula for this: V = D*N*T (Value = Degree * Number * Time). The problem is quantizing the de
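Taking the formula at face value, a toy worked example (all inputs made up) shows how it would rank the trolley cases above:

```python
def value(degree: float, number: int, years: float) -> float:
    """V = D * N * T: degree of benefit (0..1) * people affected * time in years."""
    return degree * number * years

# One terminally ill patient with months left vs. twelve healthy people:
print(value(degree=1.0, number=1, years=0.25))    # 0.25
print(value(degree=1.0, number=12, years=40.0))   # 480.0
```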
Same as any software regulations? (Score:2)
If we don't allow a computer program to do something, AI shouldn't be allowed to do it either?
Re: (Score:2)
If we don't allow a computer program to do something, AI shouldn't be allowed to do it either?
Offhand I can't think of anything that computer programs specifically are prohibited from doing.
Re: (Score:2)
If we don't allow a computer program to do something, AI shouldn't be allowed to do it either?
Offhand I can't think of anything that computer programs specifically are prohibited from doing.
Seems like the same answer already applies to AI regulations, then. Prohibitions aside, any software used in a manner that can kill people has certain regulations to ensure it operates correctly. Aviation software is tested to ensure it operates in the manner it should in all environments (when the 737 MAX bypassed this testing, we saw the results). Military software is tested to ensure the software only fires the weapon when and how the human operator specifies. I feel like the same sort of testing wou
I could have sworn we had 3 Laws... (Score:2)
Simple (Score:3)
1) An AI may not injure a human being or, through inaction, allow a human being to come to harm.
2) An AI must obey orders given it by human beings except where such orders would conflict with the First Law.
3) An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
While a bit of a joke, it could be considered a starting point.
As far as legal liability, all should be borne by the entity deploying it AND the people in charge of the entity. You can't claim to be indispensable to the company to collect a big paycheck, then scurry off when liability rears its head. You want the big money, you've got to have skin in the game. However, it wouldn't be fair to consider previous assets, so the only thing at risk is the value of your current (entire) compensation package.
Although you might have to be careful... if your previous company is sued for actions that happened under your leadership you might be liable if they can prove your decisions created a dangerous situation.
Re: (Score:2)
Someone who's read the stories recently can probably suggest a more bullet-proof set.
Re: (Score:2)
1) An AI may not injure a human being or, through inaction, allow a human being to come to harm.
2) An AI must obey orders given it by human beings except where such orders would conflict with the First Law.
3) An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
Great concept, but a bit non-specific in terms of software requirements.
devote 10% of profits to food automation (Score:2)
Privacy & Data restrictions (Score:5, Insightful)
America needs an equivalent to the EU GDPR. If you don't know, it's an extensive privacy law that protects consumers and among other things lets you request your personal information be removed from a company's systems.
Anti-Trust law needs to be enforced so there is actual competition in this space (and everywhere else in our economy while we're at it).
Its use by police should be limited or outright banned. It's too easy to use it as a kind of digital drug-sniffing dog where the AI will find probable cause where none really exists and result in search warrants being issued that shouldn't be. Especially with our trigger-happy cops shooting people left and right, they shouldn't have access to a tool so easy to abuse and so difficult to determine how it actually came to its decisions.
Re: (Score:2)
> Especially with our trigger happy cops shooting people left and right
I don't think they are any more trigger happy than any other cops with guns in the world. Part of the problem is that they are taught to fight back when instead they should run away (assuming other people are not in harm's way).
Experiment with "tag spray" guns that spray suspects and their cars with a scent dogs can follow. Then the well-armed "Follow Team" goes after the runner. Or even "transmitter darts", miniature radio beacons in nee
Re: (Score:2)
America needs an equivalent to the EU GDPR. If you don't know, it's an extensive privacy law that protects consumers and among other things lets you request your personal information be removed from a company's systems.
Anti-Trust law needs to be enforced so there is actual competition in this space (and everywhere else in our economy while we're at it).
Its use by police should be limited or outright banned. It's too easy to use it as a kind of digital drug-sniffing dog where the AI will find probable cause where none really exists and result in search warrants being issued that shouldn't be. Especially with our trigger-happy cops shooting people left and right, they shouldn't have access to a tool so easy to abuse and so difficult to determine how it actually came to its decisions.
Agreed, but not really AI specific, this is an across the board regulation that would be good to have.
Companies that replace workers with AI (Score:2)
Initial thoughts (Score:2)
For me, the overriding themes are:
Both People and AI will make mistakes.
People should be in the loop to catch those mistakes.
When a person or an AI makes a mistake on a job the producer has asserted it can do, the liability is on both the producer and the people responsible for reviewing its work.
Data regarding mistakes needs to flow back to the people overseeing the AI, the producers, and in some cases also to regulators.
I might consider a HIPAA carve-out to allow AIs to train on medical data.
For AIs makin
AI essential (Score:2)
What AI and Automation should be doing:
Everything that is unsafe or unsanitary, where letting humans do it requires a level of care that just doesn't happen.
So food planting, monitoring, and harvesting of plants and fruits could probably be made a lot more efficient if they can be turned into closed systems where no humans, animals or pests can get at the food until it's ready to be harvested. AIs could make it so that food is grown exactly as needed and reduce the waste of water and micronutrients.
AI should
Caveat Emptor... (Score:2)
Caveat Emptor, and NOTHING ELSE.
Counterpoint (Score:2)
Re: Counterpoint (Score:2)
If an AI-recommended treatment leads to a bad call resulting in patient harm, the manufacturer of the AI (the seller) bears full responsibility if the litigant can find an expert to support an alternative treatment.
Liability (Score:3)
The AI operator bears the responsibility to demonstrate its AI has acted in accordance with all regulations pertaining to liability. No "black box" defense. If a person is weeded out of employment by an AI - for example - the burden of proof falls on the employer to prove the mechanism by which the AI filters candidates. Without clear evidence to the contrary (by making algorithms public during discovery), it should be assumed the civil rights of the litigant have been violated.
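A minimal sketch of what a "no black box defense" rule could require in practice: every automated screening decision carries the concrete criteria that produced it, so the mechanism can be produced during discovery. The names and the threshold here are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScreeningDecision:
    candidate_id: str
    passed: bool
    criteria_applied: List[str] = field(default_factory=list)  # producible in discovery

def screen(candidate_id: str, years_experience: float) -> ScreeningDecision:
    decision = ScreeningDecision(candidate_id, passed=years_experience >= 3)
    decision.criteria_applied.append(
        f"years_experience={years_experience} vs. threshold 3 (>= 3 passes)"
    )
    return decision
```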
We don't need any at all. (Score:2, Flamebait)
We only need laws that apply to humans, and an understanding that a human has to be held responsible for actions of software. Any special laws applying to AI miss the point. AI is a problem to the extent that it's used to do problematic things, and otherwise it isn't. It doesn't matter if software is used to collect someone's PII and manipulate them, or drone strike someone, or whatever else bad you can imagine; at some level, a human did that. Hold them responsible, and you're done.
We don't need to d
Thou shalt not create nekkid pictures (Score:2)
And videos. ...
And text.
Ask Slashdot: What are some good software regulati (Score:4, Insightful)
Why don’t we take this approach with software? After all, we know, today, about all the harm that has been done by web applications (intentional or unintentional). Private data has been hacked or leaked, Social media has manipulated public opinion, productivity has been lost. Surely anyone seeking to stand up a website or mobile app should have to prove beyond a shadow of a doubt that no harm can come from their app, either intentionally or unintentionally through bugs in the software. Wait, why is this unreasonable? It’s only what people are asking of AI!
Force disclosure of AI use? (Score:3)
Is AI good or bad? I have no clue. However, I know the crap spewed out by ChatGPT is riddled with errors, but looks close to legit. I fear the near future where shitty outsourcing firms will "write your app" using ChatGPT... and it'll work, but either at half the efficiency or with vulnerabilities left in. That's honestly fine. There's a market for garbage, and if the customer knows this and wants to accept the risk, it's not for me to tell them not to. The same applies to written materials. If a lawyer uses generative AI to write a contract and there's a loophole, the customer should be able to sue both for the malpractice and for the failure to disclose that it was partially written by an AI.
AIs have no clue what they're doing. They're fancy pattern matchers. They don't know if code is right or wrong...only that it resembles code that is right or wrong. They never will know. They're far from "intelligent." Like I said above, there's a market for that and it's perfectly fine in my opinion, just so long as it's fully disclosed. It's like a counterfeit item. I should be able to buy a LEGO-compatible "Harry Porter" "HawkWarts" building set on Ali Express (IMHO). However, it should be very ILLEGAL to sell it as a genuine LEGO set.
To me, everything I've seen generated by ChatGPT is the AliExpress knock off version. It needs to be disclosed that it's low-end work done by AI.
Re: (Score:2)
Right now, saying an AI did it is a selling point, even when it is a false assertion. Every moron chatbot is now "Our new AI assistant Eric will help you."
Guidelines. Not necessarily rules. (Score:2)
- If I don't talk to an AI, I don't want an AI talking to me.
- No robocalls.
- If you sense racism bubbling, abort.
- Unless I say, "Talk dirty to me", do not talk dirty to me.
Let's just ask it (Score:2)
ChatGPT 2024!!!
The Preservation of Outdated Occupations Act (POOA)
Section 1: Title
This Act shall be known as the "Preservation of Outdated Occupations Act" or "POOA".
Section 2: Findings and Purpose
(a) Findings:
(1) Artificial Intelligence (AI) technology has the potential to greatly improve the efficiency and effectiveness of many industries.
(2) However, the rapid development of AI technology may also result in the loss of certain jobs, leaving many workers displaced and unemplo
A few quick thoughts (Score:2)
A few quick ideas come to mind...
AI should not be used for the following:
Inflict or direct harm (physical or otherwise) on someone
Invade the privacy of someone.
Imitate the likeness of someone without their consent (likeness defined as to where it can be confused as the person in question)
Used to generate content that is claimed to be factual but is actually to the contrary. (think false information, propaganda, revenge, or the term "fake news").
Used to act as multiple individuals for the purpose to get gai
nope (Score:2)
The medical example defeats a big value of AI: letting the average joe get an idea of what his problem is without having to go to a doctor. Requiring a doctor buy-in is just guild behavior guaranteeing income for medical mega-firms. Millions if not billions of online searches every day are about medical issues. Do we now regulate that you can't do a search without some doctor blessing it?
Don't forget weapons systems (Score:2)
DOD has regulations for autonomous weapons systems as well.
See DOD 3000.09 [defense.gov].
Responsibility belongs to competent override (Score:2)
If you have a self-driving car, and there are ways for you to override it (steering wheel, brakes, accelerator...), and you are licensed to drive the car, with requirements for the car to be managed by a licensed driver, then you should be responsible for mistakes the car's AI might make.
However if the company removes all ways to override the AI, then that company should be responsible for any issues.
So if I am operating my self-driving car and I get into an accident, I would be liable, because I had the oppor
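The parent's rule reduces to a simple conditional; this toy encoding (mine, for illustration only) captures it:

```python
def liable_party(override_available: bool, licensed_driver_present: bool) -> str:
    if override_available and licensed_driver_present:
        return "driver"         # they had the opportunity to intervene
    return "manufacturer"       # no override means the company owns the outcome
```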
Reasonable, but redundant (Score:2)
"What are some good AI regulation suggestions? I'll start: A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis."
My reaction to that is 1) Well, of course, and 2) There is absolutely no need for an "AI regulation" to enforce that. Healthcare providers are already held responsible for what they do. They can be sued, they can be disciplined by the medical board, their licenses can be revoked, etc. So it's kind of pointless to suggest that we need further
Laws are a bad idea (Score:3)
The only laws I would support at the moment are labeling requirements so that people are never confused about whether they are interacting with a real person or a machine. I also think relevant, consistent labels should be required in any remote calls from machine-led resource discovery so that websites and information services know they are being accessed by bots and not real people.
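As a sketch of that labeling idea for bot traffic, a client could declare itself in its request headers; the "AI-Agent" header below is a hypothetical convention, not an existing standard, and the bot name and contact address are made up:

```python
import requests

resp = requests.get(
    "https://example.com/api/data",
    headers={
        "User-Agent": "ExampleBot/1.0 (automated; contact: ops@example.com)",
        "AI-Agent": "true",  # hypothetical disclosure label, as proposed above
    },
)
```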
The basic problem I see WRT laws and AI is that effectively it is criminalizing tooling and knowledge. If an AI responds to my queries about how to make bombs is that any different than reading it from the Anarchist cookbook or a chemistry package able to manipulate nitrogen bonds?
Similarly, in terms of IPR, if people have no rights to the fruits of AI, does this mean they also have no rights to the fruits of design automation tools or artistic filters in a drawing package? I think fundamentally IPR is broken and is in need of fixing rather than carve-outs explicitly created for "AI".
For example if a future ChatGPT can provide patentable solutions then such solutions should no longer be patentable because "a person having ordinary skill in the art" would also have access to such tools and would also be able to get the same answer out of such tools. The existence of better generally available tooling must automatically raise the bar.
Also, notions of laws to address bias and AI behavior are in my view counterproductive. "Prejudice" is often useful in contexts where complete information is not possible. Classic examples are calculating risk associated with medical treatments, or insurance calculations of risk from statistical evidence, because individual outcomes are not a priori knowable or practical to determine. There are already legal systems regulating the practice of prejudice. There is no reason to apply specific laws to AI. Perhaps laws could be improved to address explainability/obfuscation to prevent black-box excuses for implementing disallowed prejudice.
I also don't believe legislation can protect the world from wildly unpredictable outcomes of superhuman AGI. I believe not developing such technology in the first place is the only way to do that, which in real-world terms means never even approaching the capability. This would imply something draconian like imposing a super Wassenaar Arrangement on steroids globally, replacing "not for export" with "not by anyone, period". Not something I would ever support or believe to be at all credible.
Re: Non-exclusion / facts not feelings (Score:2)
AI should respect the law, regardless of facts or feelings. It can't look at a fact that white guys behave in some manner and use that as justification for civil rights violations.
Re: (Score:2)
Congress (mostly 80-year old white men with conflicts of interest) will do it for us.
A prime example is the OP's statement about "old white guys" -- this is a perfect example of bias that AI needs to avoid. AI should consider facts, not feelings.
Old — fact. As the word is used, most congresspeople are old.
White — fact, same basis.
Men — fact again.
Train your model on the actions of the existing members of congress and it's going to mostly learn to do the same shit a bunch of old white men paid not to solve problems (or in fact often to cause them) by the wealthy are doing now. Fact.
Re: (Score:3)
Yeah, no doubt. Nobody likes to feel called out, even when they deserve it.
Who deserves what? Care to explain who is being "called out" and for what? An individual is not a group and a group is not an individual.
The fact that congress is a bunch of old white men is relevant because society is not.
Are you saying people of a certain age, sex and ethnicity are best represented by individuals with common appearance and traits? If not what are you saying? Why are worthless tribal demarcations at all relevant?
Perhaps I made a mistake voting for Obama because he doesn't look like me? I really should have voted based on superficial traits rather than against super haw
Re: (Score:2)
Don't kill all humans now? No / Yes
Re: (Score:2)
Asimov was brilliant, but he is currently unavailable due to suffering from an advanced case of death.
Also... when he considered the issue of problematic AI he was A) generally doing mystery stories and B) dealing with truly intelligent machines with perfect logic trying to execute flawed programming.
There's nothing there that really sells him as the guy who would have been the best choice at drafting regulations for what we are currently calling 'AI'.