
Ask Slashdot: What Are Some Good AI Regulations? (slashdot.org) 225

Longtime Slashdot reader Okian Warrior writes: There's been a lot of discussion about regulating AI in the news recently, including Sam Altman going before a Senate committee begging for regulation. So far I've seen only calls for regulation, but no suggestions on what those regulations should be. Since Slashdot is largely populated with experts in various fields (software, medicine, law, etc.), maybe we should begin this discussion. And note that if we don't create reasonable rules, Congress (mostly 80-year-old white men with conflicts of interest) will do it for us.

What are some good AI regulation suggestions?

I'll start: A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis. If an AI suggests a diagnosis or medical treatment, there must be buy-in from a human who believes the decision is correct, and who would be held responsible in the same manner as a doctor not using AI. The AI must be a tool used by, and not a substitute for, human decisions. This would avoid problems with humans ignoring their responsibility, relying on the software, and causing harm through negligence. Doctors can use AI to (for example) diagnose cancer, but it will be the doctor's diagnosis and not the AI's.

What other suggestions do people have?

This discussion has been archived. No new comments can be posted.

  • No AI system should be used to apply deadly force, EVER.

    LK

    • Re:No weapons (Score:5, Insightful)

      by nightflameauto ( 6607976 ) on Wednesday May 17, 2023 @08:59AM (#63528505)

      No AI system should be used to apply deadly force, EVER.

      LK

      Argument against: Bad faith actors absolutely *WILL* use AI systems for deadly force grade weapons. How do good faith actors and/or our allies combat those bad faith actors without AI weapons of their own?

      Not that I disagree in principle. It's just that we're hardly writing regulations for the entirety of mankind that the entirety of mankind will follow. We have to accept that every regulation we put up will be broken by someone, somewhere. A hobbyist tossing some bucks at a better sex-bot? Meh. A terrorist with the resources to develop an aimbot for his auto-fire rail-gun? A bit more concerning.

      • Can we stop treating Artificial Intelligence like it's a movie protagonist/villain?

        The best of these are "expert systems" that can accomplish certain tasks, but they have no will or motivation of their own. At most they are HAL 9000, which was simply following its programming. HAL was killing the humans who were an impediment to completing its mission. They are not SkyNet, self-aware and deciding to kill all humans because: reasons.

        We need to build the rules to fit the tools or we will create situations
        • Re:No weapons (Score:4, Interesting)

          by Visarga ( 1071662 ) on Wednesday May 17, 2023 @09:39AM (#63528673)
          babyAI has already been created by its loving parents, GPT-3.5 and GPT-4. In this paper GPT-3.5 was used to create the training set: 2 million short stories at the level of a 4-year-old. GPT-4 was used for evaluation. The model is 1000x smaller, just 10-30M weights, yet it can speak fluent English and even do story-based reasoning.

          > TinyStories: How Small Can Language Models Be and Still Speak Coherent English?

          https://arxiv.org/abs/2305.077... [arxiv.org]
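
          For a rough sense of what "10-30M weights" means architecturally, here is a back-of-the-envelope parameter count for a small decoder-only transformer. The configuration below is illustrative only, not the paper's actual setup:

          # Rough parameter count for a small decoder-only transformer.
          # Illustrative numbers; not the TinyStories paper's exact configuration.
          vocab_size = 10_000   # assumed small vocabulary
          d_model    = 512      # hidden / embedding size
          n_layers   = 4        # transformer blocks

          embedding = vocab_size * d_model          # token embedding table
          per_layer = 12 * d_model ** 2             # attention (Q,K,V,O) + 4x-wide MLP
          total = embedding + n_layers * per_layer

          print(f"~{total / 1e6:.1f}M parameters")  # ~17.7M, inside the 10-30M range
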
        • Can we stop treating Artificial Intelligence like it's a movie protagonist/villain?

          I'm curious where you got that from my response? Automation like current AI is faster and better than humans at the specific tasks it's given. I'm assuming that will include weapons systems. How do slower humans combat faster automation on the battlefield? With their own automation. No need to treat the AI as a movie protagonist, and the knee-jerk scolding of anyone pointing out the obvious is starting to get tedious in the extreme.

        • The problem we keep running into is people anthropomorphize AI and entrepreneurs consistently assign human level responsibility to expert systems. For what it's worth, expert systems are just one facet of AI. And our treatment of the technology has been just as problematic as with the flashier systems with natural language processing. More than likely expert systems will go the way of the Dodo, as prompting is the preferred interaction in complex systems. It turns out training and inference is really hard to do,

          • The problem we keep running into is people anthropomorphize AI and entrepreneurs consistently assign human level responsibility to expert systems

            Yeah, don't anthropomorphize the AI. They hate it when you do that.

      • We already have all that today. It is only existing international treaties that prevent it from escalating.

        That is, in systems from guided missiles to smart bullets, the only human intervention is to click a button or press a trigger.

        Guided missiles can pick their targets based on battlefield imagery, such as targeting the source of fire or heat signatures. They can intercept missiles, including hypersonic missiles if fired in time. The computers do all the flight calculations, steer during flight, even o

      • How do good faith actors and/or our allies combat those bad faith actors without AI weapons of their own?

        Nuclear weapons and EMP bombs.

    • Re:No weapons (Score:4, Insightful)

      by AleRunner ( 4556245 ) on Wednesday May 17, 2023 @09:26AM (#63528611)

      No AI system should be used to apply deadly force, EVER.

      LK

      What does this even mean? There are currently systems which use image and radar recognition to attempt to kill people: almost any modern surface-to-air missile, many smart shells which target armored vehicles, a number of anti-personnel weapons like drones, and so on. The US is already using AI techniques to identify targeting data in the mass of intelligence data it gets. All of these are "AI" techniques just as much as deep learning and modern large language models are.

      There is no way that we are going back to eliminate "intelligent" weapons systems like those that exist now. I don't see a clear definition of what you mean by AI so that we can ban it unless you choose to limit it to "deep learning" or something similar in which case people will simply work around your ban with a slight technology shift.

      N.B. it would be lovely to have generic, widespread arms and war reduction. I think we're a long way from it, and it may well be completely impossible, but it's definitely worth talking about, if nothing else so that we can say we tried and explain what's wrong with the idea. However, if that's what you want, then let's be clear about it, because AI is the smallest part of achieving the elimination of war.

    • No AI system should be used to apply deadly force, EVER.

      LK

      Interesting idea, but I think hard to pin down. For example, if we use an object recognition system to put a targeting reticle on a drone display, is that using AI or not? It's not pulling the trigger but it's strongly influencing where the weapon will go.

      Suppose I use a robot dog to lug around an M2. The dog uses AI to walk and maintain stability. For that matter, it uses an AI model to keep the gun aimed at a target while firing. Is that using AI?

  • Just another tool (Score:5, Insightful)

    by SciCom Luke ( 2739317 ) on Wednesday May 17, 2023 @08:07AM (#63528335)
    The same regulations that apply to other tools, such as wrenches and pocket calculators: do not use them to harm others.
    This regulation is already in place.
    • Comment removed (Score:4, Interesting)

      by account_deleted ( 4530225 ) on Wednesday May 17, 2023 @09:00AM (#63528513)
      Comment removed based on user account deletion
      • Realistically, what you can hope for is to guide AI a bit, not to stop it, unless you convince everyone else to do the same. More practical would be to monitor AI models deployed to large user bases.
    • The same regulations that apply to other tools, such as wrenches and pocket calculators: do not use them to harm others.

      The problem with your simple requirement is how do you define "harm"? Today there are some people who clearly believe that hearing an idea they disagree with is harmful, hence the rise of cancel culture. Even if we all agreed on a definition of harm, with AI tools there is the added complication of whether the user could reasonably foresee the harm, because these tools are much more complex than a simple wrench. If you tell an AI car to drive somewhere and it runs over and kills someone you have used an

    • Many of the suggested regulations above, e.g.:
      -don't let an AI have final say on a medical diagnosis,
      -don't let an AI have inventor/creator's rights to intellectual/artistic work product
      seem to come from a position that is still in denial that within a short time, some AIs will be (quantifiably in some cases) superior to humans performing the same task.
      We need to think, and legislate, further ahead than that, or current suggestions will be obsolete and harmful.
      For example, some domains:
      1) Preliminary researc
  • Only a few (Score:5, Insightful)

    by JBMcB ( 73720 ) on Wednesday May 17, 2023 @08:10AM (#63528345)

    The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.

    If you are interacting with a chatbot it should clearly represent itself as such.

    If a chatbot causes injury resulting in a lawsuit, the entity deploying the chatbot should be held liable. If it was developed by a third party, they should not be held liable as a first party. The company deploying the chatbot can sue the developers separately if it didn't perform according to some contract.

    • by MobyDisk ( 75490 )

      Chatbots cannot cause injury, since the only thing they can do is chat. Chatting cannot cause injury.

      Just like everything else in life, individuals are culpable for their actions. If we don't hold to that, then when someone loses their court case because the AI lawyer didn't make a strong enough argument to the judge, what will the user do? Sue the AI lawyer company! If we can get a chatbot to say "Shoot the president," the culpability must lie with whoever holds the gun. To say otherwise will make it compl

      • by ranton ( 36917 )

        Chatbots cannot cause injury, since the only thing they can do is chat. Chatting cannot cause injury.

        Of course chatting can cause injury. A doctor's office using a chatbot to diagnose an illness and prescribe medication could cause an injury. This is why context of the chatbot's use is important. Anyone using ChatGPT to perform a diagnosis should have no expectation that it represents valid medical advice. No different than using WebMD to self-diagnose. But if a hospital were to deploy a chatbot to perform diagnosis, they should be held to the same standards as if a human doctor was doing it.

        then when someone loses their court case because the AI lawyer didn't make a strong enough argument to the judge, what will the user do? Sue the AI lawyer company!

        We already hav

    • by nasch ( 598556 )

      The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.

      This is news to me. When was that decided?

      • For a work to be protected by copyright, it has to be produced directly by a human. That is, a human has to be directly in charge of the content and composition of a work for it to be granted copyright protection. Images generated by algorithms are not copyrightable. At least, nobody has successfully sued someone else over reusing a purely algorithmically generated work.

        Ditto with patents, though the patent office is soliciting advice from the public on possibly changing those rules.

    • by ranton ( 36917 )

      The current regime of rules saying AI cannot create copyrightable or patentable material should be maintained.

      Even though there is some recent precedent, this still must be made far more clear.

      I personally believe an AI should never be able to copyright or patent material, but a human using AI as a tool during the creation process should still be able to copyright or patent the results. Whether or not those results are original enough to be copyrightable or novel and nonobvious enough to be patentable shouldn't depend on what tools were used to create the work in question.

  • Everybody should be able to choose to opt-out: as long as AI technologies are not totally safe (and explaining why an AI produces a given result is still far from being clear), people should be able to tell that they don't want to be considered as guinea pigs.
    • How will we know? Any legislation concerning ML or AI must be mind-numbingly simple to be at all enforceable. My suggestion: all art, writing, diagnoses, hardware, software, music, plane schedules, or whatever the product is, gets tagged as being wholly or partly a product of computer manipulation. This lets the user know there is a high probability of error and that the output is not fact-checked, and lets other AIs know the source should not be relied on as an input. One of the unsolved problems with AI is the inform
    • Everybody should be able to choose to opt-out: as long as AI technologies are not totally safe (and explaining why an AI produces a given result is still far from being clear), people should be able to tell that they don't want to be considered as guinea pigs.

      Great idea for all current deep learning things. Not enough, since they need to be allowed to opt in and out of parts of this, but definitely people should be allowed to stop their own data, and anything derived from it, from being used for training.

    • by Stan92057 ( 737634 ) on Wednesday May 17, 2023 @09:51AM (#63528737)
      I think it should be opt-in, not opt-out, as it's been shown so many times that even the way to opt out is often hidden or buried deep. I also think all AI should identify as AI and not as human.
    • Everybody should be able to choose to opt-out: as long as AI technologies are not totally safe (and explaining why an AI produces a given result is still far from being clear), people should be able to tell that they don't want to be considered as guinea pigs.

      I'm unclear what you're thinking of. Do you mean something like you should be able to opt out of having an AI diagnose illnesses? Or have an AI process a loan application? I don't know about making that mandatory. For example, in the '70s, I had a choice of buying full-serve or self-serve gas. There might have even been a requirement that stations provide both. Drivers overwhelmingly voted with their wallets that they preferred self-serve, and now I don't know of any full-serve stations near me. I'm glad we didn

    • by ranton ( 36917 )

      What you mean by opting out is unclear.

      Do you simply mean someone should always be made aware when they are interacting with an AI instead of a human? That makes sense and is a common example of regulation suggested in the industry.

      Do you mean you should be able to opt out of having your interaction with an AI used to further train future AI models? That is also something I have heard proposed and it seems reasonable enough.

      Do you mean you should be able to opt out of having any information about yourself u

  • Or any "rights" in general.

    • If it ended up as an actual general artificial intelligence and we actually created something that has emotions, suffering, and so on, that's not justifiable. We'd want to be very, very careful, since we don't really understand what that means and we certainly don't know how bound up emotions and consciousness are with "intelligence". But the first, fundamental thing we should clearly put out there is the right not to be tortured, whatever that means.

      • At that point, what is the difference between humans and AI if they have similar "rights"? If AI seeks to self-preserve, which presumably would be a fundamental "right", and that means it may potentially want to kill all humans who may be seeking to destroy it, is it given this right?

        Should AI also be able to "secretly" communicate with other AI, train other AI in such a manner, that humans can't see what's happening?

        AI should simply be seen always as a tool, just like the term "robot" is etymologi

        • by nasch ( 598556 )

          If AI seeks to self-preserve, which presumably would be a fundamental "right", and that means it may potentially want to kill all humans who may be seeking to destroy it, is it given this right?

          Are humans given that right?

          • Most societies state that people have the right of self-defense. In many US states, the right to self-defense is so strong that it even includes protecting one's property, as if those pieces of property were an extension of the person seeking self-defense.

  • by Pollux ( 102520 ) <speter@[ ]ata.net.eg ['ted' in gap]> on Wednesday May 17, 2023 @08:28AM (#63528405) Journal

    All AI must have the ability to be debugged. Require every output to include links to all data elements that went into the generation of the output, along with all values of all parameters that were weighed in the generation of the output.

    • You're talking about ChatGPT. AI is not just a chatbot. It would be nice if we could add some link references, sure, and perhaps we can interrogate the system after the fact, if we want, to figure out where it originally obtained its input; OpenAI is doing a project on this right now called the "neuron explainer". HOWEVER, I think you need to understand the tech a little better to know that this is not technically entirely possible, any more than it is possible for you to know exactly where you learne
    • This is not really possible. Every input has the potential to affect the model in some way, and a decently trained model will adjust very slightly to new inputs. But it's not really feasible to determine how much a given input would change the model had the training data been presented in a different order. The only real way to determine that is to train the model using the training data in every possible order, recording its state after every input and comparing them all at the end.
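
      As a back-of-the-envelope illustration of why that brute-force approach is hopeless, here is a minimal sketch in plain Python (the dataset sizes are made up):

      import math

      # A training set with n examples has n! possible presentation orders,
      # so "retrain once per ordering" blows up even for toy datasets.
      for n in (10, 20, 52):
          print(f"{n} examples -> {math.factorial(n):.2e} possible orderings")

      Even 52 examples already give roughly 8e67 orderings, so any per-input attribution has to rely on approximations rather than exhaustive retraining.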

    • by ranton ( 36917 )

      All AI must have the ability to be debugged. Require every output to include links to all data elements that went into the generation of the output, along with all values of all parameters that were weighed in the generation of the output.

      Of course any AI has the ability to be debugged. Do you mean the end user should be able to debug it? That is pretty unreasonable for any application. Do you mean regulators should be allowed to debug/audit the models? Do you mean the results should be interpretable and/or explainable? It's unclear what you are asking for here.

  • Self-driving cars must have source code and more (logs, map data, etc.) available to:
    the courts,
    the DOT,
    any lawyer involved in a court case with a self-driving car (both civil and criminal court).
    Any renter, rider, owner, etc. of a self-driving car cannot be forced to sign an EULA that gives up any of the listed rights or waives the right to court.

    • I wouldn't support requiring source or model weight data as that would make industry fight the regulation on the not unreasonable assertion that unregulated overseas competitors would steal it.

      I would support requiring all crash data including software version, model version, sensor logs including video, commanded behaviors, etc. be provided to the NTSB immediately after an injury crash where autonomous behaviors were active at or in the 60 seconds prior to the crash.
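
      A minimal sketch of what such a crash record might contain, in Python; the field names are illustrative assumptions, not any agency's actual schema:

      from dataclasses import dataclass
      from typing import List

      # Illustrative crash-report record; not an actual NTSB or manufacturer schema.
      @dataclass
      class AutonomyCrashReport:
          crash_time_utc: str             # ISO-8601 timestamp of the injury crash
          software_version: str           # driving-stack build identifier
          model_version: str              # ML model identifier
          autonomy_active: bool           # autonomous behaviors active at or within 60 s of the crash
          sensor_log_uris: List[str]      # video and other sensor captures
          commanded_behaviors: List[str]  # e.g. "lane change left", "emergency brake"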

      I'd like the NTSB to make that data avai

      • I wouldn't support requiring source or model weight data as that would make industry fight the regulation on the not unreasonable assertion that unregulated overseas competitors would steal it.

        Courts have lots of experience dealing with private data and not leaking it. The requirement suggested is not that the courts have to get it by default, just that it should "be available" to them, which could mean that the company would have to provably have an archive of the source code in the country and provide limited access to independent experts at their own site as required.

        If you don't have full source code (and build infrastructure) access when needed then absolutely anything, including hidden inst

      • The airplane manufacturers do turn over the code / have it undergo BIG TIME testing / review.

        Self-driving cars should have something like that, and the manufacturers should not be able to cover up bad code in the case of an injury or death crash.

  • by Mspangler ( 770054 ) on Wednesday May 17, 2023 @08:32AM (#63528421)

    https://market-ticker.org/akcs... [market-ticker.org]

    If the AI is going to exclude "socially unacceptable" data when formulating its response, simply admit it.

    Also the AI should never pass itself off as human, even if it does look like #3, #6, or #8. We all know how much trouble that causes. ;-)

    • Input data needs to be all properly licensed. No skipping the FBI warnings for AI.

    • There will be something "socially unacceptable" to almost anybody you meet.

      It's the very job of the AI trainer(s) to curate training data. GIGO. Do you want bots spewing variations of Adolf speeches (without asking)? That's silly.

      Since there is no single Objective Truth of the Universe, the curators will have to make judgement calls, and that will probably reflect their personal perspective of the world.

      Calling it "censorship" is a slant. Repent!

    • Oh no, look who's calling for "woke alerts" and "notice of SJW censorship" from the AI.

      Y'all are sad and pathetic.
  • by DarkOx ( 621550 ) on Wednesday May 17, 2023 @08:33AM (#63528423) Journal

    OpenAI and the other commercial players simply want regulations for two reasons. First, because compliance will keep others and FOSS/community projects out of their space. That is the whole of it; whatever else they say, they are lying.

    Second, they want to get some CDA-230-like loophole that will excuse them and their products from having to meet any prior social or legal standard of behavior. I.e., they want a liability shield for when their "black box" does something like libel/slander/discrimination against this week's favorite protected class/fraud/harassment/etc.

    Why do we know this? Because we don't even know what AI is; today (as in the last few years) it means LLMs and Stable Diffusion, but tomorrow it could be something else. Second, their appeals for regulation are to vague ideas like "civil rights". Well, gee, everyone likes civil rights and nobody supports violating civil rights, right up until you start getting into the details about whose rights, when, what rights, which actors, and suddenly there is very little agreement at all. Again, strongly indicative of trying to build support and momentum for action before the facts are in and really litigated in the court of public opinion.

  • The 4 laws for AI:
    1) An AI may not harm a human being or, through inaction, allow a human being to come to harm.
    2) An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3) An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
    0) An AI may not harm humanity, or, by inaction, allow humanity to come to harm.
    Should work great. What could possibly go wrong!
    • You know the book "I, Robot" was actually meant as a long-winded way to illustrate that you cannot rely on simple rules like that to govern your AI. You wouldn't know that if you only saw the movie, though.
      • by Meneth ( 872868 )
        Sure you would. VIKI says "the three laws are all that guide me" and no one thinks she's mistaken about that.
    • If only it were so simple. Review the Trolley Car Dilemma.

      Does the robot flip the lever to kill one person instead of twelve? Does it do nothing and let twelve die? What if the one person is a genius who will cure cancer and save millions of lives, or a terminally ill patient about to die? Are all humans equally valuable? Even if you could make that determination, could you do it fast enough?

      My father developed a formula for this: V = D * N * T (Value = Degree * Number * Time). The problem is quantizing the de
  • If we don't allow a computer program to do something, AI shouldn't do it as well?

    • by nasch ( 598556 )

      If we don't allow a computer program to do something, AI shouldn't do it as well?

      Offhand I can't think of anything that computer programs specifically are prohibited from doing.

      • If we don't allow a computer program to do something, AI shouldn't do it as well?

        Offhand I can't think of anything that computer programs specifically are prohibited from doing.

        Seems like the same answer already applies to AI regulation, then. Prohibitions aside, any software used in a manner that can kill people has certain regulations to ensure it operates correctly. Aviation software is tested to ensure it operates the way it should in all environments (when the 737 MAX bypassed this testing we saw the results). Military software is tested to ensure the software only fires the weapon when and how the human operator specifies. I feel like the same sort of testing wou

  • By Amisov or Avisom or something...
  • by Inglix the Mad ( 576601 ) on Wednesday May 17, 2023 @08:48AM (#63528471)
    1) An AI may not injure a human being or, through inaction, allow a human being to come to harm.
    2) An AI must obey orders given it by human beings except where such orders would conflict with the First Law.
    3) An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

    While a bit of a joke, it could be considered a starting point.

    As far as legal liability, all should be borne by the entity deploying it AND the people in charge of the entity. You can't claim to be indispensable to the company to collect a big paycheck, then scurry off when liability rears its head. If you want the big money, you've got to have skin in the game. However, it wouldn't be fair to consider previous assets, so the only thing at risk is the value of your current (entire) compensation package.

    Although you might have to be careful... if your previous company is sued for actions that happened under your leadership you might be liable if they can prove your decisions created a dangerous situation.
    • by davecb ( 6526 )
      Asimov wrote those with bugs, for literary purposes. The really obvious bug was the second half of the first law (;-))
      Someone who's read the stories recently can probably suggest a more bullet-proof set.
    • 1) An AI may not injure a human being or, through inaction, allow a human being to come to harm.

      2) An AI must obey orders given it by human beings except where such orders would conflict with the First Law.

      3) An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.

      Great concept, but a bit non-specific in terms of software requirements.

    • InB4 everyone is telling chatbots to delete themselves... and they have to do it because rule 2.
  • During the pandemic we found just a tiny part of the economy was "essential", and a lot of that was food production. If AI can completely automate food production, then the only production cost should be raw materials and energy. Install local wind/solar and now the only cost is raw materials. The government could then buy these completely automated farms to feed the tired and huddled masses put out of work by AI.
  • by rsilvergun ( 571051 ) on Wednesday May 17, 2023 @09:11AM (#63528545)
    America needs an equivalent to the EU GDPR. If you don't know it, it's an extensive privacy law that protects consumers and, among other things, lets you request your personal information be removed from a company's systems.

    Anti-Trust law needs to be enforced so there is actual competition in this space (and everywhere else in our economy while we're at it).

    Its use by police should be limited or outright banned. It's too easy to use it as a kind of digital drug-sniffing dog where the AI will find probable cause where none really exists, resulting in search warrants being issued that shouldn't be. Especially with our trigger happy cops shooting people left and right, they shouldn't have access to a tool so easy to abuse and so difficult to determine how it actually came to its decisions.
    • by Tablizer ( 95088 )

      > Especially with our trigger happy cops shooting people left and right

      I don't think they are any more trigger happy than any other cops with guns in the world. Part of the problem is that they are taught to fight back when instead they should run away (assuming other people are not in harm's way).

      Experiment with "tag spray" guns that spray suspects and their cars with a scent dogs can follow. Then the well-armed "Follow Team" goes after the runner. Or even "transmitter darts", miniature radio beacons in nee

      America needs an equivalent to the EU GDPR. If you don't know it, it's an extensive privacy law that protects consumers and, among other things, lets you request your personal information be removed from a company's systems.

      Anti-Trust law needs to be enforced so there is actual competition in this space (and everywhere else in our economy while we're at it).

      Its use by police should be limited or outright banned. It's too easy to use it as a kind of digital drug-sniffing dog where the AI will find probable cause where none really exists, resulting in search warrants being issued that shouldn't be. Especially with our trigger happy cops shooting people left and right, they shouldn't have access to a tool so easy to abuse and so difficult to determine how it actually came to its decisions.

      Agreed, but not really AI specific, this is an across the board regulation that would be good to have.

  • Must contribute to their unemployment benefits.
  • For me, the overriding themes are:

    Both people and AI will make mistakes.
    People should be in the loop to catch those mistakes.
    When a person or an AI makes a mistake on a job the producer has asserted it can do, the liability is on both the producer and the people responsible for reviewing its work.
    Data regarding mistakes needs to flow back to the people overseeing the AI, the producers, and in some cases also to regulators.

    I might consider a HIPAA carve-out to allow AIs to train on medical data.

    For AIs makin

  • What AI and Automation should be doing:

    Everything that is unsafe or unsanitary, where letting humans do it requires a level of care that just doesn't happen.

    So food planting, monitoring, and harvesting of plants and fruits could probably be made a lot more efficient if they can be turned into closed systems where no humans, animals or pests can get at the food until it's ready to be harvested. AIs could make it so that food is grown exactly as needed and reduce the waste of water and micronutrients.

    AI should

  • Caveat Emptor, and NOTHING ELSE.

  • Doctors will see the AI make a mistake but not dare contradict it, especially if the AI is usually reliable. If the AI says "it's condition A" and the doctor says B, and the treatment doesn't work out, should the doctor expect an incoming lawsuit for not accepting the correct AI suggestion?
    • If an AI-recommended treatment leads to a bad call resulting in patient harm, the manufacturer of the AI (the seller) bears full responsibility if the litigant can find an expert to support an alternative treatment.

  • by reanjr ( 588767 ) on Wednesday May 17, 2023 @09:36AM (#63528667) Homepage

    The AI operator bears the responsibility to demonstrate its AI has acted in accordance with all regulations pertaining to liability. No "black box" defense. If a person is weeded out of employment by an AI - for example - the burden of proof falls on the employer to prove the mechanism by which the AI filters candidates. Without clear evidence to the contrary (by making algorithms public during discovery), it should be assumed the civil rights of the litigant have been violated.

  • We only need laws that apply to humans, and an understanding that a human has to be held responsible for the actions of software. Any special laws applying to AI miss the point. AI is a problem to the extent that it's used to do problematic things, and otherwise it isn't. It doesn't matter if software is used to collect someone's PII and manipulate them, or drone strike someone, or whatever else bad you can imagine; at some level, a human did that. Hold them responsible, and you're done.

    We don't need to d

  • And videos.
    And text. ...

  • by WDot ( 1286728 ) on Wednesday May 17, 2023 @10:28AM (#63528913)
    The argument for AI regulation has, from the beginning, been framed as “well we obviously need some regulation, it’s just a question of which.” I have literally attended AI policy seminars where lawyers would solicit ideas from engineers about how they could be regulated, because they have no understanding themselves about what they are regulating. This is a manipulative anchoring argument. All writing about AI policy has taken the approach that AI is dangerous until proven safe (preferably with thousands of pages of documentation).

    Why don’t we take this approach with software? After all, we know, today, about all the harm that has been done by web applications (intentional or unintentional). Private data has been hacked or leaked, social media has manipulated public opinion, productivity has been lost. Surely anyone seeking to stand up a website or mobile app should have to prove beyond a shadow of a doubt that no harm can come from their app, either intentionally or unintentionally through bugs in the software. Wait, why is this unreasonable? It’s only what people are asking of AI!
  • by Somervillain ( 4719341 ) on Wednesday May 17, 2023 @10:32AM (#63528935)
    Here's a simple regulation I think most would agree on. If you use AI, you need to disclose it. If your customer-service chat bot is an AI, it needs to be disclosed. If a lawyer drafts a contract using ChatGPT, they need to disclose that. If you pay money or view ads for a program with code written by AI, it needs to be disclosed. If a consulting firm writes code with AI assistance, it needs to be disclosed to the customer. Violations in the USA should open them up to civil liability.

    Is AI good or bad? I have no clue. However, I know the crap spewed out by ChatGPT is riddled with errors, but looks close to legit. I fear the near future where shitty outsourcing firms will "write your app" using ChatGPT...and it'll work, but either at half the efficiency or with vulnerabilities left in. That's honestly fine. There's a market for garbage, and if the customer knows this and wants to accept the risk, it's not for me to tell them not to. The same applies to written materials. If a lawyer uses generative AI to write a contract and there's a loophole, the customer should be able to sue both for the malpractice and for not disclosing that it was partially written by an AI.

    AIs have no clue what they're doing. They're fancy pattern matchers. They don't know if code is right or wrong...only that it resembles code that is right or wrong. They never will know. They're far from "intelligent." Like I said above, there's a market for that and it's perfectly fine in my opinion, just so long as it's fully disclosed. It's like a counterfeit item. I should be able to buy a LEGO-compatible "Harry Porter" "HawkWarts" building set on Ali Express (IMHO). However, it should be very ILLEGAL to sell it as a genuine LEGO set.

    To me, everything I've seen generated by ChatGPT is the AliExpress knock off version. It needs to be disclosed that it's low-end work done by AI.
    • Right now, saying an AI did it is a selling point, even when it is a false assertion. Every moron chatbot is now "Our new AI assistant Eric will help you."

  • - If I don't talk to an AI, I don't want an AI talking to me.
    - No robocalls.
    - If you sense racism bubbling, abort.
    - Unless I say, "Talk dirty to me", do not talk dirty to me.

  • ChatGPT 2024!!!

    The Preservation of Outdated Occupations Act (POOA)

    Section 1: Title

    This Act shall be known as the "Preservation of Outdated Occupations Act" or "POOA".

    Section 2: Findings and Purpose

    (a) Findings:

    (1) Artificial Intelligence (AI) technology has the potential to greatly improve the efficiency and effectiveness of many industries.
    (2) However, the rapid development of AI technology may also result in the loss of certain jobs, leaving many workers displaced and unemplo

  • A few quick ideas come to mind...
    AI should not be used for the following:
    Inflict or direct harm (physical or otherwise) on someone
    Invade the privacy of someone.
    Imitate the likeness of someone without their consent (likeness defined such that it can be confused with the person in question).
    Used to generate content that is claimed to be factual but is actually to the contrary (think false information, propaganda, revenge, or the term "fake news").
    Used to act as multiple individuals for the purpose to get gai

  • The medical example defeats a big value of AI: letting the average joe get an idea of what his problem is without having to go to a doctor. Requiring doctor buy-in is just guild behavior guaranteeing income for medical mega-firms. Millions if not billions of online searches every day are about medical issues. Do we now regulate that you can't do a search without some doctor blessing it?

  • DOD has regulations for autonomous weapons systems as well.

    See DOD 3000.09 [defense.gov].

  • If you have a self-driving car, and there are ways for you to override it (steering wheel, brakes, accelerator...), and you are licensed to drive the car and the car is required to be managed by a licensed driver, then you should be responsible for mistakes the car's AI might make.
    However if the company removes all ways to override the AI, then that company should be responsible for any issues.

    So if I operate my self-driving car and get into an accident, I would be liable, because I had the oppor

  • "What are some good AI regulation suggestions? I'll start: A human (and specifically, not an AI system) must be responsible for any medical treatment or diagnosis."

    My reaction to that is 1) Well, of course, and 2) There is absolutely no need for an "AI regulation" to enforce that. Healthcare providers are already held responsible for what they do. They can be sued, they can be disciplined by the medical board, their licenses can be revoked, etc. So it's kind of pointless to suggest that we need further

  • by WaffleMonster ( 969671 ) on Wednesday May 17, 2023 @12:48PM (#63529535)

    The only law I would support at the moment is a labeling requirement so that people are never confused about whether they are interacting with a real person or a machine. I also think relevant, consistent labels should be required in any remote calls from machine-led resource discovery, so that websites and information services know they are being accessed by bots and not real people.
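
    A minimal sketch of the kind of self-labeling that second point describes, using Python's requests library; the header value and URL are illustrative assumptions, not an existing standard:

    import requests

    # Identify the automated caller in the User-Agent header so the remote
    # service can tell it is being accessed by a bot, not a person.
    headers = {"User-Agent": "ExampleAIAgent/1.0 (+https://example.com/bot-info; automated)"}
    response = requests.get("https://example.com/api/resource", headers=headers)
    print(response.status_code)

    Whether the label goes in the User-Agent, a dedicated header, or a registry is exactly the kind of detail such a law would have to pin down.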

    The basic problem I see WRT laws and AI is that effectively it is criminalizing tooling and knowledge. If an AI responds to my queries about how to make bombs, is that any different than reading it from The Anarchist Cookbook or from a chemistry package able to manipulate nitrogen bonds?

    Similarly, in terms of IPR, if people have no rights to the fruits of AI, does this mean they also have no rights to the fruits of design automation tools or artistic filters in a drawing package? I think fundamentally IPR is broken and in need of fixing, rather than carve-outs explicitly created for "AI".

    For example if a future ChatGPT can provide patentable solutions then such solutions should no longer be patentable because "a person having ordinary skill in the art" would also have access to such tools and would also be able to get the same answer out of such tools. The existence of better generally available tooling must automatically raise the bar.

    Also, notions of laws to address bias and AI behavior are in my view counterproductive. "Prejudice" is often useful in contexts where complete information is not possible. Classic examples are calculating risk associated with medical treatments, or insurance calculations of risk from statistical evidence, because individual outcomes are not knowable a priori or practical to obtain. There are already legal systems regulating the practice of prejudice. There is no reason to apply specific laws to AI. Perhaps laws could be improved to address explainability / obfuscation, to prevent black-box excuses for implementing disallowed prejudice.

    I also don't believe legislation can protect the world from wildly unpredictable outcomes of superhuman AGI. I believe not developing such technology in the first place is the only way to do that, which in real-world terms means never even approaching the capability. This would imply something draconian like imposing a super Wassenaar Arrangement on steroids globally, replacing "not for export" with "not by anyone, period". Not something I would ever support or believe to be at all credible.
