
Europeans Take a Major Step Toward Regulating AI (nytimes.com)

The European Union took an important step on Wednesday toward passing what would be one of the first major laws to regulate artificial intelligence, a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology. From a report: The European Parliament, a main legislative branch of the E.U., passed a draft law known as the A.I. Act, which would put new restrictions on what are seen as the technology's riskiest uses. It would severely curtail uses of facial recognition software, while requiring makers of A.I. systems like the ChatGPT chatbot to disclose more about the data used to create their programs. The vote is one step in a longer process. A final version of the law is not expected to be passed until later this year.

The European Union is further along than the United States and other large Western governments in regulating A.I. The 27-nation bloc has debated the topic for more than two years, and the issue took on new urgency after last year's release of ChatGPT, which intensified concerns about the technology's potential effects on employment and society. Policymakers everywhere from Washington to Beijing are now racing to control an evolving technology that is alarming even some of its earliest creators. In the United States, the White House has released policy ideas that include rules for testing A.I. systems before they are publicly available and protecting privacy rights. In China, draft rules unveiled in April would require makers of chatbots to adhere to the country's strict censorship rules. Beijing is also taking more control over the ways makers of A.I. systems use data.


  • reaction (Score:5, Insightful)

    by Archangel Michael ( 180766 ) on Wednesday June 14, 2023 @09:11AM (#63601722) Journal

    We Must Do Something!
    This Is Something!
    Therefore, We Must Do It!!!!!!

  • by ranton ( 36917 ) on Wednesday June 14, 2023 @09:20AM (#63601744)

    Requiring AI-generated art to be labeled and publishing what copyrighted content was used to train the models both have the potential to be very heavy-handed if not limited severely in the final legislation. In my opinion, any legislation which treats AI-generated art differently than art created in Photoshop is very misguided. Any legislation which singles out the use of copyrighted images in AI training, but doesn't restrict human artists from viewing copyrighted images while they are training, is equally ridiculous.

    • by Rei ( 128717 )

      I don't think they understand how complicated this will get. They seem to be pushing towards a radical and dangerous form of "viral copyright".

      Example: If you ask a LLaMA model to read pages from some copyrighted work in its dataset, at best you're going to get a hallucination. The data just isn't in there; it's diluted to the point of homeopathy, into just general rules and logic. Atop that, a lot of new models these days are trained using data created by other AI models, alongside human-labeled data to

      • by Rei ( 128717 )

        I mean, just to drive home the point about how diluted training data gets:

        Icelandic is a language spoken by under 300k people in the world. Now, on one hand, that's still a LOT of content - think of how much content one person makes online every year, let alone 300k people. But as a percentage of total content, it's quite small. And ChatGPT's Icelandic is bad. Not awful, but definitely problematic. All that content, and it still can't learn to talk right, because it's diluted too heavily in everything e

        • This should give people the right to complain when AI systems like ChatGPT infringe on people's rights. Isn't that a good thing?
          • by ranton ( 36917 )

            This should give people the right to complain when AI systems like ChatGPT infringe on people's rights.
            Isn't that a good thing?

            But the legislation doesn't do that. Or at least it goes far beyond that. No one's rights are violated when they look at a Midjourney-generated image without knowing how it was generated. No one's rights are violated when a copyrighted image or text is used to train an AI.

            This new tool has the potential to provide significant benefit. It will also likely hurt a lot of people's livelihoods. Let's start talking seriously about how to help all of the displaced workers instead of trying to hold back progress i

            • https://www.ri.se/en/our-stori... [www.ri.se]

              The proposal for a new regulation assigns AI systems to different risk categories:

              1. Unacceptable risk – technologies that threaten people’s safety, livelihoods and rights are prohibited. Specifically mentioned are systems that states can use for social scoring and toys that, via voice assistance, encourage dangerous behaviour.
              2. Limited risk – minimal transparency requirements for systems such as chatbots in order to make people aware that they are interacting with an algorithm.
              3. High risk – this covers a range of activities as well as some physical products. The EU Commission also proposes being able to add areas when needs arise.

              • by ranton ( 36917 )

                You list three risk categories, and only a portion of one of them covers the infringement of EU citizens' rights. While protecting rights is important, as I have stated, most of the legislation goes far beyond that.

                Unacceptable risk – technologies that threaten people’s safety, livelihoods and rights are prohibited.

                This statement is a precise example of what is wrong here. Holding back technological progress with regulations in order to protect existing jobs is among the worst forms of protectionism. Help those who have been displaced (free job training / education, UBI, etc.); don't hold back technology so our society cannot

                • There can't be anything wrong with protecting people's safety, etc. There is nothing wrong with making sure that technology like robot-assisted surgery is safe. And for the business, it might be used and sold more if safety is assured.

                  “For example, the rules affect certain businesses and certain products, they serve to both safeguard individual rights and ensure product safety,”
                  - Susanne Stenberg
    • by whitroth ( 9367 )

      Really? I got my new publisher to put in the contract that they will not use my work to train an AI. On the other hand, if I find someone did use my novel(s) to train one, I will personally sue them into being so wealthy that they can push all their belongings in a stolen shopping cart under the bridge they live under.

      • by ranton ( 36917 )

        Your contract with your publisher has no impact on what people who never signed that contract do with your novel. And it is pretty absurd to think otherwise. Becoming a better writer by reading novels written by others is not illegal for humans, so making it illegal for an AI is just as absurd.

      • Really? I got my new publisher to put in the contract that they will not use my work to train an AI. On the other hand, if I find someone did use my novel(s) to train one, I will personally sue them into being so wealthy that they can push all their belongings in a stolen shopping cart under the bridge they live under.

        Best of luck with that, given that no one who goes into a book store (or Amazon) and purchases a paperback copy of one of your novels is bound by the contract you signed with your publisher. If I purchase a physical copy of your book, what few rights you have to the physical copy are exhausted due to the first-sale doctrine [wikipedia.org].

        If I want to sell the physical copy of your book that I purchased without your authorization and without compensating you, I can.

        If I want to tear out the pages from your novels then use

    • Requiring AI generated art to be labeled and publishing what copyrighted content was used to train the models both have the potential to be very heavy handed

      It will also be impossible to enforce since any website outside the EU can just ignore it and put up whatever images they want. Unless similar rules are adopted in the other major economies it will just drive AI innovation out of the EU.

      • by ranton ( 36917 )

        It will also be impossible to enforce since any website outside the EU can just ignore it and put up whatever images they want. Unless similar rules are adopted in the other major economies it will just drive AI innovation out of the EU.

        It would be enforced with fines. You're obviously correct that the EU cannot stop the behavior, but they can levy whatever fines they like if these companies want to continue doing business in the EU. But I fully agree with your comment about driving AI innovation out of the EU.

        • they can levy whatever fines they like if these companies want to continue doing business in the EU.

          Define "doing business in the EU". It would be entirely possible for a company only physically present in the US or even China to provide a service to generate AI art from text (and such companies already exist). Now if someone in the EU connects to that company and uses their services while refusing to obey EU law because, after all, they are not in the EU so why should they, exactly how is the EU going to fine them given that they have no physical presence in the EU at all?

  • If AI was regulated out of existence, the world would not be that much poorer. It would also not be that much richer. AI is a useful technology, but it is still just a particular method of creating software. What frustrates me specifically about AI regulation is that it is indicative of a deep obsession with regulating every endeavor without even bothering to weigh the tradeoffs of benefits vs. harms. Oh sure, there are people going around claiming the harms include the destruction of all human life on Earth, but h
  • by NMBob ( 772954 ) on Wednesday June 14, 2023 @09:58AM (#63601824) Homepage
    Are they going to regulate it the same way they regulate murder? Just checking.
  • Water is wet.
    Air is mostly Nitrogen.
    Whales live in the ocean.
    Politicians steal.

  • Revenge of the language majors!

  • Could we not just ask ChatGPT to write the rules for us?

  • I get the concern.

    But I wonder how they define AI. I take it that things like ChatGPT and Dall-E would fit the definition, but what else? Are they just trying to discriminate against neural networks, or have they wisely and stupidly (both go together) attempted a general case, in order to catch people trying to use technicalities to circumvent the regulation? That would be both good and bad. If they did, then it's going to be hard to know when your optimization algorithm "counts" or not, making it just anot

  • This is what happened to the nuclear industry as well as the supersonic airplane industry.

    Overzealous regulations regarding nuclear power plants stopped innovation and got us stuck with old Cold War era plants before the new generation of plants could take off in earnest. This also contributed to the continued use of fossil fuels, and now climate change is a thing.

    With supersonic airplanes, there was a lot of hoopla because the 707 and other non-supersonic planes were making noise, and the Concorde was feared to be even more n
