Chinese AI Stirs Panic At European Geoscience Society (science.org)

Paul Voosen reports via Science Magazine: Few things prompt as much anxiety in science and the wider world as the growing use of artificial intelligence (AI) and the rising influence of China. This spring, these two factors created a rift at the European Geosciences Union (EGU), one of the world's largest geoscience societies, that led to the firing of its president. The whole episode has been "a packaging up of fear of AI and fear of China," says Michael Stephenson, former chief geologist of the United Kingdom and one of the founders of Deep-time Digital Earth (DDE), a $70 million effort to connect digital geoscience databases. In 2019, another geoscience society, the International Union of Geological Sciences (IUGS), kicked off DDE, which has been funded almost entirely by the government of China's Jiangsu province.

The dispute pivots on GeoGPT, an AI-powered chatbot that is one of DDE's main efforts. It is being developed by Jian Wang, chief technology officer of e-commerce giant Alibaba. Built on Qwen, Alibaba's own large language model, and fine-tuned on billions of words from open-source geology studies and data sets, GeoGPT is meant to provide expert answers to questions, summarize documents, and create visualizations. Stephenson tested an early version, asking it about the challenges of using the fossilized teeth of conodonts, an ancient relative of fish, to define the start of the Permian period 299 million years ago. "It was very good at that," he says. As awareness of GeoGPT spread, so did concern. Paul Cleverly, a visiting professor at Robert Gordon University, gained access to an early version and said in a recent editorial in Geoscientist there were "serious issues around a lack of transparency, state censorship, and potential copyright infringement."
Paul Cleverly and GeoScienceWorld CEO Phoebe McMellon raised these concerns in a letter to IUGS, arguing that the chatbot was built using unlicensed literature without proper citations. However, they did not cite specific copyright violations, so DDE President Chengshan Wang, a geologist at the China University of Geosciences, decided not to end the project.

Tensions at EGU escalated when a complaint about GeoGPT's transparency was submitted before the EGU's April meeting, where GeoGPT would be introduced. "It arrived at an EGU whose leadership was already under strain," notes Science. The complaint exacerbated existing leadership issues within EGU, particularly surrounding President Irina Artemieva, who was seen as problematic by some executives due to her affiliations and actions. Science notes that she's "affiliated with Germany's GEOMAR Helmholtz Centre for Ocean Research Kiel but is also paid by the Chinese Academy of Geological Sciences to advise it on its geophysical research."

Artemieva forwarded the complaint via email to the DDE President to get his view, but forgot to delete the name attached to it, leading to a breach of confidentiality. This incident, among other leadership disputes, culminated in her dismissal and the elevation of Peter van der Beek to president. During the DDE session at the EGU meeting, van der Beek's enforcement actions against Chinese scientists and session attendees led to allegations of "harassment and discrimination."

"Seeking to broker a peace deal around GeoGPT," IUGS's president and another former EGU president, John Ludden, organized a workshop and invited all parties to discuss GeoGPT's governance, ongoing negotiations for licensing deals and alternative AI models for GeoGPT's use.

Comments Filter:
  • Offtopic, but I think there is a parallel to code. AI is in the codebase now, open and closed; everybody says they are using AI-generated snippets.

    Where did the AI get the "ideas"? I think GitHub is a reasonable answer, at least as a starting point.

    Also, I have serious concerns over any quaint idea of "quality"... apparently nobody gives a fuck about any notion of quality in software. Just my opinion. Surely that opens the Pandora's box of what quality is, but I think we can all agree some code is better than others.
    • There may be much more text repetition in the field of Geology research papers than in other areas.

      Conjecture in a question: can an LLM measure the redundancy in the language of a subject area? Something like the way psychovisual comparisons of different bit-rate encodings of a video stream are used to test lossy codec efficiency. Could a lower-node-count LLM trained on the same input data set perform equivalently to a much higher-node-count LLM trained on that same data?
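      One crude way to test the conjecture without training anything: use a general-purpose compressor as a stand-in for the model and take its compression ratio as the redundancy proxy (file names here are hypothetical):

          # Redundancy proxy: the fraction of bytes a compressor can remove.
          import zlib

          def redundancy(text: str) -> float:
              raw = text.encode("utf-8")
              return 1.0 - len(zlib.compress(raw, level=9)) / len(raw)

          geology = open("geology_abstracts.txt", encoding="utf-8").read()
          physics = open("physics_abstracts.txt", encoding="utf-8").read()
          print(f"geology: {redundancy(geology):.3f}")
          print(f"physics: {redundancy(physics):.3f}")

      A higher score would suggest the field's prose is more formulaic, which is roughly what the lower-node-count-LLM question is asking.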

  • Europe is already behind China at basically every natural science by now. It's a bit late to panic.

    • I am not sure about that statement. I am fairly certain Europe is ahead in the science of chowing down using advanced nom-nom-nomming utensils such as the fork.

      • by sudonim2 ( 2073156 ) on Thursday July 04, 2024 @09:21AM (#64600353)

        The fork is a Chinese invention; it was in use in China from the Bronze Age. It was introduced to Western Europe through the Byzantine Empire after a Byzantine princess married a Venetian noble, the fork having traveled along the Silk Road from China, through Persia, to Byzantium. It was popularized in Western Europe first in Italy, thanks to Italy's strong economic ties to Byzantium (Venice was a major port at the European end of the Silk Road) and to the growing popularity there of pasta (another Chinese invention that traveled the Silk Road), since the fork made eating the dish easier.

        TL;DR Chinese don't use chopsticks because they didn't know about forks. Chinese stopped using forks because chopsticks are superior to forks. Chinese were using forks while Europeans were still eating with their hands.

        • by penguinoid ( 724646 ) on Thursday July 04, 2024 @03:09PM (#64600963) Homepage Journal

          The dinner fork was invented in 18th century Germany. That's the one with four tines and a curve to it, suitable for both piercing and scooping food. They're very nice and you don't need to design your food around them like with chopsticks. Much better than the forks you are talking about with two long skinny prongs, or the one-prong forks invented in prehistory.

          • Again, you've got it backwards. Food wasn't designed around chopsticks. Stir-frying was invented, and then chopsticks were invented because they made eating it easier. Something like the invention of chipforks in Britain.

            Chopsticks are also better for eating most foods. Your hands are obviously better at picking apart and eating food from a dexterity perspective. Where they lack is in hygiene and cleanliness. Forks solve the issue of hygiene and cleanliness but they're not very dexterous. That's why you us

    • Europe is already behind China at basically every natural science by now. It's a bit late to panic.

      CERN is in Europe.

  • How the fock does conodont research turn into an international crisis? It's sillier than a systemd conference starting WW3.

  • by SuperKendall ( 25149 ) on Thursday July 04, 2024 @12:55AM (#64599799)

    Reading kind of between the lines on this story, and projecting into the future, you realize what a massive lead Chinese AI will have over every western AI.

    Why? Simple: the Chinese government has no concern whatsoever for copyright or privacy.

    That means that Chinese AI can train across pretty much all data across the world - and that includes whatever private data stores the Chinese can break into!

    Meanwhile all Western AIs will have to be trained on siloed datasets.

    Even then a good siloed dataset might beat Chinese future AI for some targeted tasks, but if the goal is general AI there is simply no way the west can compete without the data...

    Possible good news, though: the NSA is probably training giant general-purpose models also built on vast quantities of similarly stolen data, and they might at least be better than the Chinese at stealing U.S. private data.

    • by Evtim ( 1022085 )

      I might be wrong but the unspoken assumption in your post is that someone out there is actually interested in the truth(s) that AI can/will discover.

      Nothing could be further from the truth :)

      No matter what useful revelations the AI brings, the governments and societies of the world will not listen, evaluate, and agree, let alone act.

      Imagine what happens when the CCP goes "AI, AI on the wall, is this not the most harmonious society of them all?" and the answer is "No. And it will never be as long as

    • Re: (Score:3, Interesting)

      by sudonim2 ( 2073156 )

      1) There's no such thing as "general purpose AI". Even if there were, there's no indication that LLMs could accomplish anything close to it.

      2) There's no such thing as "AI". It is a term that's a science fiction trope, not an actual description of technology. It's been co-opted as a marketing term by unscrupulous actors in the tech industry. It is ill-defined and not based on any understanding of actual reality. It's just a literary device that might as well be magic.

      3) In your classic AM/FM scam, when t

      • by HiThere ( 15173 )

        Actually I believe the term AI (or at least "artificial intelligence") traces back to the 1956 Dartmouth workshop. I certainly heard of it in technical documents before I encountered it anywhere else.

        The CONCEPT goes back to at least the Greeks. (Hephaestus had a few artificial girls to help him move around, and at least one robot sentry, Talos, that patrolled the shore of Crete.) However, as far as I know they weren't called "Artificial Intelligences". Neither was the Golem of Prague (and similar stories fro

        • Athena also had a clockwork owl.

          The Dartmouth paper borrowed heavily from sci-fi writers of the 30s and 40s. Though they tended to use the term "android" as they were imagining anthropomorphic robots, mostly.

    • by RobinH ( 124750 )

      So what? What we colloquially call an AI (which is really an LLM) is nowhere close to AGI. It's just a machine that can generate text that looks like something a human might write. Even so, the best LLMs only pass the Turing Test about 54% of the time, whereas an average human passes it about two-thirds of the time. Even if you could make an LLM that passes as human as often as a human can, so what? By definition it's not thinking or reasoning in the sense we would care about. I suppose most humans don't r

      • by HiThere ( 15173 )

        Slide rules were quite revolutionary (though a development of Napier's logarithms). But you wouldn't expect one to make coffee for you.

        Being revolutionary doesn't mean omnicompetent. I expect that there will be a large number of things that LLMs and their close derivatives will be extremely good at. Usually this will only be a part of a job, but that "part" will mean that fewer workers are needed...and of the ones that are needed, there will be a severe bifurcation of required skill levels. At a wild guess 2/

        • by RobinH ( 124750 )
          The closest example I can think of is the No Code and Low Code attempts from a couple decades ago (VB6 being a prime example of Low Code). I would argue that this was a tool that allowed a few more low end coders to be more productive (myself included at the time) because it simplified a lot of the boilerplate code for making GUIs. But real big and complicated software development isn't about stringing together symbols that look like something you've read before. It's about thinking about what pieces of
          • by HiThere ( 15173 )

            It's not just that. I did some pretty fancy stuff pretty quickly with MSAccessDB...but debugging took analysis, and occasionally I'd write a small routine in Eiffel. (I was a fan of Eiffel back then.)

            MSAccess let me do things a LOT quicker (probably by several months), but getting it all debugged wasn't easy (And the debugging happened after it was already in use.)

            If I'd had to do that in C and Oracle, it just wouldn't have been done.

    • Why? Simple: the Chinese government has no concern whatsoever for copyright or privacy.

      A lack of restrictions is a temporary advantage at most. Competition is a natural evolutionary system in which the fittest survive. The result of a restriction is an evolutionary pressure, which is eventually overcome via an adaptation (e.g. new/better tools). When there is no pressure to adapt, progress is slow, which is why China had been complacent until they found themselves suddenly surpassed by LLMs. This is how races are run, with each competitor constantly catching up to and surpassing the leader

    • by tlhIngan ( 30335 )

      Not possible, because it makes it too easy to exclude data from the Chinese AI.

      China is still sensitive to discussion of certain topics. If they're just blindly scanning the Internet for content, it doesn't take much to insert those topics into the poison pool and poison the Chinese AI with "forbidden knowledge".

      When this happens it doesn't really matter - China will be forced to vet all their AI source documents to prevent their AIs from being poisoned. You can't have their AIs returning pro-Democracy messagi
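      The vetting predicted above is expensive precisely because the cheap version is a blunt keyword filter, something like this sketch (the terms and documents are invented for illustration):

          # Naive blocklist filter over scraped documents. It both misses
          # paraphrases and over-blocks legitimate material.
          BLOCKLIST = {"tiananmen", "falun gong"}  # illustrative only

          def is_clean(document: str) -> bool:
              text = document.lower()
              return not any(term in text for term in BLOCKLIST)

          docs = ["A survey of Permian conodont biostratigraphy.",
                  "Eyewitness accounts from Tiananmen Square, 1989."]
          print([d for d in docs if is_clean(d)])

      Anything smarter than this means human review or another model in the loop, which is exactly the cost being pointed at.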

  • All they have is political objections to China, generic copyright infringement objections which should have no place in science, and generic AI issues like biases which afflict all models no matter what country made them.
  • If they were worried about Chinese models getting better, I would understand, but why would anyone panic?

  • Find potential mining sites?

  • Paul Cleverly, a visiting professor at Robert Gordon University, gained access to an early version and said in a recent editorial in Geoscientist there were "serious issues around a lack of transparency, state censorship, and potential copyright infringement."

    But no concerns were raised about the quality of the chatbot's answers, which will sometimes be wildly incorrect or totally "hallucinated", and which scientists, governments, and businesses will be relying on. And every single reference or citation that the model gives can be completely fictitious, cuz that's how that works.

    Sometimes you get what you deserve.
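    Fictitious references are at least cheap to spot-check, since a DOI either resolves or it doesn't. A rough sketch against the public CrossRef REST API (the endpoint is real; the surrounding logic is just illustrative):

        # Spot-check a citation's DOI against CrossRef; unknown DOIs return 404.
        import requests

        def doi_exists(doi: str) -> bool:
            resp = requests.get(f"https://api.crossref.org/works/{doi}",
                                timeout=10)
            return resp.status_code == 200

        print(doi_exists("10.1038/171737a0"))        # Watson & Crick 1953: True
        print(doi_exists("10.9999/fake.2024.0001"))  # model-style invention: False

    Resolving is a low bar, of course; a model can also cite a real paper that doesn't say what it claims.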

  • "serious issues around a lack of transparency, state censorship, and potential copyright infringement.".
