
Google Builds AI 'Co-Scientist' Tool To Speed Up Research (ft.com) 13

Google has built an AI laboratory assistant to help scientists accelerate biomedical research [non-paywalled source], as companies race to create specialised applications from the cutting-edge technology. From a report: The US tech group's so-called co-scientist tool helps researchers identify gaps in their knowledge and propose new ideas that could speed up scientific discovery. "What we're trying to do with our project is see whether technology like the AI co-scientist can give these researchers superpowers," said Alan Karthikesalingam, a senior staff clinician scientist at Google.

[...] Early tests of Google's new tool with experts from Stanford University, Imperial College London and Houston Methodist hospital found it was able to generate scientific hypotheses that showed promising results. The tool was able to reach the same conclusions -- for a novel gene transfer mechanism that helps scientists understand the spread of antimicrobial resistance -- as a new breakthrough from researchers at Imperial. Imperial's results were not in the public domain, as they were being peer-reviewed at a top scientific journal. This showed that Google's co-scientist tool was able to reach the same hypothesis using AI reasoning in a matter of days, compared with the years the university team spent researching the problem.


Comments Filter:
  • The number of bugs I am getting recently in Chrome, Google Home, Nest, YouTube, the Gemini API, Android, etc. is astounding.

    Did they start writing their code with AI now?

    • by HiThere ( 15173 )

      It's much safer to build an AI that makes suggestions than to build an AI that goes out and does things...like fixing code. Suggestions are expected to have a high error rate.

  • Here's a link from Google that explains their "co-scientist": https://research.google/blog/a... [research.google]

    Also, the universal approximation theorem says that, under certain conditions, a family of neural networks can approximate any continuous function on a compact subset of Euclidean space to arbitrary accuracy. This seems like a strict constraint on the type of data that a family of neural networks can approximate. I don't work in the field. I've asked quite a few researchers if the universal appro
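For a continuous target like sin on a compact interval, the theorem's classic construction can be sketched in a few lines: a single hidden layer of steep sigmoid "steps" whose heights track the function's increments. This is a minimal illustration, not a trained network; the steepness, step count, and interval below are illustrative choices.

```python
import math

def steep_sigmoid(x, s=200.0):
    # A steep sigmoid approximates a step function; clamp the argument
    # to avoid math.exp overflow for large |x|.
    z = s * x
    if z < -60.0:
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def build_approx(f, a, b, n):
    # One-hidden-layer "network": a sum of n sigmoid steps whose heights
    # track f's increments on the compact interval [a, b], the setting
    # the universal approximation theorem requires.
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    jumps = [(xs[k], f(xs[k]) - f(xs[k - 1])) for k in range(1, n + 1)]
    bias = f(a)
    def g(x):
        return bias + sum(h * steep_sigmoid(x - c) for c, h in jumps)
    return g

g = build_approx(math.sin, 0.0, math.pi, 100)
err = max(abs(g(k / 100 * math.pi) - math.sin(k / 100 * math.pi))
          for k in range(101))
print(err)  # small; well under 0.1
```

Increasing n shrinks the error, which is the theorem's point: enough hidden units drive the approximation error on a compact set as low as you like.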

  • by Roger W Moore ( 538166 ) on Wednesday February 19, 2025 @11:07AM (#65179285) Journal

    helps researchers identify gaps in their knowledge and propose new ideas that could speed up scientific discovery.

    ...or alternatively waste their time with lots of "hallucinated" information. Getting factual information right is not something current LLMs do well, yet it is essential for scientific research.

    • helps researchers identify gaps in their knowledge and propose new ideas that could speed up scientific discovery.

      ...or alternatively waste their time with lots of "hallucinated" information. Getting factual information right is not something current LLMs do well, yet it is essential for scientific research.

      However, they're good at finding the relevant snippet of information in a limited corpus of documents and linking to it in the source. Google's notebooks are quite decent at this (not surprising, as it's their core competency; they can use the good parts of their search engine, unhindered by SEO attacks).

      The result can be used much like Wikipedia: you wouldn't trust any specific factoid, but for a piece of information that seems interesting, you can follow the reference to check the exact wording at the source.
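The "find the snippet and link the source" behaviour described above can be sketched as a toy term-overlap retriever. The corpus and document IDs below are made up for illustration; real systems use embeddings and learned rankers rather than raw word counts.

```python
import re
from collections import Counter

# Hypothetical two-document corpus; keys serve as citable source IDs.
corpus = {
    "doc-1": "Gene transfer mechanisms can spread antimicrobial resistance.",
    "doc-2": "Monte Carlo tree search prunes the space of game moves.",
}

def tokens(text):
    # Lowercased word multiset for crude overlap scoring.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_source(query, docs):
    # Score each document by shared term counts with the query and
    # return the best match together with its source identifier.
    q = tokens(query)
    scores = {doc_id: sum((q & tokens(text)).values())
              for doc_id, text in docs.items()}
    doc_id = max(scores, key=scores.get)
    return doc_id, docs[doc_id]

doc_id, snippet = best_source("how does antimicrobial resistance spread", corpus)
print(doc_id)  # doc-1
```

The point is the shape of the output: an answer snippet paired with a source ID the reader can follow, which is what makes the "check the exact wording" workflow possible.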

      • Not in a research context, where there are siloed lexicons all over the place, it doesn't.

        Semantic tokens are not sufficient to understand research, just as they wouldn't be if you got an uneducated human to do the work... every field (and in many cases, every lab) tends to use terms differently. Sometimes those differences can be subtle in ways that matter, and other times they can be completely overt in ways that matter.

        We are not talking about the level of detail someone demands from an encyclopedia in order t

  • by MpVpRb ( 1423381 ) on Wednesday February 19, 2025 @11:21AM (#65179319)

    We don't need AI "art" or "music"; artists and musicians already do this well.
    We need AI to do the things we can't do.

    • Robots can't pick strawberries, grapes, or apples as humans do. They also can't do the dishes, load a dishwasher, or clean a hotel bathroom as humans do. The success of AI in language skills and in playing finite games with clear rules (chess, Go, etc.) comes because the computational complexity of the space of all moves has been lowered to polynomial bounds by methods such as feed-forward neural networks, Monte Carlo tree search, etc. There are no methods, however, to improve the efficiency of

    • To get to a post-scarcity future, nothing must be scarce. If some holdout group demands payment for their oh-so-valuable services, that screws things up. They will vote against things like UBI because they don't need it. Artists and musicians can still create in a post-scarcity future; they just won't get paid for it. When you think about it, that is already the case: 1% of the famous musicians get 99% of the music profit, and 1% of the famous artists get 99% of the art profit.
