
AI Slashes Google's Code Migration Time By Half (theregister.com)

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.

  • by Seven Spirals ( 4924941 ) on Thursday January 16, 2025 @02:01PM (#65094159)
    I simply call bullshit. This is an AI vendor that says AI's juice is worth the squeeze. Talk about an unwelcome and untrustworthy source for "research", and from "don't be evil.... PSYCH!" Google, even.
    • by Junta ( 36770 ) on Thursday January 16, 2025 @02:09PM (#65094183)

      Eh, the cited tasks seem credibly within the reach of an LLM: very tedious, very obvious work. To be fair, it's the sort of scope that even non-AI approaches often handle, but anyone who has played with a migration tool and LLMs could believe there's a lot of low-hanging fruit in code migrations that doesn't really need human attention but sucks with traditional transition tools.

      Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to force big migrations on you; C and Golang are comparatively less likely to inflict stupid changes for no good reason).

      • Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language: it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour while I fight with it).
        • by Junta ( 36770 )

          Biggest concern I have with C (and Go) is that when the Java or Python attempt is throwing tracebacks like crazy and the C or Go version is going just fine, the C or Go version *should* be reporting errors like crazy. Lazy programmers not checking the return code/errno result in a program that seems pretty happy even as it compounds failure upon failure. Go has panic/recover, but that is so frowned upon that third-party code would never do it even if you would have liked it to.
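
          A minimal C sketch of the kind of check I mean (the file name is just illustrative):

              #include <errno.h>
              #include <stdio.h>
              #include <string.h>

              int main(void) {
                  FILE *f = fopen("config.txt", "r");
                  if (f == NULL) {
                      /* The lazy version skips this check and carries on with a
                         NULL stream, seeming "pretty happy" while it compounds
                         failure upon failure. */
                      fprintf(stderr, "fopen: %s\n", strerror(errno));
                      return 1;
                  }
                  char buf[256];
                  if (fgets(buf, sizeof buf, f) == NULL)
                      fprintf(stderr, "read failed or file empty\n");
                  fclose(f);
                  return 0;
              }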

        • I run Home Assistant for my home automation stuff, on a FreeBSD box I have. Home Assistant is on the bleeding edge of Python, and FreeBSD is the opposite.

          It really is annoying to deal with "modern" programmers who want to "refactor everything, all the time," breaking APIs with no regard.

          • That's one opinion. In my world (older systems and embedded systems) it's not annoying, it's unacceptable, and Python, Perl, and Ruby scripts are considered a pile of time-wasting garbage until someone else proves they work. Even then I'm still super skeptical. I've seen too many tracebacks from too many supposedly shrink-wrapped packages.
      • Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to force big migrations on you; C and Golang are comparatively less likely to inflict stupid changes for no good reason).

        Google often chooses to do very large migrations, in all of the languages the company uses. Google uses a build-from-head monorepo strategy for almost everything, which has a lot of benefits, but it also means that when the core libraries are improved the amount of client code that's impacted is enormous. Not being willing to make regular large-scale migrations would mean that the core libraries are not allowed to improve, which just motivates project teams to write their own variants, or add layers on top of them.

    • by masterz ( 143854 )

      Having used it, I have to say that some of the AI-suggested modifications are simply magic. I have typed '// This should' and it fills in exactly what I was thinking. Blocks of code are very often filled in automatically, and correctly.

      • by narcc ( 412956 )

        I've heard a lot of wild claims, but this is the first time I've seen anyone claim that AI was psychic...

        • by masterz ( 143854 )

          LLMs just predict the next likely thing. I think humans often do the same; we just don't know it.

    • What's noteworthy is that this was the same set of changes across multiple repos. The applicability of this solution to other problems is limited. If I know there's a bug in a system, it makes way more sense to dig into that code and find the one bug than to build an LLM-based tool in an attempt to find that same bug in all possible repos. That approach generally doesn't make sense.
  • Translation: We can now screw up twice as bad in half the time.

  • That's why almost every Google service has gone to shit lately. Yandex gives better results than Google now. I was in a Google Meet earlier today that had several problems. Don't get me wrong, I love when developers push out code that was made by someone or something else that they don't understand.
    • by Njovich ( 553857 )

      I was having the same thought; I've had shitty issues lately across a range of Google apps that I use.

  • by laughingskeptic ( 1004414 ) on Thursday January 16, 2025 @02:07PM (#65094177)
    Why, after acknowledging that the generic typing (int) made it hard to find all of the places needing changes, did they not `typedef int userId`, replace all pertinent int declarations, and THEN switch to `typedef long userId`? Instead they used their LLM to help change certain declarations from int to long.
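
    A sketch of that two-step typedef approach (the type and function names are hypothetical):

        #include <stdint.h>

        /* Step 1: introduce a dedicated type and mechanically replace the
           pertinent `int` declarations; behavior is unchanged. */
        typedef int32_t UserId;

        /* Hypothetical signatures now written against the alias. */
        UserId next_user_id(void);
        void   audit_user(UserId id);

        /* Step 2, later, is a one-line change:
               typedef int64_t UserId;
           and every declaration that uses UserId widens at once. */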
  • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

    • Right. It's like a car that's 95% full self-driving. Since the output isn't deterministic, the whole process needs human review, and mistakes are easier to miss.

      Doing this algorithmically would have been consistent, and where it failed, it would have failed in a predictable way.

    • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      Haven't you heard? The type of "find and replace" that every IDE has had in it for decades already is now referred to as AI. Anything that the machine does is AI. Booting the computer is handled by AI. Login is actually AI. Opening a Word document is AI. IT'S AI ALL THE WAY DOWN!

    • by drnb ( 2434720 ) on Thursday January 16, 2025 @02:40PM (#65094267)

      This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      From playing around with AI coding systems, I'd say AI is about at the level of a sophomore CS student who has had the data structures class but not yet the algorithms class, and who can copy code from the internet but may lack a real understanding of the implementation being copied. Which is still kind of impressive from the perspective of someone who studied AI at the grad school level.

      Copy/paste coders beware, AI is coming for you. :-)

      • by RobinH ( 124750 )

        That's what an LLM does. It outputs text that is statistically indistinguishable from the text it's been trained on. But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it. A real programmer is modelling the CPU and memory in their head (or at least a greatly simplified model of them) and thinking about what each step does to the state of the machine.

        Take a look at the real-time AI-generated Minecraft game. It's really trippy. It predicts the next frame.

      • AI is about at the level of a sophomore CS student

        More like the first answer that came from Stack Overflow, whether or not it was the highest-ranked or correct answer.

      • by kick6 ( 1081615 )
        Ah to be a CS sophomore in the "copy code from the internet" era.
    • I was thinking the same thing. My goodness, they've reinvented sed!

      In a well-designed codebase, this would have been a one-line change. The fact that they're bragging about using AI for this just shows that there are still entire departments at Google ignorant of basic software engineering practices.

      • From the FA: "Whether there is a long-term impact on quality remains to be seen."

        Just FYI Google: a software engineer can quantify the impact on quality using process controls. Just thought you might like to know.

  • LLMs are pretty good at low-risk, fairly consistent edits that can easily be mechanically verified as correct. With the size of Google's codebase and the requirement that your one "pull request" be up to date and verifiable, this seems like a case where it could be a win, and reduce your workload and the amount of pain to do it. I spend a lot of time on the cases where it doesn't work, and I call those out vigorously, but this seems like one where LLMs would help. There are many more complex cases where they don't.
  • How much did it slash the coding accuracy? They should write an AI to investigate this question.
  • Wait... code is a migratory species?

    It flies south for the winter?

  • Company with a bloated codebase says that they can now have a bigger bloated codebase because of AI.

    find . -type f -exec sed -i 's/old-pattern/new-pattern/g' {} +

  • Back in the early '90s, I migrated a system from 16 to 32 bit. I wrote scripts to do this (the code base was hundreds of thousands of lines of code).

    From memory, I'd say the 80% automation number sounds about right. I can easily see this being a decent use of so-called "AI" in development.
