
AI Slashes Google's Code Migration Time By Half (theregister.com)

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
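
For readers who want a concrete picture of the change, here is a minimal, hypothetical C++ sketch of what widening an ID from 32 to 64 bits looks like at a single call site; the names (User, NextUserId) and values are illustrative only and are not taken from Google's codebase or the paper.

#include <cstdint>
#include <iostream>

struct User {
    // Before the migration this field was std::int32_t; IDs past 2^31 - 1 overflow it.
    std::int64_t id;
};

std::int64_t NextUserId() {
    static std::int64_t counter = 3000000000LL;  // already past the int32_t range
    return counter++;
}

int main() {
    User u{NextUserId()};
    // A consumer that was not migrated would silently truncate the wide value:
    std::int32_t legacy = static_cast<std::int32_t>(u.id);
    std::cout << "wide: " << u.id << "  truncated: " << legacy << '\n';
}

The hard part, as the paper and the comments below note, is not the edit itself but finding every producer and consumer that has to change together.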

  • by Seven Spirals ( 4924941 ) on Thursday January 16, 2025 @02:01PM (#65094159)
    I simply call bullshit. This is an AI vendor that says AI's juice is worth the squeeze. Talk about an unwelcome and untrustworthy source for "research", and from "don't be evil.... PSYCH!" Google, even.
    • by Junta ( 36770 ) on Thursday January 16, 2025 @02:09PM (#65094183)

      Eh, the cited tasks seem credibly within the reach of an LLM: very tedious, very obvious tasks. The sort of scope that even non-AI approaches often handle, to be fair, but anyone who has played with a migration tool and LLMs could believe there's a lot of low-hanging fruit in code migrations that doesn't really need human attention but sucks with traditional transition tools.

      Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).

      • Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language; it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour I fight with it).
        • by Junta ( 36770 )

          Biggest concern I have with C (and Go) is that when the Java or Python attempt is throwing tracebacks like crazy and the C or Go is going just fine, the C or Go *should* be reporting errors like crazy. Lazy programmers not checking the return code/errno result in a program that seems pretty happy even as it compounds failure upon failure. Go has panic/recover, but that is so frowned upon that third-party code would never do it even if you'd have liked it to.
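
          A minimal sketch of what "checking the return code/errno" means for a C-style call, not from TFA; the "lazy" version simply omits the null check and carries on with a bad handle:

          #include <cerrno>
          #include <cstdio>
          #include <cstring>

          int main() {
              std::FILE* f = std::fopen("/nonexistent/config", "r");  // arbitrary path, expected to fail
              if (f == nullptr) {
                  // This is the branch the lazy version leaves out.
                  std::fprintf(stderr, "open failed: %s\n", std::strerror(errno));
                  return 1;
              }
              std::fclose(f);
              return 0;
          }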

        • I run home assistant for my home automation stuff. I run it on a FreeBSD box I have. Home Assistant is on the bleeding edge of python and FreeBSD is the opposite.

          it really is annoying to deal with "modern" programmers that want to "refactor everything, all the time", breaking APIs with no regard.

          • That's one opinion. In my world (older systems and embedded systems) it's not annoying, it's unacceptable, and Python, Perl, and Ruby scripts are considered to be a pile of time-wasting garbage until proven (by someone else) to work. Even then I'm still super skeptical. Seen too many tracebacks for too many supposed shrink-wrapped packages.
          • by Entrope ( 68843 )

            it really is annoying to deal with "modern" programmers that want to "refactor everything, all the time", breaking APIs with no regard.

            Also ones that have a hard-on for massive dependency trees. I wanted to build Jujutsu [github.com] on an Ubuntu 22.04 box, but that Ubuntu only has Rust 1.80 and some dependency in the jj stack already requires Rust 1.81.

            jj is nice because it's very focused on doing its job and being usable, with none of the stereotypical in-your-face "we use Rust" attitude. But its dependency set forces you -- I assume inadvertently -- to the bleeding edge.

        • Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language; it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour I fight with it).

          Having been a dairy farmer, the hind tit is the one you want on most cows. Fronts tend to have less milk. Rears tend to output more. While you have to wait for the milk to drop in some nervous milkers, the layout of the entire udder is such that the rear teats have larger "containerization" as it were. The front tend to be slightly smaller / raised above the rear. This educational moment brought to you by hundreds of early mornings and hot afternoons in the milk barns and parlors.

      • Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).

        Google often chooses to do very large migrations, in all of the languages the company uses. Google uses a build-from-head monorepo strategy for almost everything, which has a lot of benefits, but it also means that when the core libraries are improved the amount of client code that's impacted is enormous. Not being willing to make regular large-scale migrations would mean that the core libraries are not allowed to improve, which just motivates project teams to write their own variants, or add layers on top.

    • by masterz ( 143854 )

      Having used it, I have to say that some of the AI suggested modifications are simply magic. I have typed '// This should' and it fills in exactly what I was thinking. Blocks of code are very often filled in automatically, and correctly.

      • by narcc ( 412956 )

        I've heard a lot of wild claims, but this is the first time I've seen anyone claim that AI was psychic...

        • by masterz ( 143854 )

          LLMs just predict the next likely thing. I think humans often do the same, we just don't know it.

          • by narcc ( 412956 )

            You wrote: "I have typed '// This should' and it fills in exactly what I was thinking"

            I weep for the future...

    • What's noteworthy is that this was the same set of changes across multiple repos. The applicability of this solution for other problems is limited. If I know there's a bug in a system, it makes way more sense to me to dig into that code to find the one bug rather than create an LLM in an attempt to find that same bug in all possible repos. That approach generally doesn't make sense.
  • Translation: We can now screw up twice as bad in half the time.

  • That's why almost every Google service has gone to shit lately. Yandex gives better results than Google now. Was in a Google meet earlier today that had several problems. Don't get me wrong, I love when developers push code out that was made by someone or something else that they don't understand.
    • by Njovich ( 553857 )

      I was having the same thought; I've had shitty issues lately across a range of Google apps that I use.

  • by laughingskeptic ( 1004414 ) on Thursday January 16, 2025 @02:07PM (#65094177)
    Why after acknowledging that the generic typing (int) made finding all of the places needing changing hard ... did they not `typedef int userId` and replace all pertinent int declarations and THEN `typedef long userId`? Instead they used their LLM to help change certain declarations from int to long.
    • Who says they didn't do exactly what you suggest here? They had code where the types were int32_t (platform-independent, not the platform-dependent int type you suggest). Presumably they did a two-step process exactly as you describe. Figuring out which int32_t to change to userId (or userId_t) is the hard part. If you change things that you shouldn't, you'll likely break unrelated functionality. And it's hard to test this in small batches because you have to change the producers and consumers simultaneously.
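
      A minimal sketch of the two-step alias approach under discussion, with hypothetical names (user_id_t, LookupOwner) that are not from the paper: introduce the alias at the old width, move the ID-carrying declarations onto it, and only then flip the alias.

      #include <cstdint>
      #include <iostream>

      // Step 1: a dedicated alias at the old width; the hard, LLM-assisted part is
      // deciding which int32_t declarations are actually IDs and should use it.
      // Step 2 is then a one-line flip of this alias to std::int64_t.
      using user_id_t = std::int32_t;

      user_id_t LookupOwner(user_id_t requester) {
          return requester + 1;  // stand-in for a real lookup
      }

      int main() {
          std::cout << LookupOwner(42) << '\n';
      }
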
  • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

    • Right. It's like a car that's 95% full self-driving. Since the output isn't deterministic, the whole process needs human review and mistakes are easier to miss.

      Doing this algorithmically would have been consistent and where it fails it would fail in a predictable way.

    • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      Haven't you heard? The type of "find and replace" that every IDE has had in it for decades already is now referred to as AI. Anything that the machine does is AI. Booting the computer is handled by AI. Login is actually AI. Opening a Word document is AI. IT'S AI ALL THE WAY DOWN!

    • by drnb ( 2434720 ) on Thursday January 16, 2025 @02:40PM (#65094267)

      This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      From playing around with AI coding systems, AI seems to be about the level of a sophomore CS student who has had the data structures class, has not had the algorithms class yet, and can copy code from the internet, but may lack a real understanding of the code implementation it's copying. Which is still kind of impressive from the perspective of someone who studied AI at the grad school level.

      Copy/paste coders beware, AI is coming for you. :-)

      • by RobinH ( 124750 )

        That's what an LLM does. It outputs text that is statistically indistinguishable from the text it's been trained on. But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it. A real programmer is modelling the CPU and memory in their head (or at least a greatly simplified model of it) and thinking about what each step does to the state of the machine.

        Take a look at the real-time AI-generated Minecraft game. It's really trippy. It predicts the next frame.

        • by drnb ( 2434720 )

          But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it.

          I'd say there is some very simplistic reasoning in some of the AI coding systems. It seems to be able to combine a couple simple concepts well enough to "merge" the respective pieces of code it's seen.

        • Unless you're writing kernel- or driver-level code, I doubt many programmers are "modelling the CPU and memory in their head". Hardly any programmer writing code gives 2 shits about what's going on under the hood.
      • AI seems to be about the level of a sophomore CS student

        More like the first answer that came from stackoverflow whether or not it was the highest-ranked or correct answer.

      • by kick6 ( 1081615 )
        Ah to be a CS sophomore in the "copy code from the internet" era.
        • by drnb ( 2434720 )

          Ah to be a CS sophomore in the "copy code from the internet" era.

          We were so much more skilled having to open a Knuth book and translate his pseudo-assembly into compilable code. :-)

    • I was thinking the same thing, my goodness, they've reinvented sed!

      In a well designed codebase, this would have been a one-line change. The fact that they're bragging about using AI for this just shows that there are yet entire departments at Google ignorant of basic software engineering practices.

      • From the FA: "Whether there is a long-term impact on quality remains to be seen."

        Just FYI Google: a software engineer can quantify the impact on quality using process controls. Just thought you might like to know.

  • LLMs are pretty good at low risk fairly consistent edits that can easily be mechanically verified as correct. With the size of Google's codebase and the requirements that your one "pull request" be up to date and verifiable, this seems like a case where it could be a win, and reduce your workload and the amount of pain to do it. I spend a lot of time on the cases where it doesn't work, and I call those out vigorously, but this seems like one where LLMs would help. There are many more complex cases where t
  • How much did it slash the coding accuracy? They should write an AI to investigate this question.
  • Wait... code is a migratory species?

    It flies south for the winter?

  • Company with a bloated codebase says that they can now have a bigger bloated codebase because of AI.

    find . -type f -exec sed -i 's/old-pattern/new-pattern/g' {} +

    • by swsuehr ( 612400 )
      Came here to say this. The task they're bragging about sounds like a job for sed and awk. There's this absolute amnesia or maybe just complete cluelessness about how to actually use the tools. People just seem to want to write new tools rather than learn what's already available.
      • I take it you didn't read TFS. They had int32_t typed data that they wanted to make int64_t. You could make *everything* in the system int64_t but that would introduce breakage in unintended areas. The process seems to be that the programmers would use various techniques, yes including grep, to find entities that needed to change from int32_t to id_type and then ask the LLM to figure out where these values got passed. Yes you could write an IDE refactoring tool to do this (I think commercial offerings l
        • ask the LLM to figure out where these values got passed.

          Back in 2006, as a new hire I wrote a tool which would scan a codebase for identifiers and cross reference every usage of those. It was a fun little project - took about a week - and was the first application I'd written which actually used a substantial amount of memory - more than 700MB, IIRC.

          Once you have the dependency graph, it's a relatively simple matter to automate the textual changes. The clincher comes when you have aligned or byte-p
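
          As a rough illustration of that kind of cross-reference pass (not the GP's actual tool), a naive version just walks a source tree and records every file and line on which a chosen identifier appears; the identifier and starting path are placeholders, and a real tool would need a parser rather than a substring match:

          #include <filesystem>
          #include <fstream>
          #include <iostream>
          #include <string>

          int main() {
              const std::string ident = "user_id";  // placeholder identifier to cross-reference
              for (const auto& entry : std::filesystem::recursive_directory_iterator(".")) {
                  if (!entry.is_regular_file()) continue;
                  std::ifstream in(entry.path());
                  std::string line;
                  for (int lineno = 1; std::getline(in, line); ++lineno) {
                      if (line.find(ident) != std::string::npos) {
                          std::cout << entry.path().string() << ':' << lineno << ": " << line << '\n';
                      }
                  }
              }
          }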

  • Back in the early '90s, I migrated a system from 16 to 32 bit. I wrote scripts to do this (the code base was hundreds of thousands of lines of code).

    From memory, I'd say the 80% automation number sounds about right. I can easily see this being a decent use of so-called "AI" in development.
