AI Programming Technology

AI Slashes Google's Code Migration Time By Half (theregister.com) 68

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.


  • by Seven Spirals ( 4924941 ) on Thursday January 16, 2025 @02:01PM (#65094159)
    I simply call bullshit. This is an AI vendor that says AI's juice is worth the squeeze. Talk about an unwelcome and untrustworthy source for "research", and from "don't be evil.... PSYCH!" Google, even.
    • by Junta ( 36770 ) on Thursday January 16, 2025 @02:09PM (#65094183)

      Eh, the cited tasks seem credibly within the reach of an LLM: very tedious, very obvious tasks. To be fair, it's the sort of scope that even non-AI approaches often handle, but anyone who has played with a migration tool and LLMs could, I think, believe there's a lot of low-hanging fruit in code migrations that doesn't really need human attention but sucks with traditional transition tools.

      Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).

      • Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language; it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour I fight with it).
        • by Junta ( 36770 )

          Biggest concern I have with C (and Go) is that when the Java or Python attempt is throwing tracebacks like crazy and the C or Go is going just fine, the C or Go *should* be reporting errors like crazy. Lazy programmers not checking the return code/errno result in a program that seems pretty happy even as it compounds failure upon failure. Go has panic/recover, but that is so frowned upon that third-party code would never do it even if you would have liked it to.

          • My experience, as a single data point, is that the Java programmers were shoving things out faster, but not necessarily with the same quality. I would be upset at the C code with its overreliance on assert() to catch each and every case they didn't want to deal with, but the number of asserts was really small compared to the Java dumps. Believe me, the Java guys were also lazy and not checking return codes, and instead just throwing exceptions caught by the top-level loop, essentially just as stupid as doi

            • If you have an assert covering a case you are able to deal with, you need to alter the assert, and every layer between the assert and wherever you are able to actually deal with that state. If you throw an error that you're later able to deal with, you "just" put the handling in at the layer where you want to handle it. Having to alter each call site is a pain, but not unique. Frequently you need to do similar things in systems with Dependency Injection: when you add a new dependency you have to route
              • Yes, but a lot of devs didn't do that. Thus a non-trivial number of field crashes were due to them. Often the fix was to remove the assert and just carry on, because the code dealt with it just fine.

                It's C, so we can't throw errors. But given that Java throws errors and then the coders don't bother to catch the exceptions, they act just like the "crash now" asserts do. Only with a back trace that goes on and on...

        • I run Home Assistant for my home automation stuff, on a FreeBSD box I have. Home Assistant is on the bleeding edge of Python and FreeBSD is the opposite.

          it really is annoying to deal with "modern" programmers that want to "refactor everything, all the time", breaking APIs with no regard.

          • That's one opinion. In my world (older systems and embedded systems) it's not annoying, it's unacceptable and Python, Perl, and Ruby scripts are considered to be a pile of time wasting garbage until proved (by someone else) that they work. Even then I'm still super skeptical. Seen too many tracebacks for too many supposed shrink-wrapped packages.
            • I am not familiar with Ruby, but I use the embedded Python interpreter on customer systems that are locked down for SOX compliance. The client's auditors check screenshots of installed applications, file timestamps, object timestamps, etc. I'm able to use the pip "hack" for installing packages and I have never had a compatibility issue within a major version of Python, and only the rare exception when updating. It has always been as simple as copy over and extract. No installation necessary.

              Can you name some

              • Tracebacks with a vanilla install of pretty much 90% of anything installed with 'pip'. Complete functional breakdown of anything simple that's supposed to "just work", and no documentation or troubleshooting info after the things shit themselves. Nearly 100% chance that any pip item with dependencies will fail to install its dependencies, etc... The most recent Pyturd that exploded on me was cve-bin-tool.
          • by Entrope ( 68843 )

            it really is annoying to deal with "modern" programmers that want to "refactor everything, all the time", breaking APIs with no regard.

            Also ones that have a hard-on for massive dependency trees. I wanted to build Jujutsu [github.com] on an Ubuntu 22.04 box, but that Ubuntu only has Rust 1.80 and some dependency in the jj stack already requires Rust 1.81.

            jj is nice because it's very focused on doing its job and being usable, with none of the stereotypical in-your-face "we use Rust" attitude. But its dependency set forces you -- I assume inadvertently -- to the bleeding edge.

          • Refactoring is a great way to look busy while doing nothing.

        • Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language; it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour I fight with it).

          Having been a dairy farmer, the hind tit is the one you want on most cows. Fronts tend to have less milk. Rears tend to output more. While you have to wait for the milk to drop in some nervous milkers, the layout of the entire udder is such that the rear teats have larger "containerization" as it were. The front tend to be slightly smaller / raised above the rear. This educational moment brought to you by hundreds of early mornings and hot afternoons in the milk barns and parlors.

      • Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).

        Google often chooses to do very large migrations, in all of the languages the company uses. Google uses a build-from-head monorepo strategy for almost everything, which has a lot of benefits but it also means that when the core libraries are improved the amount of client code that's impacted is enormous. Not being willing to make regular large-scale migrations would mean that the core libraries are not allowed to improve, which just motivates project teams to write their own variants, or add layers on top,

      • Treat it like typical programming experience. 99% of the work is done quickly, and the remaining 1% takes forever to complete. What AI is going to do is get 25% of that 99% done a bit faster.

    • by masterz ( 143854 )

      Having used it, I have to say that some of the AI suggested modifications are simply magic. I have typed '// This should' and it fills in exactly what I was thinking. Blocks of code are very often filled in automatically, and correctly.

      • by narcc ( 412956 )

        I've heard a lot of wild claims, but this is the first time I've seen anyone claim that AI was psychic...

        • by masterz ( 143854 )

          LLMs just predict the next likely thing. I think humans often do the same, we just don't know it.

          • by narcc ( 412956 )

            You wrote: "I have typed '// This should' and it fills in exactly what I was thinking"

            I weep for the future...

          • Also, people are not reporting the times that they typed "// this should" and got the wrong suggested completion.

            (Reminds me of a Facebook image I saw yesterday: someone was praising God for working in small ways because she needed 2/3rds of a cup of milk and that was exactly the amount of milk left in the carton... That is, you see the small coincidence and think it's special, while ignoring all the times the coincidence didn't happen.)

    • What's noteworthy is that this was the same set of changes across multiple repos. The applicability of this solution to other problems is limited. If I know there's a bug in a system, it makes way more sense to me to dig into that code to find the one bug rather than prompt an LLM to hunt for that same bug in all possible repos. That approach generally doesn't make sense.
      • But a good search/replace using regular expressions can do a LOT of automated changes. Adapting that to allow something more complex than regular expressions is doable: do the search using the intermediate compiled code instead and you can catch a lot. Many IDEs do exactly that; they can search by parsing the code instead of just matching regexes lexically. But this is NOT the same as AI.

        Reminds me of talking with my AI prof way back, and talking about all the exciting cutting edge stuff that the imag

  • Translation: We can now screw up twice as bad in half the time.

  • That's why almost every Google service has gone to shit lately. Yandex gives better results than Google now. Was in a Google meet earlier today that had several problems. Don't get me wrong, I love when developers push code out that was made by someone or something else that they don't understand.
    • by Njovich ( 553857 )

      I was having the same thought, I've had shitty issues lately across a range of Google apps that I use.

  • by laughingskeptic ( 1004414 ) on Thursday January 16, 2025 @02:07PM (#65094177)
    Why after acknowledging that the generic typing (int) made finding all of the places needing changing hard ... did they not `typedef int userId` and replace all pertinent int declarations and THEN `typedef long userId`? Instead they used their LLM to help change certain declarations from int to long.
    • Who says they didn't do exactly what you suggest here? They had code where the types were int32_t (platform independent not the platform dependent int type you suggest). Presumably they did a two step process exactly as you describe. Figuring out which int32_t to change to userId (or userId_t) is the hard part. If you change things that you shouldn't, you'll likely break unrelated functionality. And it's hard to test this in small batches because you have to change the producers and consumers simultaneo
    • by Dan667 ( 564390 )
      I would love to see how close you could achieve the same results with a script of if / replace statements.
  • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

    • Right. It's like a car that's 95% full self-driving. Since the output isn't deterministic, the whole process needs human review and mistakes are easier to miss.

      Doing this algorithmically would have been consistent and where it fails it would fail in a predictable way.

    • This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      Haven't you heard? The type of "find and replace" that every IDE has had in it for decades already is now referred to as AI. Anything that the machine does is AI. Booting the computer is handled by AI. Login is actually AI. Opening a Word document is AI. IT'S AI ALL THE WAY DOWN!

      • I mean, that has been the case since Siri and Google Assistant were introduced. Every app went from using an algorithm to using "AI", no matter what it was really doing.

        Isn't an "AI" model just evaluating a function with billions of parameters, and the training generated the coefficients? It's just math all the way down.

    • by drnb ( 2434720 ) on Thursday January 16, 2025 @02:40PM (#65094267)

      This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.

      From playing around with AI coding systems, AI seems to be about the level of a sophomore CS student who has had the data structures class, has not had the algorithms class yet, and can copy code from the internet, but may lack a real understanding of the code it's copying. Which is still kind of impressive from the perspective of someone who studied AI at the grad school level.

      Copy/paste coders beware, AI is coming for you. :-)

      • by RobinH ( 124750 )

        That's what a LLM does. It outputs text that is statistically indistinguishable from the text it's been trained on. But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it. A real programmer is modelling the CPU and memory in their head (or at least a greatly simplified model of it) and thinking about what each step does to the state of the machine.

        Take a look at the real-time AI-generated Minecraft game. It's really trippy. It predicts the next fram

        • by drnb ( 2434720 )

          But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it.

          I'd say there is some very simplistic reasoning in some of the AI coding systems. It seems to be able to combine a couple simple concepts well enough to "merge" the respective pieces of code it's seen.

        • Unless you're coding kernel- or driver-level code, I suspect very few programmers are "modelling the CPU and memory in their head". No programmer writing application code gives 2 shits about what's going on under the hood.
          • by RobinH ( 124750 )
            That sounds like something an AI would say. ;)
          • by drnb ( 2434720 )

            Unless you're coding kernel- or driver-level code, I suspect very few programmers are "modelling the CPU and memory in their head". No programmer writing application code gives 2 shits about what's going on under the hood.

            You are mistaken. Having a very basic understanding of the underlying architecture lets you write better code, even when using a high level language. Compilers often benefit from "hints", structuring one's code and data with the architecture in mind. This includes application level code.

            That computer architecture class is rightfully a core class.

      • AI seems to be about the level of a sophomore CS student

        More like the first answer that came from stackoverflow whether or not it was the highest-ranked or correct answer.

      • by kick6 ( 1081615 )
        Ah to be a CS sophomore in the "copy code from the internet" era.
        • by drnb ( 2434720 )

          Ah to be a CS sophomore in the "copy code from the internet" era.

          We were so much more skilled having to open a Knuth book and translate his pseudo-assembly into compilable code. :-)

    • I was thinking the same thing, my goodness, they've reinvented sed!

      In a well-designed codebase, this would have been a one-line change. The fact that they're bragging about using AI for this just shows that there are still entire departments at Google ignorant of basic software engineering practices.

      • From the FA: "Whether there is a long-term impact on quality remains to be seen."

        Just FYI Google: a software engineer can quantify the impact on quality using process controls. Just thought you might like to know.

  • LLMs are pretty good at low risk fairly consistent edits that can easily be mechanically verified as correct. With the size of Google's codebase and the requirements that your one "pull request" be up to date and verifiable, this seems like a case where it could be a win, and reduce your workload and the amount of pain to do it. I spend a lot of time on the cases where it doesn't work, and I call those out vigorously, but this seems like one where LLMs would help. There are many more complex cases where t
  • How much did it slash the coding accuracy? They should write an AI to investigate this question.
  • Wait... code is a migratory species?

    It flies south for the winter?

  • Company with a bloated codebase says that they can now have a bigger bloated codebase because of AI.

    find . -type f -exec sed -i 's/old-pattern/new-pattern/g' {} +

    • by swsuehr ( 612400 )
      Came here to say this. The task they're bragging about sounds like a job for sed and awk. There's this absolute amnesia or maybe just complete cluelessness about how to actually use the tools. People just seem to want to write new tools rather than learn what's already available.
      • Re:News Flash! (Score:4, Interesting)

        by Ed Tice ( 3732157 ) on Thursday January 16, 2025 @06:17PM (#65094693)
        I take it you didn't read TFS. They had int32_t-typed data that they wanted to make int64_t. You could make *everything* in the system int64_t, but that would introduce breakage in unintended areas. The process seems to be that the programmers would use various techniques, yes including grep, to find entities that needed to change from int32_t to id_type, and then ask the LLM to figure out where those values got passed. Yes, you could write an IDE refactoring tool to do this (I think commercial offerings like C-scope would go a long way, but I'm not affiliated with that product). Instead of writing a refactoring tool (which also might not be perfect), the LLM could make the changes and generate test cases. It seems like a great use of an LLM, as it would be much faster than developing a refactoring tool that will be used once.
        • ask the LLM to figure out where these values got passed.

          Back in 2006, as a new hire I wrote a tool which would scan a codebase for identifiers and cross reference every usage of those. It was a fun little project - took about a week - and was the first application I'd written which actually used a substantial amount of memory - more than 700MB, IIRC.

          Once you have the dependency graph, it's a relatively simple matter to automate the textual changes. The clincher comes when you have aligned or byte-p

          • Except that an LLM can do exactly what you are talking about which is find the consumer on the other end of the network connection. It took you a week to do it in a way that is meaningless (won't cross network connections). The LLM could do it nearly instantaneously and has no trouble crossing network connections. And yet you are still claiming the inferior solution is better. I think that most of what we are saying around LLM is hype but the one good thing is that they don't advocate for inferior soluti
            • Even if the LLM could find the consumer on the other end of the network connection, would it even have legal authority to change the code there? What if the consumer is a third-party contractor? How would it know the difference? How would it even know who is connecting to its service, if the connections weren't logged?

              Emotional attachment has nothing to do with the fact that changing code may not even be possible in all of the necessary scenarios. It could be politically, legally, or financially risk

              • We aren't talking about non-Google consumers. We are talking about multi-tier applications within Google. SMH
        • by swsuehr ( 612400 )
          Meh. I worry about the accuracy and visibility of AI for this vs. using tried and true methods as the post stated with the sed command.
          • sed will help you find all int32_t in the program, sure. But if you are changing only about 10-15% of them, how do you identify which set to change? That's a nasty manual process. The LLM can help.
  • Back in the early '90s, I migrated a system from 16 to 32 bit. I wrote scripts to do this (this code base was hundreds of thousands of lines of code).

    From memory, I'd say the 80% automation number sounds about right. I can easily see this being a decent use of so-called "AI" in development.

  • by grolschie ( 610666 ) on Thursday January 16, 2025 @06:57PM (#65094749)
    Whose code was the LLM trained on, and under what license was said code released?
