AI Slashes Google's Code Migration Time By Half (theregister.com) 45
Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
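Most of the work in an ID-widening migration like this is mechanical: the type definition changes in one place, but every declaration, cast, format string, and serialized field that assumed 32 bits has to be found and checked. Below is a minimal sketch in Go of what one such change looks like; the names (UserID, Account, owner_id) are hypothetical, and the actual migrations were not necessarily in Go.

// Hypothetical sketch of a 32-bit -> 64-bit ID widening, for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

// Before the migration this was: type UserID int32
type UserID int64

type Account struct {
	Owner UserID `json:"owner_id"` // wire and storage formats need auditing too
}

func main() {
	a := Account{Owner: 5_000_000_000} // no longer fits in 32 bits
	b, _ := json.Marshal(a)
	fmt.Println(string(b))
}

The one-token type change is trivial; the 149,000 modified lines come from all the call sites and adjacent code that the change ripples into.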
The research looks extremely weak and thin. BS. (Score:4, Insightful)
Re:The research looks extremely weak and thin. BS. (Score:5, Insightful)
Eh, the cited tasks seem credibly within the reach of an LLM: very tedious, very obvious work. It's the sort of scope that even non-AI approaches often handle, to be fair, but anyone who has played with a migration tool and with LLMs could believe there's a lot of low-hanging fruit in code migrations that doesn't really need human attention but sucks with traditional transition tools.
Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).
Re: (Score:1)
Re: (Score:2)
The biggest concern I have with C (and Go) is that when the Java or Python version is throwing tracebacks like crazy and the C or Go version seems to be going just fine, the C or Go *should* be reporting errors like crazy. Lazy programmers not checking the return code/errno produce a program that seems pretty happy even as it compounds failure upon failure. Go has panic/recover, but it's so frowned upon that third-party code would never use it even if you would have liked it to.
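To make that concrete, here is a small Go sketch (illustrative only, with a made-up file path) of the pattern being criticized: the lazy call discards the error and the program looks perfectly happy, while the checked version reports the failure where it happens.

// Checked vs. silently ignored error returns in Go; illustrative sketch only.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Lazy version: the error is discarded, so a missing file just yields
	// zero bytes and the program "seems pretty happy".
	data, _ := os.ReadFile("/no/such/config")
	fmt.Printf("lazy read got %d bytes\n", len(data))

	// Careful version: the failure surfaces at the point it happens.
	data, err := os.ReadFile("/no/such/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		return
	}
	fmt.Printf("careful read got %d bytes\n", len(data))
}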
Re: The research looks extremely weak and thin. BS (Score:2)
I run Home Assistant for my home automation stuff, on a FreeBSD box I have. Home Assistant is on the bleeding edge of Python, and FreeBSD is the opposite.
It really is annoying to deal with "modern" programmers who want to "refactor everything, all the time", breaking APIs with no regard.
Re: (Score:1)
Re: (Score:2)
It really is annoying to deal with "modern" programmers who want to "refactor everything, all the time", breaking APIs with no regard.
Also ones that have a hard-on for massive dependency trees. I wanted to build Jujutsu [github.com] on an Ubuntu 22.04 box, but that Ubuntu only has Rust 1.80 and some dependency in the jj stack already requires Rust 1.81.
jj is nice because it's very focused on doing its job and being usable, with none of the stereotypical in-your-face "we use Rust" attitude. But its dependency set forces you -- I assume inadvertently -- to the bleeding edge.
Re: (Score:2)
Python and Ruby change their runtimes and their package formats constantly, and it's forever broken if you aren't on the bleeding edge. Other systems like JavaScript + NPM or Lua + Rocks do a lot better, but still suck hind tit for the most part. For me, C is the go-to language; it has a lot fewer features (no network package manager) but a lot more survivability and reliability (the shit will actually work without throwing Python tracebacks for the first half hour while I fight with it).
Having been a dairy farmer, I can tell you the hind tit is the one you want on most cows. The front teats tend to have less milk; the rears tend to output more. While you have to wait for the milk to drop in some nervous milkers, the layout of the entire udder is such that the rear teats have larger "containerization", as it were. The fronts tend to be slightly smaller and sit a bit higher than the rears. This educational moment brought to you by hundreds of early mornings and hot afternoons in the milk barns and parlors.
Re: (Score:2)
Of course, this is generally a self-inflicted problem from choosing more fickle ecosystems, but those fickle ecosystems have a lot of mindshare (Python and JavaScript are highly likely to cause you to do big migrations; C and Golang are comparatively less likely to inflict stupid changes for little reason).
Google often chooses to do very large migrations, in all of the languages the company uses. Google uses a build-from-head monorepo strategy for almost everything, which has a lot of benefits, but it also means that when the core libraries are improved, the amount of client code that's impacted is enormous. Not being willing to make regular large-scale migrations would mean that the core libraries are not allowed to improve, which just motivates project teams to write their own variants, or add layers on top.
Re: (Score:1)
Having used it, I have to say that some of the AI-suggested modifications are simply magic. I have typed '// This should' and it fills in exactly what I was thinking. Blocks of code are very often filled in automatically, and correctly.
Re: (Score:1)
I've heard a lot of wild claims, but this is the first time I've seen anyone claim that AI was psychic...
Re: (Score:1)
LLMs just predict the next likely thing. I think humans often do the same; we just don't know it.
Re: (Score:2)
You wrote: "I have typed '// This should' and it fills in exactly what I was thinking"
I weep for the future...
Limited Applicability (Score:2)
We've heard this song before (Score:1)
Translation: We can now screw up twice as bad in half the time.
That explains it.... (Score:1)
Re: (Score:2)
I was having the same thought; I've had shitty issues lately across a range of Google apps that I use.
But they didn't change the type (Score:5, Interesting)
Re: (Score:2)
Sounds trivial? (Score:2)
This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.
Re: (Score:2)
Right. It's like a car that's 95% full self-driving. Since the output isn't deterministic, the whole process needs human review and mistakes are easier to miss.
Doing this algorithmically would have been consistent, and where it failed, it would have failed in a predictable way.
Re: (Score:2)
This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.
Haven't you heard? The type of "find and replace" that every IDE has had in it for decades already is now referred to as AI. Anything that the machine does is AI. Booting the computer is handled by AI. Login is actually AI. Opening a Word document is AI. IT'S AI ALL THE WAY DOWN!
AI can manage 2nd year CS student stuff (Score:4, Interesting)
This sounds kind of trivial, in the sense that if the code is well written, the changes should also be very formulaic.
From playing around with AI coding systems, AI seems to be about the level of a sophomore CS student who has had the data structures class, hasn't had the algorithms class yet, and can copy code from the internet, but may lack a real understanding of the implementation it's copying. Which is still kind of impressive from the perspective of someone who studied AI at the grad school level.
:-)
Copy/paste coders beware, AI is coming for you.
Re: (Score:2)
That's what an LLM does. It outputs text that is statistically indistinguishable from the text it's been trained on. But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it. A real programmer is modelling the CPU and memory in their head (or at least a greatly simplified model of it) and thinking about what each step does to the state of the machine.
Take a look at the real-time AI-generated Minecraft game. It's really trippy. It predicts the next frame.
Re: (Score:2)
But it doesn't actually "know" or "understand" what the code is doing. It's not actually reasoning about it.
I'd say there is some very simplistic reasoning in some of the AI coding systems. It seems to be able to combine a couple of simple concepts well enough to "merge" the respective pieces of code it's seen.
Re: (Score:2)
Re: (Score:2)
AI seems to be about the level of a sophomore CS student
More like the first answer that came up on Stack Overflow, whether or not it was the highest-ranked or correct answer.
Re: (Score:2)
Re: (Score:2)
Ah, to be a CS sophomore in the "copy code from the internet" era.
We were so much more skilled having to open a Knuth book and translate his pseudo-assembly into compilable code. :-)
Re: (Score:2)
I was thinking the same thing. My goodness, they've reinvented sed!
In a well designed codebase, this would have been a one-line change. The fact that they're bragging about using AI for this just shows that there are still entire departments at Google ignorant of basic software engineering practices.
Re: (Score:2)
From the FA: "Whether there is a long-term impact on quality remains to be seen."
Just FYI, Google: a software engineer can quantify the impact on quality using process controls. Just thought you might like to know.
Seems feasible, well scoped, verifiable (Score:1)
Great idea (Score:2)
Wait-- (Score:2)
Wait... code is a migratory species?
It flies south for the winter?
Re: (Score:3)
But then of course, uh, African code bases are non-migratory.
Re: (Score:2)
So, what is the airspeed velocity of unladen code?
Re: (Score:2)
WHAAAAAAAaaaaaaaaaa
Re: (Score:2)
I don't know that!
[falls into the Gorge of Eternal Peril]
Re: (Score:2)
Now I will say Ni! to you until you get me a shrubbery.
News Flash! (Score:2)
Company with a bloated codebase says that they can now have a bigger bloated codebase because of AI.
find . -type f -exec sed -i 's/old-pattern/new-pattern/g' {} +
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
ask the LLM to figure out where these values got passed.
Back in 2006, as a new hire I wrote a tool that would scan a codebase for identifiers and cross-reference every usage of them. It was a fun little project - took about a week - and was the first application I'd written that actually used a substantial amount of memory - more than 700MB, IIRC.
Once you have the dependency graph, it's a relatively simple matter to automate the textual changes. The clincher comes when you have aligned or byte-packed structures to deal with.
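The comment doesn't say how the 2006 tool was built; as a rough sketch of the same idea, here is a miniature identifier cross-referencer in Go, using the standard library's parser to record every usage site of every identifier under a directory.

// Miniature identifier cross-referencer; a sketch of the idea, not the original tool.
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"io/fs"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "."
	if len(os.Args) > 1 {
		root = os.Args[1]
	}
	fset := token.NewFileSet()
	xref := map[string][]string{} // identifier -> "file:line" usage sites

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		file, perr := parser.ParseFile(fset, path, nil, 0)
		if perr != nil {
			return nil // skip files that don't parse
		}
		ast.Inspect(file, func(n ast.Node) bool {
			if id, ok := n.(*ast.Ident); ok {
				pos := fset.Position(id.Pos())
				xref[id.Name] = append(xref[id.Name], fmt.Sprintf("%s:%d", pos.Filename, pos.Line))
			}
			return true
		})
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk failed:", err)
	}

	for name, uses := range xref {
		fmt.Printf("%-24s %d usage(s)\n", name, len(uses))
	}
}

Once you have that usage map, driving mechanical rewrites from it is the easy part; as the comment notes, layout-sensitive code is where the automation stops being simple.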
Sounds Reasonable (Score:2)
Back in the early '90s, I migrated a system from 16 to 32 bit. I wrote scripts to do this (the code base was hundreds of thousands of lines of code).
From memory, I'd say the 80% automation number sounds about right. I can easily see this being a decent use of so-called "AI" in development.