Interview with Programmer Steve Yegge On the Future of AI Coding (sourceforge.net) 73
I had the opportunity to interview esteemed programmer Steve Yegge for the SourceForge Podcast to ask him all about AI-powered coding assistants and the future of programming. "We're moving from where you have to write the code to where the LLM will write the code and you're just having a conversation with it about the code," said Yegge. "That is much more accessible to people who are just getting into the industry."
Steve has nearly 30 years of programming experience working at Geoworks, Amazon, Google, Grab and now Sourcegraph, working to build out the Cody AI assistant platform. Here's his Wikipedia page. He's not shy about sharing his opinions or predictions for the industry, no matter how difficult it may be for some to hear. "I'm going to make the claim that ... line-oriented programming, which we've done for the last 40, 50 years, ... is going away. It is dying just like assembly language did, and it will be completely dead within five years."
You can watch the episode on YouTube and stream on all major podcast platforms. A transcription of the podcast is available here.
I'd argue same problem exists... (Score:3)
Can't have an effective conversation without precise terminology. And that same terminology was what held back inexperienced people from getting what they wanted from previous internet searches (against forum posts, etc).
Knowing what can easily be done, and what to call it, are both very important to any software changes... with or without AI. Best case, AI will eventually be able to translate paragraphs of talking around something into the actual thing, but that's just doing the PM's work too (translating requirements from the business owner).
Re: (Score:3)
It's neat that AI will be able to code in five years, since the AI we have now sure can't.
Interview to self-promote his product (Score:3)
/. should put these near-advertorial interviews into their own category.
He has a product to help developers use AI for development
He is giving an interview about using AI for development
Self promotion.
Re: (Score:3)
Yup. I've been hearing the line:
"I'm going to make the claim that ... line-oriented programming, which we've done for the last 40, 50 years, ... is going away.
in one form or another every few years since the 1980s. Anyone else here old enough to remember The Last One, the last programming tool you'd supposedly ever need, released in 1981 and also using "AI" to do this? That's about as far back as I go, but I'm pretty sure there'll have been similar claims in the 1960s and possibly even 1950s depending on how you class the different types of automatic programming.
Would I be correct in thinking that whoever made this pronouncement is someo
Re:Interview to self-promote his product (Score:5, Insightful)
These approaches all have the same basic flaw: If you're the kind of human who has trouble thinking clearly and rationally, then you'll never be able to specify a solution to a problem that you can't even grasp properly. It doesn't matter if you're specifying the solution using C++, Python, or a natural language attached to a coding library via AI. The problem lies with you.
Re: (Score:2)
COBOL was the first (AFAIK) language designed for non-programming business majors,
That is a myth.
COBOL is a pretty ordinary programming language, with many, many interesting mainframe features.
A business person, or a person who cannot program, cannot use it. It is not Excel. It is a real programming language aimed at mainframes.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:1)
Even Excel requires some degree of technical proficiency to do anything even a little abstract. How many people who "know Excel" even know how to use a lookup function or conditional sums or counts? I have a very simple test that doesn't even require knowledge of how to do these things, and it's amazing how many people who claim to know how to use Excel can't even make a decent shopping list that lets you put in the quantities and prices for various line items and get a total price out.
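For the curious, the whole test boils down to this much logic - shown here as a Python sketch rather than an Excel formula, with the items and prices made up:

# Hypothetical shopping list: (item, quantity, unit price)
shopping = [
    ("milk", 2, 1.20),
    ("bread", 1, 2.50),
    ("eggs", 12, 0.25),
]

# Line total is quantity * unit price; grand total is the sum of the line totals.
total = sum(qty * price for _, qty, price in shopping)
print(f"Total: {total:.2f}")

In Excel terms that's one multiplication per row plus a SUM over the column, and people still get stuck.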
Re: (Score:2)
And those guys would not be able to program anything in any language.
If you have to do obscure C stuff on an embedded device, you have something like a linker definition file.
It tells the linker at what addresses you want read only memory, where read write memory should be, and where the code goes.
Those files are not complicated. But 90% of the time you are shielded away from them. If you write C/C++ for a desktop computer, the linker uses reasonable defaults.
COBOL was one of the first languages where a lit
Re: (Score:2)
When COBOL was invented, programming was largely done in machine language on punched cards. LISP, FORTRAN and COBOL were a step up in abstraction and user friendliness because it was felt that ordinary people shouldn't have to know how to program the hardware directly (like the professional programmers of the time did). BASIC too was an attempt at natural language communication with computers, and it was wildly successful.
TBH,I can to
Re: (Score:2)
Those languages are pretty far away from natural.
But you could look at AppleScript or HyperTalk.
Re: (Score:2)
Re: (Score:2)
There are two books, volume 1 and 2.
"History of Programing Languages"
Often abbreviated as HOPL.
You might be interested in them. I only have HOPL 1, though.
It is chapters of about 10 pages or so, each covering one language,
and contains languages most people have never heard of, like SNOBOL or Icon.
I used 'The Last One' (Score:2)
Re: (Score:2)
Re: (Score:2)
How much have you actually used Claude 3.5 Sonnet? Because I find it usually does a great job.
We're not at the stage where a human programmer can be eliminated. But we are at the stage where the human can hand increasing portions of the task off.
To me what will be interesting is... okay, so we're having the AI write code... but they're still writing it in a higher-level language. Surely eventually though we'll just have them write it straight to bytecode, which should allow for way beyond the optimizat
Re: (Score:2)
And just to be clear.... since the human cannot be eliminated at present, the human has to know how to code.
Indeed, with a tool like Cursor.sh, the human's main job is reviewing (and sometimes modifying) diffs from the AI. You can't do that if you don't know what you're doing.
Re: (Score:2)
But the end of this trail is surely compiling directly to bytecode.
I think AI can eventually code but outputting directly bytecode does not make sense. The code is a communication channel between an AI and a developer. This channel must contain proper variable names and be sufficiently high level to be effective. Bytecode is low level and often strips the names.
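To make that concrete, compare a trivial function as source with its bytecode (a quick Python sketch; the function name is made up and the exact disassembly varies by interpreter version):

import dis

def discount(price, is_member):
    # Readable channel: intent is obvious from the names and one expression.
    return price * 0.9 if is_member else price

# "Bytecode channel": stack-machine opcodes (LOAD_FAST and friends)
# with none of the high-level intent left to review or discuss.
dis.dis(discount)

Reviewing a diff at that level, let alone discussing it with an AI, would be far harder than reading the three lines of source.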
Perhaps, by bytecode, you only meant that it internally works with special tokens for programing language keywords and is without any optimization. Otherwise I doubt the current LLM based AIs can do proper translat
Re:I'd argue same problem exists... (Score:5, Insightful)
Can't have an effective conversation without precise terminology. And that same terminology was what held back inexperienced people from getting what they wanted from previous internet searches (against forum posts, etc).
I'd go much, much deeper than that.
Computer interactions (in order to create something using them as tools, not just using them for entertainment) require discipline. This applies to most, if not all types of work, from drawing something to creating the next operating system. Anyone can "use", say, MS Paint, at its basic level. Click this, drag that, and you get a very basic drawing. But if you want to be proficient with it, you need to develop certain skills.
Coding with help of LLMs is no different. Yes, it can be successful, even in its current state, but there are both prerequisites to achieving that, as well as problems.
Issue: using natural language is inherently vague. "Make me the best game in the world" just doesn't work.
Solution/Prerequisite: You need to know how to break a request into its tiniest parts, as well as how to put them back together. Some people can't change their remote batteries, FFS. Also, you need to understand and be experienced in proper prompting, a task made more difficult by the fact that each different LLM understands the same prompting differently. This is not necessarily visible for very simple tasks, but as the task becomes more complex, prompting needs to be more and more specific and tailored to that specific LLM.
Issue: It can be much more difficult to gain coding skills if you use an LLM to help you code. I, paradoxically, find it easier, because I can quickly iterate, experiment and test the validity of small pieces of code, especially for languages I don't know well, or at all. But that ability goes back to the previous entry: I can analyze, check, verify and rephrase, simplify and extrapolate, etc. Most people can't. Just listen to people order stuff at McDonalds; sometimes it's a pain to hear them bumble and struggle with something so simple.
Solution/Prerequisite: "git gud" - and that takes time and a ton of effort.
Re: (Score:3)
Re: (Score:3)
Jevons paradox. If software development gets cheaper because of fewer man-hours per project, there will be more of it, thus increasing man-hours back up (to not-as-far-reduced, about-the-same, or even more than you started with, depending on the size of the stimulus effect).
And if you have to hire someone to do work, you're always going to hire someone who is the most experienced at said work. What a programmer's job is may be different; you're still going to hire a programmer over, say, a janitor,
Re: (Score:2)
ChatGPT >
bro please
respond in valid json format
without errors and make super sure the syntax is extra correct
i'm begging you
and please, please, pretty please, don't make up answers
my career depends on it bro
Certainly, here is an app to produce valid JSON format:
(code craps out a valid but useless JSON file, overwriting the production database, and the prompt writer isn't qualified to notice)
just like assembly language did? (Score:4, Interesting)
If this guy thinks that, he's not really a programmer. Also, "assembly language" did not go away. He's obviously never worked in the 80% of programming jobs out there.
But hey, who can tell the difference when "people who are just getting into the industry" call themselves programmers but don't know programming and will be incapable of knowing whether AI-generated code works or not.
What a society we live in, where the most important thing is to repeat lies over and over.
Re: just like assembly language did? (Score:5, Interesting)
Re: (Score:3)
the link was in the abstract.
he's an opinionated sw engineer and aspiring influencer, now "head of engineering" for a company that sells "ai assistants", explaining that "ai assistants" are the future, on a platform owned by slashdot media. what else?
now please would you look at this blue light ...
Re: (Score:2)
Also, "assembly language" did not go away.
*rolls eyes*
Re: just like assembly language did? (Score:5, Insightful)
Yep, what an idiot. The other thing that happens is that most people relying on AI to code simple things never acquire the expertise to learn how to do harder things. And harder things are what LLMs _cannot_ help you with anymore. Coding needs practice to get good. AI prevents that practice. Great idea.
Re: (Score:2)
Even in the summary we have a crock of shit. (Score:2)
No. We're not. That's not a thing LLMs can actually do. They seem like they're having a conversation because each piece of text seems like it should flow from the previous text, but it's not actually a conversation in the sense that it has an idea, and your replies affect and change that idea. There's no dialectic to it. It's just responsive to a rollin
Re: Even in the summary we have a crock of shit. (Score:2)
Ah, we've found someone who has no idea how LLMs actually work, here to condescend to me.
Let me stress that you are completely full of shit.
Re: (Score:2)
Let's hear your rundown. This should be amusing. Start with neurons, both from a technical basis, and what they are in effect doing during inference. Then discuss the impact of successive layers, including the processing of superpositions of "questions" / "detectors". Then move on to latent spaces, starting with e.g. word2vec and GloVe. What is the origin of latents, what they in effect represent, what you can do with them, etc. Then move o
Re: (Score:2)
The fact that you're bringing up attention, the way to provide feedback during training, in the context of running an already trained model does not bode well for your understanding.
In fact, most of what you've talked about here isn't particularly relevant. For example, subcomponents like question detection? Doesn't affect the core mechanics at work here, and is in fact, a technique to better hide the fact that the AI is 100% stateful.
So, latents. They're actually relevant, they're the numeric values in
Re: (Score:2)
Oh dear YHVH, we're off to a roaring start [medium.com] ;)
You'll need to understand latents before you try to fix your understanding of what the attention mechanism is. And just in case I haven't been abundantly clear already: no, the attention mechanism is NOT a "way to provide feedback during training". It's a core component of the inner
Re: (Score:2)
Indeed. I should add that the idea of "instead of coding, have a conversation with the computer" is _very_ old. It was already an old idea when I studied CS 35 years ago. This is the proverbial pipe-dream that people desire but never get in reality.
Obviously some no-honor scum will try to sell you something that looks like it but essentially does not deliver.
Software companies should be scared (Score:2)
Imagine if everyone could write their own software. The days of this stupid shit where EULAs and regulation stop people from protecting themselves against anti-competitive software companies would be over. All the shit they do, like phone-home routines coupled with stuff like pinned certificates that prevent you from seeing what traffic companies are sending about you from software that is on your machine. Imagine when EULAs that are restrictive and prevent you from doing the things you want to do are all irr
Re: (Score:2)
Hahahaha, no. We are about as far removed from that as ever. I.e. "not in the next few decades and maybe never".
Esteemed programmer! (Score:2)
I checked his LinkedIn account, he's one of my 3rd-degree connections. Hot dog! Maybe one day I'll graduate to being a 2nd-degree connection!
That last mile is H A R D! (Score:3)
We can all see the vision of being able to just "talk to" the AI when creating code. It's tantalizingly close, we can almost taste it. I mean, if they'd just fix those pesky little glitches, where it pastes a bunch of HTML tags in the middle of my javascript, or adds a new function definition inside of the function I'm working on. Then we'd be there, right?
Not so fast. Getting AI assistants to the point that they can be a big help with productivity is great, and that's already happening, but you've still got to know what you're doing. Getting to where you can *trust* the AI to do what you meant for it to do... that's going to be about as hard as... getting self-driving cars to stop colliding with pedestrians.
Re: (Score:2)
I'm not sure what language the latter statement is in reference to, but it's perfectly normal in many languages, and often recommended, to use subfunctions.
Also, what's your context? Are you using some sort of IDE when you talk about "it pastes a bunch of HTML tags in the middle of my javascript"? Are you talking about a code merging issue? IMHO, we need to get past the use of mergers (currently, the model generates code, and then a
Re: (Score:2)
You're right, I didn't include some of the necessary context.
The primary tool I've used is GitHub Copilot in Visual Studio. In VS, you right-click on a location in your code and ask it to do things at that point.
Yes, I know that functions within functions can be legal or even recommended (though I'd debate whether it's really a bonus). But when you're editing a function that's called, say, "GetHash()" and you ask it to write a hash function based on a certain encryption strategy, and it creates a new GetHas
Re: (Score:2)
GitHub Copilot is BTW way behind other tools like Cursor. And the issues you're seeing sound more like merger issues than generator issues. Look at the raw generated code to be sure.
Re: (Score:2)
When I'm in the middle of a javascript function, and I ask it to add code to change the color of a field on certain conditions, and it spits out a bunch of HTML tags instead of javascript code, that's not just a "merge" issue.
Maybe it is behind other tools. But as with self-driving cars, those edge cases are going to be a problem for Cursor too.
Re: (Score:2)
Yeah, it very well can be. Or it can be the IDE sending the wrong info to the model (there's a ton of prompt creation work that goes on behind the scenes, sometimes even involving extra models). That is not "normal behavior" for the models themselves. Trust me, I use cursor with Claude extensively every day and have been for like a month. It doesn't do that.
Re: (Score:2)
So, with Cursor and Claude, can you write *all* your code using AI prompts? Or do you still have to type some code yourself? What kind of code do you have to hand-craft?
I'm not trying to pick a fight, I'm honestly curious, because if Cursor is *that* good, I just might switch.
Re: (Score:2)
Yeah, by and large I just have Claude do it. I mean, I still review the diffs, and if there's any merge errors or I don't like its solution I'll change them, or if it's really struggling to do a task (not common), I'll do it myself, but even in that case, I rarely do it "alone" - like, I'll still have it instrument the code with debugging statements (or remove them when I'm done), have it do simpler subtasks, etc.
There's a two week free trial if you want to try Cursor. I've used Github Copilot, Cody, and
Re: (Score:2)
Thanks, I just might give it a shot.
The real question... (Score:5, Insightful)
In my 30+ years of coding experience I found that debugging someone else's code normally takes longer than it would if I just wrote it and debugged it myself.
Re: (Score:2)
Re: (Score:2)
Perhaps you are not good at debugging ...
On the other hand: no one is probably stopping you from rewriting it.
After all: you have version control, right?
Re: (Score:2)
In my 30+ years of coding experience I found that debugging someone else's code normally takes longer than it would if I just wrote it and debugged it myself.
That "someone else's code" must be very terrible :) Get better teammates :D
But I don't work that way (Score:2)
I don't do this "line oriented coding". I _design_ things, as algorithms, and then I translate that to code. Having an AI would not save time, because to tell the AI what I want, I would have to describe the algorithm. But that's when I am normally almost done anyway.
I can only see it being useful if, say, the algorithm contained some steps that can be summarized, such as, "Extract data from excel into a set of MySQL tables". If an AI can do that, it would save me time.
One thing I _don't_ do is code by tria
Re: (Score:2)
Have you actually met many humans? :D
More seriously put, this won't affect you much, but it may well replace a lot of lower-tier code monkeys.
I've seen a similar shift over the years in the localization industry where I work. Increasing automation has put more pressure on the lower end of the job market. We don't need Bumbling Bob and Crappy Carl as freelance translators anymore, when Google or DeepL have comparable (or better!) error rates. Bob and Carl are tools, and not very goo
Re: (Score:2)
Well said.
Re: (Score:2)
"Extract data from excel into a set of MySQL tables"
Apparently it can do that. And probably lots of other menial jobs if you know how to ask it nicely.
https://medium.com/@sayaleedam... [medium.com]
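For comparison, the non-AI version of that particular chore is already pretty short. A rough Python sketch - the file name, connection string and table naming are all invented, and it assumes pandas, SQLAlchemy, an Excel reader and a MySQL driver are installed:

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection details.
engine = create_engine("mysql+pymysql://user:password@localhost/mydb")

# sheet_name=None reads every sheet into a dict of DataFrames.
sheets = pd.read_excel("report.xlsx", sheet_name=None)

# One MySQL table per sheet, named after the sheet.
for name, df in sheets.items():
    df.to_sql(name.lower(), engine, if_exists="replace", index=False)

Where an LLM might actually earn its keep is the cleanup and schema decisions, not typing these ten lines.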
Re: (Score:2)
One thing I _don't_ do is code by trial and error, which seems to be the norm today - designing as you code. But that's just stupid.
Yep. Unless you do very simple "business code" only, the actual coding is a minor part of the work. The major part is architecture and design. I guess there is a market for simplistic code, but it is not one that any real coders are to be found in.
My take is that all this "AI coder" and "everybody should learn to code" nonsense is just a step on the way to idiocracy.
Re: (Score:2)
Re: (Score:2)
But that's exactly what it's about**. You design. It translates to code. It saves you the translating to code. You however get to stay focused on higher-level design and not have to focus on the small stuff. With Claude you don't even have to be specific. You can say things like, "So, okay
Re: (Score:2)
Re: (Score:2)
Nah, Claude handles math notation just fine. You can also paste images in Cursor, though I've never personally tried incorporating that into a workflow.
Re: (Score:2)
Re: (Score:2)
I actually have no idea what to ask an AI about what to code for me.
I mean: I think/write a sentence. What should I ask an AI to do, to write that for me?
As soon as I have formulated what I want, I already have written it in my IDE.
Re: (Score:2)
"Completely dead"? No, only "mostly dead". (Score:2)
Hmm. Folks still purchase vinyl records. Paper books. Buggy whips, even.
Very little ever disappears completely.
That said, changes in the job market are inevitable. Everything changes anyway. We should all plan accordingly.
Re: (Score:2)
There are two kinds of people in the world: people who can listen to somebody - somebody they might even be predisposed to think of as stupid - say something like "buggy whips are completely dead" and nod their head in agreement like a sane, well-adjusted adult capable of inferring context and implied qualifiers, and people who just can't help themselves and go "aaaaakkkkshhuaallly" ...
The latter kind of people are super fucking annoying.
Re: (Score:2)
On the other hand, this prediction is just bullshit. And it is not even the first time it has been made. With a somewhat variable time-horizon (usually 5...10 years) I must have heard it regularly over the last 35 years since I got my CS degree. Apparently, it was also made well before. Never panned out, will not pan out this time. But that guy wants to sell something, so he thinks blatantly lying is acceptable.
Re: (Score:2)
Somehow that brings this other line to mind. :)
Another Death Prediction (Score:3)
Wait, what? (Score:2)
Not only is SourceForge still a thing, but enough of a thing to have a podcast?
this guy is an obvious joke (Score:2)
and I have a bridge on the moon to sell you
A bit meta.... (Score:1)
The podcast episode from SourceForge features host Beau Hamilton interviewing Steve Yegge, Head of Engineering at Sourcegraph, on the future of programming and the evolution of coding tools. Yegge, with a rich background in companies like Amazon and Google, shares his journey and insights on modern programming, highlighting Sourcegraph's shift to an AI-driven company. He discusses Sourcegraph’s products, like the AI coding assistant Cody and C