AI Programming

Millions of Coders Are Now Using AI Assistants. How Will That Change Software? (technologyreview.com)

AI coding assistants are here to stay -- but just how big a difference they make is still unclear. From a report: Thomas Dohmke, GitHub's CEO: "You've got a lot of tabs open, you're planning a vacation, maybe you're reading the news. At last you copy the text you need and go back to your code, but it's 20 minutes later and you lost the flow." The key idea behind Copilot and other programs like it, sometimes called code assistants, is to put the information that programmers need right next to the code they are writing.

The tool tracks the code and comments (descriptions or notes written in natural language) in the file that a programmer is working on, as well as other files that it links to or that have been edited in the same project, and sends all this text to the large language model behind Copilot as a prompt. (GitHub co-developed Copilot's model, called Codex, with OpenAI. It is a large language model fine-tuned on code.) Copilot then predicts what the programmer is trying to do and suggests code to do it. This round trip between code and Codex happens multiple times a second, the prompt updating as the programmer types. At any moment, the programmer can accept what Copilot suggests by hitting the tab key, or ignore it and carry on typing. The tab button seems to get hit a lot. A study of almost a million Copilot users published by GitHub and the consulting firm Keystone Strategy in June -- a year after the tool's general release -- found that programmers accepted on average around 30% of its suggestions, according to GitHub's user data.

[...] Copilot has changed the basic skills of coding. As with ChatGPT or image makers like Stable Diffusion, the tool's output is often not exactly what's wanted -- but it can be close. "Maybe it's correct, maybe it's not -- but it's a good start," says Arghavan Moradi Dakhel, a researcher at Polytechnique Montreal in Canada who studies the use of machine-learning tools in software development. Programming becomes prompting: rather than coming up with code from scratch, the work involves tweaking half-formed code and nudging a large language model to produce something more on point.

This discussion has been archived. No new comments can be posted.

  • Even shittier (Score:4, Insightful)

    by parityshrimp ( 6342140 ) on Wednesday December 06, 2023 @02:33PM (#64061221)

    It will make software even shittier.

    Compare to the following counterfactual: Millions of software engineers are now using formal methods. How will that change software?

    • Re:Even shittier (Score:5, Insightful)

      by fahrbot-bot ( 874524 ) on Wednesday December 06, 2023 @02:47PM (#64061299)

      Millions of software engineers are now using formal methods. How will that change software?

      I tried wearing a tuxedo while programming, but it didn't help my coding much. :-)

      To be fair to the actual point, AI assistants *might* help weaker / less experienced programmers, but probably less and less for stronger / more experienced ones, and less as they gain experience. Personally, I probably wouldn't use one, as I like thinking things through and problem solving. Also, I imagine debugging AI-assistant-generated code would be tedious, unless it self-documents the rationale.

      • Re:Even shittier (Score:5, Interesting)

        by Darinbob ( 1142669 ) on Wednesday December 06, 2023 @03:15PM (#64061429)

        It might help weaker programmers stay below the radar and hold onto their jobs longer before the manager realizes that the skills are lacking.

        Already I see too much copy-pasting of terrible code, because the programmers don't know how to program and are using existing code as templates to follow. But they follow bad code as their guidelines. E.g., there's a lot of code in the codebase I'm in that does "goto done;" and it's done because they're trying to avoid multiple returns from a function; so... engage in one strong taboo to avoid a weak taboo. Or possibly it's done because 1 time out of 10 you need cleanup. Whatever the reasons, I see people copy it. They take the code base into a new project and suddenly the bad style shows up again, and in a code review I say "don't do it this way" and the response is always "but the existing code does that!"

        All programmers need to stop and slow down, pay attention to what they're doing. That's for good or bad programmers alike. Relying on AI to speed up coding isn't going to increase the quality. If they don't take the time to review their own code, will they take time to review the AI supplied code?

        • ... there's a lot of code in the codebase I'm in that does "goto done;"

          How bad could one little goto [xkcd.com] be? :-)

          ... and it's done because they're trying to avoid multiple returns from a function ...

          I've seen (and written) functions structured with and w/o multiple returns and either can be clear and good if done well. It mostly depends on what's going on. Sometimes short-circuiting is cleaner than nesting, sometimes it's the reverse. Absolute rules are usually a bad idea.

          • True, but what I see is pointless code, something that makes the code uglier and difficult to understand.
            I am not making this up, but I see code like this, clearly a templated copy:

            int func(...)
            {
                some boring code here;

                if (!success) {
                    status = -1;
                    goto err;
                }
                return 0;

            err:
                return status;
            }

            • Well, that example is admittedly pretty silly, but having a labeled cleanup and bail is common practice in all code that doesn't want to get assfucked by a missed cleanup before return.

              I'm not sure when it became a common perception that having a goto err; is some kind of taboo or evil, but it comes from ignorance, and avoiding it has caused far more problems than the goto itself ever has.
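
              For reference, here is a minimal sketch (not from the thread, purely illustrative) of where the labeled-cleanup pattern actually pays off: several acquisitions, one unwind path, each goto skipping only the cleanup that hasn't been set up yet.

                  #include <stdio.h>
                  #include <stdlib.h>

                  /* Hypothetical example: read the start of a file into a temporary buffer. */
                  int process_file(const char *path)
                  {
                      int status = -1;
                      FILE *f = NULL;
                      char *buf = NULL;

                      f = fopen(path, "rb");
                      if (!f)
                          goto err;                /* nothing acquired yet */

                      buf = malloc(4096);
                      if (!buf)
                          goto err;                /* must still close the file */

                      if (fread(buf, 1, 4096, f) == 0)
                          goto err;                /* must free the buffer and close the file */

                      status = 0;                  /* success */
                  err:
                      free(buf);                   /* free(NULL) is a no-op */
                      if (f)
                          fclose(f);
                      return status;
                  }

              Compared with the templated copy quoted above, the goto here is doing real work: without it you either nest several levels deep or repeat the cleanup before every return.
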
              • Right, it's a common practice. However my original point was with programmers who just copy code without understanding the code first. This is sort of what students did when I taught programming, and it's strange to see people with a few decades of experience still programming that way.

      • I've been using co-pilot for a few months, and you don't really use it for any tough problems. What it does really well though is churning out the boiler plate stuff while you're coding - making it significantly faster than doing it all yourself.
        • I've been using co-pilot for a few months, and you don't really use it for any tough problems. What it does really well though is churning out the boiler plate stuff while you're coding - making it significantly faster than doing it all yourself.

          Granted, although I've been around for a while and can often copy from my older, already debugged code for a lot of that. :-)

          • although I've been around for a while and can often copy from my older, already debugged code for a lot of that. :-)

            As can I. But that takes a lot more effort to go dig it up than the AI recognizing what you're doing and just being able to hit the tab key to get instant boiler plate that's 90% right.

      • by NFN_NLN ( 633283 )

        Technically, lint works against existing code. There's no reason an advanced AI lint couldn't identify potential bugs or holes.
        Also, it is hard to keep up to date on new libraries / modules, so AI could help there. Or, going a step further, if AI looked at code and created libraries that would make the majority of coders' lives easier, that might also help.

        • by Anonymous Coward

          What if it could tell you that there are unpatched bugs in the libraries? Or that you're 18 revisions behind and should speak to your repo manager?

    • It will make software even shittier.

      Compare to the following counterfactual: Millions of software engineers are now using formal methods. How will that change software?

      Not only will it make software shittier and more likely to just be copies of copies of copies of copies of .... other code, it will also slowly be used to train the AI to be *slightly* better than any individual coder, without offering any truly creative solutions to anything. This headline could just as well read, "Millions of coders are training their future replacements."

      • Not only will it make software shittier and more likely to just be copies of copies of copies of copies of .... other code ...

        I'm imagining it like copying video tape, each subsequent copy generation is worse than the previous ...

        [Noting that (usually home) video tape systems often don't copy every frame of video, so subsequent copies have fewer and fewer good frames...]

        [ For you youngsters, "video tape" is ... :-) ]

        • by narcc ( 412956 )

          The recently coined term for this is 'model collapse', though the phenomenon was well-known before that paper was published. I described it here [slashdot.org] a few months earlier, for example.

          What this means is that AI generated content is effectively poison for future models. AI generated code, then, is a problem that should eventually solve itself.

          • by jvkjvk ( 102057 )

            >What this means is that AI generated content is effectively poison for future models. AI generated code, then, is a problem that should eventually solve itself.

            Only if people feed the output of the AI-generated code back into the training set. If they spend the time to correct all the errors, and possibly redo the architecture, then it won't poison future models; it will simply make them better and better, because the coder is still fixing all the errors.

    • It will make software even shittier.

      Another part of the answer to "Where are they?". The Great Filter is the result of would-be galactic civilizations stagnating after they invent LLMs, social media, reality TV, internet porn, etc.

      • It will make software even shittier.

        Another part of the answer to "Where are they?". The Great Filter is the result of would-be galactic civilizations stagnating after they invent LLMs, social media, reality TV, internet porn, etc.

        I've been working on my own addendum to the Great Filter theory. If a species manages to make it to technology, and begins to set aside religion, they need a common goal the technology should be helping them achieve, or very bad things will happen. Belief is a deeply ingrained part of the human psyche. For hundreds of thousands of years we made up our beliefs, or had them handed to us by elders. As science showed us that more and more of those beliefs were complete and utter nonsense, those that didn't dig in

    • I think we might be overestimating the capabilities of a typical developer. YOU might be an excellent developer who consistently writes good quality code. But let's visualize that guy down the hall (or virtually down the hall) who is just barely good enough to be part of the team. Every team has one or two of those, in my experience. A tool that provides decent boilerplate code for a lot of common situations, just might bring quality up a notch overall.

    • by Dadoo ( 899435 )

      It will make software even shittier.

      Yup. One of my coworkers recently spent several hours working with an AI to write a 50-line (Python) program that I duplicated in 10 minutes with a single shell pipeline (with three commands).

  • by gweihir ( 88907 ) on Wednesday December 06, 2023 @02:34PM (#64061225)

    I am literally waiting for the first group of crap software pieces to have the same or very similar vulnerabilities despite using different languages, libraries and tool-chains, because "AI" did mess it up in the same way in different places. I am also sure coding model poisoning has started a while ago.

    Obviously, the usual nil wit "managers" will think they can now do complex software with basically "no skill" coders. Historically, coders needed at least some absolute minimal skills to get software running. That time is past. And hence we will get this here on steroids: https://blog.codinghorror.com/... [codinghorror.com]

    What a disgrace.

    • by GreaterTroll ( 10422652 ) <greatertroll247829&aol,com> on Wednesday December 06, 2023 @02:51PM (#64061323) Homepage Journal

      There is a right way to use this software and a wrong way. The right way would be building out the CASE functionality in IDEs with better code completion and expert systems for common software engineering tasks, and basically letting an experienced engineer manage and mentor this stuff like a team until good software comes out.

      What’s gonna happen is they’re gonna hire people who can pass coding interview questions with Copilot turned on, which will have all the drawbacks that coding puzzles ever had, but even stupider people will pass. Then, when they get to their jobs, instead of having a team to show these super-junior devs git, change management, patterns, frameworks and whatever else isn’t taught in school, they’ll have some overbuilt IDE with clippies popping out of the woodwork and a manager who expects the output of a team.

      It’ll take some time for the disaster to percolate up through the upper-management feedback loop, and it will take however long for enough of the suits responsible to declare the whole thing a great success and move on to greener pastures before serious discussion starts to happen about what a mistake this was. Eventually one of them will say everything out loud, a NYT-bestseller management book will come out, and they’ll all act like they were against it all along.

      But it won’t change anything, because the number of developers who can single-handedly manage a bunch of bots through a project lifecycle is far less than the demand for such people will be. It’ll take at least until the end of my career before this shit gets straightened out, the same way the MCSE sysadmin, Indian call-center support, or CompTIA security-department crises took 20 and 30 years to right themselves.

      • by gweihir ( 88907 )

        I think that is a very insightful, very accurate prediction.

        This is one core reason why new technologies take somewhere in the area of 100-300 years to mature. The more complex, the longer. The steam engine took something like 200-300 years, so software may well take 300 years (about 70 of those having already passed), or maybe software will take even longer.

    • by Speare ( 84249 )

      I am also sure coding model poisoning has started a while ago.

      I had a coffee mug from the 80s printed in a MICR font, filled with several well-known computer aphorisms of the day. One was

      If computers get too powerful, organize them into a committee. That will do them in.

      I am also thinking about what regulatory and IP compliance officers in various companies make of the summary where it said "The tool tracks the code and comments in the file that a programmer is working on, as well as other files that it links to or that have been edited in the same project, and

  • This implies very little about its usefulness, though. Unlike regular text synthesis, you can't always judge the answer without further work and testing. I bet a lot of that 30% is either newbies who are happy to get something useful, or requests for trivial code that looks right.
    • If you write good inline documentation this sort of technology can write decent code and test cases but I don’t imagine the practice will get formalized until there’s a lot of experimentation with trying to get enterprise production code out of boot camp grads or other such silly dreams of cost cutting.

      • by vyvepe ( 809573 )

        If you write good inline documentation this sort of technology can write decent code and test cases

        The problem is that good comments describe why the code is written as it is. Good comments do not describe what the code does. That would be a duplication. What the code does is easier to see from the code directly.
        Are good comments enough for this sort of technology?

        • If you write javadoc or doxygen comments, they describe what the function does. It is not easier to parse a 15-line function than one or two lines of English, especially as an IDE mouseover popup or as a code-completion hint. It’s especially worth doing now that you can just read the Copilot suggestion, hit tab, get your function and move down to the next one. Even without IDE enhancements, writing comments that describe what functions do facilitates “design by wishful thinking” work
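
          A hypothetical illustration of that workflow (the function and its Doxygen comment are made up, not from the thread): the comment states the "what", and that same text is exactly what ends up in the assistant's prompt.

              #include <stddef.h>

              /**
               * @brief Count how many bytes in a buffer equal a given value.
               *
               * @param buf Buffer to scan (must not be NULL if len > 0).
               * @param len Number of bytes in the buffer.
               * @param val Byte value to count.
               * @return Number of bytes in buf equal to val.
               */
              size_t count_byte(const unsigned char *buf, size_t len, unsigned char val)
              {
                  size_t count = 0;
                  for (size_t i = 0; i < len; i++) {
                      if (buf[i] == val)
                          count++;
                  }
                  return count;
              }

          The "why" (why the caller needs a byte count at all) still has to come from somewhere else, as the grandparent notes, but the "what" in the comment is precisely the text the assistant can complete against.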

  • we are now entering the age of Illiterate Programming

    https://www-cs-faculty.stanfor... [stanford.edu]

    • by gweihir ( 88907 )

      Nice! Yes, that is _very_ accurate. From low-skill coders (because that works so well) to zero-skill coders! Progress!

    • we are now entering the age of Illiterate Programming

      What do you mean "entering"? Microsoft has been around for decades.

  • by jamienk ( 62492 ) on Wednesday December 06, 2023 @02:40PM (#64061259)

    I use AI to help me code a lot. Not sure I like it. Very often leads me down dead ends. Or guides me into spending way too much time explaining. Or trying to cobble together incompatible pieces that were spit out at me. Or getting burned by bad information, out-of-date documentation, pseudo-code, etc.

    I think a big problem in general is the way the AIs are presented, largely based on a Wikipedia model. The idea of "information" or explainers also infected, for example, Google, where instead of giving search results they try to fake their way through giving ANSWERS.

    Same with AI. I feel like it should be seen (and present itself) with more of a "maybe you could try this...?" attitude? Or "some people seem to claim that..." etc. Or "what about maybe...?" Instead of posing as a know-it-all. In a lot of good stuff I read, the writer doesn't present themselves as an authority. And even the best authorities hedge with caveats and seem excited to get you thinking about *how* to approach things, rather than giving an ANSWER.

    I think it's a version of mansplaining, mixed with marketing, mixed with hoodwinking. It betrays a set of VALUES (ones I don't hold), and makes the whole thing baldly IDEOLOGICAL. I doubt that this attitude naturally emerges from GPT training, I suspect it's finely-tuned (though maybe largely unconsciously).

    It really is detrimental to the interfaces.

    • by gweihir ( 88907 )

      Interesting. My take is the main factor that drove the current AI hype is exactly that approach of projecting authority and competence, while being somewhere between mostly and completely clueless. Business consultants have used that approach successfully for decades ("project competence while being completely clueless"). Too many people cannot judge the merit of answers for crap, so they go by the stance of the one giving them. And hence too many people somehow got convinced these "AI" tools are much more

      • by nightflameauto ( 6607976 ) on Wednesday December 06, 2023 @04:32PM (#64061705)

        Interesting. My take is the main factor that drove the current AI hype is exactly that approach of projecting authority and competence, while being somewhere between mostly and completely clueless.

        Well then, this form of AI hit us at precisely the right moment in time for maximum impact. That seems to be the main driver of everything from politics to sales to management to you name it. Have little to no clue but be extremely confident? "They got it figured out. We should put them in charge!"

        Intelligence tends to weigh one down. You have doubts because you're aware that you don't know things. Be dumb enough to not know that you don't know things? Maybe we'll make you president!

    • by vux984 ( 928602 )

      Well said. I've said in the past that I find the AI responses obnoxiously grating and couldn't quite put a finger on why... and I think you've more or less identified the same issue.

      The answers are usually mediocre at BEST, frequently contextually wrong, usually incomplete, frequently drawing from inappropriate / outdated or wrong-dated* information, and frequently straight-out wrong, or terrible. Having them all presented as THE ANSWER is ... infuriating.

      I find I prefer the raw search results. Skimming t

      • by vux984 ( 928602 )

        * wrong-dated == if I am asking a question about .net3.5 or linux kernel 5.1 or Windows Server 2012 R2, and the ANSWER is to make use of some feature that wasn't introduced until .net5 or kernel v6.6 or Windows Server 2022, then it may be 'current', but guess what, AI: it's not THE ANSWER.

        This is one of the biggest issues with internet search results, I frequently want information that is applicable to a particular version range, and all search engines are obnoxiously bad at that contextualization -- knowing wh

      • You want the ANSWER? OK, I'll give it to you, but you're not going to like it: the ANSWER is 42.
      • by jbengt ( 874751 )

        I find I prefer the raw search results. Skimming the actual article, the conversation back and forth, seeing the date stamps, seeing the linked references is far more useful.

        This.
        I'm a mechanical engineer by trade, with little coding experience, but in general about half of what I've learned over the years came from looking up something else.

    • It's pretty clear that many APIs have terrible documentation, and any extraction from them as AI inputs will lack deep understanding of the ecosystem in which the functions operate. AI bot technology does NOT extract proper deep models from its input text; it deals in shallow representations and surface patterns. I as an AI researcher see numerous pitfalls in what seems good and shiny and productive but actually is built upon reclaimed garbage dump land, subject to future underground collapses. And for god
    • by coop247 ( 974899 ) on Wednesday December 06, 2023 @03:18PM (#64061459)
      I code in multiple languages and too often am googling basic shit like "how to turn an array into a string". Just asking Copilot to turn a specific array into a comma-delimited string and getting a working line of code back in milliseconds is absolutely helpful. For languages you barely know, it's quite nice.

      But asking it to consume complex APIs or write functions is definitely not great, and buyer beware. It just starts spitting out useless / extraneous line after line.
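
      For a sense of scale, here is a hand-written sketch of roughly that kind of one-off glue (illustrative only, not actual Copilot output; the real suggestion would depend on the prompt and language): joining an array into a comma-delimited string.

          #include <stdio.h>
          #include <string.h>

          /* Join an array of strings into one comma-delimited string.
             Writes at most out_size bytes, including the terminating NUL. */
          static void join_csv(const char **items, size_t count,
                               char *out, size_t out_size)
          {
              out[0] = '\0';
              for (size_t i = 0; i < count; i++) {
                  if (i > 0)
                      strncat(out, ",", out_size - strlen(out) - 1);
                  strncat(out, items[i], out_size - strlen(out) - 1);
              }
          }

          int main(void)
          {
              const char *fruits[] = { "apple", "banana", "cherry" };
              char line[64];

              join_csv(fruits, 3, line, sizeof line);
              printf("%s\n", line);    /* prints: apple,banana,cherry */
              return 0;
          }

      Trivial, but it is exactly the sort of thing that is annoying to re-derive in a language you barely know.
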
      • I use it to generate functions in javascript for testing issues on webpages. One example is that sometimes analytics software can have issues when an iframe from the parent window domain is served. This can lead to doubling up events. I asked chatGPT to generate a function that checks for iframes on the page with the same domain as the parent window. It output something that did exactly what I wanted really quickly. Also these smaller functions are easy to spot check for bugs, so it definitely works for tha

    • by dargaud ( 518470 )
      Yeah. My own experience is limited, I've used it only once to code a communication protocol I was totally unfamiliar with. It took no less than 8 tries to get a basic working program with chatGPT. Each time I had to show it its (obvious) errors and ask for another suggestion. It wasn't a close call, each of its suggestions were shit, until the last one which put me on track.
    • It's faster for me to just search Stackexchange myself, rather than let the AI steal from it.

    • I suspect it's finely-tuned (though maybe largely unconsciously).

      Hard to say. We have started playing with training LLMs with all of our internal documentation. It's not a serious project, but we've got GPU cycles to spare.
      We find the results to be very authoritative as well, and we definitely are not tuning them to be so. I think it's just a fact that on the internet in general, and even in our own docs, jackasses speak with authority they don't have.

      Your examples of non-authoritative answers, I think, are because you're wise enough to search for good answers.
      TL;DR,

  • Has to be marginally better than blindly copy-pasting from StackExchange. Half the time people copy-paste from the question side instead of the answer side. I've seen it.

  • by Rosco P. Coltrane ( 209368 ) on Wednesday December 06, 2023 @02:57PM (#64061351)

    I maintain a rather large Python module to communicate with the devices we manufacture. This module serves as a reference implementation, but is also the backbone of many of our internal production and test tools.

    One of my colleagues, a C# jockey who's rather unfamiliar with Python, was tasked to create a new calibration tool. Rather than try to call the "low-level" Python routines from a C# project, he elected to code his stuff in Python too: the tool has zero performance needs, it's just easier to keep the entire thing in Python, and the guy was given ample time to complete the project.

    But, instead of reading the documentation and looking at the examples, he decided to ask the Microsoft GitHub Copilot thing to read my code and explain to him what it does (and also explain to him how certain Python constructs work).

    And sure enough, Copilot told him a bunch of crap. Well, not complete crap, but it totally missed gotchas in the API that I made very obvious and very clear in the documentation.

    Copilot also totally confused my colleague on mutability issues that were really obvious to any half-gifted Python programmer looking at the module's code, but that it totally misunderstood.

    Long story short: after wrestling with non-functional code for a few days, he finally got some Python code that worked. Badly. Then he called me, I took a look at the disaster, and we both agreed he was better off starting over - this time reading the doc properly and trying to understand the fucking API in-depth.

    This whole coding assistant thing is a disaster waiting to happen: at least before, clueless engineers couldn't wing it and produce code. They didn't understand, or they didn't know, so they didn't produce anything. Obvious problem, obvious symptom. With the coding assistant, they can actually produce code; the problem is, it's terrible and nonsensical.

    • by bjb ( 3050 )

      This is basically what I've seen come out of the tools. To be honest, I've only found it useful for simple boilerplate things like "open a CSV $f and create a list of the 2nd column entries", but in those cases a seasoned programmer would probably spit that out faster than it takes to write the comment that triggers the GenAI. However, you do get the bonus that you are now producing commented code?

      But anyone who thinks that the GenAI tools will generate good code architecture or even respond to t
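
      That CSV example, sketched by hand in C for comparison (hypothetical, not tool output; no quoting rules or embedded commas handled):

          #include <stdio.h>
          #include <string.h>

          /* Print the second comma-separated field of every line in a CSV file. */
          int main(int argc, char **argv)
          {
              if (argc < 2) {
                  fprintf(stderr, "usage: %s file.csv\n", argv[0]);
                  return 1;
              }

              FILE *f = fopen(argv[1], "r");
              if (!f) {
                  perror("fopen");
                  return 1;
              }

              char line[1024];
              while (fgets(line, sizeof line, f)) {
                  char *first = strtok(line, ",");              /* column 1 (skipped) */
                  char *second = first ? strtok(NULL, ",\n") : NULL;
                  if (second)
                      printf("%s\n", second);
              }

              fclose(f);
              return 0;
          }

      Whether generating this is faster than typing it is exactly the question the parent raises; the commented prompt is most of the work anyway.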

  • This is an easy one to figure out. Coders will become dependent on AI to do a lot of the basic stuff and eventually rely on it like a crutch, much like people need a calculator to do basic math, or their phone to call people because they can't recall any number save for their mom's phone number.
  • by Walt Dismal ( 534799 ) on Wednesday December 06, 2023 @03:06PM (#64061389)

    Suddenly millions of offshore voices cried out as the Code Force engaged. It was as if brain cells liquified and surrendered to shitty code suggestions but made it easier to earn the $5/hr fee benevolently dispensed by Tata.

    Later, some planes crashed, medical patients died from 100 roentgen doses, and a few trains collided, but overall the shitty applications ran mostly without crashing. Although the disappearance of millions from bank accounts was such a mystery.

  • I work in security. If you thought cargo-cult programmers (aka "Stack Exchange copy/paste programmers") were a convenient low hanging fruit to stuff your reports without having to do any relevant work, this is the mother lode.

    • I have trouble believing the people who keep their Coding LLM assistant handy will ever be as bad as the fucking cargo-cult programmers.
      Well, I suppose if they train the LLM off of stack exchange they will be... But I like to think they're not that fucking stupid.
  • Looking at the comments, I couldn't help but think the AI bandwagon is cheapening and stupefying the problem-solving that the code is only a tool to help with.

    This attitude is becoming prevalent in modern life. In politics, we get slogans with no thought behind them, or nefarious thought if any. Science is also suffering in some areas, with "cheap" science somehow passing as actual science. Go to Amazon and read the ad copy for products and find you cannot really tell enough about the gizmo or product that answers

  • The wrong results more often, but the variable naming conventions will be down to less than 50. At least the forum sites had corrections for the really bad examples.
  • by TuringTest ( 533084 ) on Wednesday December 06, 2023 @04:12PM (#64061643) Journal

    First, new AI-friendly programming languages will be created that are simpler for LLMs to learn. Once developers have assistance of the model to create and understand code, easy-to-read concise PLs won't be that essential; but unambiguous precision will be. Code snippets will become more write-only.

    Second, business-level programming will become even more dependent on composing pre-built, well-tested library components. Programs will become less of a logically coherent cathedral with solid pillars that (tries to) solve all the needs of an organisation, and more of a camp of communicating, loosely connected tools that each serve one single concern and actually solve the needs of a worker or small team.

    Third, thanks to the previous two, most programs won't be built and run with the current enterprise lifecycle of writing code to specs, debugging in a development environment, then releasing; it will become integrated with the AI platform instead, running in the background without the code ever leaving the environment where it was written. Multimodal AIs like Gemini are capable of writing ad-hoc user interfaces, built on the spot to explore and consume the data according to the user's current business needs. Many of the tools will be transient, single-use widgets that solve a simple task at hand following the specifications provided by the end user, just as information workers today create spreadsheets to define ad-hoc processes to keep and model the data their work needs. In this scenario, exposing the code is not something that will usually be needed; only to the point needed to check that operations on the data are using the right logic.

    Will traditional programming disappear? Of course not; just as system programming is still needed, someone will have to write the well-tested components that the AI combines, and someone will need to keep in check the architecture of more complex combinations of tools that are used in a stable way through time. But for most small tasks, users will finally be able to create their own tools for most of their simple and moderately complex information-processing needs.

    • First, new AI-friendly programming languages will be created that are simpler for LLMs to learn. Once developers have assistance of the model to create and understand code, easy-to-read concise PLs won't be that essential; but unambiguous precision will be. Code snippets will become more write-only.

      Unlikely; this isn't RL, and the LLMs need human-written examples to work from.

      If some languages are fundamentally harder for LLMs to deal with I can see them declining, but I haven't seen strong evidence of this (except that current LLMs do much better with concise languages contained in a single file).

      Second, business-level programming will become even more dependent on composing pre-built, well-tested library components. Programs will become less of a logically coherent cathedral with solid pillars that (tries to) solve all the needs of an organisation, and more of a camp of communicating, loosely connected tools that each serve one single concern and actually solve the needs of a worker or small team.

      Business-level programming is about using pre-built, well-tested library components. I think the big change here is that LLMs make new APIs and libraries much more accessible, so NIH (Not-Invented-Here) syndrome

  • ... L33t coding snob comments here. Would like to be a fly on the wall when a coding bot takes their job.

    Guys, you should really snap out of your denial. The bots are coming for you too and if you think that a sufficiently knowledgeable bot that can code circles around any human for 99% of scenarios isn't a possibility real soon now you're being silly or have been living under a rock the last few years.

    This 9 year old video sums things up pretty well [youtube.com] and might help you get off the high horse. It's still the

    • by Anonymous Coward

      At least some of the high & mighty here are speaking from experience. LLMs are not going to replace most programmers for a number of reasons... including the fact that LLMs are just statistical lookup engines with randomization thrown in. Hey yeah, that's *exactly* what we want writing our mission-critical code, right?

      "Oh sorry sir, the electrical grid went down because CodeBot 12.6 hallucinated when writing our electrical substation code and thought that the power transformers were actually water pumps

  • Just like how I still have to debug and modify code I find on SO, I have to debug and modify code I get from ChatGPT. It's just with ChatGPT I spend less time searching for a solution in various links that Google decided to show me.

  • How will it change? Well, for starters, the code-generated sections will no longer be enclosed in a comment that says "do not modify this code, it was generated". Instead we will have to maintain mountains of this bullshit boilerplate code by hand that we didn't write and shouldn't have to look at or think about in the first place. And that's if it even works at all.

  • If they can tell the AI to write it. Of course it might not be the most accurate documentation. But, maybe reading errors in the documentation will inspire people to correct it. And, maybe correcting the AI's take on the documentation will lead the AI to make "this code doesn't actually do that because..." suggestions.
  • For many years, I have seen developers unwilling to type anything that is not suggested / confirmed by their IDE. Often their laptop is "slow" due to the humongous crap installed by corporate IT, and the IDE simply cannot keep up with my thoughts. So occasionally I dictate a small sample of code to developers, or send a snippet for them to use so that I can explain a concept. But the developers type a few letters, and keep waiting until the IDE wakes up.

    These developers will surely refuse to do anything that the AI

  • I want to know: do the developers creating these AIs use the same AI to develop their AI?

  • I find ChatGPT hugely helpful. However, I only use it to remind me of obscure methods, to figure out what API calls are available, etc. I don't use it to create algorithmic solutions to problems.

    Just today, I didn't remember how to tell Java to sort a List of numbers in descending order. I knew there was a simple way ("Collections.reverseOrder"), but I didn't remember what it was called, or which class it was in. Hunting for the answer in the API documentation would have taken a while. StackOverflow delive

  • Sure - we have all had a go. None of us stick with it. I don't believe that there are millions regularly using assistive AI for coding. One good reason, for me, is that I get a human response on SO, and normally someone has already asked about the same problem. Likewise, modern IDEs give me all the assistance I could want, without needing a gimmick. Lastly, assistive AI generates so much noise that it's far more distracting than useful. Here is a test that I have given various AI assistants - all of whom have f
  • Bugs everywhere.
  • It just means that everyone's code will have the same bugs. That could be a bug or a feature. A feature if fixing a bug fixes everyone's code. A bug if the whole world crashes at once. This is already a problem with opaque libraries that everyone uses.

  • ...it will result in a lot more time wasted on debugging gibberish code that "hallucinates" solutions out of nonexistent libraries that would really need to be written from scratch.

    I'm already at the point where I dismiss any solution a colleague "found" using AI instead of other types of search as it has proven to be the least reliable
