Can You Measure Software Developer Productivity? (mckinsey.com)
Long-time Slashdot reader theodp writes: "Measuring, tracking, and benchmarking developer productivity has long been considered a black box. It doesn't have to be that way." So begins global management consulting firm McKinsey in Yes, You Can Measure Software Developer Productivity...
"Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it's not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers.
"Yet that status quo is no longer sustainable."
"All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving," McKinsey advises companies starting on a developer productivity initiative. "Assess your systems. Because developer productivity has not typically been measured at the level needed to identify improvement opportunities, most companies' tech stacks will require potentially extensive reconfiguration. For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run."
Before getting your hopes up too high over McKinsey's 2023 developer productivity silver-bullet suggestions, consider that Googling "find a tool that can track code executed during a test run" will lead you back to COBOL test coverage tools from the '80s that offered this kind of capability, and to 40+ year-old papers offering similar advice (1, 2, 3). A cynic might also suggest considering McKinsey's track record, which has had some notable misses.
Sounds like (Score:5, Insightful)
McKinsey is trying to drum up business. "Here, let our consultants show you how to measure things. Only $700/hour. Should only take a year or two. Maybe three. If you're not satisfied we'll keep working til you are."
Re:Sounds like (Score:5, Interesting)
If you pay McKinsey a lot of money.
** No refunds. Satisfaction not guaranteed.
Re:Sounds like (Score:5, Interesting)
It's quite easy to measure developer productivity.
Are they doing what they are supposed to be doing, and are they doing it within a reasonable timeframe and at a reasonable quality?
To answer that they need to have a boss who actually understands what being a developer entails - which is seldom the case. Adding more metrics means more micromanagement, which makes developers less productive, since they are forced to chase lagging metrics.
As always, YMMV...
Re: (Score:3)
Most software is vertical, i.e. it's only used internally by the company that developed it. All that matters is that it does the required job, and doesn't have any tangible costs that could be easily eliminated.
It might be horrible to use, it might be inefficient, it might be a nightmare to maintain, but businesses don't care about that. If employees have to suffer to use it, too bad. Fixing it costs money and doesn't increase their profits.
That's why you used to see so many Internet Explorer only web apps u
Re: (Score:2)
Re: (Score:2)
It's quite easy to measure developer productivity.
Are they doing what they are supposed to be doing, and are they doing it within a reasonable timeframe and at a reasonable quality?
Fundamentally, the issue with productivity metrics is that they attempt to deliver a clear and simple solution to a complex problem, and we all know how that tends to work.
Take the latter evaluations suggested: they are not "quite easy" at all. Even among experienced developers, there can be very different opinions on what constitutes "reasonable timeframe" and "quality" and the answer is going to be very dependent on a myriad of factors on any non-trivial project.
The problem with most productivity me
Re: (Score:3)
And even then there will be side-effects: even the seemingly "best" code out there may in fact be riddled with inefficiency, if not outright bugs. Or maybe it simply wasn't forward-thinking enough, something you often
Re: (Score:2)
Which reminds me of the classic statement....
Fast, cheap, good: pick two.
Re: (Score:2)
The problem is that any assortment of numbers you can come up with are at best an approximation of what you actually want to measure, and since you're applying these metrics to humans who have agency and are clever, they will figure out how to maximize the numbers regardless of the effect on the thing you actually want to measure.
That's true but I'm not sure Goodhart's Law even matters here. Just look at their penetrating insights about test coverage:
For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run.
Great, they discovered coverage tools! Well done. A few decades behind the curve, but better late than never. With a bit more experience they'll discover that programs include conditional logic and so tools that track which code has been executed during a test run are not necessarily tracking which important code paths have been exercised during the test run. Then they'll figure out tha
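The line-vs-path distinction above is easy to demonstrate. Here is a minimal Python sketch (the `discount` function is invented for illustration): one test can execute every line, giving 100% line coverage, while exercising only one of the four branch combinations - which is why tools such as coverage.py offer a separate branch-coverage mode on top of plain line coverage.

```python
def discount(price, is_member, coupon):
    # Two independent conditions = four possible paths. A single test
    # can execute every *line* while exercising only one of those paths.
    total = price
    if is_member:
        total *= 0.9
    if coupon:
        total -= 5
    return total

# This one call runs every line above (100% line coverage)...
assert discount(100, True, True) == 85.0

# ...but says nothing about the other three paths, e.g. the one where
# a small order goes negative:
assert discount(3, False, True) == -2  # arguably a bug line coverage never flagged
```

Running the same tests under `coverage run --branch` would report the unexercised branch combinations that plain line coverage hides.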
Re: (Score:2)
Re: (Score:2)
> Yes ... If you pay McKinsey a lot of money.
I doubt that. They ask you to measure e.g. test coverage. They did that in one project. When I looked at the tests written by the Indian developers, I noticed a catch-block. They had made a test that actually found a bug that crashes the software, but they just caught the exception and ignored it to make the coverage percentage look good, instead of actually fixing the bug and writing a test that actually tests something.
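The gaming described in that anecdote is easy to reproduce. A minimal Python sketch (the `parse_config` function and test names are invented for illustration): the "gamed" test drives the crashing code path, so the coverage number goes up, while the swallowed exception hides the bug.

```python
def parse_config(text):
    # Hypothetical function under test: crashes with ValueError
    # on any input that has no "=".
    key, value = text.split("=")
    return {key.strip(): value.strip()}

def test_gamed():
    # Anti-pattern from the anecdote: this "test" executes the crashing
    # code path - so the coverage number goes up - but swallows the
    # failure instead of reporting the bug.
    try:
        parse_config("no equals sign here")  # raises ValueError
    except Exception:
        pass  # bug hidden; the lines still count as covered

def test_honest():
    # An honest test pins down intended behavior; the crash on bad
    # input would then surface as a real failure to fix, not ignore.
    assert parse_config("a = 1") == {"a": "1"}

test_gamed()
test_honest()
```

Coverage-wise the two tests look identical; only one of them can ever fail.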
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Brought to you by the makers of Enron!
Re: Sounds like (Score:2)
Easier/cheaper way than throwing away $700/hr. Just feed all PRs, nay, all keystrokes that a programmer makes along with the current time and ask ChatGPT to rank by a score of productivity.
Re: (Score:2)
Hey! You're not supposed to pay attention to the man behind the curtain!
Re: (Score:2)
(Number of improvements gained from them) divided by (hours spent by them + hours of others' time they wasted.)
Why would I get my hopes up (Score:4, Insightful)
Regardless programming language (Score:3)
Human programmer productivity has historically been measured at twelve lines/day.
Re:Regardless programming language (Score:4, Interesting)
Sometimes I am at my most productive while blankly staring out the window while running the problem over in my head. Sometimes I am just looking at the view and wondering when I can go home. How do you measure that?
Re: (Score:2)
The reality is that only a person who has done the job before, and has looked at the work being done and evaluated it in detail, can make a meaningful assessment.
I'm not even sure about this. Take such a person, have them evaluate the performance of another person given that task, and they'll undoubtedly hate it, say it took too long, and the solution was bad. So that person gets fired and this guy takes over. Then you hire another person who has done the job before, and have HIM evaluate the solution, and he'll s
Re:Regardless programming language (Score:4, Informative)
I have managed teams before, and after a while you get a sense of each team member's general skills, their strong and weak points. If you create a positive work environment where people feel OK discussing the challenges they are having and asking for guidance, you soon get a feel for where their skills are at. I have been lucky to mostly work with skilled people. The few 'difficult' developers, who make expensive mistakes, are typically the ones who never seek help when they are out of their depth or just need some guidance to keep on track.
Re: (Score:3)
First of all, we should make some categories:
- Superior developers
- Average developers
- Under performers
We know that an average developer can't tell if someone is superior, but we know that an average developer will ask for help from a superior developer or from another average developer.
We also know that under performers don't get any work done and people don't seek their help. If their work is given to someone else, that person will usually do it in hours or days.
Based on this, we should be able to spot under per
Easy measurement (Score:5, Funny)
Easy way to find the programmer productivity:
Just multiply the wavelength of the programmer by the voltage drop across, then divide by the speed of light.
I have a bunch of pat answers to use when some damn-fool of an interviewer (or reporter) asks a nonsensical question. My favorite is "three", as in:
Her: "How does one make an intelligent program?"
Me: "There's no simple answer to that question."
Her: "Can't you just give us a quick overview?"
Me: "Okay, Three".
Her: "Um... what do you mean, three?"
Me: "It's three, the number three. You don't know what the number three is?"
Her: "I mean, I don't understand how three is the answer".
Me: "You wanted a quick answer. The subject is so big that any attempt to give a simple answer loses most of its meaning."
Me: "But the answer you want, is three. Most people know what three is, so it's an answer they'll understand."
Her: "Um... let's move on to another topic..."
Re: Easy measurement (Score:5, Funny)
Re: Easy measurement (Score:2)
Re: (Score:3)
This is the first reference I've seen to Sluggy Freelance in... ever? (Or at least 10 years?)
let me check my notes
Re: Easy measurement (Score:2)
Goodhart's law (Score:5, Insightful)
https://en.wikipedia.org/wiki/... [wikipedia.org]
tells us that 'When a measure becomes a target, it ceases to be a good measure'. In programming that means that if developers are measured on a specific measure, they will address that measure - to the detriment of other aspects of the software development.
Re:Goodhart's law (Score:5, Insightful)
Exactly. If you measure me by lines of code? I'll create 100 lines of code to do something simple. Measure me by Jira tickets closed? I'll open more and then do minor things to fix them. Measure things by reliability in production? I'll spend months testing simple changes to ensure nothing goes wrong no matter what. Measure me by thorough code reviews? I'll spend a week on each one.
Those were good examples (Score:2)
Or when they say I have to do a certain amount of training a year. Oh OK, we'll drop actual program work and do the training.
Re: (Score:2)
"Lines of code written" has long been recognized as a terrible way to measure software developer productivity.
See also https://www.folklore.org/Story... [folklore.org]
Re:Goodhart's law (Score:4, Interesting)
Anecdote time: They introduced measurement by tickets in our company. We pentesters were measured by the number of tickets we created from our tests, operations was measured by the number of tickets they would manage to close successfully.
It was a wonderful symbiosis between the teams, I can tell you that. It kinda fell apart when we overdid it and spent more time opening and closing tickets than actually doing any work. They started to notice when we opened/resolved a few hundred tickets every day.
Admittedly, we wanted them to notice that their metric is stupid. And yes, it worked. They noticed. After THREE FUCKING MONTHS!
Re: (Score:3)
If you manage by numbers, people will make sure you get numbers. If you manage by results, people will deliver results.
Hmmm.... (Score:2)
True, but unhelpful for the average coder, for whom there is no measurable link between his coding activities and the results that can be measured without creating distortion. Which is why lots of other measures have been tried - and failed - to measure the output of coders...
The need for virtue (Score:2)
Large scale endeavours require effort by all to succeed. Not least because of the employer/employee class conflict, it is necessary to try to regulate employees to work hard at their jobs (there's nothing new in this; the Bible struggles with this issue). Anecdotally there are clearly those who are skivers - and employers want to prevent this. That's the challenge McKinsey is seeking to address in programming - and universities are addressing with academic article production etc.
Given the pressure o
Maybe? (Score:2)
Re: (Score:2)
For productivity, yes.
For debugging, and how much it pisses me off, the Gin Tonic number is a way more accurate measurement.
Reminded of the saying about accountants (Score:5, Interesting)
They know the cost of everything, and the value of nothing.
And I remember arguments about "lines of code added" as an argument (in both mathematical and philosophical senses) for productivity. There's the friend who took over a compiler project, refactored it, got rid of 20k SLOC. I told him, "Huh. By the productivity models, you owe your employer a couple years worth of work."
Depends (Score:2)
Can You Measure Software Developer Productivity?
Depends on the task: writing new code, editing existing code, debugging, porting, testing, etc ... Some people are better at some things than others, and some are *way* better. I'm also going to add: in what programming language(s) and/or on what OS(es).
I have experience in several languages on several OSes and have been a sysadmin and software developer, mostly systems-type programming, on all the platforms I've administered and also have ported code (compiled and scripted) between many of those platf
Parrots (Score:5, Insightful)
Re: Parrots (Score:3)
Perfectly put. I have seen this exact pattern in several cases. Sometimes it takes years to run its course, but typically as soon as the "consultants" left, it was back to whatever was done before.
The worst bit was the inevitable employee who sees the whole thing as an opportunity to self-promote by strongly embracing whatever the consultant says, which then the consultant tells management that this person is key to their transformation and should be promoted. Anyone who questions the consultant openly is "
Re: (Score:2)
Norwegian Blue Parrots, no doubt!!
I hope not (Score:4, Insightful)
Sometimes you're typing a million characters per second, and sometimes you spend an hour or two (or more) thinking about what you should be typing.
I can see no way to judge productivity other than comparing completion time and bug count against similar past projects.
Find employees you trust, pay them enough to keep them, and unless they're surfing for porn or something... leave them the hell alone as long as milestones are being hit approximately as quickly as for similar past work.
Give a manager a metric for judging coder performance, and you will ultimately have low morale, bad code, and high churn.
Accounting vs better products (Score:5, Informative)
This reminds me, by contrast, of Waddell and Bodek's "Rebirth of American Industry", in which they compared the management practises of American and Japanese carmakers in the 1980s. The Americans were focused on breaking down every step of manufacturing in accounting terms so that they could figure out exactly how much profit could be attributed to the assembly of a steering wheel or the installation of a taillight. The Japanese were focused on responding as quickly as possible to worker productivity suggestions so that they could build better cars faster.
It was an attitude difference between "your employees are cogs who must be measured and controlled" and "your employees are doing the actual work and probably have the best ideas about how to do it better."
Anyway, hopefully some software developer subject to these McKinsey ideas can write up a program to easily measure the productivity of their management.
I really hope not. (Score:2)
How do you measure "thinking"? (Score:5, Informative)
A lot of good software development requires sitting down and well, thinking. Be it trying to come up with a creative solution to the problem, or just "a solution". I would love to have a way to know if my sitting around and thinking is getting me closer to the solution I seek. I mean, imagine how much better my life would be if I had a progressbar indicating my progress. If I wander down the wrong path, the progress bar wouldn't move or move backwards, showing my path is wrong and I should back up and try the other way.
So much time wasted going down the wrong way, knowing that would make me much more productive.
And what about when I need to research the problem space more to understand the problem and potential solutions?
Then again, why do companies waste money buying these non-solutions when for similar amounts of money, they could spend it improving the lives of their employees? Millions spent on consultants vs. those same dollars spent making people's working lives less miserable? It could be simple things - standing desks are stupidly cheap, as are nicer monitors and keyboards and even things like computers. Or even better chairs. Or better lighting. Or walls, or better ventilation, or remote work?
I mean, I was tasked to do a relatively simple thing, but the most straightforward solution to the problem didn't work (it ended up in a dependency loop in the build system). So I tried a less straightforward solution, which ended badly before I came up with a third solution that turned out to be the most elegant one given the restrictions I discovered during the first two attempts. I think over the 3 weeks I worked on it, I might have written maybe 2000 lines of code, but when I finally committed my solution to the repository, I really added maybe 100 lines total. Negative lines, if you consider that I got rid of some code.
It took me 3 weeks, but I avoided adding a ton of technical debt and came up with something small and elegant along the way in that things worked the way everyone expected, the changes were self-contained and small and not exploding across the entire repository, and I removed a bunch of #ifdefs as the conditional code no longer applied, streamlining the source tree. No more special case code.
How do you measure that? I spent nearly a month on a stupid problem that was an iceberg in disguise and ended up writing a tiny amount of code that cleaned up a lot of the code base.
Re: (Score:2)
If the problem was predictable, that means it was repeatable, which means it should have been abstracted away and packaged into some component, and the issue (bug/feature) won't arise again. The next issue will be a new issue, not
Re: (Score:2)
Re: (Score:2)
Kolmogorov complexity is not computable. Essentially, it's impossible to know how _hard it is to write_ a particular algorithm given the desired output. We have heuristic measures, but that's all they are: heuristics. What can be measured is how an algorithm performs, and that should be the measure of a programmer. Does the algorithm produce the desired output and does it perform optimally? But I feel like that's not what a programmer is truly measured on. They are more measured on how well they bend to the
Re: (Score:2)
If we can't measure "thinking productivity" or "thinking effort" how can we reward for it fairly?
That is the basic idea behind "free markets", right? That information will allow us to fairly reward people, and the rewards will motivate us toward higher productivity.
Heck, SHOULD we reward fairly, like people claim we've been trying to? I'll bet a lot of people would go hungry if we truly did - if people were paid for what they actually produced, and docked for wasting others' productivity.
Re: (Score:3)
You've only moved the problem up the chain a bit. Sure, it's easy to measure the productivity of "code monkey" work. How do you measure the productivity of the people making the "well described designs" in the SDS and their partitioning of the work among the code monkeys? The hard thinking that's hard to measure has to happen somewhere or you're not really doing much of value.
Re: How do you measure "thinking"? (Score:3)
No, you are ignoring the issue. You want someone to create the SDS to pass on to the coders, while prohibiting that designer from actually examining the problem via trial implementations. Effectively you seem to be assuming that the ancient waterfall model is the only suitable model. Far too many times have I talked to a customer about their requirements and things have gone thus:
1. Customer asks for X.
2. I say "OK, and where do I get the data to determine X?"
3. Customer answers question, specifying where th
Re: (Score:2)
You're talking past each other. Yes, with a perfect spec, it's easy to measure productivity of the implementation of tiny chunks.
The question is, "how do you measure the productivity of people preparing the perfect spec?"
Put another way, is it okay if collecting business requirements and producing a design takes 10k man-hours, if implementing that final design only takes 10 hours? What about 100k man-hours of "pre-work"? 200k man-hours of "pre-work"?
Re: (Score:2)
So, what metric would you use to compare your work with an SDS, compared to someone else trying to do the same SDS, or compared to you working on some other SDS?
Are you measured by # of requirements you produce per unit time? The % of change in the requirements between drafts? The number of questions needed to clarify each requirement?
If you were managing someone doing your job, what would you measure them by?
Rhetorical wank vs reality (Score:2)
If you could not measure productivity of something, that means that you could put a monkey to do that job. As in a literal fucking ape. And not one of the great ones. One of the stupid little ones would do fine.
The question here isn't if you can measure software developer productivity. You absolutely can, because we can measure that said small monkey cannot do the job as well as a typical software developer. And no you racist twits, there's a reason why American Indians outearn you by such a massive margin
Can't tell day to day (Score:2)
Yes, but only indirectly and retrospectively (Score:4, Interesting)
You can tell they were productive if you're still using their code two, five, or even ten years later.
You can tell they were unproductive if you're not, or if you had to fire them, or if they got frustrated for whatever reason and quit.
Re: (Score:2)
Re: Yes, but only indirectly and retrospectively (Score:2)
It can't be that shitty if it's still running.
At the very least it does what it needs to do while the alternatives either don't, or just don't exist.
Re: (Score:2)
Re: Yes, but only indirectly and retrospectively (Score:2)
That statement is true of all code.
The Apollo guidance computer famously couldn't handle anomalous input from the rendezvous radar, which had to be bypassed in-flight, during the first landing.
Re: (Score:2)
This sounds like the kind of toxic optimism Pixar warned kids about in the movie Inside Out (the Joy character)
You can tell they were productive if you're still using their code two, five, or even ten years later.
The dangerously faulty code had been in place so long, no one could determine its origins. "Probably some jr dev", they excuse. "This is definitely Bad Practice. Even if we work around it, this could cause problems for someone else in the future. It could be causing problems now that we're unaware of."
So why wasn't it fixed? Fear. Fear of their own chaos, for it controlled them now. The managers co
Go ahead, measure by line (Score:2)
Measure my productivity by lines of code. I can pump shit out at a phenomenal rate. But it's going to be exactly that: shit.
I'm paid to solve problems. Sometimes a problem requires me to change a single variable. But finding that single variable could take hours, sometimes days. Debugging is often a time consuming process.
got to fill out the TPS reports (Score:2)
got to fill out the TPS reports
They found a potential niche (Score:2)
Another thing that's maybe as or more important than productivity is trustworthiness, which is not usually measured, but I'd much rather work with a trustworthy but mediocre engineer than a brilliant and highly productive but unreliable one.
This sounds like they're fishing for business.
Aren't these the same pricks targeting Diversity H (Score:2)
Leadership Program or whatever. Louis Rossmann covered this.
Yes. (Score:2)
Software is not an assembly line. There isn't some blueprint design that gets made repeatedly. If something in software development repeats, it gets reused instead.
This means everything in software is always new, untested output. Especially at shitty web development jobs where people continually reinvent new Javascript frameworks. That means there is no applicable historical data that can
Re: (Score:2)
That means there is no applicable historical data that can be meaningfully applied to a new project that would predict how long it would take.
Indeed. And even more critically, you cannot predict whether a software project will succeed or fail, whether the code will be maintainable to a reasonable degree, whether it will have potentially catastrophic security problems, whether usability will be good, etc.
Face it, writing software is custom engineering. In any other engineering disciplines, that is reserved for experienced highly capable engineers. In software it is often done by people that do not really even understand the basics. That cannot and
Re: (Score:2)
I've always found a good way of framing it is to consider software engineering as collective opera writing. The final product includes sheet music for every instrument and vocalist, costume design, blocking, lighting, all the set construction, and just to put a cherry on top, the negotiated contracts of every employee.
Re: (Score:2)
Interesting. Also should be stated that the music needs to harmonize in the end and be a valid opera that people actually want to listen to. You probably have some musical background (I do not), so that would be obvious to you.
Not quite sure whether I would use this comparison, but it is definitely an interesting one.
The problem is the consequences. (Score:2)
What happens when you figure out that 10% of programmers are 10x more efficient than 90% of the programmers?
Are you going to pay your high performers 10x more?
Are you going to pay your low performers 10x less?
We don't have nearly the range of compensation to deal with the consequences of truthfully measuring developer productivity.
Re: (Score:2)
You help the other 90% get 10x more productive. You hired those people, you know they have the capability, just figure out what's causing them to not be productive.
Or, you really look at your hiring process.
Re: (Score:2)
If that was possible, it would have happened already.
Sadly, not everyone can be a x10 programmer, no matter how hard they try.
Re: (Score:2)
A friend of mine got a full time contract and works about 20 hours a week.
His boss, when asked by someone else how this is fair, said "I pay him to produce results. Not to keep a chair from flying off into space. You produce what he produces in half the time, you only have to work 10 hours a week for full pay".
Then again, he has a boss who is now 50 and was a developer in the same line of work for 30 years. He knows what assignments should need what time for an average programmer. And if he's done with it i
Re: (Score:2)
Yeah, that sounds about as good as it gets - maybe you comp the x4 programmer by only making them work 10 hours a week.
And I guess the x10 coder just shows up for one morning a week for 4 hours :)
Not sure how you institute that across a large bureaucratic organization though - some bean counter is not going to like that arrangement.
Re: (Score:2)
Well, being twice as productive as the average programmer is already pretty impressive, I doubt anyone will make 10 of them obsolete. :)
My understanding is that the bigwigs in the company know about the arrangement and also know that losing this guy would be a loss, and that he can and does get up and leave if he isn't happy with the arrangements.
If you have a very special skill set and are really good at what you do, it's usually not hard to get your demands met.
Re: (Score:2)
I have regularly noticed programmers that are literally orders of magnitude better than others. In some cases, you could put 1000 regular programmers on the job for any amount of time, and they'd never succeed where the one ace programmer can.
I've certainly seen some people spend weeks working on a problem that I've been able to solve in an hour. Those people might be useful doing standard changes that don't require strong problem solving skills, but they simply aren't as fast (i.e., productive) as people
Nope (Score:2)
Or rather not meaningfully. You need to look at quality as well, at level of insight in the solutions, at maintainability, and more important than ever, security. You can measure some of these in isolation but the result will be meaningless.
McKinsey. Nuff said. (Score:3)
Ah, yes. McKinsey, where management advice goes to die (after being billed).
I've been involved in I think 4 McKinsey "interventions" in my career. 2 were outright harmful, and the other 2 were merely a massive waste of money and time.
Can you measure software productivity? Well, maybe, depending on how you *define* productivity. For sure, you can't apply any naive metric; the real wizards in any given organization are the ones who might spend a day making the proverbial chalk mark where the part isn't working, where nobody else even knows where to look.
There's a wonderful book, which is alas in a storage unit at the moment and I can't find the name, about measurement and organizational dysfunction. The thrust is that if what one measures isn't aligned with the organization's goals, and the latter is very often misunderstood, one will lose track of the organization's goals and favor the (organizationally irrelevant) measurable metrics. In simple words, the drunk under the lamp-post syndrome. "I dropped them over there, but the light is better here."
I suspect that too many productivity measurers imagine (or hope!) that software is linear, akin to piecework like making skirts or hats. Alas for them, it's not. Software is everywhere discontinuous and so is software development - especially when it's bug fixing.
Re: McKinsey. Nuff said. (Score:2)
As regarding measuring the wrong thing.
Military bases frequently have a "base exchange". In a nutshell, they're department stores where military members can purchase goods. Quite useful since many military bases are fairly isolated and junior enlisted frequently don't have cars to travel to nearby cities or towns. In any case, these stores need managers and these managers need to be evaluated. At one point some higher up decided that a good criterion would be to see how well the shelves were stocked. After a
GPT4 can measure it (Score:2)
I'm sure AI will be used to measure it in the future. However, developers will also write code using AI.
What an exciting paper! (Score:2)
Linked not once, but twice:
https://dl.acm.org/action/cookieAbsent
Good leadership, not good metrics (Score:3)
The key to good productivity comes down to good leadership.
Find good people
Remove bad people
Give clear requirements
Create a great environment
Allow the time needed for people to do their best work
Then stand back and stay out of the way as much as possible.
Yes you can (Score:2)
Unfortunately only after the project is finished. You measure the number of bug report tickets.
McKinsey (Score:2)
McKinsey - creating a problem that only McKinsey can solve.
Just like every Tom, Dick, and Harry who invents a new Pattern/Practice that solves a problem that rarely exists, making development and debugging more complex just for the sake of earning money, rather than getting stuff done and meeting expectations on time and on budget.
it's easy to measure teams (Score:2)
It's hard to measure individual performance. Engineering teams have deliverables on a schedule, making those schedules is a measurement of productivity. And effort can be estimated by developers and tracked to evaluate project managers. Tasks can be tracked and prioritized in something as simple as a bug database. From there management can collect metrics such as the time to close tickets or the rate that tickets are closed over a long period.
Since engineering consists of professionals, making them define s
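The ticket metrics the parent describes are straightforward to pull from any bug database export. A minimal sketch, assuming a hypothetical export where each ticket is a dict with ISO-8601 opened/closed timestamps (the field names here are illustrative, not any real tracker's schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket export; real trackers expose similar fields via their APIs.
tickets = [
    {"id": 101, "opened": "2023-08-01T09:00", "closed": "2023-08-03T17:00"},
    {"id": 102, "opened": "2023-08-02T10:00", "closed": "2023-08-02T15:00"},
    {"id": 103, "opened": "2023-08-05T08:00", "closed": "2023-08-11T08:00"},
]

def hours_to_close(ticket):
    """Elapsed hours between a ticket being opened and closed."""
    opened = datetime.fromisoformat(ticket["opened"])
    closed = datetime.fromisoformat(ticket["closed"])
    return (closed - opened).total_seconds() / 3600

durations = [hours_to_close(t) for t in tickets]
print(f"median time to close: {median(durations):.1f} h")  # prints 56.0 h
```

Median is used rather than mean so that one pathological months-old ticket doesn't dominate the team-level number, which is exactly the kind of distortion naive averages invite.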
Filed it under: what a hands-off manager would say (Score:2)
The old IBM way (Score:2)
In the PBS documentary Triumph of the Nerds, Microsoft executive Steve Ballmer criticized the use of counting lines of code:
"In IBM there's a religion in software that says you have to count K-LOCs, and a K-LOC is a thousand lines of code. How big a project is it? Oh, it's sort of a 10K-LOC project. This is a 20K-LOCer. And this is 50K-LOCs. And IBM wanted to sort of make it the religion about how we got paid. How much money we made off OS/2, how much they did. How many K-LO
Show me how to measure an artist + learning first. (Score:2)
How do you measure "deciding what to make"? Or how much effort it took to learn a new concept or tool?
Is an artist "failing" to be productive if they don't love their own result? Many technology products need tons of revisions, or even replacement eventually. Are those "negative productivity"? Or was the previous claimed productivity false?
Is putting paint on canvas being "productive" no matter how it was done, or the effect it has? And by that thinking submitting changes is productivity?
We measure wha
It's certain that McKinsey cannot measure it. (Score:2)
"Developer productivity" is a really challenging problem because in many cases more-correct solutions are simpler than incorrect solutions - and, anyhow, it is often challenging to even measure "solution" in this space.
But regardless of whether experienced developers can measure productivity in anything remotely resembling an actionable way, I'm 100% certain that some random third-party consultant can't do so. I'd say that for any situation where McKinsey CAN successfully measure developer productivity, the
Probably the best way would be code size (Score:2)
Take the amount of code a developer wrote in order to solve a problem, add the dependencies and subtract comments and documentation. Perhaps if the language has some form of nested structure, multiply every token you count by the level of nesting before you sum them up. (Obviously one needs to work out the details)
If one produces a lot of code, not only will they spend a lot of time and resources writing it, but everyone coming after them will also need to spend time and effort to read and understand it. Co
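The nesting-weighted count the parent sketches can be made concrete. Below is one possible reading of it for Python source, using the standard-library tokenize module: every token costs (depth + 1), depth is tracked via INDENT/DEDENT tokens, and comments and layout tokens are free. This is an illustrative toy, not a validated metric, and the weighting scheme is an assumption about what the parent intended.

```python
import io
import tokenize

def weighted_size(source: str) -> int:
    """Count tokens, weighting each by its nesting depth + 1.

    Comments, newlines, and end markers cost nothing, so documentation
    is 'subtracted' as the parent comment suggests.
    """
    depth = 0
    total = 0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.INDENT:
            depth += 1
        elif tok.type == tokenize.DEDENT:
            depth -= 1
        elif tok.type in (tokenize.COMMENT, tokenize.NL,
                          tokenize.NEWLINE, tokenize.ENDMARKER):
            continue  # comments and layout are free
        else:
            total += depth + 1
    return total

flat = "x = 1\ny = 2\n"
nested = "if a:\n    if b:\n        x = 1\n"
print(weighted_size(flat), weighted_size(nested))  # prints 6 18
```

Note how the same three-token assignment costs 3 at top level but 9 when buried two levels deep, which captures the comment's point that deeply nested code is more expensive for the next reader.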
Re:demand deliverables (Score:5, Insightful)
Unless you are doing routine work, the same as before, the real problem is "You don't know what you don't know." Have you ever started a sentence to your boss with "I just wanted to..."?
Re: (Score:2)
When doing anything new, or new-ish, it's an exploration of the problem to try to understand the problem and try solutions as a way to try to better understand the problem. Then at some point, time runs out and if lucky there's a certain amount of usable stuff to wrap into a product. I'm not aware of any exceptions to this.
Re: (Score:2)
There might be some semantics here. What you describe isn't what I'd call development, it's just construction.
Re: (Score:2)
Case in point. I was recently working on
Re: (Score:2)
You must be a miracle worker! Surely in this modern, Agile world no-one ever knows any requirements before development starts. Even if they did, those requirements would have changed five minutes before they were written down. So it's clearly completely impossible to make any kind of plan and instead we need a modern development process where we throw stuff at the wall and see what sticks.
Love,
Every Agile specialist working at these big consultancies, probably
Re: (Score:2)
When the requirements are so well designed that their implementation can be actually measured, then a machine can probably do the job. But when the job is too complex for a machine to handle, actually measuring productivity is probably impossible. You can guesstimate things, and it's worked well so far.
Re: (Score:2)
Re: (Score:2)
Those are the ones who are sent to management classes, because that way they cause less damage.