Software

Rating System for Open Source Software

prostoalex writes "Carnegie Mellon University, Intel and SpikeSource are launching a rating system for open source software, New York Times says. OpenBRR 'is being proposed as a new standard model for rating open source software. It is intended to enable the entire community (enterprise adopters and developers) to rate software in an open and standardized way.'"
This discussion has been archived. No new comments can be posted.

  • Oh No! (Score:5, Funny)

    by AAeyers ( 857625 ) on Monday August 01, 2005 @05:39PM (#13218119) Journal
    This could be hurtful! Everyone should be a winner!

    Think of the children!
    • Free software, by definition, is software that's not used to extort money.

      But how are you going to reward the creators? How are they going to feed?

      This rating system could be one step in the right direction.

      Rating could determine the donation-worthiness of software - say there was a universal tax levied, or a universal donation fund by businesses and foundations, and then the money could be distributed by professionals who know where to spend the buck most efficiently, which developers are about to die of hunger,
    • I propose that the current form of sports competition is as harmful to the children as letter grades and streaming and holding back.

      We need to put up high curtains between the lanes and have individual finishing areas. This way no child will be made to feel inferior when they realise that they cannot run as fast as other children.

      I know this will be a damper for the spectators, but what can we do? Think of the children.

      all the best,

      drew
  • FUD from the NYT (Score:5, Insightful)

    by Catamaran ( 106796 ) on Monday August 01, 2005 @05:39PM (#13218120)
    The author of the article talks as though evaluating software objectively were a problem unique to adopters of Open Source:
    Free software, despite the price, can be confusing and costly for corporations to use. A few freely distributed programs, like the Linux operating system and the Apache Web server, have become well known, but most are still unproved.
    A more simple and accurate statement would be, "Software can be confusing and costly".
    • by 1000101 ( 584896 )
      "A more simple and accurate statement would be, "Software can be confusing and costly"."

      The entire article is about open source software, not all software. His statement is valid.

      • Hah! The main difference is that with OSS, at least you can evaluate the software if you choose to spend the time to do so, without paying a big fee and agreeing to a ridiculous license. Furthermore, with commercial software there is more incentive to lie and exaggerate the software capabilities.

        I don't blame the NYT, but I'd love to hear the rationalization for limiting the rating system to OSS. I know I'd love to rate a few of the commercial applications I've used. Some shareware sites do have ratin

      • Valid to whom?

        I don't buy it.. but, then again, I use FOSS software all the time. So I'm confused why these big megabucks corps are confused. Didn't their execs go to high school?

        Software can be confusing and costly, if you're a moron.
        • Valid to those who usually pay for their software and are pleased with what they buy, contrasted with those who try free software and are disappointed in how crappy most of it performs, usually from a user-friendliness point of view.
      • The entire article is about open source software, not all software. His statement is valid.

        No it's not. The same can be said about commercial software too. Moreover, commercial software would also need such a rating system, since you'd better know what you pay for before it's too late.

    • It is totally a problem unique to Open Source. We're all perfectly informed as to:
      How empowering IBM's Software is.
      How much we can get done using Microsoft software.
      How well Oracle scales
      And all thanks to their advertising. I mean, it's as if we already know EVERYTHING without even having to do any silly reading. When was the last time you had to actually install Microsoft software to know how good it was?

      Thank goodness for TV.
    • by msimm ( 580077 ) on Monday August 01, 2005 @06:34PM (#13218445) Homepage
      But the truth is proprietary software is quite well reviewed (there's an entire industry that makes it its business to review and recommend commercial software, usually somewhat usefully).

      Reading reviews of your favorite Windows antivirus software or researching an enterprise-class database package will turn up a wealth of information (of course you still need to dig into it and make the final decision, but some things simply can't be helped:).

      OSS software is comparably a total mess, with only certain major projects getting real coverage (not surprisingly, usually projects with some sort of commercial support, i.e. Apache, MySQL, Sendmail, etc., but the water gets pretty muddy quickly).

      And aside from all those mostly concrete worries (concrete to you and me, anyway), there are other concerns when reviewing OSS software for deployment in a business/production environment: support; boss appeal (someone has to sign off even if the software is free); that the software is mature and will meet or exceed your needs; and that (if you decide to leave the company) it's reasonably well supported, so someone who comes in and doesn't know the particular software has a reasonably good chance of configuring and maintaining it.

      Those crazy business people.
      • There is the argument that OSS applications that are worth a darn rise to the top. Those that don't are buried in some obscure, inactive SF project.
        • It could be, but I work in the industry. The truth is that while a lot of good projects get/become commercially supported a lot of good projects don't. And even out of the ones that do a lot still don't get the kind of thorough review that you'd find in the commercial software business (TMDA, Postfix, etc).

          And of course they miss out on the structured feedback that kind of review provides.
      • But the truth is proprietary software is quite well reviewed

        The truth is that some of it is and some of it isn't. And a good number of the "reviews" are done by hacks who are getting paid by the developers for it.

        Reading reviews of your favorite Windows antivirus software or researching an enterprise-class database package will turn up a wealth of information ... OSS software is comparably a total mess, with only certain major projects

        To the contrary. I can type the name of almost any OSS project into
        • With business the rules are different:

          A) there is no free; free costs money in support or special training, but more importantly free might cost PHBs (or more likely the tech person who was responsible for deploying it) their JOBS.

          B) Support, support, support. If you're going to deploy something on a business critical production system the rules of the game change a little bit more as little issues like: is it well documented (no, forums only count for the poor sap trouble shooting it, you and I may have
        • Support is way overrated. I have never ever gotten good support from any company that produced anything.

          There are two aspects to support; actually getting support, so that your problems are fixed quickly and efficiently is one of them. The other, and often just as important to some people, is being able to point the finger at someone else.

          It's arse-covering - being able to say "Yes, there's a problem, yes, I recommended the software, but it's not our fault, the support firm is dicking us about - we've paid
      • But the truth is proprietary software is quite well reviewed (there's an entire industry that makes it its business to review and recommend commercial software, usually somewhat usefully).

        Oh come on. Please tell us, how many such reviews have you read? And please tell us, how many of those reviews were actually good, and not written by some fake, ignorant "pro" with too much spare time and nothing else to do (yes, that goes for quite a few "professional" sites also, you'd be surprised).

        Most of such
      • The truth is that you don't understand how or why people use open source software.

        You are the ultimate reviewer because you can download it for free, test it on your hardware, see if it really meets the needs of your organization without the pressures of trial software or of a vendor looking over your shoulder.

        The age of organic IT has arrived, which is the age of real IT. In five years time, people will not buy into the marketing drivel that often promises the earth with very little in situ quantifiable ev
        • The fact is I use both proprietary and Open software in both my personal and professional work every day. I'm a systems administrator, so I'm in a fairly good position to see what it's like from a business perspective.

          The idiom "anyone can support it" cuts right down to the problem with the OSS idealist. Do businesses really want to have to support it? But more importantly, do they want to bet their short, long or mid-term strategies on it?

          This is exactly why Red Hat does so well. In a year, they'll be here.
      • OSS software...
        So are you the kind of person who enters their PIN number into the ATM machine while looking at the LCD display?

        (Laugh, it's funny! Well, maybe...)
    • The author of the article talks as though evaluating software objectively were a problem unique to adopters of Open Source:

      Good point. It's a problem with all software, and even more so with proprietary software, I would argue. Open Source has the advantage that you can try/evaluate it indefinitely, inspect the source code yourself, and it rarely comes with a spiel from some idiot salesman who has never coded a day in his life yet says it can do everything you ask of it, even when it can't.

      Thi
    • A few freely distributed programs, like the Linux operating system and the Apache Web server, have become well known, but most are still unproved.

      Surely, Captain Kirk, wrote that, article.
  • by generic-man ( 33649 ) on Monday August 01, 2005 @05:41PM (#13218133) Homepage Journal
    If you execute a specific elisp file at a key time, emacs displays a very graphic mini-game involving Richard Stallman. As a responsible parent, I want to make sure that this sort of thing isn't seen by my children when I'm not watching them.

    I applaud this rating system and wish it well.
    • If you execute a specific elisp file at a key time, emacs displays a very graphic mini-game involving Richard Stallman.

      I would pay money not to see that.

      My eyes! The burning!
    • Oh, don't get your undies in a bunch. It's basically just Stallman and ESR "talking nerdy" to each other. There's hardly any physical contact, and what little there is is better described as "awkward" than "hot man-on-man action". Finally, and most importantly, it's important to note that they're both fully clothed.

      *shudder*

      (I kid! I kid!)

    • > If you execute a specific elisp file at a key time, emacs displays a very graphic mini-game involving Richard Stallman. As a responsible parent, I want to make sure that this sort of thing isn't seen by my children when I'm not watching them.

      You think you've got trouble? I bought this goddamn O'Reilly book [oreilly.com], and right there in Bob-damned Chapter 15 if it ain't instructions on how to get Hot Coffee!

  • So the site exists to provide feedback on open source software, yet all of the RFC documents they provide for public consumption are in Microsoft Office formats. Using OpenDocument or PDF would have been a bit more appropriate. What better introduction could you have to businesses than to let them know the world doesn't revolve around Microsoft Office?

    ed

    • Actually, you want them to be able to open it, and there's a good chance that the people wanting to see a rating AREN'T using open software, but are considering it.

    • Actually, the main white paper is in PDF: http://www.openbrr.org/docs/BRR_whitepaper_2005RFC1.pdf [openbrr.org].
    • Here here. This is the first thing I noticed as well. I'm all for a rating system for open source projects (even if people use it for no other reason than to find projects that they hadn't found anywhere else). However, how hypocritical can we possibly be when we write reviews of open source software in a non-open format? Good grief, people. What the hell were you thinking?

      Frankly I prefer a review system based on raw numbers such as how FreshMeat.net [freshmeat.net] handles ratings. How many downloads does a

      • The Freshmeat method is a step in the right direction, but it doesn't go anywhere near far enough. What does it tell you?

        Obviously, popularity doesn't mean anything or we would still be using Windows. Downloads don't tell you anything about how good a project is. Downloads from repeat "customers" might, but Freshmeat doesn't tell you that. Clickthrus to the website don't tell you anything about how good a project is. It only tells you how enticing the project description is.

        I want to discount their "vitality" st
        • A high vitality is actually a mark of instability!

          I'd definitely disagree here. A high vitality mark just means that the project is continually making progress. It doesn't mean that it's unstable. It means that the developers are actively moving forward with the project. I do agree that this isn't exactly a useful benchmark. Look at Sendmail's vitality. Ha! I'd consider it to be the best MTA, and from Freshmeat's numbers it looks like an abandoned project. :-)

      • The proper shout is "Hear, hear", which, when you think about it, makes an awful lot more sense.
    • Here here... I'd go a step further and save all of the files as LaTeX files so you can easily convert them to whatever format you want. If the CIOs don't have the necessary software to convert/read LaTeX files, this will be a great opportunity to tell them to RTFM, write the code themselves, or at least install the right software using apt-get or portage. So not only do they learn about the rating system, they'll learn a lot about the community too.
    • A rather telling indicator of what many employers think about ms office is:

      "Please do NOT send your resume as an attachment; rather, please send as text, in the body of the message."

      Some don't even want .pdf files. Why? In almost every instance (ok, in a number of them) where I've seen this request, they specifically say this measure is to avoid a virus attack. Some places even ask for fax or paper resumes. So much for anti-virus tools and ms office, particularly with ms word having vb and other useless jun
  • Good idea... (Score:3, Interesting)

    by msmercenary ( 837876 ) on Monday August 01, 2005 @05:42PM (#13218141)
    I can't count how many times I've googled for some OSS to do a specific task, and found what I wanted only after installing and uninstalling four programs that were buggy, slow, didn't have the features I wanted, or simply wouldn't build/install.

    On the flip side, there has always been an inherent and objective rating system for the quality of non-free software -- At what price will enough people purchase it to make it worth producing?
    • Well don't ask Intel. They are the worst people for rating anything. It's going to come down to whether it is optimized for their instructions to get points.

    • On the flip side, there has always been an inherent and objective rating system for the quality of non-free software -- At what price will enough people purchase it to make it worth producing?

      I'd say the opposite is closer to the truth! In my experience, expensive software tends to be niche software with only a few customers, and plenty of rough edges. The cheap, workaday software (say, WinZip) is polished. Price has more to do with the dynamics of the competitive environment - value is just the upper

  • This is a silly idea. Most people don't know what good software is; they will always pick the thing with a giant paper-clip over something that runs tenfold faster... I fail to see how this is valid. Not least of all because a lot of open source software isn't designed for the public at large, and thus who would be able to say good things about it?

    I would also like to ask what software being OpenSourced (as opposed to Closed or Free source) has to do with a rating system? Also what value is there is a 'stan
  • The rating system has 11 categories, including Normal, Offtopic, Flamebait, Troll, Redundant, Insightful, Interesting, Informative, Funny, Overrated and Underrated.

    Each category is to be rated -1 to 5.

    There will also be filtering tools so a potential corporate user can specify its most important considerations.
  • Does this mean that an Adults Only rating will be applied if the source code contains an excessive amount of profanity in the comments? Will the U.S. Congress tax any OSS with an Adults Only rating? Will having clean code become the norm?
  • FOSS has always been rated by popularity among users. What's wrong/insufficient with this good old natural rating system ?
    • It's time consuming to ask everybody whenever you're looking for some software. If you were choosing between qmail, exim, and sendmail, it would be tricky to accurately determine their popularities without some formal method. And applying a formal method takes a lot of work, which would need to be done by everyone trying to choose a program. The point of this project is to do that for you.
    • Because if it were only up to popularity, you would still be using Windows!
    • Why do we have movie reviewers when we can have box office receipts to tell the same thing (substitute books, games, etc)? Answer: because its more useful for you to have someone whose judgement you trust act as a proxy for your utility than it is to have the market average act as a proxy for your utility.

      Information acquisition costs for software are insane, too, compared to the price of a book. Cost of determining that Harry Potter #6 was definitely a good read: $23 and 8 hours of reading. Cost of de

  • As someone who is a Windows user, I'd like to see objective direct comparisons for all OSes so that I can see what will be seamless and what I will have trouble with, so I can see what I'm getting myself into.

    I guess an analogy might be buying a new type of car. I only have my current car to judge the new one by, and I know my old car pretty well... so how does it stack up to my old car, and what are these new models they are offering.
  • ...is that often people will give a piece of software a low rating as a sort of "cry for help" if they can't figure out how to get it to work. Like, "I would've given X a 10/10 rating but I couldn't work out how to make it do Y.".
    • Good, then maybe software developers will improve their user interfaces. If a user can't figure out how to do X, then X might as well not even be implemented... it amounts to the same thing.
      • If a user can't figure out how to do X, then X might as well not even be implemented... it amounts to the same thing.

        I absolutely agree if we change it to "If all users can't figure out how to do X" but saying "a user" leaves no room for a learning curve. I think it's completely valid to expect that some software will be written that is not necessarily meant for the novice user.
  • As good as CMM? (Score:3, Insightful)

    by orthogonal ( 588627 ) on Monday August 01, 2005 @05:54PM (#13218218) Journal
    Carnegie-Mellon: these are the guys who love to quantify the unquantifiable.

    Didn't they also give us the "Capability Maturity Model"? I've seen organizations race to get to CMM-3 or CMM-4, and it's all been a joke.

    A bunch of highly paid consultants tell everyone a new way to count beans ("under CMM, we group the beans starting from the right, not the left....").

    Promises are made about code auditing, but once the CMM level has been awarded (usually by highly paid consultants who just happen to work with the highly paid consultants who "mentored" the company's CMM training), all that's actually done is that the people doing the real work of writing software are regularly distracted by a clown with a check-list and a clipboard.

    Carnegie-Mellon continues to have a fetish for quantifying and for creating check-lists, and middle management continues to have a fetish for anything that allows them to quantify (even spuriously), because it takes the risk and bother out of their jobs.

    Middle Manager: "The WordPerfect Project only got a 3 on the Carnegie Mellon software score, but the Clippy Project got a 5! So, it's perfectly safe for me to decide to disband the WordPerfect Project and devote its resources to the Clippy Project. (And if it turns out later that was a bad decision, they can't fire me, because I relied on hard numbers generated by a known process!)"

    • From a personal perspective, I like to imagine software has been tested to death by totally anal people with checklists before I find my life depends on it.
      • I think you misunderstand what CMM et al, are all about. Software "Quality" checklists have nothing to do with software testing (except perhaps to ensure that the header, footer and coverpage of the test document were conformant).
        • "I think you misunderstand what CMM et al, are all about."

          I think you underestimate my experience in software development.

          I believe your bad experiences are due to a common mistake many businesses make, i.e.: CMM being enforced from "on high" using a few contractors to "mentor" under-qualified people as the "quality assurance" people, i.e.: the PHBs want to tack quality on at the end of a project while the rest of the staff keep churning out code. To you it seems a waste of time because some "quality" person
    • Didn't get accepted to CMU, huh? Had to settle for one of those non-quant schools, like Harvard or Princeton?

    • Didn't they also give us the "Capability Maturity Model"? I've seen organizations race to get to CMM-3 or CMM-4, and it's all been a joke.

      I've worked at a CMM-3 company before. It looks like the CMM has actually been superseded by something else, so it's not easy to get the list of qualities for each level, but they are something like 1) random monkeys writing code 2) random monkeys writing more than one program 3) random monkeys writing code "repeatedly" with a version control system (CVS or similar) 4 a
      • I've worked at a CMM-3 company before. It looks like the CMM has actually been superseded by something else

        CMMI.

        so it's not easy to get the list of qualities for each level, but they are something like 1) random monkeys writing code 2) random monkeys writing more than one program 3) random monkeys writing code "repeatedly" with a version control system (CVS or similar) 4 and 5 I don't remember, but it's not anything remarkable beyond what you'd like from quality software.

        Yep; they aren't even trainin

    • Carnegie-Mellon: these are the guys who love to quantify the unquantifiable.

      Didn't they also give us the "Capability Maturity Model"?


      FYI, CMM came from Carnegie Mellon's Software Engineering Institute, an affiliated but wholly separate spinoff. This rating system comes from Carnegie Mellon West, a new spinoff campus in Silicon Valley (on the other side of the country from their main campus in Pittsburgh). Neither of these systems comes from CMU's main School of Computer Science. Plus, I believe Carnegie Me
  • What's the longest slashdot thread ever? (don't mod me down please!) - A google search provided me with no answer, so, in case anyone knows, what's the longest slashdot thread ever?
  • From the article: Each category is to be rated 1 to 5.

    Gentoo has been given an OpenBRR of "1" for "1337 haxorz only."

  • Yes, excellent idea. We'll rate all OSS with all these nice quantitative factors using mostly arbitrary value assignments, so the already risk-averse PHB community have yet another reason to avoid using OSS!

    I'm certain the boys in the Redmond boardroom are all nodding their heads in delighted approval.

    Perhaps members of the OSS community should turn the tables? I suggest we create a set of metrics to rate the business users of OSS, e.g.,

    • Has XYZ Corp. contributed any patches?
    • Has XYZ Corp. contribute
  • I just hope this rating system doesn't suck as much as slashdot moderation.
  • 1 to 10 (Score:4, Funny)

    by b17bmbr ( 608864 ) on Monday August 01, 2005 @06:24PM (#13218391)
    1: absolute horseshit. stuff I wouldn't use if paid a million dollars.

    10: barely usable, requires constant tweaking, stuck at version 0.9.3, crashes occasionally, and requires three new libraries each upgrade which break other applications.
    • Hmm, I think I agree completely. However, as another poster noted earlier, this all applies just as much to proprietary software as to "open source". :)
  • This is like rating computer games, or art, or anything else for that matter. It's subjective, and one person giving a considered 9/10 doesn't mean another person trying to be equally fair won't give it 7/10.

    Another thing: The evaluation often has to do with what you're using the software for. What would you rate better? A saw or a hammer? Depends on the situation right?
  • What I have wanted for a while, is website where people could comment on the best program to use if you want to do X. It doesn't have to have a *single* best answer, but provide some information, and allow a default setting for people who might feel overwhelmed with choices.

    The website would have two modes: consumer, and reviewer. The consumer mode would provide a tree-like interface where you could click choices on what you want to do. A couple of examples would be:
    draw->raster->most powerful
    draw-
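    [Editor's note: the tree-of-choices interface described in the comment above could be prototyped in a few lines. This is only a sketch of the idea; the categories, choices, and recommended programs below are invented for illustration, not taken from any real site.]

```python
# Hypothetical sketch of the "click through a tree of choices" idea.
# Category names and recommendations below are invented examples.

tree = {
    "draw": {
        "raster": {
            "most powerful": "GIMP",
            "simplest": "KolourPaint",
        },
        "vector": {
            "most powerful": "Inkscape",
        },
    },
}

def recommend(tree, *choices):
    """Walk the choice tree one level per choice; the leaf is the default pick."""
    node = tree
    for choice in choices:
        node = node[choice]
    return node

# e.g. draw -> raster -> most powerful
print(recommend(tree, "draw", "raster", "most powerful"))  # GIMP
```

    A real site would also let reviewers attach notes at each leaf, so the "default setting" still comes with enough context for users who feel overwhelmed by choices.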
  • I am wondering how this helps over freshmeat's rating system?

    I personally use the slug.org.au rating system: ask at the next meeting how to do something and listen to the responses.
    • I am wondering how this helps over freshmeat's rating system?

      Why, in this day and age, do we still have people like this, like freshmeat, and so forth, rating software based on a simple, global, scalar metric?

      One of the *best* things that computers can do versus noninteractive tables of ratings is to produce a customized rating!

      So, if we have a big "web" of entries, and I rate foo better than baz, and baz better than bar, then someone else can come along and if *they* tend to rate things
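      [Editor's note: the customized-rating idea above (predict what you would prefer from users whose pairwise rankings agree with yours) can be sketched roughly like this. It is a toy version of collaborative filtering over the comment's invented foo/baz/bar examples, not any real rating engine; all users and data are made up.]

```python
# Toy sketch of personalized ratings from pairwise preferences.
# Each user's ratings are stored as (better, worse) pairs; the users
# and preferences here are invented for illustration.

prefs = {
    "alice": {("foo", "baz"), ("baz", "bar")},
    "bob":   {("foo", "baz"), ("bar", "baz")},
    "carol": {("baz", "foo"), ("bar", "foo")},
}

def similarity(mine, theirs):
    """Agreements minus disagreements on pairs both users have rated."""
    score = 0
    for better, worse in mine:
        if (better, worse) in theirs:
            score += 1
        elif (worse, better) in theirs:
            score -= 1
    return score

def predict(me, a, b):
    """Positive result: users who rate like `me` tend to prefer a over b."""
    vote = 0
    for user, theirs in prefs.items():
        if user == me:
            continue
        w = similarity(prefs[me], theirs)
        if (a, b) in theirs:
            vote += w
        elif (b, a) in theirs:
            vote -= w
    return vote

# Carol disagrees with alice on their shared pairs, so her (bar, foo)
# rating counts *against* bar, pushing alice's prediction toward foo.
print(predict("alice", "foo", "bar"))
```

      A global scalar score collapses all of this into one number; the point of the pairwise web is that two users with opposite tastes can legitimately get opposite recommendations.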
  • Isn't this just going to devolve into fanboy vote scamming?

    I mean, if you look at the discussion boards, you see a lot of partisanship along the lines of "Linux Sucks" and "BSD is teh fuxx0r." I see more energy spent surrounding certain software packages in the slamming of competing projects than actual development and improvements. It's not universal, but I see more of it than I'd like.

    Also, who do you trust to rate a project? Its authors? Its users? Rabid fanboys of a competing project?

    The same problem e
  • vi or emacs, which is better?
  • Of course, we at /. know that of the 12 categories mentioned as rated categories there is a missing 13th category: interoperability with M$ Windows. If an OS project can't get that '5,' then it's not worth the effort.

    Ben
  • The Excel templates are cute, but a neutral, open XML document format or RDF markup designed to publish the results of this analysis might actually be useful.

    As it stands, the BRR system will work wonders for private consultants who would rather produce reports one customer invoice at a time. However, if it were easy to publish the document in a standard format, then you could use Google to find out whose ratings of a particular program are most reliable and filter out the flames and over-exuberant raves fr
  • I really don't see an opportunity for objectivity here. Who decides whether a particular open source package is worthy?

    For example, I maintain a project [citadel.org] that often competes directly with software produced by Carnegie Mellon University. How could it possibly get a good rating?

    Ok, ok, RTFA and you'll see that everyone contributes, you say. Yes, but then you have the groupthink effect. Slashdot is the perfect example of this, where the level of groupthink and popularity contests are surpassed only by hig
