MySQL 5.1 Improves Performance, Partitioning, Bug Fixes

kylehase writes "CIO.com has a writeup about MySQL's 5.1 release planned for next week. Among the enhancements are many bug fixes from 5.0, some of which may increase performance 20% or more, as well as 'partitioning, events scheduling, row-based replication and disk-based clustering.'"

  • by Anonymous Coward on Saturday April 12, 2008 @10:43AM (#23047016)
    MySQL has nearly caught up to PostgreSQL [postgresql.org] in terms of features.

    PostgreSQL's Generalized Search Tree (GiST) indexing is still better than anything MySQL has to offer, in terms of performance and capability.

    The PostgreSQL OpenFTS full text search engine is another marvel of engineering. It routinely outperforms similar extensions for MySQL in terms of performance, memory usage, and concurrency.

    I hope that an upcoming release of MySQL deals with the maximum field size problem. With PostgreSQL, there is a max field size of 1 GB. For MySQL, it's a mere 50 MB. For textual representations of certain geographic system data, it's not unusual these days to have individual fields that need to store 500 to 600 MB of data. PostgreSQL handles these fields fine. MySQL fails.

    • by IversenX ( 713302 ) on Saturday April 12, 2008 @11:13AM (#23047224) Homepage
      MySQL fails in many other cases, too.

      Many people see MySQL as the consistent winner in database benchmarks. I don't mean this in a bad way, but a lot of people are so focused on the performance of MySQL vs. PostgreSQL, that they forget that MySQL is usually only fast for really simple queries.

      That would be fine, though, if it weren't for the failing integrity.

      In terms of data integrity, PostgreSQL is kilometers ahead of MySQL. With MySQL, I have seen tables get badly corrupted, sometimes even beyond repair(!) if a disk runs full. That's simply unacceptable.

      The syntax is also pretty lax. Adding an integer and a string? No problem. String and a float? Sure.

      You want a constraint? Sure, it'll accept that query. Will it honour the constraint? Not so much.

      Creating an InnoDB table, for (some) referential integrity? Sure, it'll give no errors, but if InnoDB support is disabled for any reason, it will create MyISAM tables instead, without any hint or warning. This has the potential to cause serious data loss.

      Inserting a row with a primary key value outside the legal range? It'll give no errors, but it also won't insert the row. Instant data loss.

      I know it's a popular database, but I would probably not recommend MySQL for any project. If you need something lean and fast, try SQLite. Then you _know_ you don't get any type checks and fancy things like that, so you code for it. If you want a proper, free database, go with PostgreSQL. Half-baked is not my kind of tea. I really hope they will work on data integrity in the upcoming releases, but I fear it's not going to happen.
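
      For readers who haven't run into this, a minimal sketch of the silent-adjustment behaviour described above, assuming a 5.0-era server running with its default (non-strict) sql_mode; the table and values are made up:

        CREATE TABLE t (id TINYINT PRIMARY KEY, name VARCHAR(3));

        -- 999 is out of range for TINYINT and 'abcdef' is too long for VARCHAR(3);
        -- without strict mode both values are silently adjusted (to 127 and 'abc')
        -- and the INSERT reports at most a warning, never an error.
        INSERT INTO t VALUES (999, 'abcdef');

        -- Mixed-type arithmetic is also accepted without complaint:
        SELECT 1 + 'foo';   -- returns 1, because 'foo' is coerced to 0

        SHOW WARNINGS;      -- run after each statement; the only place any of this shows up
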
      • Re: (Score:2, Flamebait)

        by locokamil ( 850008 )
        No offense intended... but how do you bake tea?
        • Re: (Score:3, Funny)

          by chunk08 ( 1229574 )
          1. Pick tea leaves
          2. Preheat oven to 400 degrees Fahrenheit
          3. Arrange leaves on baking sheet
          4. Bake until crispy and dry, but not burnt
          5. ???
          6. Profit!
      • by Splab ( 574204 ) on Saturday April 12, 2008 @12:49PM (#23047822)
        While I generally agree with you, a few points and additions:

        Creating an InnoDB table, for (some) referential integrity? Sure, it'll give no errors, but if InnoDB support is disabled for any reason, it will create MyISAM tables instead, without any hint or warning. This has the potential to cause serious data loss.

        This is not entirely true. MySQL will revert to MyISAM even though you specifically asked for InnoDB - it will, however, issue a warning that it is doing so. This is of course a moot point, since most application programmers never check for warnings.
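
        For completeness, a small sketch of what that looks like in practice, assuming a 5.0/5.1 server where InnoDB happens to be unavailable (the table name is made up); the substitution only shows up if you go looking for it:

          CREATE TABLE orders (id INT PRIMARY KEY) ENGINE=InnoDB;   -- succeeds anyway

          SHOW WARNINGS;              -- warns along the lines of
                                      -- "Using storage engine MyISAM for table 'orders'"
          SHOW CREATE TABLE orders;   -- the dumped definition ends in ENGINE=MyISAM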

        And just to feed the flames while we're at it, MySQL will fail to fire triggers on cascading events.

        If you have tables A, B and C, where B references some information in A and C references B, all cascading on updates to A, then any update trigger on C (and possibly B) will fail to fire. This is a very big problem if you are using triggers to keep at least some form of consistency.
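
        A simplified two-table sketch of that scenario (all names made up); it matches MySQL's documented rule that foreign-key cascades do not activate triggers on the tables they modify:

          CREATE TABLE A (id INT PRIMARY KEY) ENGINE=InnoDB;
          CREATE TABLE B (id INT PRIMARY KEY, a_id INT,
                          FOREIGN KEY (a_id) REFERENCES A (id) ON UPDATE CASCADE) ENGINE=InnoDB;
          CREATE TABLE audit_log (changed_at DATETIME, b_id INT) ENGINE=InnoDB;

          CREATE TRIGGER b_audit AFTER UPDATE ON B
          FOR EACH ROW INSERT INTO audit_log VALUES (NOW(), OLD.id);

          INSERT INTO A VALUES (1);
          INSERT INTO B VALUES (10, 1);

          -- The cascade rewrites B.a_id, but b_audit never fires, because the
          -- cascaded change happens below the trigger layer.
          UPDATE A SET id = 2 WHERE id = 1;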

        To top it off, most replication services in MySQL are at best flaky. Usually they replicate by using the binary log, so if the primary fails you lose the last X seconds/minutes/hours (depending on setup and load) of transactions. Even if you have the binary log on a GFS you are still in big trouble, since the secondary still needs to replay all transactions leading up to the failure - I've heard of sites where this was taking minutes to complete! (This might change in the new version)

        Personally I wouldn't touch either PGSQL or MySQL in a mission critical environment, they are very nice toy databases, but when shit hits the fan - and it WILL happen - you need a reliable system with instant failover, which neither database can provide.
        • by segedunum ( 883035 ) on Saturday April 12, 2008 @02:20PM (#23048396)

          Personally I wouldn't touch either PGSQL or MySQL in a mission critical environment, they are very nice toy databases
          I hear this refrain from every terrified analyst who ever wants to bring up the dreaded subject of open source databases, and I see no hard evidence for it. Sorry, but my bullshit detector goes into overdrive when I hear the phrase 'mission critical' and 'toy databases'. MySQL has its shortcomings, and has generally been the web database backend of choice (and it powers quite a few large 'mission critical' web sites), but Postgres really has been the open source database that has kicked on. Failover? Mirroring? Clustering? Yer, there are ways and means of doing that pretty well, and I have seen ample evidence that it can be trusted with lots of 'mission critical' tasks.

          I've managed to start using Postgres in an organisation that has traditionally been all Oracle. The main reasons are the huge cost of additional licensing for additional servers, the incredible amount of DBA assistance that all Oracle installations seem to need (and which they don't have the resources to provide), and Oracle's incredible ability to suck any system resources you have into a black hole on any system. When any 'mission critical' database has the memory footprint of either MySQL or Postgres, and when it can actually start up before the end of the next ice age, give me a call.

          but when shit hits the fan - and it WILL happen - you need a reliable system with instant failover, which neither database can provide.
          An awful lot of people have been waiting an awful long time for that shit to hit the fan - and in the meantime it has cost them an arm and a leg in not only licensing and support costs, but also in a needless waste of system and hardware resources.
          • by Splab ( 574204 ) on Saturday April 12, 2008 @03:15PM (#23048732)
            Might want to get your BS detector checked then.

            MySQL fails at some very critical points. As I said in a previous post, it fails to fire triggers on cascaded updates.
            Also, MySQL believes it's better to serve a best effort than a failure - this is probably the biggest NO GO! out there. You NEVER EVER do something other than what was requested in a database. If the transaction model fails, you are using no more than an advanced file pointer.

            Now PG is a very nice database, they got all the right things implemented, and often better than the competition.

            PG, however, does not have any support for scaling; if you want to scale you need some form of middleware to handle it - and currently you have to buy Continuent for that - which is a nice product, but they don't support stored procedures and triggers.

            And please don't just hit Google for PG and scalability and come back saying there are all sorts of products out there - most of them are based on triggers and some very bad methods for propagating data, and all of them lack the ability to take down primary or secondary server(s) in a running environment and put a new one up without interruption in the data flow.

            An awful lot of people have been waiting an awful long time for that shit to hit the fan - and in the meantime it has cost them an arm and a leg in not only licensing and support costs, but also in a needless waste of system and hardware resources.


            That line alone tells me you got your head so far up your OSS arse you are seeing pink elephants.

            IBM Denmark just went down this week for a whole day; pretty sure their big clients are a bit unimpressed with their failure to bring multimillion-dollar installations back online.

            If postgres can handle your situation then fine, but in my environment a database failure means everything comes to a grinding halt. And when you promise clients 99.999% uptime you sure as hell need subsecond failover *hint you can't do that with anything that reads binary logs from primary* and zero loss of transactions.

            • by Splab ( 574204 )
              bloody hell, forgot my point with IBM.

              When multimillion-dollar installations fail and you are paying for the support + guarantee on uptime, you've got somewhere to send the bill if shit hits the fan.

              What will you do when your PG installation fails? Go on IRC and ask for help?
              • by growse ( 928427 ) on Saturday April 12, 2008 @04:28PM (#23049250) Homepage

                "Phone Sun" I believe is a reasonable answer to your last point. I also believe they're not the only people who do support.

                But you're right - anyone who picks MySQL or Postgres to power a super-resilient mission-critical service is an idiot. And anyone who uses Oracle to power a non-resilient low-to-medium-load webservice is also usually an idiot.

                Tools for the jobs people, tools for the jobs.

                • by Splab ( 574204 )
                  Actually Oracle does come in a "thin" cheap client.

                  To be honest I haven't checked the new prices on MySQL support, but carrier grade support was very expensive before, and I doubt it has improved with the Sun takeover. They do have support, and it is according to rumors fast, however you don't get that support unless you cough up the money for it.
                • by Firehed ( 942385 )
                  I'm sure you won't like the example, but Facebook is almost entirely powered by MySQL. Granted, it's very heavily modified (at least according to their jobs pages) to provide better support for pretty much everything that'll get mentioned in all of these comments, but they say that most or all of those changes will be released back into the public for future revisions.

                  I believe that Google also uses MySQL heavily, or at least did at one point. However, that's just some vague recollection and could be tota
                  • by Splab ( 574204 )
                    Google has done some work on MySQL.

                    Those examples are where MySQL does shine. For any web application where you've got a factor of 100 or 1,000 - or even more - reads per write, MySQL is a good option.

                    You can never make a generalization and say this will solve everything. The right tools for the job etc.
              • bloody hell, forgot my point with IBM. When multimillion-dollar installations fail and you are paying for the support + guarantee on uptime, you've got somewhere to send the bill if shit hits the fan. What will you do when your PG installation fails? Go on IRC and ask for help?
                Tools for the job; if you are spending millions on software then go with a company that is going to charge you millions. If you want a kick-ass database for free then go with PG.
                • by Splab ( 574204 )
                  PG is a nice database, but it does not provide instant failover, and thus does not provide what I need.
              • by segedunum ( 883035 ) on Sunday April 13, 2008 @09:16AM (#23053980)

                When multimillion-dollar installations fail and you are paying for the support + guarantee on uptime, you've got somewhere to send the bill if shit hits the fan.
                In reality, you have absolutely nowhere to hide and no one else to blame. The downtime still happened, you still have to deal with it and you're the one who picked IBM or whoever. The enterprise vendor doesn't give a fuck because you ponied up the money and you're locked in anyway. The fingers always point at you. Spending other people's money in large quantities to cover your ample ass isn't going to help.

                What will you do when your PG installation fails? Go on IRC and ask for help?
                This is another point that gets made by idiot analysts banging on their blogs. Noting the above, that it is always your fault and your responsibility no matter how much money you chuck at an enterprise vendor, you have to have experienced some of the 'enterprise' support from vendors as I have. The caveats on what they will and won't support a lot of the time are unbelievable. In a lot of cases, Google gives you a faster response and more of a hint at the problem - and I've experienced that from everything from databases to server hardware. By the time a consultant arrives, I know more about what's going on than he does.

                Also, I think you save a lot of time, money and stress by putting yourself into situations where dependency on emergency enterprise support is minimised. Just a small hint.
              • How about EnterpriseDB [enterprisedb.com]?

                I would rather get support for my database from an organization dedicated to the database support, rather than an IBM that might provide a DB2 support guy, along with half a dozen sales guys trying to tell you that you need other IBM products to go along with the DB2 database to really have the environment you need.

            • MySQL fails at some very critical points. As I said in previous post it fails to fire triggers on updates.

              If you have a lot of cascading triggers then I'd worry more about what you're doing than how your database handles them. Handling triggers responsibly is important, or you'll never figure out what the hell is happening twelve months from now. That's an application developer's problem, not a DBA's problem, and I wish DBAs would just stay the hell away. If it has to be done for maintenance reasons or s

              • by Splab ( 574204 )
                First of all, I don't use Oracle, so stop telling me why Oracle sucks, I know that.

                I've had Postgres databases over the past few years that have done that, and have provided uptime that is as close to 100% as you can get (albeit with some inexpensive add-on options at times) - network and operating system permitting. In fact, it's always been a network, OS or other outage that has been the cause of any downtime. These are in environments where people really would notice if the database went away for some re

            • "IBM Denmark just went down this week for a whole day, pretty sure their big clients are a bit unimpressed in their failure to bring multimillion installations back online."

              That has nothing to do with Open Source in general or PostgreSQL (or even MySQL) in specific. IBM suffered a complete network meltdown, something that no database in the world could have survived. All the many extra thousands of dollars a year paid to big database vendors for automatic failover would have been wasted in this case.

              While
            • It's the wrong tool for the job. At the very best it's a kludge. There are excellent tools out there which are designed with no other purpose than getting data from here to there, there, there and there, but an RDBMS is not one of them.

               
          • by jd ( 1658 )
            Not that I expect anyone to look this far back, but... Ingres is GPLed (and therefore Open Source). Why does that matter? Because it's a very solid system, provides lots of robustness, and is often forgotten as one of the major open source databases. PostgreSQL is great and for many real-world problems, it's perfect. MySQL - not sure where that fits in anymore, but it used to be that you'd want that to handle those components of a problem that needed a speed demon.

            However, I would say that it depends on wh

      • Re: (Score:3, Interesting)

        by Anonymous Coward
        SQLite really isn't that fast and lean though. It's really only good for tiny data stores (in which case you can use RAM instead). If you take the same data and stuff it in the various DB systems you will see SQLite databases are huge compared to MySQL or PostgreSQL (lots of wasted space). Then there is the performance which isn't bad but not better than the other databases.

        Don't get me wrong, I like the idea of SQLite. Per-user databases are needed very badly. I just wish SQLite performed better on n
        • I have to disagree, for whatever "normal sized data sets" means. Admittedly anecdotal, but I was once getting unacceptable performance from MySQL on a 400+ million row table with a few simple joins, and it turned out to be faster to export all the data to flat text, import it into an SQLite database and run the entire thing in SQLite. Same hardware, same OS, otherwise idle machine. Unfortunately, there wasn't enough time to investigate exactly what the cause was for the slowdown in MySQL, perhaps it coul
          • by Bronster ( 13157 )
            I had a bunch of ~200k record databases with about 8 simple tables in them. Indexing data for mail server backups in fact. I was doing lots of indexed queries on said tables, and once the sqlite database hit a certain size it went seek crazy.

            By seek crazy I mean that per single _indexed_ query it would perform about 200 seeks on the database file. Multiply that by many thousands of index checks for your typical backup run and it was game over for sqlite.

            The machine has 16Gb of memory and a maximum of 30
      • by consumer ( 9588 ) on Sunday April 13, 2008 @11:31AM (#23054698)

        With MySQL, I have seen tables get badly corrupted, sometimes even beyond repair(!) if a disk runs full.
        Perfect, an anecdote without details or any way to reproduce the claimed problem.

        The syntax is also pretty lax. Adding an integer and a string? No problem. String and a float? Sure.
        Turn on the strict mode.

        You want a constraint? Sure, it'll accept that query. Will it honour the constraint? Not so much.
        Turn on the strict mode.

        Creating an InnoDB table, for (some) referential integrity? Sure, it'll give no errors, but if InnoDB support is disabled for any reason, it will create MyISAM tables instead, without any hint or warning.
        That would be a fundamental configuration mistake. You would get a warning, and any time you looked at the table definition it would tell you it was a MyISAM table, not an InnoDB one.

        Inserting a row with a primary key value outside the legal range? It'll give no errors, but it also won't insert the row. Instant data loss.
        Turn on the strict mode. Seriously, this stuff has all been there for YEARS and you have only yourself to blame if you haven't figured it out yet.
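
        For anyone following along, "turn on the strict mode" amounts to roughly the following, either per session or in my.cnf (t is a made-up example table; STRICT_TRANS_TABLES is the milder variant that only covers transactional engines):

          -- Per session:
          SET sql_mode = 'STRICT_ALL_TABLES';

          -- Or server-wide, in the [mysqld] section of my.cnf:
          --   sql_mode = STRICT_ALL_TABLES

          -- With strict mode on, out-of-range and truncated values become hard
          -- errors instead of silent adjustments:
          CREATE TABLE t (id TINYINT PRIMARY KEY);
          INSERT INTO t VALUES (999);   -- rejected with an "Out of range value" error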

        If you need something lean and fast, try SQLite.
        Give me a break. SQLite is a neat project and great for times when you don't want to bother installing a database daemon (e.g. the music database in Amarok), but its performance is terrible compared to MySQL, especially for concurrent access.
        • Turn on the strict mode.

          That's exactly backward. Those constraints should be on by default and only disabled by the admin running it with --enable-toy-db. It's kind of amazing that a popular database in 2008 still defaults to dangerous behavior.

          • by consumer ( 9588 )
            Yeah, because the defaults on every other piece of software are PERFECT. MySQL tries hard to maintain compatibility with older versions. It's not that outrageous to ask people to specify that they don't need backwards compatibility by turning on the strict mode.
            • Yeah, because the defaults on every other piece of software are PERFECT. MySQL tries hard to maintain compatibility with older versions. It's not that outrageous to ask people to specify that they don't need backwards compatibility by turning on the strict mode.

              Yes, and equally it's not that hard for people who want backwards compatibility to specify "--use-unsafe-behaviour", is it? Surely it should default to "safe".

      • by bytesex ( 112972 )
        The difference is more that when postgres has a drawback (which it does, many in fact - replication comes to mind, as well as certain configurability options, a reason why you always need a WAL (which you don't) and good caching) it will be plainly stated on their website; so-and-so is still lacking, we're working on it, if you want you can participate by either sending in your comments or implementing this-and-that. If, however, mysql has a drawback (which it has, many in fact) it will state nothing at th
    • I'm Already Gone (Score:5, Insightful)

      by segedunum ( 883035 ) on Saturday April 12, 2008 @01:50PM (#23048220)
      We've already started a migration from MySQL to Postgres, and we're not going back. Full Text Searching was one of the features, but Postgres all round just has a lot more to it. You can make the thing look like an Oracle database if necessary, there's auto vacuuming now, asynchronous commits and a ton of other performance improvements that don't skimp on features.

      I really can't see why anyone would choose MySQL now, apart from inertia and backwards compatibility.
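
      For what it's worth, the autovacuum and asynchronous-commit features mentioned above are exposed as ordinary settings in PostgreSQL 8.3; a rough sketch:

        -- postgresql.conf (8.3) knobs:
        --     autovacuum = on             # background VACUUM/ANALYZE, on by default in 8.3
        --     synchronous_commit = off    # asynchronous commit: don't wait for the WAL flush

        -- synchronous_commit can also be toggled per session or per transaction:
        SET synchronous_commit TO off;
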
      • MySQL has lax enforcement of constraints. Which is a big black eye, and makes it totally unsuitable for a number of important tasks for which people are willing to pay good money.

        However, when you've already made the choice that you're going to compromise on your constraints and referential integrity, it makes multi-master clustering a lot easier.

        This is the niche in which MySQL fits.

        That said, I don't like it, and use Postgres for my own projects.
      • Does Postgres have good mirroring? I use MySQL's cluster ability on my desktop and laptop so that I'm not constantly shutting down both and rsyncing, but have my data with me everywhere. Yes, this is way outside a production environment, but it is where I am.
    • by mindas ( 533922 )
      Don't know the specifics of your project, but isn't it better to use a tool that is designed to do the job (FTS in this case)? Lucene is pretty much de facto standard these days, robust and free.
    • by consumer ( 9588 )

      PostgreSQL's Generalized Search Tree (GiST) indexing is still better than anything MySQL has to offer, in terms of performance and capability.

      Since you offer no benchmarks, this is nothing more than FUD.

      The PostgreSQL OpenFTS full text search engine is another marvel of engineering. It routinely outperforms similar extensions for MySQL in terms of performance, memory usage, and concurrency.

      Same here.

      For textual representations of certain geographic system data, it's not unusual these days to have individual fields that need to store 500 to 600 MB of data.

      No, that is VERY unusual, and probably a sign of poorly normalized data.

      • Have they added indexed fulltext search to InnoDB? Having to choose between indexed full-text search and transactions and foreign keys always seems like a big problem.

        So, yes you can have transactions and you can have indexed fulltext, but you can't have them at the same time.
        • by consumer ( 9588 )
          Transactions for your full-text search? That's not something most people have or want. Typical usage of full-text search engines is to refresh the index periodically, often once-a-day or less.


          With MySQL, you'd probably keep all of your normalized data in InnoDB tables and keep separate full-text search data up-to-date with triggers or a cron job. I wouldn't call it ideal, but it seems a lot less important than other things I'd like to see changed (e.g. better performance for subqueries).
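
          A rough sketch of that pattern (table names made up): normalized data stays in InnoDB, a separate MyISAM table carries the FULLTEXT index, and a trigger (or a periodic cron rebuild) keeps the copy current:

            CREATE TABLE articles (
              id   INT PRIMARY KEY,
              body TEXT
            ) ENGINE=InnoDB;

            CREATE TABLE articles_fts (
              article_id INT PRIMARY KEY,
              body       TEXT,
              FULLTEXT KEY ft_body (body)
            ) ENGINE=MyISAM;

            CREATE TRIGGER articles_fts_ins AFTER INSERT ON articles
            FOR EACH ROW INSERT INTO articles_fts VALUES (NEW.id, NEW.body);

            -- Searches then run against the MyISAM copy:
            SELECT article_id FROM articles_fts WHERE MATCH(body) AGAINST ('partitioning');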

          • An example of why I would want that:

            Imagine that Slashdot uses InnoDB for all its tables (it doesn't, but just imagine it does) and that Slashdot wants to add a feature where you can search other users' journals. To do this effectively would require a full-text index, but that conflicts with the need for transactions and foreign keys. There are solutions to this. You can either use MyISAM on the table containing the journals, add a third-party full-text engine, or analyze the situation and find out that the search wi
    • by Evets ( 629327 ) *
      Many of us have run into limitations - both with Postgres and with MySQL. You can pit one against the other and come up with reasons to use (and not to use) each of them, but the important thing to take from an announcement like this is that open source databases are improving.

      By and large, the database application implementations I've seen over the last decade use the underlying data management software as a storage facility - neither taking advantage of platform specific performance tuning possibilities,
  • From TFA:

    MySQL had said it would release 5.1 in the first quarter, which ended March 31, and some developers have been getting impatient for the new release.

    What?!? I've been running 5.1 on a production server for almost a year now.

    Probably, we should have called it 6.0, because there's so much stuff in there and we've been working on it for a couple of years.

    What?!? The 6.0 alpha has been available for half a year; it's already in development. OF COURSE you can't call 5.1 6.0, since both are in development. What the hell is this guy on?

  • Disk Clustering (Score:3, Interesting)

    by TheLinuxSRC ( 683475 ) * <slashdot&pagewash,com> on Saturday April 12, 2008 @10:48AM (#23047056) Homepage
    I am really looking forward to disk based clustering in MySQL. I have tried the NDB clustering, but the hardware requirements can be hefty. I am also curious about performance in this area. Contrary to what one might assume, the in-memory clustering is generally slower than storing the files on disk. I am curious how the disk based clustering fares compared to NDB clustering and a traditional on-disk MySQL DB.
    • by dysk ( 621566 )

      Contrary to what one might assume, the in-memory clustering is generally slower than storing the files on disk.
      Are you sure this holds up for MySQL's shared-nothing architecture? Most other DBMSes use a shared block device (a SAN) for clustered databases, which is a whole other performance profile.
    • Comment removed based on user account deletion
    • by aauu ( 46157 ) *
      Read carefully: NDB supports placing some types of data columns on disk. Blobs are not tables.

      When you talk clustering there are two architectures:

      A single active instance that can migrate between nodes: this is Red Hat, DRBD, Windows, or Veritas clustering. This is a high-availability option where an instance will migrate to a new node by dismounting disk resources, moving IP addresses and starting the instance on the new node. This can be in response to a node failure detected by the cluster heartbeat/monitor or f
      • ...although you should already be raid 10 or 20 for you database files.

        RAID 20? I don't think that exists. Perhaps you meant RAID 50 or RAID 60...

    • Re: (Score:3, Informative)

      by theantix ( 466036 )
      With NDB Cluster 5.1, all of the indexed columns are still in memory, so the performance impact is minimal for the types of queries and DML that NDB is good for. At least, in my testing it has been.

      For things NDB cluster is really bad at, like querying against non-indexed tables... even the memory based NDB is terrible compared with the innodb/myisam. So you wouldn't be doing that anyway, but the indexed columns would be relatively unaffected by the change.
    • From my tests and working with MySQL professional services, once disk based clustering is turned on, performance tanks across the board, even on the memory portion.

      This technology has a long, long way to go. There are very few real world applications for NDB cluster right now.
  • License status. (Score:2, Interesting)

    by DAldredge ( 2353 )
    Do they still insist that simply connecting to the server process requires a commercial license if you aren't GPL?
    • Re: (Score:3, Informative)

      by Doug Neal ( 195160 )
      The client library is GPL. There's nothing to stop anyone writing their own client library under another license, but nobody's done that yet (as far as I know).
      • Re:License status. (Score:5, Informative)

        by Fweeky ( 41046 ) on Saturday April 12, 2008 @12:21PM (#23047654) Homepage
        php-mysqlnd [mysql.com] is a replacement for libmysql, under the PHP license.
      • Re:License status. (Score:4, Informative)

        by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Saturday April 12, 2008 @12:35PM (#23047754) Journal
        http://www.mysql.com/about/legal/licensing/commercial-license.html [mysql.com]

        The Commercial License is an agreement with MySQL AB for organizations that do not want to release their application source code. Commercially licensed customers get a commercially supported product with assurances from MySQL. Commercially licensed users are also free from the requirement of making their own application open source.

        When your application is not licensed under either the GPL-compatible Free Software License as defined by the Free Software Foundation or approved by OSI, and you intend to or you may distribute MySQL software, you must first obtain a commercial license to the MySQL product. Typical examples of MySQL distribution include:
        * Selling software that includes MySQL to customers who install the software on their own machines.
        * Selling software that requires customers to install MySQL themselves on their own machines.
        * Building a hardware system that includes MySQL and selling that hardware system to customers for installation at their own locations.

        Specifically:
        * If you include the MySQL server with an application that is not licensed under the GPL or GPL-compatible license, you need a commercial license for the MySQL server.
        * If you develop and distribute a commercial application and as part of utilizing your application, the end-user must download a copy of MySQL; for each derivative work, you (or, in some cases, your end-user) need a commercial license for the MySQL server and/or MySQL client libraries.
        * If you include one or more of the MySQL drivers in your non-GPL application (so that your application can run with MySQL), you need a commercial license for the driver(s) in question. The MySQL drivers currently include an ODBC driver, a JDBC driver and the C language library.
        * GPL users have no direct legal relationship with MySQL AB. The commercial license, on the other hand, is MySQL AB's private license, and provides a direct legal relationship with MySQL AB.

        With a commercial non-GPL MySQL server license, one license is required per database server (single installed MySQL binary). There are no restrictions on the number of connections, number of CPUs, memory or disks to that one MySQL database server. The MaxDB server is licensed per CPU or named user.
    • When will people realize the licensing issues are *solved* now?

      Surely, I can see clueless people 100 years from now still bitching about MySQL's licensing terms.
      • When will people realize the licensing issues are *solved* now?

        They are? So you can write non-GPL software with a MySQL backend now? Great!

        • Re: (Score:3, Informative)

          by kylehase ( 982334 )
          Sure you can, just don't distribute the software. Every commercial case listed in the license above describes distributing MySQL in whole or part.

          I'm no lawyer but it seems if you develop a non-GPL commercial service that runs a community-licensed MySQL backend it's perfectly fine to charge for your service.
          • Sure you can, just don't distribute the software. Every commercial case listed in the license above describes distributing MySQL in whole or part.

            ...including the client libraries. This makes MySQL the only major database restricting commercial developers. I've actually looked up the licensing for DB2, Oracle, and SQL Server, and each of them allows linking and distribution of their connectors. PostgreSQL, being BSD licensed, and SQLite, being public domain, of course allow that as well.

  • by bogaboga ( 793279 ) on Saturday April 12, 2008 @10:57AM (#23047112)
    I am wondering when we shall ever have a free, as in OSS, fully programmable front end to MySQL. All the free front ends available suck big time, and the non-free ones, though somewhat functional, are not available without some kind of restrictions.

    In my opinion, the day MySQL has a fully programmable front end...I mean one that a programmer can add business logic to, program input masks with, direct functionality at the widget or control level, and use to generate customized reports depending on various metrics, MySQL will kick ass. Right now, all front ends to MySQL suck big time and there does not appear to be an end in sight - sadly.

    SQL Maestro is very promising but it's not free!

    • When will you start developing it? OSS exists because someone has an itch that needs scratching.
      • Not everybody that uses OSS software is a developer. I'm a system admin and I've done my fair share of shell scripting, PHP and Python but I still can't write every application that I need from scratch. If MySQL had something like Enterprise Studio it would be really nice, phpMyAdmin is ok but it's missing some things.
        • by bXTr ( 123510 )

          Not everybody that uses OSS software is a developer.
          Not everyone who uses OSS software has to be a developer to contribute. There are developers out there who wouldn't mind being paid to make an Enterprise Studio-y front end for MySQL for you. Hell, anything is possible; it's only a question of time and money.
    • Re: (Score:3, Interesting)

      by Animats ( 122034 )

      SQL Maestro is an administrative tool, not a report generator.

      PHP Generator for MySQL [sqlmaestro.com] is free and useful for generating simple database-driven web sites.

      Admittedly, the MySQL Query Browser is clunky, but at least it finally works. For several releases, it was badly broken.

    • by dysk ( 621566 )
      Sadly, management tools and report generation are hard, and they require a level of coordination that's easier to achieve in one company and much harder in an open source environment.

      A brilliant programmer can come up with some really solid and innovative code (ex. reiserfs), but to make a nontrivial management tool you need a combination of programmers, designers, and yes, managers, working in tight concert.

      I personally am okay with paying for front ends when they're needed, so we can get kickass scala

    • by NevarMore ( 248971 ) on Saturday April 12, 2008 @11:33AM (#23047354) Homepage Journal
      Fully programmable front-end for a database?

      You mean like C, C++, Java, Ruby, PHP, Python, OO Calc, ASP, C# ??
      • I'm confused at his request, also. Perl is my mysql front-end and I can do absolutely anything with it. Maybe he is asking for the old MS Access style graphical "form" interface for inputting data and generating reports or perhaps he is talking about a GUI administrative interface like phpmyadmin.
        • Re: (Score:3, Informative)

          by Shados ( 741919 )
          He's talking about a 4th-gen RAD front end, so yeah, like MS Access, eDeveloper, Oracle Developer (is that still what it's called?), etc. There are a few up-and-coming ones in the open source world, but none really that are feature-complete.
        • Maybe he is asking for the old MS Access style graphical "form" interface for inputting data and generating reports or perhaps he is talking about a GUI administrative interface like phpmyadmin.
          Yes, both of those qualify as a "front-end."

          phpmyadmin fails as it's an unnecessary layer of abstraction -- I shouldn't have to run a webserver on my db engine, or local machine, just to admin my database outside a command shell.

      • I'm thinking that he wanted something you could click your mouse on, but still customize. You should be able to do a lot of things with the database using your mouse alone. A first step would be graphical tools for extracting and displaying the data; maybe then you can move on to modifying it.
      • I think he means like SQL Server Reporting Services, MS Access and SQL Server Analytic services.

        Hard to be sure though.

        There are tools like Mondrian, JasperSoft, Pentaho, Navicat, etc. They're all okay, but nothing like as polished as Microsoft's.
    • by shmlco ( 594907 )
      "... the non free ones, though somewhat functional, are not available without some kind of restrictions."

      Yeah, like they expect you to pay for them.
    • Dunno if it's what you are after, but have you tried HeidiSQL?
    • You mean Access? You could try Rekall, Kexi and OpenOffice Base, for example. I think they all work with Python, which is a huge advantage over Access.
  • by g_adams27 ( 581237 ) on Saturday April 12, 2008 @11:03AM (#23047146)

    I would simply like to point out that this MySQL update is completely irrelevant because PostgreSQL has had (g_adams27, fill this part in before submitting) for a very long time, and MySQL is simply playing catchup.

    ...

    And now I would like to strongly disagree with g_adams27, who obviously doesn't realize that MySQL is an excellent choice even compared with PostgreSQL, and I wish he'd stop making silly comparisons.

    ...

    In response to that, I say: g_adams27, SHUT UP! You obviously don't recognize the fatal flaws that MySQL still has, in that it still can't (fill this part out later) even after years of development. PostgreSQL is obviously the superior option, and you can take your stupid MySQL advocacy somewhere else.

    ...

    Oh, yeah? Well maybe YOU should shut up! I can't say I'm shocked at g_adams27' mean-spirited response, because that's typical of PostgreSQL jerks. MySQL is AWESOME, and YOU need to shut up, jerk!

    ...

    Well, g_adams27, maybe you should take your TOY MySQL and go play with your dollies, while us REAL sysadmins use a REAL RDBMS to do REAL work! Idiot.

    ...

    And now, allow me, g_adams27, to step in to the middle of this debate and simply point out that you're BOTH right, and that MySQL and PostgreSQL are perfectly good choices.

    Just doing my part to shorten this thread.

    • Re: (Score:1, Offtopic)

      On that same note, I would like to say that emacs is way better than vi. That's right. You heard me. Bring it! :P
    • by MrNaz ( 730548 ) *
      If you don't stop using MySQL I'm going to tell the teacher! Also you're a poopy head locks no returnies!
    • Re: (Score:1, Funny)

      by Anonymous Coward
      By golly. Over an hour after this story popped up and there were only 37 comments posted. You perform magic with your words of wisdom, g_adams27!
    • by Godji ( 957148 )
      Not to start a flamewar, but how many of the aforementioned features does PostgreSQL already have (available or planned)?

      Note that I am not asking which DBMS is better for any definition of "better".
    • I must say, I've been sitting at this PostgreSQL machine at this contract web design gig, and I don't know what all of you Postgres people are talking about! I started this 100 row SELECT statement 20 minutes ago, and it STILL hasn't finished. MySQL has its problems, but seriously, guys!

      Always look over your head for joke before replying. I wish I could find a link to the original post.

      • http://www.kottke.org/98/11/my-mac-sucks [kottke.org] would be the link you're after.

        I found it somewhat amusing that I'm reading this thread as I'm working on a project that uses Postgres on Mac. I came here to post the same joke but you beat me by a long shot.

        todo: insert joke about my Mac taking over 20 minutes to post a comment
  • by tji ( 74570 ) on Saturday April 12, 2008 @02:04PM (#23048302)
    I do use databases for various apps and projects, but only enough to do what I need. I am by no means a DB expert.

    So, can someone more DB-literate explain some of the new features?

    - Disk based clustering: I assume this means I can dynamically expand the size of my database by adding more disks. Is this correct? Does PostgreSQL also support this (my project where this would be handy currently uses pgsql)?

    - Partitioning: I can think of several things this could mean.. Splitting data among several tables at some logical dividing point. Or, limiting the size of tables so they can't overrun the complete storage space. What does this mean in MySQL 5.1 terms?
    • by theantix ( 466036 ) on Saturday April 12, 2008 @03:18PM (#23048760) Journal

      - Disk based clustering: I assume this means I can dynamically expand the size of my database by adding more disks. Is this correct? Does PostgreSQL also support this (my project where this would be handy currently uses pgsql)?
      Disk based clustering only applies to people using the MySQL NDB Cluster product, which is quite different from the traditional MySQL product. So for the vast majority of MySQL users who use MyISAM or InnoDB tables, this doesn't really affect them at all.
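
      For the curious, the Cluster "disk data" feature in 5.1 is driven by DDL roughly like the following (file, tablespace and table names made up); indexed columns still live in memory, and only non-indexed columns go to the tablespace:

        CREATE LOGFILE GROUP lg1
          ADD UNDOFILE 'undo1.log'
          ENGINE NDB;

        CREATE TABLESPACE ts1
          ADD DATAFILE 'data1.dat'
          USE LOGFILE GROUP lg1
          ENGINE NDB;

        CREATE TABLE payload (
          id  INT PRIMARY KEY,   -- indexed, so it stays in memory
          doc LONGBLOB           -- not indexed, so it can be stored on disk
        ) TABLESPACE ts1 STORAGE DISK ENGINE=NDB;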

      - Partitioning: I can think of several things this could mean.. Splitting data among several tables at some logical dividing point. Or, limiting the size of tables so they can't overrun the complete storage space. What does this mean in MySQL 5.1 terms?
      This means splitting an existing table along logical dividing points, but still acting as a single table. Let's say you partition it by date, well then you would insert/select/update like normal -- but a query or update that looks at the date would only have to look at a smaller partition of the table to know what row needs to be updated.
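
      A small sketch of the 5.1 syntax for that (table and ranges made up), partitioning a log table by year so that date-constrained queries only have to touch the matching partitions:

        CREATE TABLE hits (
          id       INT NOT NULL,
          hit_date DATE NOT NULL
        )
        PARTITION BY RANGE (YEAR(hit_date)) (
          PARTITION p2006 VALUES LESS THAN (2007),
          PARTITION p2007 VALUES LESS THAN (2008),
          PARTITION pmax  VALUES LESS THAN MAXVALUE
        );

        -- Still reads and writes like one table, but a query such as
        --   SELECT COUNT(*) FROM hits WHERE hit_date >= '2007-01-01'
        -- can be answered from p2007 and pmax alone (partition pruning).
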
  • Did they fix perf in subselects and multi-way joins? No? Didn't think so.
    • Subqueries in MySQL are about as useful in real-world deployments as paper and pencil. Actually, at least with p&p, I can doodle some fancy comics.

      Subselects are so limited in the indexes they can use that the performance, as melted has pointed out, is bad. To me, it's not just bad, it's unusable for just-in-time page generation. Usable for cron jobs and data warehousing, but forget about it if you want it "fast".
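
      For reference, the usual workaround in the 5.0/5.1 era is to rewrite IN (subquery) forms as joins, which the optimizer handles far better; a sketch with made-up tables (assuming customers.id is unique, the two forms return the same rows):

        -- Often slow: the optimizer tends to re-execute this as a dependent subquery.
        SELECT * FROM orders
        WHERE customer_id IN (SELECT id FROM customers WHERE country = 'DK');

        -- Usually much faster, same result:
        SELECT o.*
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        WHERE c.country = 'DK';
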
  • about **** time! (Score:2, Informative)

    As a heavy user of MySQL since the 4 series, 5.x has been the buggiest and slowest, with the most god-awful slow release schedule of them all. The 4.1 alpha was higher quality in terms of bugs/stability than all the stable "5.0" releases, and 5.1 just takes forever to get even beta revisions out the door. MySQL is getting slower and slower at getting releases out the door. Expect MySQL 6.0 in 2011, if not later.

    I'm a paid MySQL Enterprise subscriber and I'm pissed at their pace.

    It's one thing to have a slow stable relea
  • "5.1, though it sounds like an incremental release, has got some pretty major features," said Zack Urlocker, vice president for MySQL products at Sun, in a video postedto InfoWorld's Web site this week. "Probably, we should have called it 6.0, because there's so much stuff in there and we've been working on it for a couple of years."
    Hmm...Or maybe the marketers at Sun should give the name the grandeur it deserves and change it to MySQL 2 Standard Edition version 6...
