Migrate a MySQL Database Preserving Special Characters

TomSlick writes "Michael Chu's blog provides a good solution for people who migrate their MySQL databases and find that special characters (like smart quotes) get mangled. He presents two practical ways to do the migration properly."
  • Migration (Score:4, Informative)

    by dfetter ( 2035 ) <david@fetter.org> on Monday May 07, 2007 @12:38AM (#19017095) Homepage Journal
    Better still, install DBI-Link http://pgfoundry.org/projects/dbi-link/ [pgfoundry.org] inside PostgreSQL, migrate once and have done ;)
  • There are about 8,000 WordPress blogs out there that could use this. Pity I can't mod an article insightful
    • by jamshid ( 140925 ) on Monday May 07, 2007 @01:38AM (#19017415)
      Then send the wordpress developers this link:

      http://www.joelonsoftware.com/articles/Unicode.html [joelonsoftware.com]
      The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
      • by frisket ( 149522 )

        But some of them Just Don't Get It [tm]. "We're Americans, we don't use fancy foreign letters, so we just want ASCII." Sigh.


        • I would say "always," but nobody's *always* anything... still, it happens often enough that, off the top of my head, I can't pull up a specific example of a Joel on Software article that misses. :)
        • Re: (Score:3, Interesting)

          by Hognoxious ( 631665 )
          I'm not American, and I'm sitting here supporting a multinational IT system (Italy, Belgium, the Netherlands, the UK, Spain & Portugal) and it works fine without Unicode. While I'm generally a fan of Joel, I think he overstates the case here.
          • by Krischi ( 61667 )
            Good luck in supporting EU member countries such as Bulgaria or Greece, then. You will need it.
      • UCS-2 only covers plane zero (the Basic Multilingual Plane, or BMP); it doesn't cover code points outside that. Unicode itself supports the entire UCS, all 1.1 million code points, with more characters being assigned every year.

        In other words, Joel has made the same mistake as the people who wrote software that only works in 7-bit ASCII or 8-bit UTF-8 or the IBM or Apple or Adobe 8-bit extended ASCII sets or the 9-bit extended ASCII set that ITS used, or...

        And it's already too late to try to cram everything into 2 bytes. After the Han unification mess (the attempt to force Chinese, Japanese, and everything else that used some variant of Chinese ideograms (Kanji, etc.) into a common subset of similar characters that fit in the 65,536 available codes in the BMP), the People's Republic of China decided to require their computers to support their national encoding anyway, as of 2000.

        So you have to support the full UCS encoding anyway.

        There are three storage formats it's practical to use: UCS-4 (4 bytes per character, with the same byte-ordering problems as UCS-2), UTF-16 (2-4 bytes per character, identical to UCS-2 within the BMP), or UTF-8 (1-4 bytes per character). Internally, you can use UCS-4 as your wide character type and translate on the fly; use UTF-8 and take care not to break strings in the middle of glyphs; or use UTF-16, translate on the fly, and take care not to break strings in the middle of glyphs.

        If Joel is lucky the libraries he's using are actually operating on UTF-16 strings instead of UCS-2 strings. If he's *really* lucky they're designed to avoid breaking up codes outside the BMP. If he's *super* lucky he's managed to avoid creating any code that just operates on strings as a stream of wchar_t anywhere.

        Personally, I think UTF-16 gets you the worst of both worlds: your data is almost certainly less compact than with UTF-8; you still have to deal with multi-wchar_t strings, so your code is no easier to write than with UTF-8 (you're just less likely to find the bugs in testing); and you get byte-order issues in files just as you would with UCS-4. Unless, that is, you decide UCS-2 is "good enough", ignore everything outside the BMP, and discover that people in China are suddenly getting hash when they use your program.
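
        A quick way to see the size trade-offs described above (a sketch assuming a UTF-8 terminal and GNU iconv; encoding names vary by platform):
        printf '€' | iconv -f UTF-8 -t UTF-8    | wc -c   # 3 bytes: U+20AC in UTF-8
        printf '€' | iconv -f UTF-8 -t UTF-16BE | wc -c   # 2 bytes: it's in the BMP
        printf '€' | iconv -f UTF-8 -t UCS-4BE  | wc -c   # 4 bytes: fixed width
        A non-BMP character such as U+1D11E needs 4 bytes in UTF-8 and 4 in UTF-16 (a surrogate pair) -- exactly the case that UCS-2-only code silently mishandles.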
        • Actually, Windows shifted to UTF-16 (with full surrogate pair support) as of Win2K. From the description in Joel's article it would seem that he is relying on the Windows string APIs and is safe unless his customers are trying to run on older versions.
          • Re: (Score:3, Informative)

            by argent ( 18001 )
            Assuming he's ONLY using Windows string APIs.

            First, you need to be religious about it. But if you are, then the choice of internal encoding is really a performance issue only, and the choice of external encoding is a matter of following the principle of least astonishment. Your code shouldn't know or care what encoding the string APIs use internally. The program should work the same whether wchar_t is (unsigned char), (unsigned short), (unsigned long), or even (double).

            Second, there's a lot of overhead in
        • by epine ( 68316 ) on Monday May 07, 2007 @07:26PM (#19029849)
          That was a good post, but I don't understand your premise whatsoever. There seem to be two tactics at work here: arbitrary line drawing, and the belief that if you can't make everyone happy, the best compromise is to make everyone unhappy. I read that post by Joel long ago, and I just read it again. I don't think he could have done a better job in the space devoted to it.

          My one criticism of Joel is that he dealt himself a "get out of jail free" card: "Before I get started, I should warn you that if you are one of those rare people who knows about internationalization, you are going to find my entire discussion a little bit oversimplified." This is a fair disclaimer, but it makes it impossible to judge where Joel was simplifying deliberately and where he simplified because he didn't know any better. The correction would be for Joel to state "I'm going to simplify issues X, Y, and Zed"; then mistakes in the middle of the alphabet would be entirely his own. Just as there is no such thing as a string without a coding system, there is no such thing as a useful disclaimer that doesn't specify precisely what it disclaims. It amused me to see Joel invoke the ASCII standard of accountability.

          Concerning the claim that Joel has made the same mistake [all over again]: this same claim comes up all the time concerning address arithmetic. How much existing code is portable to a 128-bit address size? We're sure to need this by 2050. Or perhaps not. People tend to neglect the observation that we're talking about a doubly exponential progression in codespace: (2^(2^3))^(2^N), with the values N = 0, 1, 2, 3, 4 plausible in photolithographic solid state. On the current progression, at N = 5 transistors would need to be subatomic. As for the present transition from 32-bit to 64-bit address space, it makes sense that operating systems and file systems are 64-bit native, while 99% of user-space applications continue to run in less time and space compiled for 32 bits. Among the growing sliver of applications that do run better in 64 bits are a few of especially high importance.

          I worked extensively with CJK languages in the early 1990s, and my opinion at the time was that UCS-4 primarily catered to the SETI crowd, and potentially, belligerent Mandarins in mainland China. I recall more argument at the time about Korean, which is a syllabic script masquerading as ideographic blocks.

          http://en.wikipedia.org/wiki/Hangul [wikipedia.org]

          I've always had a lot of trouble understanding the opposition to Han unification. Many distinctions in the origins of the English language were lost in the adoption of ASCII, such as the ae ligature and the Old English thorn (which causes many Hollywood sets to feature "Ye old saloon").

          http://en.wikipedia.org/wiki/Han_unification [wikipedia.org]
          http://en.wikipedia.org/wiki/Thorn_(letter) [wikipedia.org]

          ... Unicode now encodes far more [Han] characters than any other standard, and far more than were listed in any dictionary, with many more being processed for inclusion as fast as the scholars can agree on their identities.

          Some characters used only in names are not included in Unicode. This is not a form of cultural imperialism, as is sometimes feared. These characters are generally not included in their national character sets either.

          And all this fits quite nicely in UCS-2 as advocated by Joel.

          A slight difference in rendering characters might be considered a serious problem if it changes the meaning or reflects the wrong cultural tradition. Besides a simple nuisance like Japanese text looking like Chinese, names might be displayed with a different glyph -- the same character in the sense of encoding but a different character in the view of the users. This rendering problem is often employed to criticize Westerners for not being aware o

          • My objection is that he's saying "use UCS-2, it solves the problem" when UCS-2 doesn't solve the problem and it creates new ones.

            What he should have said is "use a wide character library that supports Unicode, no matter what it uses internally, and make sure your code still works if they change the encoding behind your back".

            I don't know why you're going on about "I have a tough time accepting the premise that the British Commonwealth was well served by ASCII". I didn't say that anyone was well served by
    • by Krischi ( 61667 )
      This ticket [wordpress.org] contains a patch that more or less allows you to use WordPress blogs with UTF-8 encoding.
  • by Frogbert ( 589961 ) <frogbertNO@SPAMgmail.com> on Monday May 07, 2007 @12:53AM (#19017163)
    First, you get the names of every table in the old database.

    Then you create those tables in the new one. Just so there are no problems with data types, you should probably make every field varchar(100) in the new one.

    Then you fire up MS Access, the older the better. I try to stick to Access 95.

    Then you create two ODBC links, one to your old one and one to the new one.

    Then you use the Linked Table Manager to link each table into MS Access.

    Then you open both the new table and the old table and select all, copy and paste the data into the new table.

    It's so simple even a child could do it!
  • How is this news??? Later on today: "Data from POST/GET and special characters"?
    • by kestasjk ( 933987 ) on Monday May 07, 2007 @01:39AM (#19017427) Homepage
      If there's a chance of starting a PostgreSQL vs MySQL flamewar, it's news.
      • Re: (Score:3, Interesting)

        by arivanov ( 12034 )
        True.

        As well as a chance of posting an arcane method of database transition involving MySQL to start an ACID war.

        As for the original subject of the article: the best way to migrate an application is to load all of the data from one datasource and dump it into another datasource. If the application fails this trivial test, its database access libraries are broken. If the app sticks strictly to dynamic SQL and high-level DBI functions and does no manual escaping, it just works. The escaping portion of th
      • by ajs ( 35943 )
        I have a serious question to ask, but I'm sure it's going to sound like an invitation to a flame war. Please refrain.

        Does anyone actually use PostgreSQL? I mean, I know it's the de facto database that we wave around when we want to bash MySQL, but that doesn't mean anyone uses it. I've yet to run into anyone who used PostgreSQL except as a rapid prototype for an Oracle environment. Anyone have data points here? Does anyone know the rough sizes of the user bases? Are we really just waving PostgreSQL like a flag?
        • Not much of a data-point, but we use it for just about every project that needs an RDBMS. It's overkill for much of what we do, but it's very solid. I've also seen it embedded into a few commercially available systems, such as a wimax provisioning system, and a point-of-sale program. It's not as ubiquitous as MySQL, but it does get around.
        • I use postgres. Things like views, foreign keys, check constraints, stored procedures, and functions make life a lot easier. When I was using mysql (4.x), I had to fake that stuff in user code, which was twice the work and not nearly as clean (or reliable). mysql 5 has some of that stuff now, but they're still a decade behind.
          • by ajs ( 35943 )

            mysql 5 has some of that stuff now, but they're still a decade behind.
            I guess it was unavoidable that any question asked about PostgreSQL usage had to involve MySQL bashing... sigh.
  • by DJ Rubbie ( 621940 ) on Monday May 07, 2007 @01:15AM (#19017301) Homepage Journal

    As I understand it, the problem arises from the fact that mysqldump uses utf8 for character encoding while, more often than not, mysql tables default to latin1. (If you were smart enough to manually set the character encoding to utf8, then you'll have no problems; everyone running mysql 4.0 or earlier will be using latin1, since it didn't support any other encodings.) So let's say we have a database named example_db with tables that have varchar and text columns. If special characters that are really UTF-8 encoded are stored in the db, everything works just fine until you try to move the db to another server.

    That bit me once, when one of my live servers crashed and I had to load the backup data onto a different server. I remember pining for the good old days, when a database was a dumb (or smart, depending on your POV) engine that only worried about a string of bytes (chars). Seriously, it should only get smarter and start talking Unicode when I want it to.

    Issues with Unicode are not limited to MySQL; Python has similar issues. However, they are almost always caused by poor programmers who mix usage between the two string types, or don't check values against the proper base type (basestring).
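
    The mangling itself is easy to reproduce outside MySQL (a sketch assuming a UTF-8 terminal and GNU iconv): take UTF-8 bytes, mislabel them as latin1, and re-encode:
    printf 'é' | iconv -f LATIN1 -t UTF-8   # prints "Ã©": the UTF-8 bytes 0xC3 0xA9 re-read as two latin1 characters
    That is exactly what happens to smart quotes when a latin1-labeled dump is reloaded over a utf8 connection.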
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      However, they are almost always caused by poor programmers who mix usage between the two string types
      Yes, and in this case the poor programmers wrote the mysqldump program.
    • The real MySQL problem with Unicode is that it hardly supports it. Every other database engine on Earth has proper Unicode support (even SQLite, which isn't even really a database), why is MySQL so far behind everything? The more I use MySQL, the more I hate it. Hate, hate, hate.

      Of course, since Dreamhost refuses to install a half-decent database on their servers, I'm stuck using it. Does anybody know how to install Firebird on a Dreamhost account and make it work? Is it even possible?
      • At least with a standard Dreamhost account it's against the ToS: no servers of your own allowed. I wrote them asking the identical question, except about Postgres, and that is the answer I got.
    • by jadavis ( 473492 )
      good old days when a database was a dumb (or smart, depending on your POV) engine that only worried about a string of bytes (chars)

      Databases use data types for the same reason data types are used in programming languages.

      Relational databases offer a lot more as well that I won't go into. But if you don't care about any of those things and just want to store bytes, there are plenty of ways to do that, and there have been for a long time (files are the most obvious example).

      • I am well aware of the extra functionality offered by SQL and I make liberal use of it, which is why I use it in the first place. However, I don't need the database pretending it can be really smart about certain things unless I tell it to, and, to echo Blakey Rat's (99501) sentiment, the more I use MySQL, the more I hate it.

        As for my dumb/smart comment, I only want my database to be smart at what it was supposed to do, and only do what I want it to do, not guessing what I might want to do also and mess up the output and resulting database dump.
        • by jadavis ( 473492 )
          and only do what I want it to do

          Fair enough.

          not guessing what I might want to do also and mess up the output and resulting database dump.

          That's the result of a very bad implementation of "smart" ;)

          I think correct use of character encodings and locales is the way to go with software, but only if done correctly. And you certainly can't count on MySQL AB to do something correctly.
  • by hpavc ( 129350 ) on Monday May 07, 2007 @01:37AM (#19017411)
    This guy's mysqldump statement could use some args; too much is packed into his my.cnf defaults for this to be truly useful as a how-to. He could easily cause more problems than he solves.
    • Agreed, the proper mysqldump command should be at least:
      mysqldump --opt --default-character-set=latin1 database

      Something like this might be more interesting
      mysqldump --opt --default-character-set=latin1 database1 | sed "s/SET NAMES latin1/SET NAMES utf8/" | mysql database2

      And then simply switch the application to use database2 instead of database1 (or rename the databases)
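
      Either way, it's worth verifying the result afterwards. A hypothetical check (database2 as above; some_table is a placeholder):
      mysql database2 -e "SHOW VARIABLES LIKE 'character_set%'"
      mysql database2 -e "SHOW CREATE TABLE some_table"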
  • by rylin ( 688457 ) on Monday May 07, 2007 @02:12AM (#19017575)
    Not a single day seems to go by without a gnubie or two posting things that are really basic knowledge.
    If you do insert unicode data into a latin1 table, you will get unexpected results.

    What you do is make sure that your:
    a) database(s) are set to utf8 by default
    b) table(s) are set to utf8 by default
    c) column(s) are set to utf8 by default
    d) connection defaults to utf8
    (provided, of course, that it's utf8 you're after; see the sketch below)

    That way, it'll "Just Work"(tm)

    We've gone through upgrades from 3.23 -> 4.0 -> 4.1 -> 5.0 and never had a problem; and yes, our tables were all latin1 from the beginning.
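
    A minimal sketch of a) through d), assuming MySQL 4.1 or later (where per-object character sets exist) and a hypothetical database "blog":
    mysql -e "CREATE DATABASE blog CHARACTER SET utf8"   # a)
    mysql blog -e "CREATE TABLE posts (title VARCHAR(100), body TEXT) DEFAULT CHARACTER SET utf8"   # b) and c): columns inherit the table default
    mysql --default-character-set=utf8 blog -e "SHOW VARIABLES LIKE 'character_set%'"   # d): the client option sets the connection charset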

    • > What you do is make sure that your:
      > a) database(s) are set to utf8 by default
      > b) table(s) are set to utf8 by default
      > c) column(s) are set to utf8 by default
      > d) connection defaults to utf8

      Please tell me that if I specify a database is UTF8 that I don't also have to tell it that each column is utf8 as well!

      And why should I have to tell the connections? Doesn't that get resolved automatically when the client connects to the server?

      And why should I have to tell a utility what the target da
      • by rylin ( 688457 )
        I guess I should've clarified or looked back at the thread sooner... :P
        You can set the database (or the connection default, etc.) to use utf-8 as the standard.
        Then again, you might be using a third-party admin utility to create a table, and the util might ignore the db default and create a latin1 table... or a latin1 column.

        I.e., things can go wrong (obviously), but if you specify that the DB is utf8, then anything created in it without an explicit character set will default to utf8.

        Hopefully that clears it up ;-)
    • Re: (Score:3, Insightful)

      by cortana ( 588495 )

      Not a single day seems to go by without a gnubie or two posting things that are really basic knowledge.
      If you do insert unicode data into a latin1 table, you will get unexpected results.
      Ah, I love MySQL. They should fix it so that if you insert unicode data where latin1 data is expected, you get an error instead of silent data corruption.
      • Re: (Score:3, Interesting)

        The whole point of UTF-8 is that it can silently be inserted in places that were designed to handle ASCII. So no, there is no way for something that is handling latin1 to know that what you gave it is actually UTF-8 and therefore not legal.

      • How is MySQL supposed to know that the data it gets is unicode instead of latin-1? Latin-1 characters are 1 byte, what MySQL receives may be an actual unicode string or may just be a sequence of latin-1 characters. There's no way to reliably find out.
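
        One way to convince yourself (a sketch, assuming GNU iconv): the same two bytes are one legal character in UTF-8 and two legal characters in latin1, so there is nothing to detect:
        printf '\303\251' | iconv -f UTF-8  -t UTF-32BE | wc -c   # 4 bytes -> one code point ("é")
        printf '\303\251' | iconv -f LATIN1 -t UTF-32BE | wc -c   # 8 bytes -> two code points ("Ã©")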
        • by raynet ( 51803 )
          Perhaps you could set the character encoding to use while connecting to the database. At least PostgreSQL supports this, though it will actually do a conversion from Unicode to Latin1 in this case, and barf an error if it cannot map some Unicode chars to Latin equivalents.
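
          Roughly what that looks like on each side (a sketch; mydb is a placeholder):
          psql mydb -c "SET client_encoding TO 'LATIN1'; SELECT 'ok';"   # PostgreSQL converts, or errors on unmappable chars
          mysql mydb -e "SET NAMES latin1"   # MySQL 4.1+'s per-connection equivalent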
    • How about: CREATE DATABASE blah ENCODING 'UNICODE';
      Or does that not work in MySQL?
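
      The MySQL spelling of the same idea does exist ("blah" as in the comment above):
      mysql -e "CREATE DATABASE blah CHARACTER SET utf8 COLLATE utf8_general_ci"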
    • So I have to remember to do EACH and EVERY one of those things so it "Just Works?"

      Unbelievable.
  • Anyone noticing a strong similarity between this article and about 95% of the stuff posted on digg every day? What's next, "The Top 10 Photoshop Filters of All Time?" :) Tagged slashdigg.
    • Re: (Score:1, Funny)

      by Anonymous Coward
      No, it's "The Top 10 Photoshop Filters for Ron Paul!"
    • "How to sort MS-Word tables"...

      "Excel & the power of CSV files"

      "Bottleneck your CPU: How to gain MASSIVE performance improvements in any computer: defrag.exe"

      "Make your data manageable: separating content from style the CSS way!!"

      "FREE SOFTWARE! The best 10...er...20...er...100...free (and open source) software products"

      PinkPanther bangs head on wall
  • From TFA:

    If you were smart enough to manually set the character encoding to utf8, then you'll have no problems
    That's all that's needed here; the whole article and this page could be condensed into a one-line question and a one-line answer - doh!
    Oh, and you can change the encoding at any time, so even if you initially forget, you can just change it later, then dump.
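
    "Change it later" is itself just a statement or two (a sketch; mydb, posts, and body are placeholders). If the column's current label is correct, a straight conversion works:
    mysql mydb -e "ALTER TABLE posts CONVERT TO CHARACTER SET utf8"
    If the bytes are really UTF-8 mislabeled as latin1 (the case in TFA), relabel via BLOB so nothing gets re-converted:
    mysql mydb -e "ALTER TABLE posts MODIFY body BLOB"
    mysql mydb -e "ALTER TABLE posts MODIFY body TEXT CHARACTER SET utf8"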
  • When doing (non-real-time-critical) migrations, I find it much less troublesome to just scp the files and then run mysqlcheck to repair the tables on the new server (if required).

    http://www.seanodonnell.com/code/?id=66 [seanodonnell.com]

    That process avoids the syntax-conversion issues and the storage, bandwidth, and processing-time overhead of using mysqldump.
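
    In rough shell terms (a sketch with placeholder paths and hostnames; it assumes MyISAM tables, compatible server versions, and mysqld stopped or the tables flushed and locked during the copy):
    scp -r /var/lib/mysql/mydb newhost:/var/lib/mysql/
    ssh newhost "mysqlcheck --repair mydb"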
