Technology

Unicode 7.0 Released, Supporting 23 New Scripts

An anonymous reader writes "The newest major version of the Unicode Standard was released today, adding 2,834 new characters, including two new currency symbols and 250 emoji. The inclusion of 23 new scripts is the largest addition of writing systems to Unicode since version 1.0 was published with Unicode's original 24 scripts. Among the new scripts are Linear A, Grantha, Siddham, Mende Kikakui, and the first shorthand encoded in Unicode, Duployan."
  • Seriously? (Score:4, Funny)

    by newsman220 ( 1928648 ) on Monday June 16, 2014 @06:49PM (#47250333)
    Still no Klingon?
    • by Anonymous Coward

      Seriously, there are Klingon speakers. I worked with three, one of whom didn't know the other two knew Klingon until he cursed in Klingon. It was surreal. Linear A is an absolutely fascinating script (with hundreds of symbols), but there just aren't enough extant samples to justify adding it to Unicode, and nobody can translate it.

      (Yes, that was a weird job. I left as soon as I could, though not due to the Klingons, but management.)

      • but there just aren't enough extant samples to justify adding it to Unicode, and nobody can translate it.

        Unicode is supposed to be universal, and it has more than enough codepoints to spare - why is there a problem adding it? I'm sure having it in a standard encoding would prove useful to anyone who is trying to translate Linear A, or to archeologists/historians looking to digitize fragments we do have, etc.

        • by Anonymous Coward

          The larger Unicode becomes, the more fragmented the implementations will be. The more fragmented it is, the more errors and incompatibilities will compound. It will get less and less useful, and more and more bulky, and will eventually be as useful as Flash. (Well, it may not be that bad, but still: Flash was all things to all people, and almost universally installed, until it wasn't.)

          • by Fubari ( 196373 ) on Monday June 16, 2014 @08:33PM (#47251149)
            Fragmented? I haven't heard of any Unicode forks. The people at the Unicode_Consortium [wikipedia.org] seem like they're doing OK. Unicode seems pretty backwards compatible; have any of the newer versions overwritten or changed the meaning of older versions (e.g. caused damage)? That isn't true of the various ASCII extensions, which are an i18n abomination in the high-bit range, or of EBCDIC, which isn't even compatible with itself. One of the things I love about Unicode is that characters (glyphs) stay where you put them and don't transmute depending on what locale a program happens to run in.

            The larger Unicode becomes, the more fragmented the implementations will be.

            Maybe instead of "fragmented" you mean that there are fonts which can't render all of Unicode's characters?
            *shrug* Even if that were a problem, the underlying data is intact and undamaged and will be viewable once a suitable font library is obtained.

            The more fragmented it is, the more errors and incompatibilities will compound. It will get less and less useful, and more and more bulky, and will eventually be as useful as Flash. (Well, it may not be that bad, but still: Flash was all things to all people, and almost universally installed, until it wasn't.)

            Can you give me an example of an incompatibility? I'm not saying there are none, just that I don't know of any and that, in general, I've been very pleased with Unicode's stability for data exchange compared to other encodings.
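
            For illustration, a quick Python sketch of the locale problem with the legacy encodings (the byte values are real; the rest is just a demo):

            ```python
            # One high-bit byte, two meanings, depending on the legacy encoding:
            raw = b'\xa4'
            print(raw.decode('iso-8859-1'))   # '¤' CURRENCY SIGN
            print(raw.decode('iso-8859-15'))  # '€' EURO SIGN

            # A Unicode code point, by contrast, never transmutes with locale:
            print('\u00a4')  # CURRENCY SIGN, everywhere
            ```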

            • by Anonymous Coward

              BIDI is one of the weirder and more difficult parts of Unicode, and its semantics have not been 100% stable across versions.

              http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=Unicode5QuoteMirroring

              In fairness, they did attempt to limit the damage, and on the whole, having a well-thought-out standard for BIDI, even if occasionally buggy, is better than not having one.

              Unicode seems pretty backwards compatible; have any of the newer versions overwritten or changed the meaning of older versions (e.g. caused damage)?

              Yes. Version 2.0 completely changed the Hangul character set. Korean texts written with Unicode 1.1 were not readable in Unicode 2.0, and vice versa. This was 17 years ago, but note that it was after ISO had accepted version 1.1 as an ISO/IEC standard.

            • by AmiMoJo ( 196126 ) * on Tuesday June 17, 2014 @02:58AM (#47252449) Homepage Journal

              The main problem is the broken CJK (Chinese, Japanese, Korean) support, which has caused numerous ad hoc workarounds and hacks to be developed. In a nutshell, all three languages shared some common characters in the past, but over time they diverged. Unfortunately these characters share the same code points in Unicode, even though they are rendered differently depending on the language. A Japanese and a Chinese font will contain different glyphs for the same character.

              It is therefore impossible to mix Chinese and Japanese in the same plain text document. You need extra metadata to tell the editor which parts need Chinese characters and which need Japanese. There are Japanese bands that release songs with Chinese lyrics and vice versa, and books that contain both (e.g. textbooks, dictionaries). Unicode is unable to encode this data adequately.

              Even the web is somewhat broken because of this. If a random web page says it is encoded with Unicode, there is no simple way for the browser to choose a Japanese, Korean, or Chinese font, and all the major browsers just use whatever the user's default is.

              It really isn't clear how this can be fixed now. Unicode could split the shared code points, but a lot of existing software would carry on using the old ones. It's a bit of a disaster, but most Westerners don't seem to be aware of it.
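
              You can see the unification from code with a small Python sketch (the HTML lang attributes are the usual out-of-band fix, not something Unicode itself provides):

              ```python
              import unicodedata

              # One code point serves Japanese and Chinese; fonts disagree on its shape.
              ch = '\u76f4'
              print(f'U+{ord(ch):04X}', unicodedata.name(ch))
              # U+76F4 CJK UNIFIED IDEOGRAPH-76F4

              # Plain text can't say which rendering is wanted; markup can:
              print('<span lang="ja">\u76f4</span> vs <span lang="zh-Hans">\u76f4</span>')
              ```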

              • by Goaway ( 82658 )

                That sucks, but it does not seem to be an example of what was asked for.

              • by ais523 ( 1172701 )
                One situation I was wondering about for that problem was the use of Japanese/Chinese/Korean marks/overrides, the same way that there are LTR and RTL overrides. Choice of language for a particular ideograph seems to be much the same as choice of direction for an inherently undirectional character (you're interpreting the character differently depending on context). This also has the advantage of being pretty much backwards compatible.
              • by dillee1 ( 741792 )

                That "divergence over time" actually occurs not that long ago. Right before WW2 everyone on the planet that use Chinese characters use the 1 and only 1 glyph, traditional Chinese. That includes China, Japan, Korea, Vietnam, Hong Kong, Macau, Taiwan.

                After WW2 China and Japan tries to simplify the Chinese characters in separate effort, resulting in completely different glyphs and the shitty state of CJK coding we see now.

                Korea and Vietnam largely abandoned Chinese characters, may be except for person and plac

              • by Fubari ( 196373 )
                re: CJK - that is interesting, and it is something I haven't interacted with directly. The collisions in mapping to Unicode sound like a *significant* headache. Thanks for the heads-up (now I'm at least aware that I'm ignorant of this; a small step forward).
          • There is no such thing as fragmentation with Unicode. Most fonts only implement a small portion of it, however.

            If Microsoft and Apple both decide to implement 'Linear A' for example, they will do it with different fonts but using the same codepoints.

        • The problem is that there is no Klingon alphabet to add, just several fan-made lists claiming to be one.

          So you're advocating adding an act of fan service to a fictional language, putting something into Unicode which the authors of the fiction themselves haven't even been arsed to make. That's beyond silly; that's like saying the next space shuttle should be shaped like the starship Enterprise.

      • by frisket ( 149522 )
        The lack (or not) of speakers isn't the reason. According to one of my moles, the official deadpan response to the question of why Klingon and Elvish aren't in Unicode is that they are not human languages :-)
        • by Anonymous Coward

          It has nothing to do with it being a human language or not. The reason why Klingon pIqaD failed was that nobody in the Klingon community actually uses it for writing texts to each other. A Private Use agreement that is more widely supported than almost any SMP script exists for Klingon pIqaD, but tlhIngan Hol speakers just don't use it. Tengwar and Cirth are still immature proposals, and it is more a lack of initiative within the Tolkienist community that has had these stalled before being formally developed.

      • Comment removed based on user account deletion
        • by narcc ( 412956 )

          Wait, what? I was unaware there was a distinction between "nerd" and "geek". Can I get a few nerds to geek out here and argue over their definitions?

      • Given that Linear A hasn't been deciphered yet, I wonder how they justify putting it in Unicode. They don't know for certain which glyphs are distinct characters yet.

    • There is no standard from which to make a Unicode encoding; fans have made the most popular versions of the various Klingon alphabets.

    • First I'll assume that you're talking about the KLI pIqaD for tlhIngan Hol, and not the Skybox pIqaD or the Mandel script. The Unicode team looked at encoding KLI pIqaD but decided against it because the Klingon-speaking community on Earth had already adopted a Latin-based script. (Reference: Klingon alphabets on Wikipedia [wikipedia.org]) But it could use a slight spelling reform to make it case-insensitive.
    • Still no Klingon?

      At least the Vulcan salute [blogspot.de].

  • I'm sure there are lots of docs in that....

  • by account_deleted ( 4530225 ) on Monday June 16, 2014 @06:58PM (#47250393)
    Comment removed based on user account deletion
  • Why emoji? (Score:2, Insightful)

    by Anonymous Coward

    What's the point of adding pictographic symbols to Unicode? Is this really something we want frozen in time for eternity? What's the benefit of standardizing them anyway?

    Wouldn't we be better off standardizing all characters used in written language and be done with it?

    • 💅💩

    • Re:Why emoji? (Score:5, Insightful)

      by RyuuzakiTetsuya ( 195424 ) <taiki@co x . net> on Monday June 16, 2014 @07:25PM (#47250635)

      Not everyone speaks English or Chinese or Spanish.

      Everyone recognizes a stop sign, an airport, a pile of poop, and other symbols. So communicating via pictographs is actually good, even if it was incidental.

      • Re:Why emoji? (Score:4, Informative)

        by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Monday June 16, 2014 @07:31PM (#47250691)

        Not everyone speaks English or Chinese or Spanish.

        Everyone recognizes a stop sign, an airport, a pile of poop, and other symbols. So communicating via pictographs is actually good, even if it was incidental.

        And many of them recognize this [emojipedia.org] as well.

      • But they're not "standard" even if Unicode claims they are. I only heard of emoji within the last year, but there is no central body that dictates exactly what they look like, so that pile of poop symbol will vary depending upon which texting app you use it with. The apps that use emojis are not coordinating with any standards body or ensuring that the intended meaning is preserved.

        Today emoji are purely a fad. We'd think it ridiculous if Unicode standardized some of the '80s-era desktop icons (so tha

        • that pile of poop symbol will vary depending upon which texting app you use it with

          So will any symbol. Though an A rendered in three different fonts produces three distinct glyphs on your machine, you can recognize them all as U+0041 LATIN CAPITAL LETTER A. Likewise, though U+1F4A9 appears different in different fonts, it'll look like shit in all of them.
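
          A quick Python illustration (the names come straight from the character database, so nothing here depends on fonts):

          ```python
          import unicodedata

          # The code point and its name are standardized; the glyph is the font's job.
          for ch in ('A', '\U0001F4A9'):
              print(f'U+{ord(ch):04X}', unicodedata.name(ch))
          # U+0041 LATIN CAPITAL LETTER A
          # U+1F4A9 PILE OF POO
          ```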

        • Re:Why emoji? (Score:5, Interesting)

          by BitZtream ( 692029 ) on Monday June 16, 2014 @10:12PM (#47251693)

          But they're not "standard" even if Unicode claims they are.

          They are standard in reference to Unicode because the Unicode Consortium defines the Unicode standard. Someone has to be the first to define the standard.

          but there is no central body that dictates exactly what they look like, so that pile of poop symbol will vary depending upon which texting app you use it with

          Yes, those are called fonts, and in case you haven't noticed, that was true before digital computers with silicon microprocessors even existed and has been true for thousands of years.

          The apps that use emojis are not coordinating with any standards body or ensuring that the intended meaning is preserved.

          Apple does, which is why the Messages app already matches the new code points. Google Hangouts seems to work fine as well. Both Messages and Hangouts convert even things like :) into the proper Unicode code point and use standard fonts for display. Sure, some half-assed apps may not work correctly, but anyone that supports Unicode and has the fonts will receive them properly already.

          Emoji is somewhat silly, but it's hardly new; just go ask Japan. Just because you're new to the ballgame doesn't mean it's a new ballgame.
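
          That :) conversion amounts to a lookup table; a toy sketch (the mapping below is illustrative, not any app's actual table):

          ```python
          # Illustrative emoticon-to-emoji substitution; real apps use bigger tables.
          EMOTICONS = {
              ':)': '\U0001F642',  # SLIGHTLY SMILING FACE
              ':(': '\U0001F641',  # SLIGHTLY FROWNING FACE
              '<3': '\u2764',      # HEAVY BLACK HEART
          }

          def emojify(text: str) -> str:
              for emoticon, emoji in EMOTICONS.items():
                  text = text.replace(emoticon, emoji)
              return text

          print(emojify('unicode 7.0 shipped :)'))
          ```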

          • by Ark42 ( 522144 )

            I think the compatibility problem most people associate with Apple and emoji is that old versions of Apple software used the private-use code point areas for emoji instead of the standard Unicode code points. This has since been fixed, as far as I know, but there are a TON of free Android keyboards that are supposed to type emoji but only use the old private-use code points, and thus don't display anything but a blank space or a square box on Android without some special app to translate and display them.

            If you lo
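
            Such a translation app boils down to remapping private-use code points to the standard ones; a sketch (the PUA assignments shown here are hypothetical examples, not the real carrier tables):

            ```python
            # Hypothetical PUA-to-standard remapping; the real Softbank/DoCoMo
            # tables are much larger and differ per carrier.
            PUA_TO_STANDARD = {
                0xE001: 0x1F466,  # hypothetical: boy
                0xE04A: 0x2600,   # hypothetical: black sun with rays
            }

            def remap_pua(text: str) -> str:
                return text.translate(PUA_TO_STANDARD)

            print(remap_pua('\ue04a'))  # prints the standard U+2600 character
            ```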

      • Say you're communicating with pictographs, and you have an action involving two things. Do you put the pictograph for the action before, between, or after the pictographs for the things?

        (Spoiler: Speakers of Welsh or Arabic will want to put the action first, while speakers of Japanese or Finnish will want to put it last.)

        • How is that relevant to the discussion of Unicode code points? Unicode doesn't define how you conjugate the verb either.

          • by tepples ( 727027 )
            I intended to ask to what extent RyuuzakiTetsuya's concept of "communicating via pictographs" (plural) is practical.
            • Emoji was an accidental feature on NTT DoCoMo phones.

              That being said, if I don't understand Portuguese and you don't understand Korean, and I message you a stop sign, that's straightforward to understand.

        • by rossdee ( 243626 )

          "Speakers of Welsh or Arabic will want to put the action first, while speakers of Japanese or Finnish will want to put it last"

          Along with speakers of hsiloP

    • by Goaway ( 82658 )

      Round-trip compatibility with other encodings that already have them.

    • by idji ( 984038 )
      If the emoji are standardized in Unicode, then it will be easier for any kind of software to support them.
  • by steelfood ( 895457 ) on Monday June 16, 2014 @08:46PM (#47251237)

    It's great that they're adding new currency symbols for new currencies, but there's still the long-standing issue of the $ with one bar versus the $ with two bars. It's currently considered a stylistic difference, but the scope of Unicode has evolved to account for every glyph known to man. Within that new context, the one- and two-bar $ can hardly be said to be the same glyph.

    Especially considering that there are already stylistic duplicates (half-width and full-width Latin forms vs. plain Latin), I can't understand the justification for leaving the one- and two-bar $, which are historically separate glyphs, unrepresented as distinct characters.

    • Re:Peso vs. Dollar (Score:5, Informative)

      by lithis ( 5679 ) <sd AT selg DOT hethrael DOT org> on Monday June 16, 2014 @09:14PM (#47251403) Homepage

      Many of the stylistic duplicates, for example the half-width and full-width Latin forms that you mentioned, are only in Unicode because of backwards compatibility with pre-Unicode character sets. If there hadn't been character sets with different encodings for half- and full-width forms, Unicode never would have had them either. So you can't use them to argue for more glyph variations in Unicode. The same applies to many of the formatted numbers, such as the Unicode characters "VII" (U+2166), "7." (U+248E), "(7)" (U+247A), and "1/7" (U+2150), and units of measure ("cm^2", U+33A0).

      (Oh, for Unicode support in Slashdot....)
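
      You can watch these compatibility characters fold away under NFKC normalization in Python, which is a decent hint that they exist only for round-tripping:

      ```python
      import unicodedata

      # Compatibility characters decompose under NFKC; they were encoded only
      # to round-trip with older character sets.
      for ch in ('\u2166', '\u248e', '\u33a0'):
          print(f'U+{ord(ch):04X} ->', unicodedata.normalize('NFKC', ch))
      # U+2166 -> VII
      # U+248E -> 7.
      # U+33A0 -> cm2
      ```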

      • My argument isn't that the one- and two-bar $ are variations that deserve two code points, but that they are indeed separate glyphs that deserve separate code points. There's historical as well as current cultural precedent for this. For Unicode to aspire to represent all written symbols (especially now that it's taken on emoji), this treatment of the two different $ continues to baffle me.

        My point about the half- and full-width glyph variations is that they exist. I just find it odd that a character with

    • This is strange. The UK pound is U+00A3 and the Italian lira is U+20A4. While the latter has two lines across, two lines are also acceptable for the pound, and a single line was acceptable for the lira.

      (Not that anyone still uses the Italian lira, but other countries use the symbol, and people may still write about it.)
  • by Anonymous Coward

    Slashdot celebrates new version of Unicode...

  • Proprietary fonts (Score:5, Insightful)

    by ortholattice ( 175065 ) on Monday June 16, 2014 @11:28PM (#47251909)

    Over the years, I've tried to use Unicode for math symbols on various web pages and tend to revert to GIFs or LaTeX-generating tools, due to problems with symbols missing from the font used by this or that browser/OS combination, or even incorrect symbols in some cases.

    IMO the biggest problem with Unicode is the lack of a public domain reference font. Instead, it is a mishmash of proprietary fonts each of which only partly implements the spec. Even the Unicode spec itself uses proprietary fonts from various sources and thus cannot be freely reproduced (it says so right in the spec), a terrible idea for a supposed "standard".

    I'd love to see a plain, unadorned public-domain reference font that incorporates all defined characters - indeed, it would seem to me to be the responsibility of the Unicode Standard committee to provide such a font. Then others can use it as a basis for their own fancy proprietary font variations, and I would have a reliable font I could revert to when necessary.

    • by SEE ( 7681 )

      Why do you think an official Unicode font would solve your mathematical-symbol problem when the already-available STIX [stixfonts.org] hasn't?

      • by Swistak ( 899225 )
        Probably because then, when starting a new font design, you just fork the reference font and replace all the glyphs you want to or have to. Your font will display your glyphs in the places you care about and use the standard glyphs for the ones you didn't implement.
    • The problem is not with Unicode. Don't blame the character set, blame the font-specification, the software, and copyrights (!)

      In my view, every font that does not specify all Unicode characters should point to one or more fallback fonts, and the search should proceed recursively. Eventually, there should be a default "Unicode" font implementing all characters.

      Also, fonts should not be copyrightable, because that adds greatly to the whole mess.
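
      The recursive fallback search could look something like this sketch (the font names and coverage sets are made up):

      ```python
      # Minimal font-fallback resolution; names and coverage are hypothetical.
      # coverage=None marks the final "implements everything" Unicode font.
      FONTS = {
          'FancySerif': {'coverage': set('ABC'), 'fallback': 'WorkhorseSans'},
          'WorkhorseSans': {'coverage': set('ABCabc123'), 'fallback': 'UnicodeRef'},
          'UnicodeRef': {'coverage': None, 'fallback': None},
      }

      def font_for(ch: str, start: str = 'FancySerif') -> str:
          """Walk the fallback chain until some font covers the character."""
          font = start
          while font is not None:
              coverage = FONTS[font]['coverage']
              if coverage is None or ch in coverage:
                  return font
              font = FONTS[font]['fallback']
          raise LookupError(f'no font covers U+{ord(ch):04X}')

      print(font_for('A'))  # FancySerif
      print(font_for('a'))  # WorkhorseSans
      print(font_for('∀'))  # UnicodeRef
      ```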

    • by Sarlok ( 144969 )
      These folks [sil.org] have several open fonts that cover some lesser-used code points. They don't have a big font with everything, but the Doulos [sil.org] font has pretty good coverage for Latin and Cyrillic scripts.
    • I agree that it's a problem but I don't think it's Unicode's. I don't think the consortium has set out to do anything but encode characters (and I think they're doing a good job). I imagine that coming up with a font for all those characters would be another massive undertaking.

      And as much as I champion free software I would have no problem with a company stepping in and filling that need by selling such a font.

  • by bradley13 ( 1118935 ) on Tuesday June 17, 2014 @03:46AM (#47252591) Homepage

    Great, Unicode is already a fragmented mess, and now the standards organization justifies its existence by adding characters that do not exist.

    An earlier poster asked why anyone thinks Unicode is fragmented. The answer in one word: fonts. Different fonts support different subsets of Unicode, because the whole thing is just too big. If you expect your font to be used mostly in Europe, you are unlikely to bother with Asian characters. If you have an Asian font, it probably has only English characters, not the rest of Europe. If you have a font with complete mathematical symbols, it will include the Greek alphabet, but actual language support is a crapshoot.

    So the solution to this problem is to add made-up characters that no one cares about. "Man in business suit, levitating". Really?

    • Great, Unicode is already a fragmented mess, and now the standards organization justifies its existence by adding characters that do not exist.

      An earlier poster asked why anyone thinks Unicode is fragmented. The answer in one word: fonts. Different fonts support different subsets of Unicode, because the whole thing is just too big. If you expect your font to be used mostly in Europe, you are unlikely to bother with Asian characters. If you have an Asian font, it probably has only English characters, not the rest of Europe. If you have a font with complete mathematical symbols, it will include the Greek alphabet, but actual language support is a crapshoot.

      You are correct about why most fonts contain only a subset of Unicode code points. There are thousands of code points, and most documents will only use a small subset. Why should I have to have all those Chinese or Arabic characters when I only write in English, Spanish, Portuguese, and Hawaiian? People who read and write Hawaiian have fonts which support the Hawaiian letters `okina and kahako. Chinese speakers have fonts which support the Chinese glyphs.

      As for language support, that isn't a font's problem. It'

    • Your assertion that the characters don't exist is provably false: chat software produces them, and hundreds of millions of people use them.