ICANN Might Pre-Register gTLDs To Placate Critics
judgecorp writes "ICANN is to be congratulated for succeeding in expanding the Internet beyond the Latin alphabet. However, the organization is facing a harder task in extending the Internet's global top-level domains (gTLDs) — its proposal to open up the gTLD space has been plagued by controversy and delays. ICANN faces struggles with trademark owners and competing businesses — but even so it is being criticized for acting slowly (as seen in transcripts from the recent meeting in Seoul). It now seems likely the body will have a pre-registration scheme to gauge demand and placate critics by getting something moving on new gTLDs."
Re:So (Score:3, Informative)
except that .net has been used for so many things totally unrelated to network infrastructure (I myself have a .net and it's just a small personal site)
Re:Plato,Jesus and Shakespeare used Ascii. So can (Score:4, Informative)
> "ICANN is to be congratulated for succeeding in expanding the Internet beyond the Latin alphabet.
The problem isn't that ICANN expanded "the Internet" beyond the Latin alphabet (or at least the subset enshrined in ASCII's alphanumeric characters plus hyphen)... the problem is the collateral damage it caused, and continues to cause, because it did the equivalent of dumping a freeway interchange in the middle of an already-thriving residential neighborhood.
ICANN (possibly with IETF) needs to do three things:
1) Work with IETF to extend DNS so that TLD registrars can define a specific subset of UTF-8 that's valid for its subdomains. By definition, .com/.net/.org should be forever restricted to the historical [A-Za-z0-9\-] subset to put an end to homograph phishing. In other words, no TLD could indiscriminately include everything from legacy-ASCII to Klingon, Runes, and ancient Egyptian. They'd have to pick the characters used to write a single real language and stick with it.
2) Require that TLD character-validity rules be fully normalized against characters between 0x30 and 0x7f. In other words, if there's a letter in the language's unicode codepage that looks just like ASCII 'i', they can allow ASCII 'i', or the language's own version of 'i' with its own UTF-8 value, but NOT both. The choice of which 'a-zA-Z' to use would largely depend upon which value gets generated from a keypress by a keyboard in the target country.
3) Create new TLDs for writing systems used in more than one country, by at least 50 million people... preferably, short and understandable to anyone in a country that uses that writing system. So, Chinese might get .{zhong} or .{zhongwen} (but the PRC's own country TLD might be .{zhongguo}), Cyrillic might sensibly get the characters that resemble .NHT (backwards 'N') which apparently is the abbreviation for "Int" in Russian, Ukrainian, Serbian, and probably most other languages using Cyrillic, etc, and conveniently looks vaguely like ".NET" to everyone else (but the backwards-N would ensure only a complete idiot could think it really WAS .net). Ditto for Arabic. Languages almost synonymous with a single country (Hebrew, Greek, Japanese, Korean, etc) or spoken by fewer than 50 million real people in daily business wouldn't get their own TLDs... but their countries would get a new country TLD in the writing system (along with their old 2-letter TLD).
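Points 1 and 2 above can be sketched as a per-TLD character policy. The policy table below is purely illustrative — the TLD names and character ranges are my own stand-ins, not actual registry rules:

```python
import re

# Hypothetical per-TLD character policies illustrating point 1:
# each TLD defines the single-script subset valid in its labels.
# None of these entries reflect actual registry rules.
TLD_POLICIES = {
    # Legacy gTLDs stay restricted to the historical LDH subset.
    "com": re.compile(r"[a-z0-9-]+"),
    "net": re.compile(r"[a-z0-9-]+"),
    # A made-up Cyrillic TLD allowing only lowercase Cyrillic letters,
    # digits, and hyphen (per point 2: no Latin lookalikes alongside them).
    "cyr-example": re.compile(r"[\u0430-\u044f0-9-]+"),
}

def label_valid(label: str, tld: str) -> bool:
    """Check a lowercased label against its TLD's character policy.
    Unknown TLDs are rejected outright in this sketch."""
    policy = TLD_POLICIES.get(tld)
    return policy is not None and policy.fullmatch(label) is not None

print(label_valid("example", "com"))        # True
print(label_valid("ex\u0430mple", "com"))   # False: Cyrillic 'а' (U+0430)
```

Under rules like these, a .com label containing a single Cyrillic lookalike character is rejected outright, rather than resolving to a different site than the all-Latin spelling.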
The point is, the way internationalization has been rolled out so far has created a worldwide party for fraud and phishing via homograph attacks. An end needs to be put to it NOW. If someone has an existing IDNS name that would be invalidated by the new rules (say, {nurren}.com), they'd get first chance at it in the new .{zhong} TLD ({nurren}.{zhong}). If there were two or more existing .com|.net|.org domains that clashed (say, {nurren}.com and {nurren}.net), they'd have to share the TLD and settle for distinct subdomains of it, like {something}.{nurren}.{zhong} and {somethingdifferent}.{nurren}.{zhong}.
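To see why mixed scripts are the core of the homograph problem, here's a small sketch (function names are my own) that approximates each character's script from the first word of its Unicode name — a rough heuristic, not a full implementation of Unicode script detection — and flags any label that draws on more than one script:

```python
import unicodedata

def scripts_used(label: str) -> set:
    """Approximate the set of scripts in a label, using the first word
    of each character's Unicode name (LATIN, CYRILLIC, ...) as a crude
    stand-in for its script. Digits and hyphen are treated as neutral."""
    scripts = set()
    for ch in label:
        if ch == "-" or ch.isdigit():
            continue
        name = unicodedata.name(ch, "UNKNOWN")
        scripts.add(name.split()[0])
    return scripts

def is_homograph_suspect(label: str) -> bool:
    """A label mixing scripts is a classic homograph-attack signature."""
    return len(scripts_used(label)) > 1

# Latin "paypal" vs. a homograph with a Cyrillic 'а' (U+0430):
print(scripts_used("paypal"))          # {'LATIN'}
print(scripts_used("p\u0430ypal"))     # {'LATIN', 'CYRILLIC'}
print(is_homograph_suspect("p\u0430ypal"))  # True
```

The two labels render identically in most fonts, yet resolve to different domains — exactly the fraud vector the single-script rules above would shut down.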