
Google Wants To Kill the URL (wired.com) 282

As Chrome looks ahead to its next 10 years, the team is mulling its most controversial initiative yet: fundamentally rethinking URLs across the web. From a report: Uniform Resource Locators are the familiar web addresses you use every day. They are listed in the web's DNS address book and direct browsers to the right Internet Protocol addresses that identify and differentiate web servers. In short, you navigate to WIRED.com to read WIRED so you don't have to manage complicated routing protocols and strings of numbers. But over time, URLs have gotten more and more difficult to read and understand. The resulting opacity has been a boon for cybercriminals who build malicious sites to exploit the confusion. They impersonate legitimate institutions, launch phishing schemes, hawk malicious downloads, and run phony web services -- all because it's difficult for web users to keep track of who they're dealing with. Now, the Chrome team says it's time for a massive change.
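
To make the mechanics described above concrete, here is a minimal sketch (Python 3, standard library only; the exact Wired URL is just an illustration) of the split the summary glosses over: only the host part of a URL is looked up in DNS, while the path and query are interpreted by whichever web server the lookup returns.

    import socket
    from urllib.parse import urlsplit

    url = "https://www.wired.com/story/google-wants-to-kill-the-url/"
    parts = urlsplit(url)

    print("scheme:", parts.scheme)    # https -> protocol and default port (443)
    print("host:  ", parts.hostname)  # www.wired.com -> the only piece DNS ever sees
    print("path:  ", parts.path)      # /story/... -> interpreted by the web server, not DNS

    # DNS lookup: hostname -> the IP addresses of the web servers behind it.
    addresses = {info[4][0] for info in socket.getaddrinfo(parts.hostname, "https")}
    print("resolves to:", sorted(addresses))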

"People have a really hard time understanding URLs," says Adrienne Porter Felt, Chrome's engineering Manager. "They're hard to read, it's hard to know which part of them is supposed to be trusted, and in general I don't think URLs are working as a good way to convey site identity. So we want to move toward a place where web identity is understandable by everyone -- they know who they're talking to when they're using a website and they can reason about whether they can trust them. But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we're figuring out the right way to convey identity."

If you're having a tough time thinking of what could possibly be used in place of URLs, you're not alone. Academics have considered options over the years, but the problem doesn't have an easy answer. Porter Felt and her colleague Justin Schuh, Chrome's principal engineer, say that even the Chrome team itself is still divided on the best solution to propose. And the group won't offer any examples at this point of the types of schemes they are considering. The focus right now, they say, is on identifying all the ways people use URLs to try to find an alternative that will enhance security and identity integrity on the web while also adding convenience for everyday tasks like sharing links on mobile devices.

This discussion has been archived. No new comments can be posted.

  • by sinij ( 911942 ) on Tuesday September 04, 2018 @12:25PM (#57251448)
    It is not acceptable for Google that some browsing bypasses Google search engine when people directly type in URLs.
    • by think_nix ( 1467471 ) on Tuesday September 04, 2018 @12:44PM (#57251598)

      or analytics services, adservice.google.com, apis.google.com, id.google.com, Google crawlers; the list goes on and on. Seriously, though: since the initial launch of search in the late '90s / turn of the century (which is nowhere near the original purpose of the service anymore), I cannot think of one great advancement Google has made to better anything on the web.

    • Re: (Score:3, Interesting)

      by tlhIngan ( 30335 )

      It is not acceptable for Google that some browsing bypasses Google search engine when people directly type in URLs.

      Remember the great dot-com shootout in the early '00s? Back when people wanted very special .com addresses, because that's how people found you: by stumbling about composing URLs?

      Sites like sex.com, pets.com, books.com etc. etc. - those domains sold for $$$$$ back in the day. Nowadays it matters a lot less, because everyone Googles rather than tries random URLs. Doesn't hurt that half of the URLs

    • Comment removed based on user account deletion
  • RIP Web (Score:3, Insightful)

    by Anonymous Coward on Tuesday September 04, 2018 @12:27PM (#57251458)
    Google, just doing its part in ripping the last bits of the 'web' to shreds. I guess they really do want their search engine to be the place from which all information originates.
    • AOL (Score:2, Insightful)

      by Anonymous Coward

      With Google mail and voice and everything else and now this. I guess Google wants to be the 21st century AOL.

      At least we won't have to worry about the endless stream of CDs in the mail.

    • Re:RIP Web (Score:5, Insightful)

      by skids ( 119237 ) on Tuesday September 04, 2018 @12:43PM (#57251594) Homepage

      URI/URL effectively died a decade or so ago, when savage PHP coders and their armada of "content management systems" violated the usage guidance on them and then took years to rediscover the concept on their own (as "permalinks").

      Though I'd say the first blow was the constant rearranging of static sites by companies that apparently had nothing better to do than pay people to move files around for no good reason.

      • I used to blame WordPress outright, but I eventually learned that the blame lies squarely with the users assuming everything 'just works', as well as with plugin developers getting complacent about people using their plugins sanely.
  • Finally! (Score:5, Funny)

    by p4ul13 ( 560810 ) on Tuesday September 04, 2018 @12:32PM (#57251490) Homepage
    It's about time we revert back to the future with the AOL keywords we all have been sorely missing from our lives!!!
    • Alternative future: AOL is the new SCO, and continues to harass the goog for decades for trying to knock off its model.
  • by Rick Schumann ( 4662797 ) on Tuesday September 04, 2018 @12:33PM (#57251494) Journal
    PRO-TIP: There's this magical thing called a bookmark that stores all those 'so-over-complicated' URL thingies under a name that you can even change yourself so your teeny little human brain can understand it! Ain't that amazing?</extreme_sarcasm>

    Seriously, Google, what the actual fuck is wrong with you?
    Or is it that people have become so fucking dumb that they really can't type in {website}.{top_level_domain}? Considering all the stupid shit I see in the news pretty much every single day anymore, I'd be very tempted to believe that, too.
    • by ddtmm ( 549094 )
      Forget the article, did you even read the summary?
    • by SirSlud ( 67381 )

      I love it when dumb people lament how dumb everyone has become.

      • by Rick Schumann ( 4662797 ) on Tuesday September 04, 2018 @03:34PM (#57252676) Journal
        I read the article so fuck you. They want to 'child proof' shit because people are dumb but childproofing everything just creates more problems. How about we EDUCATE people better so they're not so dumb anymore? People get more and more done FOR them by some automation or service or whatever and they never learn to do shit for themselves and over time it just makes them dumber and dumber.
    • I don't think they are implying people don't know how to type cnn.com into their browser to get to CNN's webpage; it's that they have trouble understanding that when they get an e-mail linking to http://www.cnn.notascam.com/st... [notascam.com]
      • Actually, the reason certificates and https were invented is that without them there is no way to prove that when you type cnn.com into your web browser you are getting a page from a computer that has anything to do with the entity you want to trust. Any router, any DNS equipment, anywhere along the route can reroute you to a different page, one that logs your keystrokes and then sends you to the real site. So yes, this issue is deeper and harder than you might think. I'm sure one of the o

      • Then we need to TEACH them to know better. Childproofing everything like everyone has an IQ of 75 isn't a good solution.
    • With some rng magic built in maybe this is a way for them to anonymize the ad service providers and all other tracking mechanisms they like to implement.

    • Comment removed based on user account deletion
    • by Reziac ( 43301 ) *

      Didn't there used to be a plugin called something like Link Decombobulator, that decoded, sanity/safety-checked, and previewed these messy links?

      If there's not, there should be; problem solved.

  • Solution (Score:4, Insightful)

    by 110010001000 ( 697113 ) on Tuesday September 04, 2018 @12:33PM (#57251500) Homepage Journal
    I can guess what Google's solution will be: if you want to host web content, you need to purchase a virtual website on their cloud, and then they will issue you a code that people can type or scan into Chrome. After all, think of the terrorists and/or the children.
  • Back to AOL? (Score:4, Interesting)

    by agiacalone ( 815893 ) <agiacalone@gma[ ]com ['il.' in gap]> on Tuesday September 04, 2018 @12:34PM (#57251508) Homepage

    Didn't AOL try this back in the 90s with the 'AOL Keyword'? IIRC, it failed miserably.

    • Didn't AOL try this back in the 90s with the 'AOL Keyword'? IIRC, it failed miserably.

      I believe Yahoo tried this with their directory as well back in the day. That didn't make it either...

    • Re:Back to AOL? (Score:4, Insightful)

      by bigpat ( 158134 ) on Tuesday September 04, 2018 @01:43PM (#57252016)

      Didn't AOL try this back in the 90s with the 'AOL Keyword'? IIRC, it failed miserably.

      Actually AOL keywords were very very successful and AOL charged big bucks to sell keywords to companies, but eventually domain names and URLs were cheaper to get (unless someone else registered it first) and the DNS system wasn't a monopoly so competition drove down prices further.

      • by dissy ( 172727 )

        Didn't AOL try this back in the 90s with the 'AOL Keyword'? IIRC, it failed miserably.

        Actually AOL keywords were very very successful and AOL charged big bucks to sell keywords to companies, but eventually domain names and URLs were cheaper to get (unless someone else registered it first) and the DNS system wasn't a monopoly so competition drove down prices further.

        "Eventually" but not by too far.

        From 1984 until 1990, the DNS system was completely a monopoly, administered in whole by InterNIC.
        InterNIC was also the sole source of .com domains (as well as .net, .org, .us, .edu, .mil, and .gov, and before that .arpa) and of IP allocations.

        It wasn't until 1991 that they split off top level control to Network Solutions.
        It was a couple years after that until the "root servers" and "gtld-servers" were split apart, although Network Solutions was still the sole source for .com (and ne

  • by Anonymous Coward

    People have a really hard time understanding URLs

    Understatement of the year. "People" don't understand URLs at all. That doesn't mean we should let Google be the arbiter of identity on the internet.

    • Re: (Score:2, Insightful)

      Some people have a really hard time understanding URLs

      FTFY.

      Computer-savvy people have been using the Internet just fine since the '90s, thank-you-very-much. Just because X% of the population doesn't understand that a URL is like a phone number doesn't mean we need to replace it with a broken design.

    • by sjames ( 1099 )

      People have a hard time understanding street addresses. Let's just assign every address a single 12-digit number and pay a private corporation a zillion dollars to provide a lookup service.

  • by HotNeedleOfInquiry ( 598897 ) on Tuesday September 04, 2018 @12:39PM (#57251540)
    That used to be the first step in an internet protocol change. Does Google well and truly own the internet now?
    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Tuesday September 04, 2018 @01:01PM (#57251724)
      Comment removed based on user account deletion
    • Does Google well and truly own the internet now?

      Yes. Or can you tell me the last time you "Bing'd" something or "Yahoo'd" something?

  • This problem, like all internet problems, is not new to the internet. It's not hard to put up a tent by the side of the road, put an apple-like logo onto it, and sell fake iphones.

    How do you know, when you walk into a bank, that it's a bank, and not just some guy with a storefront that looks like a bank? When was the last time that you authenticated your bank branch as actually being a bank?

    How about if my browser -- that has no problem parsing a URL -- simply asked me, the first time I wind up at a new d

    • maybe we should start arresting criminals -- you know, like in every other part of life -- instead of trying to make consumers play their own game of cops and robbers every minute of every day.

      I live in the inner city of a high crime city. Anytime I walk outside, I have to pay attention, since there are dangers lurking nearby. I don't expect the police to make everything totally safe, nor would I want to live in the kind of absolute police state environment that it would require. Similarly, I don't wan
      • I'm not suggesting either one.

        I'm suggesting that after a crime has been committed, reported, and identified, that police then arrest those responsible.

        It's not about making crime difficult, and it's not even about deterring future criminals. It's simply about making the price of crime much much higher than it is today.

    • by nasch ( 598556 )

      How about if my browser -- that has no problem parsing a URL -- simply asked me, the first time I wind up at a new domain, if I'm sure it is who I think it is?

      That might be fine for you, but most people would see something with an OK button preventing them from doing what they want to do, and click OK without reading it.

  • by hyades1 ( 1149581 ) <hyades1@hotmail.com> on Tuesday September 04, 2018 @12:41PM (#57251568)

    So just like always, some powerful agent seeking to invade the privacy of individuals more comprehensively uses "security" as an excuse. Meanwhile, methods that could make the existing system far more secure (while preserving anonymity for those who need it) are ignored.

    If I remember correctly, Google just got caught investigating ways to help China's Big Brother regime weaponize its search engine by turning it into a government-friendly propaganda tool. Google needs to be told in no uncertain terms to shove this so far up its corporate arse the whole board gets a sore throat.

  • No, it's not (Score:4, Interesting)

    by rickb928 ( 945187 ) on Tuesday September 04, 2018 @12:42PM (#57251580) Homepage Journal

    "Uniform Resource Locators are the familiar web addresses you use everyday."

    So far, so good.

    "They are listed in the web's DNS address book"

    Uh, NO. That's DNS, and it works with the part before the slashes, etc., right up to and including the TLD (.com, .edu, .info, for example).

    "and direct browsers to the right Internet Protocol addresses that identify and differentiate web servers"

    Um, partly; that's DNS. Then the URL includes the info that the web server needs to find and deliver whatever you were looking for.

    Who writes this crap anyways? Can't we get this right now and then?

    Other than that, I think the idea is not merely dangerous, it's unnecessary. Websites could solve this with simpler URLs, like their own individual versions of bit.ly-type shortening. Let's encourage them and the software they depend on to make their visions publishable, instead of fixing what isn't broken, merely inconvenient...

    • by mi ( 197448 )

      "They are listed in the web's DNS address book"

      Yes. Whoever put this sentence together does not know the first thing about DNS. He should not be writing for "Wired".

      Whoever copy-pasted this junk into a Slashdot submission should be banned from ever submitting again, and the editor who let the submission through ought to be suspended without pay. From a metal hook. By the rib...

  • I forget when it started, but for some time now, when a company like Google, Facebook, or Twitter (and a few others, I'm sure) comes up in the same breath as the word 'trusted', I just assume they are up to something, and it's probably not good for free speech, individualism, or personal responsibility.

    Seems like yet another thin edge of the wedge towards us all 'needing to be protected' from ourselves.

  • Are they trying to make themselves obsolete by creating a default home page from which you can access information they control? Good luck. Who is next?
  • SSL/certificates solve the website-legitimacy issue.
    • I don't see how TLS PKI solves the problem of someone registering "WE11SFARGO.COM", obtaining a domain-validated certificate for "WE11SFARGO.COM", and using that domain name and certificate to impersonate Wells Fargo Bank.
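
      To illustrate the point, here is a rough, purely illustrative sketch (Python; the brand list and homoglyph map are made up for the example) of why a valid DV certificate doesn't help: the certificate only vouches for the literal string "WE11SFARGO.COM", so spotting the lookalike takes a separate confusable-character comparison that neither DNS nor TLS performs.

        # Illustrative only: a DV cert proves control of the literal domain string,
        # so detecting lookalike domains needs a separate check like this one.
        CONFUSABLES = {"1": "l", "0": "o", "3": "e", "5": "s"}   # tiny, made-up map
        PROTECTED = {"wellsfargo.com", "google.com"}             # hypothetical brand list

        def skeleton(domain: str) -> str:
            """Lowercase the domain and map digit homoglyphs to the letters they imitate."""
            return "".join(CONFUSABLES.get(ch, ch) for ch in domain.lower())

        def looks_like_protected_brand(domain: str) -> bool:
            s = skeleton(domain)
            return s in PROTECTED and domain.lower() not in PROTECTED

        print(looks_like_protected_brand("WE11SFARGO.COM"))  # True  -> flag as lookalike
        print(looks_like_protected_brand("wellsfargo.com"))  # False -> the genuine domain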

  • by plopez ( 54068 ) on Tuesday September 04, 2018 @12:55PM (#57251666) Journal

    They'll probably want a 16-digit hexadecimal string with a dotted 48-bit octal sub-identifier. Because it's obvious.

    • They'll probably want a 16-digit hexadecimal string with a dotted 48-bit octal sub-identifier. Because it's obvious.

      Google wants that because it drives more traffic to their search engine.

  • For me, the replacements are search results and bookmarks in the browser, with URLs being strictly a machine-usable form used inside software. Whose site I can reach through a given bookmark (or what content is at that page), and whether it's owned by who I think it is (via SSL certificate match usually, although DANE would be better), is everything most users want; the rest should be the equivalent of an IP address.

  • by gweihir ( 88907 ) on Tuesday September 04, 2018 @01:11PM (#57251780)

    Seriously. Don't fix things that are not broken.

  • I agree that there are problems with URLs the way they have been used, and I think Google is addressing a real problem. But I think the article is confusing; it mixes up and conflates several problems:

    - domain name handling in general "They (URLs?) are listed in the web's DNS address book..."
    - domain name spoofing (e.g. goog1e instead of google)
    - url-rewriting, shortening, and redirection
    - encoding cryptic data in URLs
    - tracking links
    - etc.

    At one point in time, the "path" component (after the first single slash)

  • In short, you navigate to WIRED.com to read WIRED...

    Yeah, damn! I see what you mean. So unnecessarily difficult.

  • This will just turn out like so many other "improvements": so many modern UIs try to get around user incompetence by trying to get things into people's faces so blatantly that there's often little rhyme or reason to how things work.

    All this has an actual negative impact on people who know how to use computers well, though. It used to be that if you understood the general paradigm of UI design for a given platform, you could pick up just about any program and figure it out within a short period of time. All

  • "People have a really hard time understanding URLs"

    Well, no, not really- not in the infinitive sense. URLs can be ugly and thus hard to decipher unless you have a little bit of education or experience on the matter.

    It's hard as in "People have a really hard time doing long division" (until they're taught it) and not "People have a really hard time understanding hypercubes" (because thinking beyond 3-dimensional space is foreign to the human experience).

    If we simply teach people the standards ins and outs o

  • by TeknoHog ( 164938 ) on Tuesday September 04, 2018 @01:39PM (#57251992) Homepage Journal
    *throws chair*
    • by sinij ( 911942 )
      I never expected to have to say this, but MS under Ballmer was a lot less evil than what exists today. Ballmer never attempted to have Windows spy on consumers for profit.
  • ... in charge of the Internet?

    Has anyone at Google bothered to propose an RFC so that this can be discussed? Or are they just going to make these pronouncements and push this into practice through their size?

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Tuesday September 04, 2018 @01:58PM (#57252126) Homepage Journal

    1. Ted Nelson's Xanadu. Never got off the ground.

    2. IPv6. The original spec required all Internet traffic to be over IPSec, with server networks using digital certificates to prove their identity.

    3. Class 3 SSL certificates. These were certificates issued only if the person could prove their identity and could prove they had the right to the certificate. Nothing more for user certificates; for server certificates, proof of ownership of the domain name and the business as well.

    4. Smart web pages. If you're using AJAX and servlets, everything can be done in data, you don't need to mess with the URL.

    The result? A few of these are utilized, but most webmasters either don't understand the technology, won't use it or have been ordered not to by their boss.

    Hacking the URI bar won't change that.

    If Google wanted a better system, they'd start by looking at TUBA, one of the IPng/IP6 candidates. If you can uniquely express a resource with an address, you can give it a name. TUBA has infinitely variable length addresses, so it's easy to code a directory path into the physical address. It's an address so it can be given a unique name.
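
    A rough sketch of the idea only (Python; this is not TUBA's actual NSAP address format, and the network prefix is arbitrary): folding a resource path into a variable-length address by appending length-prefixed labels, so the address names a resource rather than just a host.

      # Toy encoding: append each path segment as a length byte plus its UTF-8 bytes.
      def path_to_address(network_prefix: bytes, path: str) -> bytes:
          addr = bytearray(network_prefix)
          for segment in path.strip("/").split("/"):
              raw = segment.encode("utf-8")
              addr.append(len(raw))   # one length byte per segment (toy scheme)
              addr.extend(raw)
          return bytes(addr)

      addr = path_to_address(b"\x20\x01\x0d\xb8", "/docs/index.html")
      print(addr.hex())  # the "address" now identifies a resource, not just a machine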

    Your webserver is now a virtual network rather than a filesystem.

    That doesn't sound like what they're doing.

    • I'd add hash-based links, like magnet and IPFS. They've found a niche, but are too cumbersome for general use. Perhaps the full backing of Google could make IPFS mainstream, but even then it'd only work for static content.
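
      As a rough sketch of the hash-based idea (illustrative only; the content and the "sha256:" naming scheme are made up here, and real systems such as magnet links and IPFS use their own encodings), the link is derived from the bytes it names, which is why it can verify static content but breaks the moment the content changes:

        # Content addressing in miniature: the "link" is a digest of the bytes it names.
        import hashlib

        content = b"<html><body>Hello, static web page</body></html>"
        link = "sha256:" + hashlib.sha256(content).hexdigest()
        print(link)  # anyone holding the bytes can recompute and verify this identifier

        def verify(link: str, data: bytes) -> bool:
            """Check that data matches the digest embedded in the link."""
            algo, _, digest = link.partition(":")
            return algo == "sha256" and hashlib.sha256(data).hexdigest() == digest

        print(verify(link, content))                      # True
        print(verify(link, b"tampered or updated page"))  # False -> only suits static content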

  • From "do no evil" to adopting MicroSoft's Embrace, Extend, Extinguish strategy.

    They proved it works with their handling of RSS and now they're moving on to "extending" the web where people can either comply with Google insinuating itself as the main (or sole) arbiter of identity or else get de-ranked in search results.

    It's the end of the web as we know it.

  • On a similar note, I recall a lot of hubbub a few years ago about us being about to run out of IPv4 addresses... Have we all switched over to IPv6 quietly..? Or is there still a disaster about to befall us? Inquiring minds want to know (but are too lazy to search for it..haw haw..let's be social and converse instead!)

  • That reach and influence can be divisive, though, and as Chrome looks ahead to its next 10 years, the team is mulling its most controversial initiative yet: fundamentally rethinking URLs across the web.

    Any changes to the URL system should be arrived at in an open, participatory, voluntary manner, not by an 800-pound gorilla with massive commercial self-interest and a history of censorship throwing its weight around. That is, if there is one team I don't want to design this, it's a Google team.

    Furthermore, it

  • Step 1: If the URL is from one of the new domains, just block it. They are all shit. If the URL is in Punycode (non-ASCII display) and it is mostly characters that are not significantly different from the Roman letters, block it; otherwise, display the name of the language set and mark it with a warning. Sorry, rest of the world, but having a few thousand character sets that often overlap is a security nightmare.

    Step 2: If the URL has multiple sub domains in it, list the top domain first. Make an exce
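
    A rough sketch of the kind of checks described above (Python; the blocked-TLD list and the example URLs are stand-ins, not a real policy):

      # Illustrative sketch of the two steps above; lists and examples are made up.
      from urllib.parse import urlsplit

      BLOCKED_NEW_TLDS = {"top", "xyz", "loan"}   # stand-in blocklist

      def check_url(url: str) -> str:
          host = urlsplit(url).hostname or ""
          labels = host.split(".")
          if labels[-1] in BLOCKED_NEW_TLDS:
              return "block: new TLD"
          if any(label.startswith("xn--") for label in labels):
              # Punycode label; a real implementation would decode it and inspect the script.
              return "block or warn: internationalized (Punycode) domain"
          # Step 2: list the top domain first, e.g. example.co.uk -> uk.co.example
          return "display as: " + ".".join(reversed(labels))

      print(check_url("https://login.examplebank.xyz/reset"))
      print(check_url("https://xn--wired-example.com/"))
      print(check_url("https://www.paypal.com.evil.example.org/login"))
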
  • I have experimented with the idea of Reputational Identities for sought and offered things over a mesh network (calling it, BigMesh). I think it works really well. The gist of it is:

    Every entity has an ID through which it/they can post things offered and/or things sought to the mesh. These may be blog articles, tutorials, products, services, or whatever. Each is described by a tree of details, such as: Sought: article( topic:"robots" or "artificial intelligence"; price: $0 ); quality > 15

    The mesh

  • Fake (Score:5, Informative)

    by Tailhook ( 98486 ) on Tuesday September 04, 2018 @02:33PM (#57252342)

    Google does not want to "kill" URLs.

    The Wired story quotes plans to attempt changing how URLs are displayed:

    But this will mean big changes in how and when Chrome displays URLs. We want to challenge how URLs should be displayed and question it as we’re figuring out the right way to convey identity.

    They cite previous experience when they attempted an "origin chip," later removed. Again, this was only an attempt to render things in a simpler manner. At no point in the story does any Google representative propose replacing URLs; all that language comes from the Wired writer. Maybe one could illegitimately surmise that some constraint on URLs might be implied in all this, but there is no conceivable way to reach "KILL!" That's just fake.

    This is Wired clickbait parroted by Slashdot.

  • by Shooter6947 ( 148693 ) <jbarnes007@c3po. ... t ['s.n' in gap]> on Tuesday September 04, 2018 @02:36PM (#57252362) Homepage

    Didn't Dilbert suggest this back in 1998 [dilbert.com]?

  • Their argument is user security. For the nominal cost and slight headache of setting up an EV certificate, businesses could just do that instead, and Google search, Chrome, and other browsers could highlight websites as ID Verified. Since EV certificates require a URL to be cross-checked against a physical business with government registration, it's less likely someone will register a website pretending to be "Facebook" or "Mastercard" if browsers enforce EV for high-profile targets.

  • The URL is often the only way to recognize phishing sites. If the URL disappears, how do we then recognize them?

  • A proposition that is a tad awkward, since they've already basically taken over the web. ... Whatever.

    If you want to replace URLs, you're basically replacing the web. If you want to do that, good luck. It better be a really good replacement, with open standards and premium reference implementations and some really awesome stuff like meshing, state, and offline built right in. Plus some amazing programming language to build and run things on it.

    In short: Good luck with that.

    Google could do it, but I doubt even

    • sure let's let a U.S. corporation trying to build a marketing database on everyone for their advertising clients dictate the planet's web standards

  • But you'd still have URLs under the covers. Or you'd end up re-inventing them.

    Some years ago I spent time pondering all the possible ways things could be identified in a system. I came up with three broad, non-mutually exclusive strategies:

    (1) Analytically: identify a thing by a set of properties which are unique to it (e.g. relational primary keys);
    (2) Algorithmically: use an algorithm that is guaranteed to issue identifiers that are unique in the required scope (e.g. UUIDs for global scope, or within re

  • I, for one, welcome our new AOL keywords.
