
Android Is Helping Kill Passwords on a Billion Devices (wired.com) 123

The FIDO Alliance -- a consortium that develops open source authentication standards -- has been pushing to expand its secure login protocols to make seamless logins a reality for several years. Today, it has hit the jackpot: Google. From a report: On Monday, Google and the FIDO Alliance announced that Android has added certified support for the FIDO2 standard, meaning that the vast majority of devices running Android 7 or later will now be able to handle password-less logins in mobile browsers like Chrome. Android already offered secure FIDO login options for mobile apps, where you authenticate using a phone's fingerprint scanner or with a hardware dongle like a YubiKey. But FIDO2 support will make it possible to use these easy authentication steps for web services in a mobile browser instead of laboriously typing in your password every time you want to log in. Web developers can now design their sites to interact with Android's FIDO2 management infrastructure.
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward on Monday February 25, 2019 @09:48AM (#58176026)

    Biometrics can be stolen or faked. But there's no way for the legitimate owner of that body to replace them when that happens.

    (posting this on the day my office is forcing a periodic password change on me)

    • Biometrics can be stolen or faked. But there's no way for the legitimate owner of that body to replace them when that happens.

      (posting this on the day my office is forcing a periodic password change on me)

      Yes, because whatever biometric data is read, it is converted to some digital format. That digital format is then no different from a password (probably a lot longer), but if you can steal that array of data, you can copy it and send it from any device you desire.

      • Why would you send a fingerprint? Wouldn't you use that to send a secure token? After unlocking the local method of building said token.

        Think you guys need to go put on the tinfoil hats again. Smart people are on it for you. Or go help fix the broken specifications none of us have read.

        Oh and go read that "New LG hand blood vessel security on G8" release from just a bit ago. It has a sensor that includes depth so scotch tape won't work (no 3D ridges). Yes people will find ways to break in eventually, and t

    • by Zmobie ( 2478450 )

      While true, it doesn't actually matter entirely in this case. Since FIDO allows you to use whatever you want on your side for authentication (key card, biometric, password probably, etc.) you can change it or not use it at all. The challenging party doesn't even get access to what method you used for authentication locally; that never leaves the device. Stealing that data then becomes difficult because all of the hidden juicy bits are on user devices that an attacker would need to compromise individually.

  • by DontBeAMoran ( 4843879 ) on Monday February 25, 2019 @09:50AM (#58176046)

    Web developers can now design their sites to interact with Android's FIDO2 management infrastructure.

    How about you design your FIDO2 thing to automatically type passwords into regular password fields instead of asking the whole web to change for your new special feature?

    • Why would you not choose a new method if it is more secure and you can also have it in place at the same time as normal passwords?

      Nobody is suggesting that this be a mutually exclusive choice for web developers.

      By your argument, we would never be able to change anything once it was deployed...

      • by sinij ( 911942 )

        Why would you not choose a new method if it is more secure...

        What is being traded in exchange for additional security? It isn't nothing, so don't try that line.

        • Yeah because I like typing passwords!

          I mean I assume that's your point as you have no issue to point at (nothing is being lost that you know of... Just a fear of change)

        • Re:Web developers (Score:5, Informative)

          by Zmobie ( 2478450 ) on Monday February 25, 2019 @12:53PM (#58177324)

          It kind of is nothing actually. At least to the service; some of this depends on how sketchy Google or whoever implements the client side portion wants it to be, I suppose. The user still only has to input their secret one time and the service and client systems handle the rest. The main difference is the user now handles their own secret information and doesn't have to trust the service with it. I read up on the standard and this is how it works:

          The user creates a local registration public/private key pair with whatever secret they want to use to restrict access on the client side, then transmits ONLY the public key to the service, which associates this public key with a unique user ID on its system; that ID is then sent to the client for storage and retrieval.

          When the user wishes to login to the service they go to the publicly known location of said service which at 'login' sends their client a challenge first. The challenge consists of the user's unique ID and some type of one-time use question or internal payload that the client has to be able to read in order to answer. The challenge is encrypted by the service using the stored public key and then transmitted to the client to start the process.

          Once the client receives the challenge, it uses the known service association to locate the expected key/value pair and decrypt that information with the private key that only the local device has access to, with the user's permission/interaction (scan your fingerprint, hit your smart card button, etc.). The client system can then validate that the internal payload contains the expected unique ID (validating the service is who they say they are) and answer/sign the challenge since it can now be read. Using the internal ID or some other mechanism (some type of session key for a modified Diffie-Hellman, I think) the client then encrypts the challenge response and returns it to the service for verification.

          At that point the service validates that the client's response payload matches what it should be, and authentication is complete.

          There are two attack vectors I see, and neither is easy to pull off at scale. One: compromise the registration such that the user ends up associating the attacker's unique ID rather than the one the service created for them. An attacker could then impersonate the service and attempt to harvest information that way. It still wouldn't enable a full MITM attack, simply because without access to the private key there is no way for the attacker to impersonate the client and authenticate properly with the service.

          The second, obviously, is breaching the local device. As I have pointed out in a couple of other posts though, attackers now have to breach every device individually, creating scaling issues. Not only that, if the authentication methods used are separated per service, a breach only gets the attacker access to the private key of that single service, assuming they don't have persistent access and can't wait for the user to use all their secrets over time. (Individually encrypting the keys themselves makes this even more difficult, with minimal extra user interaction required, though it's unlikely many users would do this without having very high security needs.) Separating the devices that store the private keys would also create at least segmented security, such that the user would not lose everything if one device is persistently breached.

          Generally I like the standard. I may not have everything correct, and I think some of my understanding may actually be assumed internals. If anyone has any corrections or sees issues, feel free to chime in. The service would still be able to uniquely fingerprint an individual for tracking purposes, but I don't see a way around that for single user authentication anyway. Kind of difficult to let a user be anonymous when they have a unique account on the service anyway...

    • How about you design your FIDO2 thing to automatically type passwords into regular password fields instead of asking the whole web to change for your new special feature?

      I agree FIDO is redundant. It brings nothing useful to the table client certs don't already provide. Any existing UX issues with deployment and management could be addressed instead of attempting to create completely new and redundant standards.

      That said there are few things as deleterious to the continued security of Internet users today than "regular password fields". Continuing to allow insecure authentication as if it's somehow acceptable (Bu....uuu....ttt.. TLS..!!) results in millions falling victi

      • by tepples ( 727027 )

        It's not just a user experience difference. Devices implementing FIDO spec also tend to be more hardened against copying out the private key than a browser's certificate store is. I guess this is part of why browser developers have spent more time on FIDO than on TLS client cert UX.

      • Except the crap user experience of existing cert methods. Have you used them? I've only tried to add root certs following IT's directions, and that didn't even work the first time. And they expire, so you get to juggle them manually.

        I assume FIDO2 is a mechanism for automatic certificate agreement and distribution. So yeah... That'd be a big boon over the existing system in my experience as an attempted consumer of said system.

        • Except the crap user experience of existing cert methods. Have you used them? I've only tried to add root certs using It directions and that didn't even work the first time. And they expire so you get to juggle them manually.

          All major browsers/OS have a means of importing pkcs #12 files easily without much effort. As in you save/open the pkcs 12 file and are greeted with a prompt to add it to the user certificate store. It takes seconds.

          I agree this is far from ideal.. the interaction should be more user friendly/seamless/automated yet it's still very much a trivial process.

          As far as generating certs and expiration policy that's all on the application server. The system we use handles this seamlessly for us while onboarding

    • Re:Web developers (Score:5, Interesting)

      by AmiMoJo ( 196126 ) on Monday February 25, 2019 @11:07AM (#58176432) Homepage Journal

      That's not how FIDO works. It uses public key crypto, so your secret never leaves the phone. In contrast, with a password the secret (i.e. the password) has to be both transmitted to the server and stored in some fashion (hopefully one-way hashed with a salt).

      Of course Chrome also supports auto-fill for passwords, which you can use if for some reason you don't understand what FIDO is.

    • by dissy ( 172727 )

      How about you design your FIDO2 thing to automatically type passwords into regular password fields instead of asking the whole web to change for your new special feature?

      Uhh, you mean to say you are currently *TYPING* your 8k binary private keys into password boxes on websites?!?

      Dude, you are doing this horrendously wrong! Your private keys should NEVER leave your possession!
      Stop sending them in web form fields, and you should have all of them revoked and make new ones to use properly.

      You never send your private key, you receive a random challenge that you decrypt with your private key and send THAT back.

      I strongly advise against typing all of that binary data as well, but

    • for your new special feature

      FIDO2 is not special, and given Webauthn is 3 years old it's hardly new at this point either.

      How about you design your FIDO2 thing to automatically type passwords into regular password fields

      How about we don't gimp new protocols by reverting their security to the lowest common denominator, a denominator which has been repeatedly shown to be wildly insecure and susceptible to all manner of attacks to say nothing of rampant end user misuse.

  • A technology that has been supported by all major browsers since the beginning of time itself.

    • That seems to be a pretty cumbersome system for users as they would have to either pay to be recognized by a CA and/or would have to go through a rigamarole set-up procedure to install a certificate in every piece of Internet-communicating software they use.

      Also, what happens if that cert gets lost? Now they have to revoke it (and we all know what a joke the cert revocation system is) and go through the whole thing again...

      • That seems to be a pretty cumbersome system for users as they would have to either pay to be recognized by a CA and/or would have to go through a rigamarole set-up procedure to install a certificate in every piece of Internet-communicating software they use.

        It's not like this. The best high level explanation I can give is that client certs are like browsing to a secure site except in reverse.

        When you go to https://www.reallysecuresite.o... [reallysecuresite.omg]

        1. Your browser has a list of trusted certs handed down by the CA gods.
        2. www.reallysecuresite.omg has a private/public key pair blessed by CA gods.

        Client certs flip this.

        1. With client authentication the browser has private/public key pair blessed by www.reallysecuresite.omg.

        2. www.reallysecuresite.omg has a list of trust

    • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Monday February 25, 2019 @10:33AM (#58176270) Homepage Journal

      One big difference between client certificates and U2F keys like this is that compared to a web browser's client certificate store, a U2F key is somewhat more hardened against attempts to copy out the private key. This lets a U2F key pass more tests for being "something you have."

      The other is that TLS client authentication has been a usability nightmare [browserauth.net], particularly for non-technical users, in "all major browsers since the beginning of time itself."

      • No obvious button to "sign out" (use no client certificate) in order to retrieve a logged-out view of a resource or to switch between certificates associated with different user accounts.
      • Backing up certificates and moving them among devices isn't easy.
      • Certificates aren't associated to a domain. If the user uses the same certificate on all sites, this isn't quite as bad as sharing a password, but it does have the same cross-site tracking implications as third-party cookies. If the user uses a different certificate on each site, major browsers traditionally haven't helped the user figure out which certificate goes with each site.

      Browser publishers haven't prioritized improving client certificate UX because of the low user base of client certificates. I've seen them on only two sites: StartCom (a defunct TLS CA) and Kount (an e-commerce fraud risk assessment platform). But browsers could improve this UI in a few ways:

      • When a TLS site requests a client certificate, show a key icon in the location bar next to the TLS lock icon. This opens the certificate chooser. The user can click it again to log out.
      • Group certificates in the certificate chooser by the registrable domain* with which they were last used.
      • Let the user drag certificate files in and out of the certificate chooser.
      • Include client certificates in the browser's password sync feature.

      But good luck getting browser publishers to devote any time==money to this.

      *A "registrable domain" is a public suffix, as defined by Mozilla's Public Suffix List, plus one name part. If "co.uk" is a public suffix, for example, then "ebay.co.uk" is registrable.

      • One big difference between client certificates and U2F keys like this is that compared to a web browser's client certificate store, a U2F key is somewhat more hardened against attempts to copy out the private key.

        There is no reason keys can't be stored in security modules or "smart cards" or even USB sticks for those dumb enough to require the security nightmare that is USB for user authentication.

        The other is that TLS client authentication have been a usability nightmare, particularly for non-technical users, in "all major browsers since the beginning of time itself."

        So instead of addressing usability problems the answer is to create an entirely new incompatible protocol? Fuck that. I use client certs every day and it's no more of a usability nightmare than HTTP authentication.

        No obvious button to "sign out" (use no client certificate) in order to retrieve a logged-out view of a resource or to switch between certificates associated with different user accounts.

        This is impossible to fix right? A completely new protocol is required to address this because browsers don't offer obvious sign out buttons for certificate and http auth.

        • by tepples ( 727027 )

          There is no reason keys can't be stored in security modules or "smart cards" or even USB sticks for those dumb enough to require the security nightmare that is USB for user authentication.

          If you're doing the signing on the computer, an attacker can copy the private key on its way from the module to the computer or copy it out of the browser process once it is in the computer. If you're doing the signing on the module, that is exactly what FIDO aims to do. What am I missing?

          This is impossible to fix right? A completely new protocol is required to address this because browsers don't offer obvious sign out buttons for certificate and http auth.

          I notice the sarcasm. It's required in a browser only because as of first quarter 2019, more customer-facing websites support FIDO than TLS client certificates.

          All systems require a key to work. Whether on a USB stick or a smart card or embedded in a computer's security module, it's the same issue. Nothing prevents portability, and regardless of which one you select, the same keying material is being guarded.

          The structure of FIDO ensures that the private key never leaves the device.

          • by ffkom ( 3519199 )
            FIDO itself does not enforce a client key to reside on any specific hardware, so it could theoretically reside in just some software-implemented key storage.

            But unlike TLS client certificates, FIDO is well-prepared to work for web sites that want to make sure the client is actually using some sort of hardware key storage, from where it is never transferred into the main memory.

            So I understand the most prominent advantage for e.g. a bank would be that they could issue their favorite hardware token to the u
            • But unlike TLS client certificates, FIDO is well-prepared to work for web sites that want to make sure the client is actually using some sort of hardware key storage, from where it is never transferred into the main memory.

              This idea that private key operations of client certs have to be handled in software requiring keys to be transferred to the host system is simply not correct.

              Smart cards have enabled exactly this for at least a dozen years and counting. They also happen to cost four times less than current USB sticks.

              • Smart cards have enabled [signing communications off the main CPU] for at least a dozen years and counting. They also happen to cost four times less than current USB sticks.

                Even when you include the cost of a smart card reader that connects to one of the ports on the outside of a smartphone, tablet, or laptop computer? On my laptop, counterclockwise from top left, these are power, HDMI, USB, microSD, audio, USB, and USB. Last I checked, Square was charging $35 for a smart card reader that connects to a TRRS audio port [squareup.com], and I imagine that Square's might support only EMV application, not TLS application. If a consumer product computing device does have an ID-000 sized [wikipedia.org] smart card

                • Even when you include the cost of a smart card reader that connects to one of the ports on the outside of a smartphone, tablet, or laptop computer?

                  USB card readers are $10-20 if you don't already have one. Individual cards run $0.50 - $1.00 /ea. OTG adaptors sold separately.

                  Not only is it cheaper; having card readers, rather than raw USB, be the interface people interact with also improves system security.

                  A trojan authenticator plugged into a USB port can own a system in seconds.

          • If you're doing the signing on the computer, an attacker can copy the private key on its way from the module to the computer or copy it out of the browser process once it is in the computer. If you're doing the signing on the module, that is exactly what FIDO aims to do. What am I missing?

            There are a number of deployment options. In high security systems keys are loaded onto smart cards and handed off to users. Private keys never leave the card.

            In low security situations where you just want to protect users from phishing, key pairs would just be downloaded during initial onboarding and loaded into the system's user cert store.

            You could also do signing requests from the hardware itself. No matter what solution you pick it's all RSA and it's all about managing private keys. The same opt

    • by flink ( 18449 )

      A technology that has been supported by all major browsers since the beginning of time itself.

      I've dealt with client certs quite a bit, having been a government contractor both utilizing and writing software which does CAC/PIV (i.e. client certificate) authentication. While it is an effective way to secure an endpoint, the user experience is less than ideal. Because the client cert is negotiated as part of the TLS handshake, when it fails it is difficult to give any meaningful feedback to the user. They usually end up just seeing an SSL error or a 401 HTTP server response. The UI browsers present to choose a certificate is rather rudimentary and for good reason not under the control of the application developer.

      • I've dealt with client certs quite a bit having been a government contractor both utilizing and writing software which does CAC/PIV (i.e. client certificate) authentication. While it is an effective way to secure an endpoint, the user experience is less than ideal. Because the client cert is negotiated as part of of the TLS handshake, when it fails it is difficult to give any meaningful feedback to the user.

        Most secure authentication systems only provide pass/fail feedback by design.

        Here you at least have a choice. You can set up the web site to not require certificate authentication, so that if authentication fails the user is given instructions on what to do or who to contact to get the issue resolved.

        The UI browsers present to choose a certificate is rather rudimentary and for good reason not under the control of the application developer. In addition, most browsers pin the client certificate choice for the duration of the session, so if a user accidentally chooses the wrong certificate, they have to quit out and restart their browser.

        I completely agree the client implementation sucks in many regards but these are implementation problems that can be resolved with relatively minor effort. This isn't an excuse to reinvent the wheel.

        All of this is not to mention the logistical hassle of issuing certificates, maintaining a CRL, getting smart cards out to people, etc. This and the usability issues are tractable for a large organization whose users will receive training on how to deal with mutual authentication, but for the general public I think it would be kind of a non-starter.

        The issues are e

  • They want me to trust an Android phone to authenticate all my logins? Are they high?

    Switch to KeePass and family; I did a couple of years ago, and it was the best thing I ever did. Create a database protected by a master password plus a sneaker-net distributed keyfile. You can distribute the database with something like Syncthing, which has end-to-end encryption you control, just for added safety, but really you could share the database publicly with complete safety at that point.

    Don't get me wrong, I like Android. But Google has been in the NSA's back pocket from the beginning. Not that Assange is one of my favourite people, but he did make a compelling case for Google being essentially an arm of the US government. Which is one reason why China had it out with them (we may get on Huawei's case for back-doors, but we did it to them first with Google and Windows).

    • Nothing wrong with KeePass and ilk, it is just perpetuating an outdated model.

      There are better ways to authenticate these days.

      I get that you are technologically adept, but not everyone is. Those who are not are still valuable humans just like you.

      For those who fear the three letter agencies (as you should), there will (hopefully) always be alternatives. For the masses, there will be a trade-off made where some level of spying, in the name of safety, will be implicit.

      We make the same trade-offs in the "offl

      • The problem is that the US is extremely authoritarian while marketing itself as free. The watchdogs are powerless over the wails of cops and cowards that "if you have nothing to hiiiiiiiide, why do you need priiiiiiivacy?"
      • I get that you are technologically adept, but not everyone is. Those who are not are still valuable humans just like you.

        This isn't an excuse. Computers have been around since the early 70s. Either learn or don't use any device. It's your own dumbass fault for using the same shitty password on all your sites because you refuse to learn.

    • What's wrong with Keepass? You mean other than having to send your credentials across the internet? Also if you don't trust your Android device then you can't trust running Keepass on it and you've already lost.

  • I didn't read the article because WIRED happens not to be part of my current subscription package. But based only on the quoted paragraph, I see two practical problems likely to arise.

    The first is the requirement of "Android 7 or later". Last I checked, phones were still being sold multiple major versions of Android behind, because newer versions of Android require more CPU and RAM than fit in the bill of materials for a budget prepaid smartphone. Which entry-level phone ships with 64-bit Android 7 or later?

    • The fourth problem is that not everyone wants to tie their usage of Web services to a phone number (aka a real-world identity). Requiring a phone number to create an account is a loss of privacy.
    • Which entry-level phone ships with 64-bit Android 7 or later?

      Moto X4, available at $140 or less carrier locked (I paid $150 for the unlocked Android One version) supports Android 9. It also has ARCore support. I tried and tried to find something better and/or cheaper with water resistance (X4 claims full IP68) and failed. Granted, it's at about the end of its lifecycle, so updates will probably stop soon, but since it's actually current now that should hold me for a while. I'm replacing a Moto G2, on which I am currently running Pixel Experience, so I'm going to doub

  • 200 Million Biometric credentials stolen in Security breach.

    It is inevitable.

  • by sinij ( 911942 ) on Monday February 25, 2019 @10:15AM (#58176178)
    Corrected headline - Android is helping to spread pervasive tracking.

    User name and password is "something you know", and as such is not something that can be used without your explicit consent. Seamless login is "something you have", and since it is part of your phone, it doesn't require your explicit consent to be checked.

    Make no mistake, this is about removing what little anonymity is left from the Internet. FIDO standard is effectively a Real Name Only policy disguised as progress.
    • Yep, nailed it -- this is about techscum like Google wanting to hold the keys to the safe.
    • When I can't use an Android device without signing in to Google, I will not buy another Android device.
      • by Anonymous Coward

        You already can't download anything from Google Play without signing in, though. Are you able to find enough software from other sources?

        • Aurora store just quit working I think so it's f-droid and direct downloads at this point.
          • YALP works, it's on F-Droid.

            In order to really work without Google though, one needs also to replace play services, location services, cloud messaging (GCM/firebase), disable the hotspot checking, disable safetynet checks, etc., it's more than just not signing in - there are a pervasive number of googley tie-ins.

            Need root + firewall + something like MicroG and a healthy dose of paranoia to separate Google and Android.
        • Actually, take that back: you can create a random Google account and then sign into it with Aurora Store without signing in the whole phone. The anonymous Aurora Store account just quit working recently, so I checked and figured this out.
    • by Oswald McWeany ( 2428506 ) on Monday February 25, 2019 @10:40AM (#58176290)

      Corrected headline - Android is helping to spread pervasive tracking.

      User name and password is "something you know", and as such is not something that can be used without your explicit consent. Seamless login is "something you have", and since it is part of your phone, it doesn't require your explicit consent to be checked.

      Yes, and I use a dozen different e-mail accounts to make it slightly harder for different companies on the web to know that I am the same person if they try and share data. I don't want the same account ID on every site I go to.

      I want Amazon and Slashdot, for example, to not know I'm the same person if they share databases. Or my bank and Google, etc. I know there are other ways of tracking and I'm probably not fooling the big guys much- but I want to log in different places as "different people".

    • by AmiMoJo ( 196126 )

      Wrong. The browser does require you to explicitly consent to seamless login credentials. There is an on-screen prompt every time, unless you explicitly tick the "don't ask again for this site" box.

      This is a big win for most people. It can be used in addition to a username and password, or with just a username, if desired.

      • This (no requirement to always automatically trigger any system but for safety). If users want it then go build it and sell it or give it away.

        The only time the government should block that is if they're like China ("dissention is evil"). Otherwise we should be able to call them out and build a system to fight it.

        Otherwise it's just a monetary issue. Said company won't let you so go reimplement or figure out a public health reason to have the government regulate them into submission. If they won't then sham

    • by tepples ( 727027 )

      Seamless login is "something you have", and since it is part of your phone, it doesn't require your explicit consent to be checked.

      Unless unlocking the phone's FIDO keystore requires your fingerprint (Touch ID) or a direct stare (Face ID) or your hand veins (Hand ID) or at least some other expression of consent. Does it?

      • by Zmobie ( 2478450 )

        I think it does, from what I was reading. It may have an option to do it without interaction, but it looks like it is set up such that the user needs to consent to unlocking the private key for that service.

    • by AmiMoJo ( 196126 )

      Here's the standard, as you can see it requires user interaction before authentication can take place:

      https://fidoalliance.org/specs... [fidoalliance.org]

    • Corrected headline - Android is helping to spread pervasive tracking.

      I am shocked. SHOCKED! Are you saying an advertising company has an economic incentive for the continued development of Android?

    • Corrected headline - Android is helping to spread pervasive tracking.

      You don't know what you're talking about. The FIDO2 approach to online auth uses a different key to authenticate to every web site and is designed to make it impossible to connect a login on one site with a login on another (unless user-provided data provides a connection between them).

    • FIDO standard is effectively a Real Name Only policy disguised as progress.

      Except that FIDO is no more tied to my Real Name than my Slashdot pseudonym is. Sure. A bit less of the tinfoil hattery and a bit more understanding of how public key cryptography works, please. You're on a tech forum. Act like it.

    • by ffkom ( 3519199 )
      While suspicion is always a good thing, FIDO is less inclined to expose your identity to some arbitrary web service than classic TLS client certificates or simple cookies or JavaScript run-time environments are.

      The FIDO standard itself is definitely not guilty of trying to increase tracking capabilities.

      But of course, a malicious implementation of FIDO in a browser could be abused by the browser's vendor to facilitate even more tracking. So a non-Google open source browser implementing FIDO would certainl
      • While suspicion is always a good thing, FIDO is less inclined to expose your identity to some arbitrary web service than classic TLS client certificates or simple cookies or JavaScript run-time environments are.

        Why less? FIDO can be configured to prompt. TLS client authentication can likewise be configured to prompt. FIDO has channel bindings allowing servers to get indications of usage at the transport level, same as TLS.

        What specifically makes FIDO *less* inclined?

  • Earlier, hackers needed to crack one site at a time. Now, thanks to innovation and advances, all they have to do is crack Android.
    • by sinij ( 911942 )

      ... all they have to do is to crack android.

      Or they could just wait a few months for a Metasploit module to come out targeting carrier-locked Android phones that are at least a decade behind on patching.

      • Wonder if we'll ever find a way to get long-term support? I'm hoping Google's Project Treble, I think it was called, will help. I think it was a tweak to the kernel that allowed plug-and-play kernel updates, versus the current custom compiled-in drivers.

        Maybe we need to make sure the "right to repair" legislation people are working on includes software, not just hardware. From the few details I'd read I thought one did (Massachusetts?), but I really should investigate them better.

    • by Zmobie ( 2478450 )

      I mean, that was already the case, though? Same with Windows, really. The attack surface of breaching user devices to steal secret information isn't miraculously materializing from this standard; it was already there. If your phone is compromised, the keylogger on it is already stealing all your credentials along with the sites they go to.

      At least with this type of standard the user device HAS to be hit in order for the private key to be compromised. This actually decreases the attack surface and makes re

    • by ffkom ( 3519199 )
      Why crack Android when it is just the colorful bloat-ware browser that you need to crack, in order to access every interaction of the user with some web service? FIDO or not FIDO does not make a difference to this kind of threat. After the death of Flash, JavaScript is certainly the biggest contributor to this threat.
  • by b0s0z0ku ( 752509 ) on Monday February 25, 2019 @10:18AM (#58176198)

    I don't want to be dependent on a given device or ecosystem for using a website or an app, and I don't necessarily want to tie it to my identity via biometrics. I can make passwords arbitrarily complex, yet easy to remember, and even write them down in a little book (kind of hard to hack remotely).

    Password-less authentication isn't about security -- it's about control and LACK of security. Google wants to hold the keys to the city.

    • Password-less authentication isn't about security -- it's about control and LACK of security. Google wants to hold the keys to the city.

      Nope. Google has no access to any of the keys used.

      Google does hold the private key for the root CA used to validate FIDO2 authentications, but the keys used to do the authentications are created on-device and signed on-device by keys which are not device-unique and therefore provide no device identity linkable across web sites.

      What you're saying is like claiming that Let's Encrypt has the "keys to the web" because they are the root CA for much of the web's TLS certificates.
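      The per-site key model described above can be sketched in a few lines. This is a toy illustration only: real FIDO2/WebAuthn creates an asymmetric key pair per site (the server stores only the public key), whereas here an HMAC secret stands in for the key pair purely to show the flow; all class and method names are hypothetical, not part of any real API.

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Toy 'phone': holds one independent secret per site origin."""
    def __init__(self):
        self._keys = {}

    def register(self, origin):
        # Real FIDO2 would generate a key pair here and hand the server
        # only the public half; this toy hands over the HMAC secret.
        self._keys[origin] = secrets.token_bytes(32)
        return self._keys[origin]

    def sign(self, origin, challenge):
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()

class Server:
    """Toy relying party: stores a verifier, issues fresh challenges."""
    def __init__(self):
        self._verifiers = {}

    def enroll(self, user, verifier):
        self._verifiers[user] = verifier

    def login(self, user, authenticator, origin):
        challenge = secrets.token_bytes(16)  # fresh per attempt: no replay
        response = authenticator.sign(origin, challenge)
        expected = hmac.new(self._verifiers[user], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)
```

      Because each site receives an unrelated, randomly generated credential, two sites comparing databases learn nothing that links the logins to the same device.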

      • Google does have the private key to the root CA key used to validate FIDO2 authentications, but the keys used to do the authentications are created on-device, and signed on-device by keys which are not device-unique and therefore provide no device identity linkable across web sites.

        What you're saying is like claiming that Let's Encrypt has the "keys to the web" because they are the root CA for much of the web's TLS certificates.

        This is a scary argument.

        Let's Encrypt along with every other CA on the planet very much holds the keys to the web. They can generate keys enabling them to subvert virtually any secure site on the planet at their pleasure.

        You are right in that this is not because they are the root CA "for much of the" web.

        It's much more basic than that. It's simply because they are a CA and therefore inherently in a position to globally issue certs trusted unconditionally by all clients using the web regardless of how man

        • Google does have the private key to the root CA key used to validate FIDO2 authentications, but the keys used to do the authentications are created on-device, and signed on-device by keys which are not device-unique and therefore provide no device identity linkable across web sites.

          What you're saying is like claiming that Let's Encrypt has the "keys to the web" because they are the root CA for much of the web's TLS certificates.

          This is a scary argument.

          Let's Encrypt along with every other CA on the planet very much holds the keys to the web. They can generate keys enabling them to subvert virtually any secure site on the planet at their pleasure.

          Actually, you make a good point. Google in this case has far less power than a traditional CA, because each time you set up a WebAuthN authentication with a web site, you generate a new, unique asymmetric key pair and the site you're authenticating to stores the public key. The Google root is used only at that time, by the web site, to verify that the key you've provided is stored in some sort of secure hardware, and only if they care to verify that (in most cases, I don't think they'll even bother).

          O

          • Actually, you make a good point. Google in this case has far less power than a traditional CA, because each time you set up a WebAuthN authentication with a web site, you generate a new, unique asymmetric key pair.

            Of course this in itself doesn't mean anything. Creation of a key pair doesn't preclude the existence of parallel trust paths.

            and the site you're authenticating to stores the public key.

            When I log in to a site using this system, are you saying the sites keep a database of actual public keys and validate by matching my individual public key against that database? It doesn't simply perform validation using something more manageable higher up the trust chain?

            The Google root is used only at that time, by the web site to verify that the key you've provided is stored in some sort of secure hardware

            I don't understand how this is possible. Does each piece of hardware have a factory installed key pair to establish this

        • by Zmobie ( 2478450 )

          You do bring up a solid point that them being able to issue root certificates is a significant problem if abused, but I don't think they can impersonate a user that has registered to the service with their own keys. The certificate is merely to certify they are who they say they are for whatever purpose.

          The problem is that since FIDO authentication requires the server's challenge to be signed with the user's private key (a response only the holder of that key can produce), even a root CA

    • by Zmobie ( 2478450 )

      I imagine one of the mechanisms by which FIDO unlocks a private key for authentication is actually entering a password. The difference is that the password never leaves the device, nor does the private key itself. This is also a standard, not a specific device, ecosystem, or vendor, and we already depend on thousands of standards to do anything with technology.

    • I don't want to be dependent on a given device or ecosystem for using a website or an app

      It's not. There's always a fallback.

      and I don't necessarily want to tie it to my identity via biometrics.

      It's not. That's not how fingerprint authentication works on any device with any standard.

      and even write them down in a little book (kind of hard to hack remotely).

      Please do us a favour and don't ever talk publicly in an article about security again.

  • by BrendaEM ( 871664 ) on Monday February 25, 2019 @10:27AM (#58176242) Homepage
    First it was fingerprints, then it was the face. While the question of where it will end remains open, does anyone notice that they are just scanning our bodies part by part, and selling the information?
    • by sinij ( 911942 )
      Error. Post aborted. Failed to confirm user identity. Please firmly re-insert authentication device into your sphincter and try posting again.
  • by MCRocker ( 461060 ) * on Monday February 25, 2019 @10:32AM (#58176262) Homepage

    I'm a little shocked to see an article on FIDO without even a mention of Steve Gibson's competing Secure Quick Reliable Login [grc.com].

    Although I'm not an expert on this, most reports I've heard say that SQRL [wikipedia.org] is what FIDO was trying to be.

    One key feature of SQRL is that it does only authentication, not authorization, so it can be used for anonymous login, which would be better for many purposes, such as blog comments, where you only need to verify that one response came from the same author as another so nobody can impersonate them. Though it looks like FIDO may also do this.
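    The anonymous-login property of SQRL comes from how it derives identities: as described in Gibson's documentation, a single master key plus the site's domain deterministically yields a per-site seed, so the same user always presents the same identity to a given site but unlinkable identities to different sites. A minimal sketch (illustrative only; real SQRL feeds this seed into an Ed25519 key pair):

```python
import hashlib
import hmac

def per_site_seed(master_key: bytes, domain: str) -> bytes:
    # Deterministic: the same master key and domain always yield the same
    # seed, while different domains yield unrelated, unlinkable seeds.
    return hmac.new(master_key, domain.encode(), hashlib.sha256).digest()
```

    Nothing about the seed for one domain reveals the seed for another, which is what lets a pseudonymous identity stay consistent on one site without being correlatable across sites.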

    • by tepples ( 727027 )

      SQRL [...] can be used for anonymous login, which would be better for many purposes, such as blog comments where you only need to verify that some response belonged to the same author as some other so nobody could impersonate someone else.

      So can client certificates in a web browser, if only their UX weren't so horrible. So can a "tripcode": a self-assigned password whose hash, salted by the email address, is displayed publicly, as 4chan has demonstrated.

      • Those fail the requirements of not needing a CA and of using separate keys for separate websites. SQRL gets these right.
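        The tripcode scheme mentioned a couple of comments up can be sketched in one function. This is a toy stand-in (4chan's actual tripcode algorithm is DES-crypt based, not SHA-256): a publicly displayed hash proves two posts share an author without revealing the password.

```python
import hashlib

def tripcode(email: str, password: str) -> str:
    # Hash of a self-chosen password, salted by the email address.
    # Readers compare displayed codes to confirm same-author posts.
    digest = hashlib.sha256(email.encode() + b":" + password.encode())
    return digest.hexdigest()[:10]
```

        Anyone can recompute and compare the public code, but recovering the password would require a preimage attack on the hash.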

  • by ironicsky ( 569792 ) on Monday February 25, 2019 @11:40AM (#58176692) Journal

    If only they would apply 2FA policies to device authentication. Using their BLE token, you should not be able to unlock your device without both the token and a fingerprint or password.

    As others have mentioned, fingerprints can be faked and passwords can be guessed, but none of that matters when the phone is stolen if the thief is missing the token attached to someone's keychain.

    Google accounts online can be protected by 2FA, but your Google device is the weak link, because once your device is unlocked it has access to all your photos and Drive documents without further authentication.

    • Google accounts online can be protected by 2FA, but your Google device is the weak link, because it has access to all your photos and drive documents without authentication once your device is unlocked.

      Be sure to use a short screen lock timeout, set your device to lock when the power button is pressed, and make a habit of always pressing it when you are done using your phone. Basically, maximize the odds that if your device is lost or stolen, it will be in a locked state.

      As others have mentioned, finger prints can be faked, passwords can be guessed

      Fingerprints can be faked, and I don't have any recommendations as to how to mitigate that risk. You could choose not to use biometric authentication, but in most cases I think that's a bad tradeoff, since the convenience of fingerprint a

  • by Anonymous Coward

    Anyone who read the news about the NSA leaks still has all the alarm bells go off whenever they hear the name "FIDO".

    It was synonymous with "backdoored, as required".
    I hardly think this has gotten any better.

    Sorry, but the NSA is in the same category as China, the FSB, Mossad, and GCHQ: maybe less evil than North Korea, but far more powerful and, to be frank, anti-American.

Technology is dominated by those who manage what they do not understand.
