Google Starts Testing Its Replacement for Third-Party Cookies for Chrome (engadget.com)
"Google has taken one step closer to banishing third-party cookies from Chrome," reports Engadget.
The internet giant has started testing its trust tokens with developers, with promises that more would move to live tests "soon." As before, the company hoped to phase out third-party cookies in Chrome once it could meet the needs of both users and advertisers.
Trust tokens are meant to foster user trust across sites without relying on persistent identifying data like third-party cookies. They theoretically prevent bot-based ad fraud without tying data to individuals. This would be one framework as part of a larger Privacy Sandbox including multiple open standards.
The company still hopes to eliminate third-party cookies by 2022.
Trust? (Score:2, Insightful)
Somehow I doubt Alphabet is going to give up dipping its hand into the privacy cookie jar; after all, they are an advertising company.
Re:Trust? (Score:4, Insightful)
Yep, I think they're much more interested in the "prevent bot-based ad fraud" part of this.
Yes - their stated goal/excuse is impossible. (Score:2, Informative)
once it could meet the needs of both users and advertisers
The needs of (sane) users and the "needs" of advertisers conflict. They just want a monopoly on spyware.
Re: Yes - their stated goal/excuse is impossible. (Score:2)
s/on spyware//
They just want a monopoly, or at least its little brother, lock-in.
Like any for-profit business, driven by the mad-laughing-in-a-base-under-a-volcano insane requirement of infinite exponential profit growth. In the long run, that requires bully tactics and abuse of power, criminal behavior, or writing/buying your own regulation, to destroy the competition.
Something fundamentally incompatible with a free market.
Which is precisely why there is no such thing as self-regulating / self-balancing by
To serve good ads, they have to increase privacy (Score:2)
Yes, Google wants to reduce click fraud because it undermines their business. Google also wants to be able to serve us ads for an ESP32 or whatever we're most likely to buy. That's what Google wants. (Check out the ESP32 if you haven't).
Pesky little folks like the US Senate are poised to make it harder for Google to do what they need to do unless they can do it in a way that better protects privacy. Ad blockers are already causing them problems. Third-party cookies are not a reliable feature that Google ca
Re: To serve good ads, they have to increase priva (Score:1)
Re: (Score:2)
> probably spent 30 hours battling that ADC.
Sounds like what I would do, before spending 10 minutes hooking up an external ADC. :)
Re: To serve good ads, they have to increase priv (Score:1)
Re: (Score:2)
Whenever I hear about someone who actually knows what they are doing dead bugging a component like I do, that makes me feel better about myself.
Re:Trust? (Score:4, Interesting)
Re: (Score:3)
This isn't a cookie or some way of identifying the user. In fact it's designed to be the opposite of that. It's some kind of variation of or based on Privacy Pass, which you can read about here: https://medium.com/@alxdavids/... [medium.com]
Basically when you solve a captcha instead of it being a one-time thing it generates a cryptographic token. Then the browser hands out those tokens when requested as proof-of-work so that you don't see captchas again. The tokens are anonymous and can't be traced back to the initial w
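The issuance/redemption flow described above can be sketched with a toy RSA blind signature. This is only an illustrative simplification: real Privacy Pass uses a verifiable oblivious PRF over elliptic curves, and the key sizes here are far too small for actual use.

```python
# Toy blind-signature sketch of a Privacy Pass-style token.
# Illustrative only: real Privacy Pass uses a VOPRF over elliptic
# curves, and these key sizes are hopelessly insecure.
import secrets
from math import gcd

# Tiny demo RSA key for the token issuer (never use sizes like this).
p, q = 104729, 104723            # two known primes
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)              # issuer's private exponent

def random_unit(n: int) -> int:
    """Pick a random value in [2, n) that is coprime to n."""
    while True:
        x = secrets.randbelow(n - 2) + 2
        if gcd(x, n) == 1:
            return x

# --- Client: create a secret token and blind it before issuance ---
token = random_unit(n)
r = random_unit(n)                       # blinding factor
blinded = (token * pow(r, e, n)) % n     # issuer never sees `token`

# --- Issuer: sign the blinded value (e.g. after a solved captcha) ---
blind_sig = pow(blinded, d, n)

# --- Client: unblind, yielding a signature on the real token ---
sig = (blind_sig * pow(r, -1, n)) % n

# --- Any verifier: check with the public key alone ---
assert pow(sig, e, n) == token
```

Because the issuer only ever sees `blinded`, which is statistically independent of `token`, the redemption of a token cannot be linked back to the moment it was issued.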
Re: (Score:2)
It serves the same purpose as the third party cookies but can be a lot worse for privacy than any third party cookie is.
When something from a major company says "trust" it means that it's quite the opposite unless it can be mathematically proven.
As a user, I see no reason why there should be any cross-site tracking whatsoever.
Re: (Score:3)
In what way does it serve the same purpose as a third party cookie? It doesn't store arbitrary information or allow identifiers to be passed, it's just a proof-of-work token. It specifically does not allow for tracking, it's actually designed to prevent that.
It's an open standard, you can participate in the ratification process with the W3C. If you have specific criticisms or issues then you should bring them up because apparently everyone else missed those insights.
Re: (Score:1)
It specifically does not allow for tracking, it's actually designed to prevent that.
Yeah, right.
User u from IP i using browser b solved captcha c. Now let's tell the world + dog about it.
No way we can track this particular user and build a profile about him. No siree!
That's the difference between this and a cookie (Score:2)
> User u from IP i using browser b solved captcha c. Now let's tell the world + dog about it.
That's what third-party cookies do, so as you said there is a privacy concern with cookies.
With this zero-knowledge proof system, the site can tell only that the current visitor once solved SOME captcha from Google; it can't tell which captcha. The Google ad network openly publishes a public key. The browser shows that it solved a math problem using that published key.
Re: (Score:2)
How exactly can Google's public key be used to track my browser? The fact that I know Google's key means ... nothing, everyone knows their public key. That's why it's called public.
Re: (Score:2)
They can't tell which captcha you solved, the tokens are designed to be untraceable. There is mathematical proof of that.
Re: (Score:3)
In what way does it serve the same purpose as a third party cookie? It doesn't store arbitrary information or allow identifiers to be passed, it's just a proof-of-work token. It specifically does not allow for tracking, it's actually designed to prevent that.
I was trying to find out more information on how it will be used but still have one question about how it will be implemented. Once a site receives the token and you log in, they can tie the token to you. This would make sense because they would know it is you and be able to tailor information to you, including what ads to serve up. Separate from the ad issue, if the token is unique then there is value in sharing or selling the information to others. Once you know who is associated with the token, you can
Re: (Score:2)
>"In the end it's still about identifying a user to serve ads."
Yep, and someone will find ways to abuse it. Certainly it will be used as part of browser fingerprinting. And use will spread beyond just ads.
And [I assume] this scheme is proprietary, so what happens when other browsers don't follow this non-official "standard" and sites block you for not having it? Welcome to people allowing Google to control the web by using Chrom*.
And how can you trust a binary blob browser to do what they say it will d
Re:Trust? (Score:5, Insightful)
What am I missing?
What you're missing is any useful information in the summary or linked article about what 'trust tokens' actually are, or how they work from a privacy preserving/destroying perspective in the context of Google's 'privacy sandbox'.
Fortunately the EFF have written a pretty good summary [eff.org] of trust tokens.
I'd advise you to read that link, but the summary is:
Good:
fewer CAPTCHAs, fighting fingerprints
Good:
"privacy budget" for fingerprinting
Bad:
Conversion measurement API.
Apple's version stores 6 bits (1-64)
Google's version stores 64 bits of information (1 to 18 quintillion).
= Profiling tool.
Very bad:
Federated Learning of Cohorts. FLoC allows users' browsers to build their own, local machine learning models by sharing little bits of information at a time. A 'behavioral credit score': incomprehensible to users, and it reveals incredibly sensitive information to third parties. Trackers will use it to build profiles.
Re: (Score:2)
What am I missing?
What you're missing is any useful information in the summary or linked article about what 'trust tokens' actually are, or how they work from a privacy preserving/destroying perspective in the context of Google's 'privacy sandbox'. Fortunately the EFF have written a pretty good summary [eff.org] of trust tokens..
Thanks. The EFF explanation was very informative. Seems like old wine in a new bottle...
Re: (Score:2)
I would say the Conversion Measurement API is a good thing. At the moment sites use Javascript to do it. Having a single API makes it easier to block both the Conversion Measurement API and any Javascript trying to access 3rd party sites.
Also note that clearing site data also stops the conversion being recorded (https://github.com/WICG/conversion-measurement-api#clearing-site-data)
I agree Google's maximum 64 bits is too high. However Apple's isn't much better because it's 6 bits per ad server, so you can ju
Re: (Score:2)
so you can just have loads of subdomains to increase the number of bits.
Well spotted, hadn't thought of that. Feels like eTags [lucb1e.com] all over again.
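The subdomain trick mentioned above can be made concrete: if each (sub)domain may carry 6 bits, splitting a larger ID across k subdomains yields 6*k bits in total. The helper names below are purely hypothetical, just to show the arithmetic.

```python
# Sketch of stacking Apple's 6-bits-per-ad-server limit across
# multiple subdomains. All names here are hypothetical.

def split_id(user_id: int, chunks: int, bits_per_chunk: int = 6):
    """Split user_id into `chunks` values of 6 bits each,
    one per cooperating subdomain."""
    mask = (1 << bits_per_chunk) - 1
    return [(user_id >> (i * bits_per_chunk)) & mask
            for i in range(chunks)]

def join_id(parts, bits_per_chunk: int = 6) -> int:
    """Recombine the per-subdomain values into the original ID."""
    return sum(p << (i * bits_per_chunk) for i, p in enumerate(parts))

# 6 subdomains * 6 bits = 36 bits: enough for billions of unique IDs.
uid = 12_345_678_901                      # fits in 36 bits
parts = split_id(uid, chunks=6)
assert all(0 <= p < 64 for p in parts)    # each chunk is a legal 6-bit value
assert join_id(parts) == uid              # the tracker recovers the full ID
```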
Re: (Score:2)
Apple's version stores 6 bits (1-64)
Google's version stores 64 bits of information (1 to 18 quintillion.
= Profiling tool.
Or maybe Google doesn't subscribe to "640k should be enough for everyone" engineering. After all, you can profile every computer on the planet with less than 64 bits. FAR less.
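A back-of-envelope check of the "FAR less" claim, using a deliberately generous assumption of 20 billion devices on Earth:

```python
# How many bits does it take to give every device a unique ID?
# The 20-billion device count is an assumption for illustration.
import math

devices = 20_000_000_000
bits_needed = math.ceil(math.log2(devices))
assert bits_needed == 35                      # far less than 64

# For comparison, the capacities mentioned in the thread:
assert 2**6 == 64                             # Apple's 6 bits
assert 2**64 == 18_446_744_073_709_551_616    # Google's 64 bits, ~18 quintillion
```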
There is no repeated token. (Zero-knowledge proof) (Score:4, Informative)
> What am I missing? Will the token be unique each time it is used but still trusted?
Yep, that's what you were missing - the returned value is different every time, so the site has no way of knowing whether it's the same person from yesterday or not.
The underlying cryptography is called a zero-knowledge proof.
One part of the math gets complicated because it's based on elliptic curves, but we can describe and understand the general idea without delving into the hard math part. Elliptic curves are a way to set up a math problem that's arbitrarily hard to figure out, but easy to check, and vice versa. For now, let's just call the solutions "odd numbers".
The ad network (Google) openly publishes their public key, which is a big number. The client takes a second to compute an "odd number" that is a multiple of the Google key. That's the proof of work. The client keeps that "odd number" secret. The site wishing to check that the captcha has been done then sends over a large arbitrary number, such as "738384641...739462". The client's task is to multiply this challenge number with their secret odd number and send back the last two digits.
Based on the last two digits, the server can tell whether you successfully used an odd number in the multiplication, but can't tell which odd number you used. Each time, the last digits (what the client sends) are different; there is no way to tell what the client's secret number is, but we can tell that they do have a secret number.
It's more complicated than that, of course, and the client actually chooses two random numbers that go into the calculation, but that's the general idea. Just like if I tell you this: :)
( X * 63826281 ) % 10 = 3
You can tell X is odd, and that's all you can tell; you can never have any idea what X is, other than it must be an odd number. Similarly, with zero-knowledge proofs, the receiver can't tell what the secret is, it can only know that the sender does have a secret. Just with much harder math.
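The parity analogy above can be checked in a few lines. This is not real cryptography, just a demonstration that the verifier learns the parity of X (and nothing that pins down X itself) from the last digit:

```python
# Toy illustration of the "you learn X is odd, nothing more" analogy
# from the comment above. Not real cryptography.
K = 63826281  # the odd public multiplier from the example

def last_digit(x: int) -> int:
    """What the 'prover' reveals: only the last digit of X * K."""
    return (x * K) % 10

# Odd multiplier times odd X always yields an odd last digit;
# odd multiplier times even X always yields an even one.
assert all(last_digit(x) % 2 == 1 for x in (3, 17, 999))
assert all(last_digit(x) % 2 == 0 for x in (4, 20, 1000))

# Many different odd X values produce the SAME last digit, so the
# verifier can confirm the parity but cannot recover X itself.
collisions = {x for x in range(1, 100, 2)
              if last_digit(x) == last_digit(3)}
assert len(collisions) > 1
```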
Elliptic curves are also the state of the art in TLS (SSL) cryptography, so breaking ECC would require the person to be able to read all TLS traffic.
Re: (Score:2)
Once a site receives the token and you log in, they can tie the token to you.
They solved that too. The token is sent at a random time a day or two later. Because the timing is random, unless they have an extremely low number of click-throughs they won't be able to tie it to a specific user.
Will the token be unique each time it is used but still trusted?
Yes, the tokens can be unique. The trust comes from examining the proof of work.
Re: (Score:1)
Once a site receives the token and you log in, they can tie the token to you. This would make sense because they would know it is you and be able to tailor information to you, including what ads to serve up. Separate from the ad issue, if the token is unique then there is value in sharing or selling the information to others. Once you know who is associated with the token, you can track.
These tokens are specifically designed so that they can't be correlated from when they were issued to when they were used. That you asked for tokens, or that you redeemed tokens, is something someone could associate with your profile - if they were able to create a profile. Once you eliminate third party cookies, you would have to do this with heuristics (e.g. the browsers with this user agent at this IP range, the traffic coming with this TLS session, etc).
But the point here isn't to build a system that re
Re: (Score:2)
Through the Privacy Sandbox, Google aims to let advertisers still display personalized ads without users giving up too much of their personal data or browser history.
The personalized-ads part tells me that this is a new way of tracking, and I think the true reason it's being introduced is that it will be a lot harder to avoid being tracked.
Re: (Score:2)
Proof of what work, though? You see, if that can be used to prove you did something - like a captcha, or looked at an advert - then that's sufficient to track you and invade your privacy.
Sure, it doesn't need to store arbitrary data, it is arbitrary by itself. You just need one per advert (or campaign, or advertiser).
Re: (Score:2)
> proof of what work though
Proof that you did a math problem (based on elliptic curves) using Google's public key.
> if that can be used ot prove you did something - like a captcha, or looked at an advert - then that's sufficient to track you
That's what makes zero-knowledge proofs so interesting, and to me, just plain cool. They prove you know the answer, without revealing the answer. Here, they show your computer did the work to solve the math problem, without revealing the answer you got.
If that seem
Re: (Score:2)
Proof that *somebody* did *a* captcha at *some point*. Pretty useless since that happens millions of times a day on different sites.
Anyway they won't use a captcha in this case, that was just an example of how a similar system works. The browser will do the work internally without user intervention.
Re: (Score:2)
Thanks for sharing this information, AmiMoJo.
You're pretty smart for a fuckwit. :)
Mathematically proven - good idea!! (Score:2)
> it means that it's quite the opposite unless it can be mathematically proven.
Hey, that's a great idea. Such a good idea that it's exactly what the W3C did here.
The mathematical proof in this instance is that if you can figure out a user's secret token, you can break elliptic curve cryptography, which means you can read most of the TLS (SSL) traffic on the internet.
For more information, Google "elliptic curve zero-knowledge proof".
Re: (Score:1)
This isn't a cookie or some way of identifying the user. In fact it's designed to be the opposite of that. It's some kind of variation of or based on Privacy Pass
They can call it what they want, but it's still a form of a cookie.
Basically when you solve a captcha instead of it being a one-time thing it generates a cryptographic token
How about we stop the stupidity of using captchas? 2FA is harassment enough, captchas are just pawning off the work to someone else.
but for advertising fraud the obvious use-ca
Re: (Score:3)
It's not a cookie.
Privacy Pass was designed for sites that need a captcha to stop spam, if you have a better method then let's hear it.
If you use an ad blocker then you won't ever click an ad, so this doesn't affect you at all anyway.
Re: (Score:1)
No, not to 'stop spam.' To identify all visitors and block non-authenticated contributions. Spam is a bad thing that people get in email. Not user input that is inconvenient to a website operator.
Nice try at making the job of trying to make identifying visitors to a website easier, though.
Re: (Score:2)
How does a captcha identify you? I mean beyond what a normal web page can do.
Re: (Score:2)
What's to stop Google or some other 3rd party from taking note of who did the captcha, and then tying that info to the unique captcha 'token'?
Who can access the token?
IMO so long as advertisers and ad firms get to run code in the browser then they are not to be trusted.
Re: (Score:2)
> What's to stop Google or some other 3rd party from taking note of who did the captcha? and then tying that info to the unique captcha 'token'.
The fact that nobody but you has your secret token.
> Who can access the token?
You.
For more info, see:
https://slashdot.org/comments.... [slashdot.org]
https://slashdot.org/comments.... [slashdot.org]
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Oh they are, absolutely they are. There will be zero 3rd-party cookies in Chrome soon, "for your privacy and safety".
Google will then replace them with a different mechanism that does exactly the same thing, but one that cannot be blocked by all the ad and cookie blockers out there.
Who watches the watchmen? (Score:2)
Maybe there's a zero knowledge way to do this instead of putting google in the middle.
Re: (Score:2)
Re: (Score:1)
I'm having trouble understanding your motivating concept.
he wants to ...
killing the concept of the browser as a program
... and went straight into a bizarre irrelevantly detailed but incomplete way to describe how he would turn browsing into an operating system service, forgetting to mention any upside this idea might actually have. and back to square zero, i guess.
reminds me of my hacking/saving-the-net nights back in the day, high on katovit. i often read that stuff the next day and couldn't understand most of it myself.
Re: And I hope to still eliminate the concept of " (Score:2)
It is only "bizarre" if you haven't realized that browsers nowadays are nothing more than a duplication of an OS... on top of an OS.
And since we're coders, and not Xzibit, we generally want to "kill" duplication of code. :)
See my comment right below:
Re: (Score:2)
i don't think "code duplication" means what you think it means, competing products aren't "duplication of code".
anyway, this is quite a gross generalization, but i realize now you also want to get rid of html, all the way talking about ram cache, memory operations and access control. i'd say your plan needs a bit more work ... at the core level, but whatever floats your boat. have fun throwing around buzzwords and be safe.
Re: And I hope to still eliminate the concept of " (Score:2)
No, my motivation is primarily, to remove duplication of concepts and of code. Like is the case with the inner-platform effect, for which browsers literally are the textbook example. :)
And secondarily, to prefer the better implementation for each of these concepts, respectively. Which definitely isn't HTML5. But to be nice, I want to keep backwards compatibility. (Where inventing the HTML5 platform counts as having walked backwards.)
Having virtualization just comes natural with that.
And it makes adding RBAC-
Re: And I hope to still eliminate the concept of " (Score:2)
Ok, it may be off-topic, but I don't want to live in a world where my comment is off-topic but TFA is on-topic. ;)
both users and advertisers (Score:1)
Good for Users = Bad for Advertisers
and
Bad for Users = Good for Advertisers
The relationship is 180 degrees out of phase.
Re: (Score:3)
I disagree.
No advertisers = paid websites = bad for users = no/low users = no advertisers
A middle ground of completely anonymous "trust cookies" that provides the advertisers the info they need (not want) and lets users continue browsing a "free" internet is both good for users and advertisers.
It is the extremes of this argument that are bad for either or both.
Re: both users and advertisers (Score:3)
Why are paid websites bad for users? People pay for many things. I would argue that the proliferation of ad-based content has had a negative impact.
With the current advertising model used by Google and Facebook we are paying with our lack of privacy. TV advertising worked without the ability to deliver targeted at the individual level.
Re: (Score:2)
It used to work better when ads were served by the website industry - eg I'd go to a hifi website, and get ads for hifi stuff.
Today I get ads for whatever Google thinks they can sell to the advertisers for more money because they're "relevant", and usually that means stuff i have already bought.
I guess it might work better for Facebook and co that have no discernable target to work with, and perhaps that's why they took off so much, though I think the ability to charge a lot more for "targeted" ads is prob
Re: (Score:2)
Why are paid websites bad for users?
Because most paid websites want you to stump up the funbux with real money, rather than some anonymized cryptocurrency micropayment equivalent.
As soon as the financial transaction is linked to a real human identity (with all the browser-based antifraud measures that entails) an even more perfect method of identifying and tracking users is in place.
Re: both users and advertisers (Score:3)
That could be an outcome, but we have the tracking anyway. My thesis is that the privacy-hostile advertising approach is due to the lack of paid web content.
The balance of power has tipped so much from the consumer to the advertising-based companies (e.g. Google and Facebook) and information brokers, that government regulation is the only solution--and it pains me to say that as a business owner.
Re: (Score:3)
You obviously were not here at the beginning. No advertising and everything was fast and worked perfectly.
it wasn't a shopping mall back then.
Re: both users and advertisers (Score:2)
Sometimes I think the web itself was a mistake. That we should have stuck with Gopher, its pure text predecessor, and never allowed hypertext and HTML to arise.
Re: (Score:2)
Advertisers do not need any user info. They just want it to target ads. Targeting ads is a want, not a need.
Re: (Score:2)
it's the targeted advertising that is so bad because it allows companies (and worse, politicians) such an insanely fine degree of control that they can do things like target ads at extremists meant to appeal to them
like why have doomsday preppers gotten so big? because an industry sprung up around selling them products for doomsday prepping and that then becomes a self-reinforcing system where the true believers feel like their beliefs are "proven" because they are constantly seeing affirmation of those bel
Re: both users and advertisers (Score:2)
More things to block (Score:1)
More work for the various ad blockers, script blockers, and other filters that have become necessary to get anything reasonable out of the web.
Re: (Score:2)
Hopes...and Dreams. (Score:2)
"...The company still hopes to eliminate third-party cookies by 2022."
Google is so large in advertising it answers to no one.
And I expected to find this confirmation of feel-good-go-nowhere marketing bullshit.
When it comes to Google and privacy or trust, Nothing will Change. That's not just a campaign slogan.
You go ahead and do that (Score:2)
"If anyone here is in advertising or marketing, kill yourself" -- Bill Hicks
Who has ever clicked on a web ad? (Score:1)
Re: (Score:2)
I don't ever recall clicking on an ad that was served to me online. The percentage that generate legitimate clicks that lead to purchases must be infinitesimally small. I guess it's still enough to make it worth their while to pester the other 99.99% of us, though.
Online advertising in the 21st Century likely has far more to do with spying/tracking than selling.
I strongly doubt it's justified by actual product revenue driven by clicks, and the advertising whores will have to prove it for me to believe otherwise.
And yeah, it's going to get worse.
trust tokens (Score:2)
Let me guess, these will require a signed certificate from an advertising agency (or two or three or four), and they 'promise' not to link your identity to them.
Hrm. (Score:2)
I banished third-party cookies in 1995. What is taking them so long?