Google Caught in Comcast Traffic Filtering?
marcan writes "Comcast users are reporting 'connection reset' errors while loading Google. The problem seems to have been coming and going over the past few days, and often disappears only to return a few minutes later. Apparently the problem only affects some of Google's IPs and services. Analysis of the PCAP packet dumps reveals several injected fake RSTs, which are very similar to the ones seen coming from the Great Firewall of China [PDF]. Did Google somehow get caught up in one of Comcast's blacklists, or are the heuristics flagging Google as a file-sharer due to the heavy traffic?"
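For anyone who wants to check their own connection, a capture like the ones analyzed in TFA can be collected with tcpdump. This is only a sketch: the interface name (eth0) is an assumption, and you'd want to narrow the filter to the affected Google addresses yourself.

```shell
# Capture only TCP segments with the RST flag set, to keep the dump small.
# Replace eth0 with your actual interface; add 'and host <address>' to the
# filter to narrow it to the affected Google IPs.
tcpdump -i eth0 -w rst-dump.pcap 'tcp[tcpflags] & tcp-rst != 0'
```

An injected RST typically stands out in the dump because its TTL and sequence number don't match the rest of the flow, which is what gives the forgery away.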
I hope they get slapped (Score:3, Interesting)
Would be kind of awesome... (Score:3, Interesting)
Theory... (Score:2, Interesting)
iptables fake RST detector (Score:5, Interesting)
iptables -I INPUT -j LOG -p tcp -m tcp --tcp-flags RST RST -m conntrack --ctstate NEW,INVALID
The fake RST will probably not have a valid sequence number for the established TCP connection, so the Linux stack will flag it as part of a NEW connection, and getting a RST on a NEW connection should be a good enough alarm.
Or maybe it would also work with just the older state match:
iptables -I INPUT -j LOG -p tcp -m tcp --tcp-flags RST RST -m state --state NEW,INVALID
What do y'all think?
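One tweak, assuming the rule above works as intended: adding a log prefix makes the hits easy to pull out of the kernel log afterwards. The prefix string here is made up; use whatever you like.

```shell
# Log RSTs that conntrack doesn't associate with an established flow,
# tagged so they're easy to grep for later.
iptables -I INPUT -p tcp --tcp-flags RST RST \
  -m conntrack --ctstate NEW,INVALID \
  -j LOG --log-prefix "FAKE-RST: "

# Later, pull the suspects out of the kernel log:
dmesg | grep 'FAKE-RST:'
```

Note that a forged RST whose sequence number happens to fall inside the receive window will still be tracked as ESTABLISHED, so this won't catch an injector that guesses sequence numbers correctly.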
Google Web Accelerator Error (Score:2, Interesting)
Push it one step further... (Score:5, Interesting)
What if Google, a (justifiably) huge advocate of network neutrality, is deliberately sending the type of RST packets that imitate Comcast's faked packets, specifically to Comcast IP addresses, knowing the inevitable fallout that would result? It would make an already bad situation for Comcast far, far worse, and it's likely that the requested Senate investigation would turn into nails in the coffin for those who want preferential treatment of packets on the Internet.
For a company that does no evil, if they could pull it off, it would be absolutely diabolical. But then, it could easily be one of those "ends justify the means" kinds of situations. At any rate, all I can say is "MWAH HAH HAH HAH HAH!!!! Suckers!"
(No, I don't actually believe that's what's happening, but man, what an AWESOME plan to make network neutrality happen once and for all.)
going on for months with google maps (Score:5, Interesting)
Getting Comcast to fix it seems unlikely.
Go even further and ignore fake RST? (Score:5, Interesting)
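Following the iptables idea from earlier in the thread, dropping instead of logging is a one-word change. This is a sketch, not something I've run against Comcast's injector; in particular it assumes conntrack really does classify the injected RSTs as NEW or INVALID, and it will also eat any legitimate RST that arrives out of state.

```shell
# Silently discard RSTs that don't belong to a tracked, established
# connection. Legitimate in-window RSTs still get through.
iptables -I INPUT -p tcp --tcp-flags RST RST \
  -m conntrack --ctstate NEW,INVALID \
  -j DROP
```

The TCP stack then never sees the forged reset, so the real connection just keeps going; both endpoints retransmit as if the packet had been lost.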
time for IPSec? (Score:3, Interesting)
Now, whether MS would be cooperative in that, I dunno... I know XP supports it, but not too much about configuration specifics.
Re:unfair competition (Score:5, Interesting)
That's an interesting take on it. As far as I'm aware, no DSL provider in the United States is doing anything like this. It certainly seems to be the case in the wireless world, though, where carriers remove or block features that might compete with their own content offerings.
One wonders what the solution to this is. Prohibit someone from being in the content business AND the delivery business at the same time? They'd fight you tooth and nail on that -- and you'd have the "free market" types after you as well.
In any case I think they will shoot themselves in the foot in the long run. What happens when all P2P traffic is encrypted and looks like any other encrypted protocol (ssh, ssl, etc)? At that point you may be able to identify WHICH subscriber is using p2p (bittorrent stands out like a sore thumb for the sheer volume of connections it establishes) but how will you identify which individual packet is p2p and shape it? Or will they just start sending random RST packets to ALL your connections, including (as TFA suggests) Google?
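As a rough illustration of how crude that per-subscriber heuristic can be: on Linux, flagging a host purely by connection count is a one-liner with the connlimit match. The threshold of 100 is an arbitrary assumption; the point is that the rule knows nothing about what the traffic actually is, so anything chatty gets caught.

```shell
# Log any source holding more than 100 concurrent TCP connections
# through this box -- BitTorrent-like behavior, but also plenty of
# perfectly innocent things.
iptables -A FORWARD -p tcp --syn \
  -m connlimit --connlimit-above 100 \
  -j LOG --log-prefix "HEAVY-TALKER: "
```

Which is exactly why, once the payload is encrypted, connection-count heuristics are about all an ISP has left, and why they misfire.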
If bandwidth IS the issue then in the long run they only have two options. Invest in some upgrades or stop selling "unlimited" service. Personally I'd take the best of both worlds. I'd offer a "premium" package aimed at p2p users (no monthly bandwidth limit and/or higher speeds) and use the money from that to expand my network.
Re:Not me... (Score:5, Interesting)
Comcast shenanigans (Score:4, Interesting)
I've done bandwidth tests and my upstream STARTS at a nice 1.5 Mb/s and then 15 seconds later drops to 30 Kb/s EVERY TIME.
What this does is give false results when people run speed tests. When you do your test you get great results (in my case 15 Mb/s downstream and almost 2 Mb/s upstream) for the first 15 or 20 seconds. Then after that it just BLOWS.
Add level3 to the list vs. Fark? (Score:1, Interesting)
Google home page, but not services (Score:2, Interesting)
I was working from home last week, so I was using my Comcast connection extensively every day. The problems with Google connection happened several times a day. Intermittently, my attempts to connect to www.google.com failed for 5-10 min at a time. Oddly enough, going directly to Google services (Gmail, Notebook, Bookmarks, etc.) worked just fine.
Re:follow the money (Score:3, Interesting)
Re:Not me... (Score:4, Interesting)
Thanks for adding anecdotal noise that adds absolutely nothing to the discussion.
Gee, I think that anecdotal evidence is interesting, especially if you're trying to work out what rules Comcast uses to decide which packets to block. Questions like: "Is it the whole network or just portions (I suspect just portions)?" or "Is it all the time or just during peak demand?" Please try to be civil. If a comment isn't valuable, it won't be modded up; if it is, it will.
Re:Not me... (Score:3, Interesting)
Re:iptables fake RST detector (Score:2, Interesting)
Re:Not me... (Score:3, Interesting)
That's the right way to handle traffic on the net: lower the priority of packets that aren't delay-sensitive and promote those that are. If the lines are at their throughput limit, this is the way to go, and done right it won't have any really bad effect on users.
Intentionally dropping data packets is much more evil, since it interferes with functionality and ultimately drives network traffic up, not down: many more packets have to be sent and re-sent to complete the communication. Bad network conditions also spur power users to tweak their network settings to be more aggressive. And if conditions get really bad, there's a risk that P2P software developers will work around it by sending redundant data, driving bandwidth use even higher.
But it still has to be figured out whether this is really intentional, or whether the ISP is running equipment with bugs that cause this behavior. Since Google is one of the most frequently visited sites, it could be something like a buffer overflow in a router. And if there's a company policy standardizing on a certain vendor and a certain configuration of that equipment, the problem has a tendency to spread.
Anyway, one of the interesting things reported is that accessing Google by IP address works fine, but not through DNS resolution. That makes me suspect the problem is tied to a particular server, or to a DNS issue: perhaps an intermediate DNS server can't handle the load balancing and instead directs all traffic to a single server, which ultimately gets swamped (maybe not the server itself, but the link to it).
And ultimately, the possibilities range from evil to stupid. Just the kind of story you read in Dilbert [dilbert.com].