Mathematical Analysis of Gnutella 332
jrp2 sent in a paper written by one of Napster's founding engineers. It is
a mathematical evaluation of Gnutella discussing
why the network won't be able to scale to any reasonable size. I
have been impressed with Gnutella in the past, and have wondered along
these same lines myself.
This is old news. (Score:3, Offtopic)
Re:This is old news. (Score:4, Informative)
Re:This is old news. (Score:2, Insightful)
I have never had much luck using Gnutella; the main problem seems to be the lack of parallel download. If you have 20 users who all have the same file you want, it is dismally painful to have to pick just one.
FastTrack, on the other hand (Kazaa has a Linux client that is IMHO better than the bloated Windows offering), works very well in this regard. Choose a file and the client downloads it in parallel from as many clients as it can, which makes for a much quicker transfer.
Totally NOT true!!! (Score:5, Informative)
My average download speeds on Xolox are around 160Mbs. Of course, I am using the ever-so-crappy AT&T cable modem service... so other people on faster DSL lines will most likely see faster downloads.
Next thing you are going to tell me is that Windows is better than Linux because Linux doesn't have any good GUIs or desktop environments. Yeah, let's just ignore everything that's out there right now.
Not only that, but Limewire also supports multisource, segmented, or swarmed downloading. Limewire has only recently gained this functionality, though, while Xolox has had it for the past year.
Oh, and Gnutella is free as in beer and as in speech.
Re:Totally NOT true!!! (Score:2)
Too late... (Score:3, Informative)
Will you pay attention at the SOURCE? (Score:2)
Just when they launch their pay service. No, I assure you, his/her analysis is totally and utterly impartial. Excuse me while I ask Bill Gates about the scalability of the Linux kernel.
Re:This is old news. (Score:2, Funny)
Or maybe only I would find that funny.
Heh (Score:2)
Hitchhiker's Guide, Part II. (Score:5, Funny)
Napster: Sucks ass.
Gnutella: Doesn't scale.
(Mod my ass as Flamebait for this, but didn't everyone know about Gnutella's scaling problems, and for-pay Napster sucking ass, based on Slashdot stories months and weeks before today?)
Re:Hitchhiker's Guide, Part II. (Score:4, Offtopic)
you must have the new edition.
Re:Hitchhiker's Guide, Part II. (Score:2)
With the state of the world being as it is now, I'd say this is an outdated issue... Maybe "partially lethal" would be more appropriate.
Re:Hitchhiker's Guide, Part II. (Score:2)
Although "mostly harmless, mostly suicidal" would fit the bill quite well.
Re:Hitchhiker's Guide, Part II. (Score:3, Offtopic)
That's just the pretzels.
Re:Hitchhiker's Guide, Part II. (Score:2)
Yes, and in the latest version there is a lengthy explanation of why the previous guide was woefully underpowered, and how the next version is going to have a complete real-time model of the entire universe with a customized Total Perspective Vortex (TM) designed to match the buyer. They are just waiting for those dime-sized hard drives that hold 10 googolplex bytes. They are expected to hit the market Real Soon Now.
However, we have to warn you that in testing there have been problems with the UPS (Universal Positioning System) which is required to make sure that the real-time universe model doesn't try to recursively model all the other real-time universe models in the universe. Testers have confirmed that when this happens, the electronics inside tend to blow up.
old news (Score:4, Informative)
mathamatical evaluation (Score:2, Offtopic)
Someone has not passed his grammatical evaluations at school
Re:mathamatical evaluation (Score:2)
Re:mathamatical evaluation (Score:2)
jrp2 sent in a paper written by one of Napster's founding engineers. It is a mathamatical evaluation of Gnutella discussing why the network won't be able to scale up to any reasonable size. I have been impressed with Gnutella in the past, and have wondered along these same lines in the past.
It was fixed shortly after my comment.
Ancient news. (Score:5, Informative)
I mean, I know that none of us - including our fine moderators - are perfect, but are they at least paying attention?
OK,
- B
Re:Ancient news. (Score:2)
Even worse! Inconsistent math! (Score:5, Informative)
For example, in the very last table (bandwidth rates for 10qps) he says the bandwidth generated will be 8GB/s, which aligns with N=8, T=7. Were you to use the N and T values from the beginning, this would be 2.4MB/s, which is off by a factor of 3143 and one third.
Going back to Joe User's Grateful Dead query, it only generates ~250KB, not 800MB.
Remember, very few people are going to modify their TTL or open connections. This ``white paper'' grossly misstates the amount of bandwidth Gnutella generates, and it seems to be an anti-Gnutella paper designed to mislead rather than an honest and fair judgment.
Re:Even worse! Inconsistent math! (Score:2, Informative)
That explains the different numbers.
However, I do agree that a couple of numbers seem to be plucked from mid-air, but the argument and maths seem fine.
Re:Even worse! Inconsistent math! (Score:2)
Look at his first chart and the "N = 2" case. Why is this only being incremented by 2 each time (2, 4, 6, 8...)? Shouldn't it be multiplied by 2 each time (2, 4, 8, 16...) just as "N = 3" is multiplied by 3, "N = 4" is multiplied by 4, etc?
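For what it's worth, the N=2 column is probably right: a node with N open connections forwards a query to its N-1 *other* neighbours, so the branching factor is N-1, not N. A quick sketch of the usual reachable-hosts formula (my reading of the standard flood model, not something the paper spells out):

```python
# Reachable users in a Gnutella-style flood, assuming every node keeps
# N open connections and forwards a query out its N-1 other links.
def reachable(n, ttl):
    total = 0
    new_hosts = n            # hop 1: your own N neighbours
    for hop in range(ttl):
        total += new_hosts
        new_hosts *= n - 1   # each newly reached host forwards to N-1 more
    return total

# N=2 means a branching factor of N-1 = 1, so every hop adds exactly
# 2 hosts: the chart's 2, 4, 6, 8 is a cumulative count, not a typo.
print([reachable(2, t) for t in (1, 2, 3, 4)])  # [2, 4, 6, 8]
print([reachable(3, t) for t in (1, 2, 3, 4)])  # [3, 9, 21, 45]
```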
Re:Even worse! Inconsistent math! (Score:2, Informative)
Re:Even worse! Inconsistent math! (Score:2, Interesting)
Re:Ancient news. (Score:3, Funny)
Re:Ancient news. (Score:2)
Just be glad (Score:3, Funny)
Pay Napster beta testers allowed to speak (Score:3, Redundant)
.
Oh come on! (Score:3, Insightful)
Re:Oh come on! (Score:2)
What the hell? (Score:5, Funny)
The Logarithmic value of the messages exchanged ! (Score:5, Interesting)
Re:The Logarithmic value of the messages exchanged (Score:5, Informative)
The only solution is to structure the network by using "super clients" or "servants" or "super nodes", call them what you want; the latter is what KaZaA and Morpheus have accomplished...
This is exactly the point. This is the only way to properly distribute queries, as anyone who has set up a multi-homed ISP knows. It works on the same principle as BGP routing: there are routers (super-nodes, or whatever) that have a specific number (an ASN, or in P2P, the supernode address), but there are thousands of computers (casual modem users, in P2P terms) on the internet that these routers have information about. If BGP routing didn't work this way, nothing would go anywhere. By having several nodes give out information on who has what and how to get it, while the majority of users just download and share their own info rather than passing along info for others, things work much more smoothly. And with a correct implementation, everyone could have a route to everyone's file list at minimal bandwidth usage.
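To make the two-level idea concrete, here is a toy sketch (all class and variable names are mine, not any real client's API): leaves register their file lists with a supernode, and a search only ever touches the supernodes.

```python
# Toy two-level "supernode" indexing: queries are answered from the
# supernode tier, never flooded to every leaf.
class Supernode:
    def __init__(self):
        self.index = {}                  # filename -> set of leaf addresses

    def register(self, leaf, files):
        for f in files:
            self.index.setdefault(f, set()).add(leaf)

    def query(self, filename):
        return self.index.get(filename, set())

def search(supernodes, filename):
    # A query touches len(supernodes) machines instead of every leaf.
    hits = set()
    for sn in supernodes:
        hits |= sn.query(filename)
    return hits

sn1, sn2 = Supernode(), Supernode()
sn1.register("leaf-a", ["song.mp3", "other.mp3"])
sn2.register("leaf-b", ["song.mp3"])
print(sorted(search([sn1, sn2], "song.mp3")))  # ['leaf-a', 'leaf-b']
```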
Re:The Logarithmic value of the messages exchanged (Score:3, Interesting)
Following an election, the supernodes update the clients as to the lookup machines. I suppose you could even have it where if all the supernodes were shut down that an entirely new election process takes place creating a new set of supernodes. Kind of like having a DNS server setup where any machine can act as one of the root servers based on a criteria based election by those machines doing a lookup.
Way too much for my wee brain to work out all the details of. Sounds good in theory, anyway.
Re:The Logarithmic value of the messages exchanged (Score:3, Interesting)
It could be fixed, and made powerful and self-scaling.
Re:The Logarithmic value of the messages exchanged (Score:2, Informative)
Re:The Logarithmic value of the messages exchanged (Score:5, Informative)
That's not logarithmic. If every client node connects to a "super node," and every "super node" connects to every other "super node," then what you have is a two-level tree. Growth at each level is O(sqrt(n)), not logarithmic.
Chord [mit.edu], a p2p research project from MIT, is truly logarithmic. Go read their SIGCOMM'01 paper [mit.edu] for an explanation of how their system works.
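For a rough intuition of why Chord is logarithmic: with ideal finger tables on a 2^m identifier ring, each hop at least halves the remaining distance to the key. This is a simplified sketch (every identifier occupied, no joins or failures), not the real Chord protocol:

```python
# Simplified Chord-style lookup: at each hop, jump by the largest
# power-of-two step that doesn't overshoot the key. One hop per set
# bit in the distance, so never more than m hops on a 2**m ring.
def chord_hops(start, key, m):
    ring = 2 ** m
    hops, cur = 0, start
    while cur != key:
        dist = (key - cur) % ring
        step = 1
        while step * 2 <= dist:   # largest finger not overshooting
            step *= 2
        cur = (cur + step) % ring
        hops += 1
    return hops

# Even on a ring of 2**20 (about a million identifiers), the
# worst-case key is only 20 hops away.
print(chord_hops(0, 2**20 - 1, 20))  # 20
```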
--Patrick
Supernoding's other advantage (Score:3, Insightful)
Of course, building an indexing system that scales arbitrarily is difficult, and building an indexing system that recognizes local topologies is also critical. A typical problem universities had with Napster was that if N people at the school wanted a given tune, most of them would be likely to fetch it across the school's limited outside bandwidth, instead of fetching it from other machines on the fast LAN after the first one or two had downloaded it across the limited link. Napster was able to reduce this problem, at least at some schools, because a centralized indexing service can enforce more locality by making it easiest for people to find nearby peers. A decentralized system *may* be able to accomplish this, but it's a lot harder.
Re:The Logarithmic value of the messages exchanged (Score:2, Informative)
The next step is to add more sophisticated routing protocols between ultrapeers. Many of the algorithms mentioned elsewhere in this post (Chord, CAN, etc.) are contenders for that, as is LimeWire's home-grown query-routing proposal [limewire.com].
Christopher Rohrs
LimeWire
Growth of network relates to negative attention (Score:3, Interesting)
Look at ICQ. It was fairly decent as an instant messaging client until the numbers hit one million or so and then it needed to control everything under the sun and companies could spam through it. File sharing happens through it all the time too.
I don't care if Gnutella cannot scale to the levels that Napster saw. Smaller is better in my opinion!
Re:Growth of network relates to negative attention (Score:2)
Smaller is better, so just one user to search must be best of all! And the download rates are incredible!
Re:Growth of network relates to negative attention (Score:2, Interesting)
Having developed the first host-caching application for Gnutella, I can say that the author never fully understood how the network worked.
His equations may be accurate based on how he thought requests and replies propagated through the network, but he assumed every request had a reply.
It is true that the bandwidth overhead was large, but I rarely used more than 15KB/s during the times when there were 4000+ clients connected. He says that it might not be possible to reach all 4000 people, but in order for me to know how many users were out there, they all had to reply to my ping, and were thus searchable.
Finally, the very nature of the network doesn't lend itself to protocol updates at all. The protocol was extremely limited, but once it caught on, not much could be done about updating it short of starting an entirely new protocol. You couldn't just shut it down, and that's the major problem.
Many proposals were written on how to implement a system without the gnutella limitations, and you are seeing them in many different implementations.
Re:Growth of network relates to negative attention (Score:2)
Unless you want obscure stuff. If I hear that XYZ indie punk band has a great album (Self, or The Proms, for example), I want to hear what they sound like before I buy it, because I don't want to order something from CDNow or whatever and pay $20/CD and $7 shipping to get a crappy CD (The Juliana Theory - Emotion Is Dead: thanks for nothing). But I do want to support indie music if it doesn't suck. So for me, it's Morpheus, old-skool Napster, Gnutella, whatever; as long as it is big, I'll check it out.
~z
Re:Growth of network relates to negative attention (Score:2)
I listen to punk music and have always enjoyed the openness of the companies that sell the music for non-fans and fans alike to listen before buying. Most indie labels have inexpensive samplers or online mp3 download segments from artists. I listen to many obscure punk bands, and almost always there was a venue to hear them before buying. Toxic shock had the Shock Report with floppy 7" recording samplers. Notes in Thrasher Magazine [thrashermagazine.com] was an excellent review resource. Flipside had samplers. Nowadays you have The Fat Club [fatwreck.com] or Punk-O-Rama [epitaph.com]. Cheap CD offerings where you get about 10 to 15 different bands showcased. Enjoy!
Re:Growth of network relates to negative attention (Score:2)
And you're right, there is absolutely no comparison between a CD and a show. If I can, I tend to buy CDs at shows. It's like LTJ says: "Well I really don't know if it matters at all, but we try to keep our prices low for records and our shows."
20/20 Hindsight (Score:4, Insightful)
It's sort of like calculating the maximum hull speed for steam ships crossing the Atlantic Ocean and saying there is a theoretical maximum speed to intercontinental travel. Then someone comes along and invents airplanes.
Gnutella will mutate and evolve, and will at some point in the future be replaced by something better when it starts to fall over.
The demand for Ms. Spears and the Backstreet Boys is just too damn strong for things to stand still.
I enjoyed that this post was right next to the announcement that the new-and-not-so-improved preview of Napster was out.
Re:20/20 Hindsight (Score:3, Insightful)
There are well known workable epidemic algorithms suitable for P2P that have been around for a long time. They generally provide statistical guarantees of success in return for scalable use of bandwidth.
Epidemic distributed systems should not be attempted by people who do not grok exponential growth. Planning for somebody wiser to innovate around your mess is not responsible.
20/20 Hindsight?! (Score:2)
Re:20/20 Hindsight (Score:2)
More to the point, it's like doing that TODAY, when airplanes already exist. Nobody is currently advocating flat P2P systems like the old Gnutella over supernode systems like FastTrack or extended Gnutella.
Of course, this paper was written over a year ago, but it shouldn't be news to anyone now.
gnutella (Score:3, Interesting)
Re:gnutella (Score:3, Insightful)
It's depressing to think that a lot of people put their computers on a network without even understanding basic concepts like this. (It's even more depressing to call tech support at an ISP and realize you understand more about the problem than they do, but now I'm rambling.)
Re:gnutella (Score:2)
It was quite simple. Search Gnutella for text files containing the @ sign.
But one quick question: Would a Linux gnutella program let me share
But still: search for the @! There are plenty of cookies on gnutella for download. The funny thing though is that most users seem to be on dial-up.
Re:gnutella (Score:2)
On my Debian box (and my RedHat partition) just about everything in /etc is world-readable except for /etc/shadow. Not that people are really gonna be interested in a copy of my /etc/init.d/apache file anyway... It's /home you really have to worry about.
Gnutella's spawn (Score:5, Informative)
We had Napster at one extreme, Gnutella at the other, and in the middle are a number of partially centralized systems with super peers like FastTrack, such as:
Open FT [sourceforge.net]
JXTA Search [jxta.org]
GNet [purdue.edu]
NEShare [thecodefactory.org]
and many others...
Then there are the alternative projects that use an entirely different mechanism. For example, social discovery as implemented in:
NeuroGrid [neurogrid.net]
ALPINE [cubicmetercrystal.com]
Or distributed keyword hash indexes like:
Chord [mit.edu]
Circle [monash.edu.au]
GISP [jxta.org]
JXTA Distributed Indexing [jxta.org]
And many others as well.
The coming year(s) will see a lot of maturity in these areas, and searching large peer networks will become ever more efficient over time. Gnutella showed us the possibilities of a fully decentralized model, and refinements of its underlying architecture can produce vastly better solutions.
2002 will be an interesting year for peer networking applications...
giFT enters network testing (Score:3, Informative)
Slashdotted? (Score:2, Informative)
Maybe you should be trying the Google cache [google.com]!
OT: Quick! Earn that Karma! (Score:3, Offtopic)
Choice (Score:2, Insightful)
If everyone was willing to share their files, then there would be no such problem with P2P programs.
Re:Choice (Score:2)
If this were not true, essentially, people would be offering files that nobody wanted, and that would just be stupid.
Re:Choice (Score:2)
Logically, if one person is downloading then another person (peer to peer) is uploading.
How do people download more than upload?
Re:Choice (Score:2)
If more and more people use the network only to receive files, and do not make the files they receive available to others, then in the end the people who were making their files available will no longer be able to, or they will have to severely limit the bandwidth going out to those who are taking the files.
The only way to avoid this would be to have nodes that are there simply to retrieve as many good quality files as possible and offer them up for download. But then, it's not really P2P anymore, is it?
Re:Choice (Score:2)
Problem is, though, the cable company caps me to a 128 kilobit per sec upstream, so there's an imbalance there that I can't do anything about.
But I do what I can!
Re:Choice (Score:2, Informative)
While you're downloading a file, it's immediately made available for upload from you. It uses resume-style downloading to fetch the parts of the file you want from multiple sources, some of which don't have the whole file yet either.
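A minimal sketch of how such swarmed downloading can divide the work (illustrative only; real clients also handle resume, source failure, and partial sources):

```python
# Split a file into fixed-size segments and assign each segment to a
# different source, round-robin. Each (source, start, end) triple is
# one ranged request a client could issue in parallel.
def plan_segments(file_size, segment_size, sources):
    plan = []
    offset, i = 0, 0
    while offset < file_size:
        end = min(offset + segment_size, file_size)
        plan.append((sources[i % len(sources)], offset, end))
        offset, i = end, i + 1
    return plan

# A 1 MB file fetched from three peers in 256 KB chunks:
for src, start, end in plan_segments(1 << 20, 256 << 10,
                                     ["peer-a", "peer-b", "peer-c"]):
    print(src, start, end)
```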
Re:Choice (Score:2, Informative)
I was like this for about a week before I realized why I wasn't getting any uploads. I had to open up port 6346 on my home network (Linksys router). Also, Napshare lets me "force local IP" to my firewall/external IP (assigned by RoadRunner). The Linksys router does port forwarding on outside requests, so only one computer on my home network can share on that port.
This thread reminded me that RoadRunner had expired my old IP address and assigned me another, and I had forgotten to update my Gnutella client to reflect the new IP. So for the past few weeks or so, I had been one of the "non-sharing" people by simple oversight.
I doubt most LimeWire/BearShare users know any of this stuff. And when running a Gnutella client from work, people couldn't do this even if they knew about it and wanted to.
Correction, Taco (Score:2)
I think we could add:
"... but since I was too busy doodling and writing dirty, hackish perl when I was in school, I'm glad someone else did the actual math."
Why Napster isn't P2P. No, Really. (Score:4, Insightful)
In theory, a true Peer-to-Peer file transfer network would exist in a decentralized fashion where you would never have to query a central host for routing or file availability. Napster requires you to route through one of the Napster servers for information. Even introducing Napigator still doesn't alter the Napster model because all it does is allow you to route through a different central host. It seems that all Napster did was integrate a search engine and nameserving into one element (coming from only one provider).
This isn't to knock the accomplishments of Napster, it was certainly an original idea to incorporate these areas and provide a GUI access client to boot. But it is apparent that Napster developers weren't all that revolutionary in their thinking either.
The suggestion of true P2P is revolutionary, and the perfect implementation (should it ever arrive) will also be revolutionary. But the Napster model is no different from everyone providing their MP3 list to a website that maintains a list of links on where to download MP3s. Napster simply automated this process. Napster is no more P2P than any TCP/IP connection not operated through a proxy.
Is http P2P? I'm talking directly to another system, and there is no moderator/mediator. Normally, I have to find out about that system from a 3rd party (e.g. a search engine) -- just like someone obtains a list of links from Napster.
True, I'm being no better than the author of the original article; because I too am offering no solutions. I'm just holding out hope for true P2P in the future.
Re:Why Napster isn't P2P. No, Really. (Score:2)
True Peer-to-Peer networking is two "golden shower" porn stars exchanging business cards.
Is there a limit to the gnutella horizon? (Score:3, Interesting)
Because of this basic and simple observation, I do not foresee Gnutella dying anytime soon for scalability reasons alone (copy-protection issues are another story, however).
Again, let me stress that my observation here is based on the strong assumption that the "search horizon" is "reasonably sized", so you never have to search the whole Gnutella network.
Re:Is there a limit to the gnutella horizon? (Score:2)
Re:Is there a limit to the gnutella horizon? (Score:2)
His chart called "reachable users" describes how the horizon grows as T or N change.
I think that nowadays there are normally over 1000 people in your horizon, possibly up to 8000.
The other thing about the article is that it was written before clients started caching replies and that changes your horizon around quite a bit.
Quite frankly caching the replies probably helps but the Gnutella protocol is still awful.
I'm more impressed with Morpheus as a decentralized file sharing network. There is an open source Morpheus-compatible client called "giFT."
The weird thing is that the only way to get documentation about how Morpheus works is to download the source tarball for giFT and poke around in the READMEs. There is no other public documentation for it anywhere on the net.
Basically it sets up tons of little mini servers that each index songs for up to around 300 people. Clients have a list of these servers and query them to find files. If you want a "horizon" of 6000 computers, then you only have to make 20 or 30 queries. In Gnutella (without caching) the same horizon would take 6000 queries. No one really knows what it would be with caching, and it changes depending on whether it's a popular query or not.
Actually, Gnutella in that case is much worse than just 6000 queries, because many computers have no songs shared and are still searched, whereas in Morpheus computers that don't share songs are not indexed. And another thing that makes Gnutella worse is that I think the replies are relayed multiple times instead of just once.
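The arithmetic behind the 20-30 figure, assuming each mini server indexes about 300 peers as described:

```python
# Querying index servers instead of flooding peers: each server
# answers for ~300 peers, so a 6000-peer horizon needs only
# 6000 / 300 = 20 direct queries, versus one message per peer
# in an uncached Gnutella flood.
horizon = 6000
peers_per_server = 300
server_queries = horizon // peers_per_server
print(server_queries)  # 20
```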
I'm not a gift developer or user myself... But I would say it was a far better way to go than Gnutella.
This is well and truly FUD. (Score:5, Insightful)
At the end of the paper, the author coughs up the big scary number of 63GBps of traffic in the Gnutella network when the nodes each have 8 connections and are using a TTL of 8. Wow! That's a lot of traffic. That certainly isn't scaling! Well, what the author never points out is that, by his own math, the network has 7,686,400 users at this point! When we divide up the total traffic among all of those network links, we get a different view. If you do the math you discover that this is a whopping 72Kbps! Oh no! It's the end of the world! Well, no, it's not. True, it's more than a modem can handle. But it's well within the reach of most cable modem connections. Given that your computer is being expected to handle the search requests of over 7 million other people, it's not that much traffic.
Don't get me wrong, I agree that Gnutella doesn't scale all that well. But this paper is just plain FUD. The only number that really matters to users is the total bandwidth load on their pipe. By carefully avoiding that number, which isn't very big and scary at all, the author is clearly lying by omission. Given all of the real problems networks like Gnutella encounter, it isn't interesting to read this sort of drivel. Why don't we drag out Mathematica and model how much bandwidth Napster wastes by transmitting the names of all the files being shared, even though most of them will never get searched for. Hmm, let's assume 7,000,000 users. Let's assume that they each share 1000 files with an average filename length of 32 characters. Why, that's 224 gigabytes of data, and we haven't even done any searches yet! Clearly, Napster doesn't scale. Ugh. This guy might know how to use Mathematica, but I still suspect he worked in the Marketing department. With the same guys who will tell you about their 200Mbps fast ethernet.
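The two back-of-the-envelope numbers in this comment check out roughly (exact values depend on whether GB means 10^9 or 2^30 bytes):

```python
# Per-user share of the paper's scary aggregate figure:
users = 7_686_400
total_bps = 63e9 * 8               # 63 GB/s of aggregate traffic, in bits
per_user_kbps = total_bps / users / 1e3
print(round(per_user_kbps))        # ~66 Kbps; with 2**30-byte GB it's ~70

# Napster's file-list overhead, as estimated above:
napster_bytes = 7_000_000 * 1000 * 32
print(napster_bytes / 1e9)         # 224.0 GB
```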
And there's room for improvement, no less... (Score:5, Informative)
Not to mention there's still room for improvement in the protocol itself -- there's no reason a proxy couldn't cache a list of all files shared by a connected client, then answer queries directly, NEVER forwarding a query to a client. (Ultimately, as people ran proxies like this more and more, you'd end up with proxies talking directly to each other.) The ultimate Gnutella proxy would cache commonly requested files and make them available over a bigger pipe.
No money in it, but for the Gnutella enthusiast, I could see them running this kind of thing from work off a QA box, for example, or from their support desk at an ISP. =)
Re:What uncertainty? (Score:3)
You're either an idiot, a karma whore, or both. Go ahead, download BearShare (or any Gnutella-based client) and look at the number of query responses you get to common queries -- oh look, most of them are RIGHT. Sure, you get some dead hits, but most of the time that's someone behind a firewall who doesn't have their client set up to work behind said firewall (or can't, because of a non-static IP address).
Saying it's "historical" is just flamebait though, but I guess I answered the call like an idiot. As the original poster said, it's all FUD-mongering. As your post is.
Re:What uncertainty? (Score:3)
Yes there are chances to optimize the protocol, but it's all fairly basic-- Kazaa's technology isn't that far removed from Gnutella. Supernodes (which is basically what I described above, a 'proxy/cache') are the next logical step to the Gnutella spec.
Re:This is well and truly FUD. (Score:2)
There are no exceptions, no corner cases, nothing unusual. If you said he has a hard time figuring out when to use commas and when to use semi-colons, that I could relate to, or when to use '=' and ':=', or especially how to write pattern substitution rules that always do what you think they do, but the brackets are really, really straightforward.
who is the author of this paper? (Score:3, Interesting)
I have problems with this 'analysis'... (Score:3, Insightful)
First, if I understand what he's driving at correctly, the bandwidth numbers he gives are for the Gnutella network as a whole, not for each and every client connected to it. This is equivalent to saying "average HTTP usage generates n amount of bandwidth over the Internet", or "DNS traffic will consume x number of bytes on a given network". So what? Would anyone really be shocked if 7,000,000 web browsers generated HTTP and DNS traffic in the gigabyte range? Doesn't bother me. That might be an interesting number to your ISP, but as a user of Gnutella I couldn't care less how much total bandwidth my query for 'The Grateful Dead' takes up. It sure sounds like a lot of traffic, but it's distributed over the entire Gnutella network. As long as the traffic isn't high enough to overwhelm individual clients, I don't see the problem. These numbers just don't seem to be that important, or am I missing something here?
The other item the author fails to consider (and I'm going to guess that, as one of the engineers behind Napster, he probably knows better) is client-side optimizations like search caching and differentiation of the clients. The caching argument goes like this:
If client A sends out a query to client C looking for 'Grateful Dead', and client B then sends a very similar request to client C, say 'The Grateful Dead', even basic caching would prevent client C from sending this request back out to the same hosts that responded to the first request made by client A. Again, am I missing something important here? I'm not sure that caching would reduce the traffic dramatically, but I'd be willing to bet that it would improve matters significantly, especially for clients that remained 'up' for long periods of time (which is in itself another important factor that seems to be missing here). This just seems so obvious.
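A sketch of what even basic caching at client C could look like -- normalising queries so 'Grateful Dead' and 'The Grateful Dead' share one cache entry (illustrative names and stop-word list, not any real client's code):

```python
# Normalise a query (lowercase, drop stop words, sort terms) so that
# near-identical queries hit the same cache entry and are not
# re-flooded to the network.
STOP_WORDS = {"the", "a", "an"}

def normalise(query):
    terms = [t for t in query.lower().split() if t not in STOP_WORDS]
    return tuple(sorted(terms))

cache = {}

def search(query, forward):
    key = normalise(query)
    if key not in cache:
        cache[key] = forward(query)   # only cache misses hit the network
    return cache[key]

hits = search("Grateful Dead", lambda q: ["peer-a"])
same = search("The Grateful Dead", lambda q: ["never-called"])
print(same)  # ['peer-a'] -- served from cache, no second flood
```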
There are bunches of optimizations like this that can be done in the Gnutella application to reduce the overall bandwidth. And this leads to the other half of my point, i.e. the author assumes that each and every client will be functionally the same. They aren't. The Gnutella FAQ tells you to reduce your N if you're on a slow connection. This means that not all Gnutella clients are exactly the same now anyway; some have higher N's than others. The FastTrack guys (i.e. KaZaA, Morpheus, et al.) have already shown that it makes sense from an efficiency standpoint to have some clients do more than others via 'supernodes' and the like. This seems like a fairly obvious development on the client side, and I can't for the life of me understand why it isn't addressed. I mean, really, isn't the 'client-client' vs. 'client-server' approach the underlying assumption behind why Napster will scale and Gnutella won't?
I hate to say it, but it looks to me like the author is showing just a little bias here. Hey, I suppose that if I worked on a competing system I'd trash-talk the competition too, but I think his time would be better spent making the Napster approach work better. No matter how you slice it or dice it, Napster is pretty much dead while the Gnutella network is still alive and kicking. Maybe it will never scale to 'billions and billions' of hosts, but at least it's still around and going strong.
A possible solution to the scaling problem (Score:2, Interesting)
Since I believe IRC scales pretty well, why not build the Gnutella network like that?
Not only is this old, it is outdated (Score:3, Informative)
Come on, Slashdot, it's 2002, not 2000. It looks pretty bad to accept this article right after the Napster one. Does Slashdot or VA own a stake in Napster or something?
New algorithm needed at the connect phase? (Score:2, Interesting)
Gnutella clients can sometimes have more "potential" connections out to the network than MAX_CONNECT (because they open, say five, expecting two and get four). If so, why not do a traceroute to each of the hosts and crop out the one that is the most hops away? Iterate cropping until there are MAX_CONNECT active connections.
This would tend to favor a network that closely reflects the underlying structure of the internet, thus reducing any earth-shattering impact on the backbone.
To further force a short-inet-distance perhaps clients should (optionally) not accept connections from far-flung hosts?
Additionally, clients should count already-seen packets (which they are supposed to drop) against the goodness of a given link, thus reducing routing loops in the network and forcing it to flatten out instead of clumping together.
These might allow clients to have a higher TTL without increasing net net (har har) bandwidth - less duplicated, circularly-routed, lengthy-path, etc, data.
I suspect (but have not checked) that some clients do the last of these (routing-loop prevention), but I know of none doing the others.
I will get around to coding this soon, unless somebody can tell me it's a stupid idea (for a good reason).
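The cropping step above might look something like this (hop counts would come from traceroute in practice; everything here is illustrative):

```python
# Keep the MAX_CONNECT neighbours that are fewest network hops away,
# cropping the most distant candidates.
MAX_CONNECT = 4

def prune_connections(conns):
    """conns: list of (host, hops). Drop the farthest until MAX_CONNECT remain."""
    by_distance = sorted(conns, key=lambda c: c[1])
    return by_distance[:MAX_CONNECT]

candidates = [("a", 3), ("b", 12), ("c", 5), ("d", 7), ("e", 2), ("f", 9)]
kept = prune_connections(candidates)
print([host for host, _ in kept])  # ['e', 'a', 'c', 'd']
```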
--Nathan
Re:New algorithm needed at the connect phase? (Score:2)
Gnutella as a DDOS tool (Score:2)
Re:Gnutella as a DDOS tool (Score:2, Insightful)
If you had 8 connections and a request came in on 1 of them, only that 1 connection would get back a reply carrying the request's GUID. The IP information is taken directly from your connection.
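A sketch of that GUID-based reply routing, as I understand the Gnutella spec (names are illustrative): each node remembers which connection a query's GUID arrived on, and routes any reply with that GUID back out that connection only.

```python
# GUID-based reply routing: queries flood outward, replies retrace
# the query's path one hop at a time; unknown GUIDs are dropped, so
# a node can't be tricked into spraying replies at arbitrary hosts.
routing_table = {}

def on_query(guid, from_conn, all_conns):
    routing_table[guid] = from_conn
    # forward to every connection except the one it came from
    return [c for c in all_conns if c != from_conn]

def on_reply(guid):
    return routing_table.get(guid)   # None means drop the reply

conns = ["conn1", "conn2", "conn3"]
forward_to = on_query("abc-123", "conn2", conns)
print(forward_to)            # ['conn1', 'conn3']
print(on_reply("abc-123"))   # conn2
print(on_reply("bogus"))     # None
```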
Freenet has addressed this issue from day one (Score:3, Interesting)
A good analogy might be a detective trying to find a suspect for a crime. The Gnutella approach is akin to going on TV and asking everyone in the area to let you know if they know who did it. It may work once, but the more you do it, the less effective it is. Freenet works as detectives do normally, they gradually home in on their suspect by gathering information, and using that information to refine their search.
Some say that Freenet only achieves this scalability because it doesn't do the type of "fuzzy" search Gnutella does - that you need to know exactly what you are looking for in Freenet to find it. This isn't true; the Freenet search algorithm can be generalised to allow fuzzy searching. While this has not yet been demonstrated in practice, it is definitely possible in theory.
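A toy illustration of the "homing in" idea (topology, keys, and names are all invented for illustration; real Freenet routing is considerably more involved):

```python
def closeness(a, b):
    return abs(a - b)   # stand-in for Freenet's key-distance metric

def route(target, node, neighbors, hops=0, max_hops=10):
    """Greedy walk: at each step, move to the neighbor whose key is
    closest to the target, instead of flooding every neighbor."""
    if node == target or hops >= max_hops:
        return node
    best = min(neighbors[node], key=lambda n: closeness(n, target))
    if closeness(best, target) >= closeness(node, target):
        return node     # no neighbor is any closer; give up here
    return route(target, best, neighbors, hops + 1, max_hops)

# Tiny made-up topology keyed by integers.
neighbors = {0: [3, 7], 3: [0, 5], 5: [3, 7], 7: [0, 5]}
```

Each request follows one path instead of fanning out to every connection, which is why the load per query stays roughly flat as the network grows.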
It always amazes me that people continue to lament flaws in many current P2P architectures when Freenet has incorporated solutions to those problems almost from its inception.
Disclaimer: I am Freenet's architect and project coordinator, so you could be forgiven for thinking I am biased, but you are free to review our papers and research to decide for yourself.
Re:Freenet has addressed this issue from day one (Score:3, Insightful)
Re:Freenet has addressed this issue from day one (Score:2)
Secondly, Freenet isn't really a file-sharing app, despite receiving much inaccurate publicity as "the next Napster". It isn't well adapted to sharing mp3s, nor should it be given its goals.
We will be releasing 0.5 soon; it will be a huge improvement.
Re:Freenet has addressed this issue from day one (Score:2)
Re:Freenet has addressed this issue from day one (Score:2)
There is one reason, and one reason only, why this occurs: there is no Freenetster - no P2P file-sharing app that allows you to easily search for and download music/movies/etc. As soon as there is one, Freenet will explode (assuming it really is as scalable as it is made out to be). You want Freenet to be popular? There's only one thing you have to do...
Download the new LimeWire (Score:2)
Re:Download the new LimeWire (Score:2)
Transparent Proxy (Score:2, Interesting)
The internet is more of a tree than a net, at least for the smaller ISPs. So a site can run a transparent proxy that aggregates all its Gnutella clients and maintains only a few outbound connections for the entire site, as opposed to a few per client. In addition, incoming Gnutella connections are intercepted and directed at the proxy (which is essentially another Gnutella node).
This allows ISPs to limit the number of Gnutella connections to the rest of the world. In fact, it would be best for them to connect only to other ISPs using a proxy as well.
This would tend to greatly improve query response time for nodes that are close by, but on the other hand would make it harder to create connections to remote nodes, because that control has been moved from the client to the proxy.
But an office, net cafe, or school could run the proxy and have a single link between it and the ISP's proxy, instantly connecting the site with all the ISP's users and cutting bandwidth considerably.
Proxies can do other things to accelerate searches. If a request for "Grateful Dead" has been forwarded, then there is no need to forward the same query string in the immediate future (say, 1 minute). And of course there is the option of caching the file transfers themselves...
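The "don't re-forward recent queries" idea is just a cache with a time window - a sketch (class and method names are my own; a real proxy would normalize the query string first):

```python
import time

class QueryCache:
    """Remember recently forwarded query strings and suppress repeats."""
    def __init__(self, ttl=60.0):          # suppress repeats for ~1 minute
        self.ttl = ttl
        self.seen = {}                     # query -> last forward time

    def should_forward(self, query, now=None):
        now = time.time() if now is None else now
        last = self.seen.get(query)
        if last is not None and now - last < self.ttl:
            return False                   # duplicate within the window
        self.seen[query] = now
        return True

cache = QueryCache()
cache.should_forward("Grateful Dead", now=0.0)    # first sighting: forward
cache.should_forward("Grateful Dead", now=30.0)   # repeat: suppressed
```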
RIAA (Score:2)
From above, a whopping 1.2 gigabytes of aggregate data could potentially cross everyone's networks, just to relay an 18-byte search query. This is of course where Gnutella suffers greatly from being fully distributed.
Actually, I think the RIAA suffers more, since there's no one to sue.
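For the curious, the flavor of the arithmetic behind a figure like that can be sketched as follows (the connection count, TTL, and per-message overhead here are assumptions in the spirit of the paper, not its exact numbers):

```python
def links_traversed(n, ttl):
    """Messages relayed network-wide: each node forwards a query to its
    other (n - 1) neighbors, repeated for ttl hops."""
    total, frontier = 0, n
    for _ in range(ttl):
        total += frontier
        frontier *= (n - 1)
    return total

QUERY_BYTES = 18       # the search string itself
OVERHEAD = 65          # assumed Gnutella + TCP/IP framing per message

links = links_traversed(n=8, ttl=8)             # millions of relays
aggregate = links * (QUERY_BYTES + OVERHEAD)    # bytes crossing all links
```

With these assumed numbers the total lands in the hundreds of megabytes; plausible tweaks to the overhead and fan-out push it past a gigabyte, which is the order of magnitude the paper is describing.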
Re:moochers + crap == worthless (Score:2, Insightful)
This is one of the biggest problems with P2P file sharing programs. Nearly everyone wants great content for free, but very few are willing to give back and supply any of it.
Re:moochers + crap == worthless (Score:2)
If bandwidth capping were the default, this would help. Resume also needs fixing. Basically, we need a decent client that has QoS built in by default and can resume files from multiple sites. I never had a problem uploading, but when I want a file on a modem and only 2 hosts have it, most likely the user will log off halfway through the download. I have a directory of incompletes that never get resumed. Also, I have to connect to a LARGE number of hosts to find the file I need. It's like finding a needle in a haystack. This is where a directory service like Napster kicked ass: finding the file.
But then if you want Britney Spears MP3s you will find thousands of hits...
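The multi-site resume the parent asks for boils down to byte-range bookkeeping - a sketch (the helper name and the fixed even split are my own simplifications; real clients negotiate HTTP byte-range requests and reassign a range when its host disappears):

```python
def split_ranges(file_size, sources):
    """Divide [0, file_size) into one contiguous byte range per source."""
    chunk = file_size // len(sources)
    ranges = []
    for i, src in enumerate(sources):
        start = i * chunk
        # The last source absorbs any remainder from integer division.
        end = file_size if i == len(sources) - 1 else start + chunk
        ranges.append((src, start, end))
    return ranges

# A 1 MB file fetched from three hosts in parallel; if one logs off,
# only its range needs to be resumed from another source.
plan = split_ranges(1_000_000, ["hostA", "hostB", "hostC"])
```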
Re:So, you got a better solution? (Score:2, Funny)
Re:So, you got a better solution? (Score:2)
1: No-one with the suitable skills has looked at the problem, and done the work to reimplement it.
2: No-one is experiencing the problem, so there is no pressure to fix it.
Finding obscure stuff (Score:2)
Or if you're looking for something more complex, you'll get better results by checking more places. For instance, I once searched Napster for every recording of a given Irish folk song - the versions done by The Chieftains got lots of responses, but some of the other bands who'd recorded it only got one or two, and those were performed entirely differently. Or take live Grateful Dead performances, used in the paper's example partly because sharing them is legal: you'll probably find most of the albums on one music-sharing net or another, along with the few hundred or thousand best (or best-taped) shows they did. But you may be looking for that random show you attended in 1971, to compare how they played Dark Star with how they played it a few years later, and to see how much of your memories were affected by the mood you were in (ok, or the drugs you'd been taking :-)
Re:His paper is flawed and misleading (Score:2)
If you're going to criticise the article, then you want to ask *where* this 8Gb/s is going to be - it's all very well summing up the total traffic that *might* be generated, but it's not as though I'm going to have to download all 8Gb down my 56K modem line in order to get every result matching "grateful dead live", is it? Not only am I going to restrict myself to a couple of minutes' searching, but the traffic itself is distributed all over the 'Net, so no one link has to experience the disruption.
Without analysis of the size of the network as a denominator by which to divide the above, claiming "it won't scale" is utter tripe, and I'd expect better from any article claiming to be "mathematical". Duh.