Open Content Network (P2P meets Open Source)
Orasis writes "The creators of Swarmcast have announced a new peer-to-peer content delivery network called the Open Content Network. The OCN will allow users to download open source and public domain content from multiple peers and mirrors in parallel. The system is designed to augment the existing mirrors with bandwidth from the p2p network and should eliminate the "Slashdot Effect" for popular open source content."
OK, but... (Score:2, Informative)
Uninformed (Score:3, Informative)
Swarmcast is a (working!) program for parallel p2p file downloading. In other words, the technology IS implemented. They are basically just making a modified program to work with a somewhat different set of files. No biggie.
Need for Checksumming (Score:5, Insightful)
Re:Need for Checksumming (Score:5, Informative)
Re:Need for Checksumming (Score:2)
It's probably worth pointing out that the solution to this problem is really orthogonal to the use of content-based addressing. Also, while signatures etc. can be used to verify the integrity and provenance of the delivered data, there's a whole separate problem of ensuring that it's current or consistent.
Re:Need for Checksumming (Score:2)
Re:Need for Checksumming (Score:1)
The example protocol messages that they have given include a SHA-1 integrity check. SHA is the Secure Hash Algorithm, a cryptographic standard. A cryptographic hash is a map h from the set of all bit strings to strings of a fixed length k (160 bits in the case of SHA-1). The map h has the additional property that it is intractable to produce two strings (or files, or whatever is being signed) x and y such that h(x) = h(y).
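For the curious, here's roughly what that check looks like in practice. This is just a minimal Python sketch; the filename and digest are made up for illustration, not taken from the protocol:

import hashlib

def verify_download(path, expected_hex_digest):
    # Hash the downloaded file and compare against the digest published
    # by a trusted source (e.g. the project's own web page).
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex_digest

If the digests don't match, you throw the file away and fetch it again from somewhere else, which is exactly why untrusted mirrors become usable.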
MD5SUM (Score:1)
Security question (Score:2, Interesting)
Perhaps this is a silly question, but I worried about it with Napster and subsequent file-sharing software, too. Is it possible to contribute and be secure?
Re:Security question (Score:1)
Re:Security question (Score:1)
Re:Security question (Score:1)
Re:Security question (Score:1)
Open Content, Open Source, whatever (Score:5, Insightful)
Re:Open Content, Open Source, whatever (Score:2, Interesting)
Openft (Score:2, Informative)
1 of many alternatives. (Score:3, Interesting)
Freenet [freenetproject.org]: scalable, not vaporware, very much beta.
Alpine [cubicmetercrystal.com]: based on trust.
GNUnet [purdue.edu]: sounds very open; based on electronic money. Also search for gnet.
Chord [mit.edu]: very efficient at finding files.
distrinet [sourceforge.net]: at this stage, vaporware (there is code....), but if you look at the description it beats any p2p software!
But in the end, the network with the most data (Gnutella/Kazaa) will be used. Note that users will switch networks very quickly; look what happened to Napster.
Re: Open Content (Score:1)
(I just bought a couple of CDs from a band who make all their music available for free download, so it must work!)
Would it make much difference for software apps? They're mostly mirrored anyway, and Mozilla/OpenOffice-style distribution doesn't seem to be suffering many bandwidth problems.
Re: Open Content (Score:1)
Re: Open Content (Score:2, Interesting)
I like your logic. So obviously the record store is failing when I go in and buy my one CD. From one person. My one CD from one person. To clarify, only one CD from only one person. The record shop is obviously failing.
Yeah right.
No, it means that MP3.com is succeeding, that Aura are succeeding, and that Faithless are shafted (who made the CDs I wanted, but which I'm not gonna buy with the current state of the record industry, and their political representatives)
Here's a hint: when you have lots of people buying one thing each, you make lots of money. Find a maths book. Revise the chapter on multiplication.
Re: Open Content (Score:1)
Re: Open Content (Score:1)
The Spirit of the OLD Internet Lives (Score:3, Insightful)
Count me in.
Those were the good old days, all right... (Score:3, Insightful)
The good old days really weren't so good. It's kind of funny, though, listening to some so-called old-timers constantly whinging about the commercialization of the 'net. Do they really think the huge advance in capabilities would have come about without the economic incentive? Sure, the early days were inventive. They invented the bricks and mortar of the Internet. But the commerce guys have driven the construction of cathedrals, roads, libraries and schools with those bricks.
Re:Those were the good old days, all right... (Score:1)
Generally, the reality is in-between these two posts.
I just want to chip in on the record with the note that the longbeard crowd didn't go in and start busting heads, throwing fits and crying about conspiracies when the Bill Gates clones closed in on the net like it was a patented Microsoft trademark. Sure, people talked shit and got mad, and perhaps some people might even have been a little paranoid about conspiracies, but there was no violence. It will be nice to see similar restraint from the blue-shirt-khaki-pants crowd when the wheel completes a revolution and breaks the back of the Evil Empire.
Signed,
Some dude whose pay depends on Microsoft and who will still be glad to see them complete their fall
P.S. they're already off 70% from 12/99, check the charts yourself. The end is near. Ding dong the witch is dead.
Re:Those were the good old days, all right... (Score:1)
I prefer to think that they've built the strip malls, porn shops, and phone survey firms that we all so enjoy.
--Jeremy
Re:Those were the good old days, all right... (Score:1)
Re:Those were the good old days, all right... (Score:1)
The Internet was paid for at the public trough, by public tax money and corporate subsidies, but was unavailable to most people.
Yeah! Students will be the first with their backs to the wall when the revolution comes!
Jealous much? No need to have a hissy fit, just because you weren't there for the first wave of the 'net.
I am not an old-timer, but having been on the Internet since 1992, I can easily see how polluted it has become.
Not unlike KaZaA? (Score:1)
These useful features have already been implemented in KaZaA [kazaa.com].
Kazaa doesn't download in parallel (Score:2)
Re:Kazaa doesn't download in parallel (Score:1)
(thanks to whoever responded equivalently. who knows why you were modded down)
Re:Kazaa doesn't download in parallel (Score:1)
DownloadAccelerator, on the other hand, when it has a selection of mirror sites, seems to be quite good at keeping all of the connections busy.
Re:Not unlike KaZaA? (Score:1)
Swarmcast uses FEC (forward error correction) coding of the file, so it's more efficient when multiple people download the same file. (The people downloading can share the file as well.)
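To illustrate the idea, here's a toy single-parity erasure code in Python. This is emphatically not Swarmcast's actual FEC scheme, just the simplest possible example of the principle: with k equal-length data blocks plus one XOR parity block, a downloader can rebuild the file even if any one block never arrives.

def xor_blocks(a, b):
    # XOR two equal-length blocks byte by byte
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    # The parity block is the XOR of all data blocks
    parity = blocks[0]
    for b in blocks[1:]:
        parity = xor_blocks(parity, b)
    return parity

def recover_missing(blocks_with_gap, parity):
    # Exactly one entry is None; XOR the parity with the rest to rebuild it
    out = parity
    for b in blocks_with_gap:
        if b is not None:
            out = xor_blocks(out, b)
    return out

blocks = [b"abcd", b"efgh", b"ijkl"]
parity = make_parity(blocks)
assert recover_missing([blocks[0], None, blocks[2]], parity) == b"efgh"

Real FEC schemes generate many redundant blocks, so any sufficiently large subset reconstructs the file; that's what makes it efficient for swarming, since no peer has to wait for one specific block.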
Guesses as to how long this is going to last? (Score:5, Insightful)
I'm fully expecting that if we ever reach a point where a substantial percentage of users' traffic becomes outbound traffic, the cablemodem and DSL providers are going to start to rethink the current pricing and service packages.
How long before we find ourselves NATted away, able to originate connections only? A few cablemodem providers have already done this to reduce the traffic from file sharing and to knock out code red and other such silliness. And each time a major ISP does this, it leaves a slightly smaller number of other ISPs providing the outbound service, causing the traffic on the holdout systems to rise.
At some point it's going to snowball, and most of us are going to find ourselves NATted away, with only those paying premium prices for real IP addresses getting the privilege of having their uplink monopolized by strangers.
Re:Guesses as to how long this is going to last? (Score:2, Insightful)
As it stands, I'd say we are paying. We've signed a usage agreement that says we get this speed down and that speed up.
From a business standpoint, the upstream hasn't been the interesting part of the pricing equation to date, however. But when they see usage increasing, and their costs going up for something they can turn off or collect an extra fee for, you can bet we won't see so many symmetrical connections or directly-addressable and unfiltered IPs.
Re:Guesses as to how long this is going to last? (Score:1)
There's your niche, go earn top dollar.
Re:Guesses as to how long this is going to last? (Score:2)
Close, but no cigar.
There's no reason why a NATed box can't "upload" a file. The client simply needs to send a push-type request to the NATed server, rather than trying to pull the file from it.
This, of course, requires the client to have a real, non-NATed IP address. And this means the client will have access to more software in exchange for a higher fee per month.
Sure, you could be nice and use some of that bandwidth for sharing with the people who chose not to pay for the higher service level, but you would still benefit largely from it.
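As a rough illustration of the push idea (a sketch with made-up function and host names, not any particular P2P protocol's actual PUSH mechanism): the NATed host dials *out* to the requester's public address, so no inbound connection ever has to traverse the NAT.

import socket

def push_file(requester_host, requester_port, path):
    # The NATed "server" opens an outbound connection to the requester
    # (who must have a reachable, non-NATed address) and streams the file.
    with socket.create_connection((requester_host, requester_port)) as s:
        with open(path, "rb") as f:
            while True:
                chunk = f.read(65536)
                if not chunk:
                    break
                s.sendall(chunk)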
Re:Guesses as to how long this is going to last? (Score:2)
However, broadband providers have an interest in p2p, since it is a major reason for their clients to have broadband in the first place. A cheap modem connection will handle mail and instant messaging pretty effectively. Only when you start downloading mp3s/movies/... do you need the bandwidth they offer. Healthy p2p networks create a demand for broadband.
My hope is that as ipv4 addresses get scarcer, adoption of ipv6 will finally happen. This would largely remove the need for NAT.
Re:Guesses as to how long this is going to last? (Score:2, Informative)
Good NAT bypassing is annoying to program (in the extreme case, it requires implementing something like TCP over UDP), but it's not a huge technical hurdle. The main reason it's not commonly done is that too few people have hostile NATs for it to be worth the effort.
--
Benjamin Coates
If you NAT'ed I hope it's IPv6 (Score:1)
Re:If you NAT'ed I hope it's IPv6 (Score:1)
Still, to your point -- do any versions of Windows ship with IPv6 enabled by default? I think that's the true test of how ready business is for it.
I did say upgrade!!! (Score:1)
NAT also means that you need less global address space, which makes things a bit cheaper to run.
Re:If you NAT'ed I hope it's IPv6 (Score:2)
If the net went IPv6, Microsoft would release a patch for WinME, 2k and XP, and I bet some 3rd party would offer a d/l to support 95/98 (maybe even 3.1!)
Re:If you NAT'ed I hope it's IPv6 (Score:2)
That's not so bad (Score:1)
hopefully... (Score:1)
Hopefully you can configure the # of connections; otherwise we will finally be able to get slashdotted from the comfort of our own homes.
Great idea, but I can see a problem (Score:5, Insightful)
The problem being that people are bastards.
Why is this a problem? Well, what's to stop an ignorant or malicious individual from wrapping up some content with a CC-compliant license and injecting it into the OCN?
I'm thinking of:
Why would anyone do this last one? Pure malice: to open the OCN up to DMCA attack, simply because people (as I said) are bastards, and can't be trusted to behave in a rational, civilised fashion. The OCN will be a trusted network, and that leaves it open to abuse. I really hope that an actual trustable human will vet everything injected into it.
Re:Great idea, but I can see a problem (Score:2)
People cannot be trusted. Maybe some sort of signup, registered usage is needed. Though given the caution of most OS people, that won't happen.
Re:Great idea, but I can see a problem (Score:1)
Digital sigs (Score:1)
Signing something says, "I have given my permission to...." This places responsibility on someone for any copyright violations.
The network operators can kinda identify who they issued the certificate to.
Digitally signing provides a checksum.
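For what it's worth, here's a rough sketch of the sign/verify step in Python, using the third-party "cryptography" package. The key handling and filename are made up for illustration; a real deployment would tie the public key to a certificate issued by the network operators:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Publisher signs the content before injecting it into the network.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
with open("package.tar.gz", "rb") as f:
    content = f.read()
signature = private_key.sign(content, padding.PKCS1v15(), hashes.SHA256())

# Downloaders verify with the publisher's public key; a tampered file
# raises InvalidSignature. The signature thus doubles as a checksum.
private_key.public_key().verify(signature, content,
                                padding.PKCS1v15(), hashes.SHA256())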
Logi$tic problems of root CA (Score:2)
The network operators can kinda identify who they issued the certificate to.
Which means the network operators will have to make deals with notary public offices in every major metropolitan area in all 180-odd independent countries in order to be able to certify that people are who they say they are. This can become expensive, and the total cost of maintaining a certificate may rise to $200 per cert per year, making this situation no better than the SSL cert situation.
Re:Logi$tic problems of root CA (Score:1)
Re:Logi$tic problems of root CA (Score:2)
What is your address i'll post it (Score:1)
At least you can say in your defence that they were at this address; then you can go find them yourself.
Re:Great idea, but I can see a problem (Score:1)
I think that if a law is passed, it should not be a technocratic law, but one that mandates content signing for all material uploaded over p2p. That would be easier to enforce while, at the same time, not infringing on the folks who follow the laws.
Safety in intention (Score:2)
The courts are unlikely to shut down a network like this that makes a good-faith effort to be legitimate. Most other P2P services establish themselves as trading points for all manner of illegal content. They try to cover this up to look good to the courts, but there's no doubt that Kazaa etc. wouldn't be this popular were it not for piracy.
Swarmcast will MAME you (Score:2)
Someone made compressed hard-drive images that MAME will eventually require. Despite compression, two of the images were half a gig and one was a gigabyte. The guy who was distributing these files used Swarmcast to prevent getting swamped.
It worked pretty well, in that tons of people were able to download those huge files without killing servers. However, Swarmcast is new enough that Swarmcast itself had some server problems; the server had to use an older version of Swarmcast to be stable. That pretty much fixed the problem. It used to be that just hosting ~40MB Neo Geo ROMs was nearly impossible; now it is possible to host half-gig files.
Eliminate the Slashdot effect? (Score:3, Funny)
such a good move? (Score:4, Insightful)
But I've reconsidered. Before you mod me down, please read what I have to say.
Basically, we are talking about P2P filesharing here. Now remember, other P2P services, like Napster, Gnutella, and IRC, were all originally based on good, sound, legal, moral ideals. But in the course of time, they each became corrupt with those who would use the infrastructure for illegal filesharing and copyright infringement.
Now, I don't want to throw the baby out with the proverbial bathwater. And I don't want to get rid of a useful tool because of a potential for abuse, since by that logic we would not have silverware, cars, or handguns. But we in the Open Source community need to ask ourselves, is now the time when we want to risk associating Linux, *BSD, and Open Source with illegal activities? Don't we have enough anti-hacker rhetoric to fight against?
We need to pick our battles. This isn't one of them.
Re:such a good move? (Score:3, Insightful)
The threat to filesharing as a technology comes from the rights-holders and from the legal system. Precedent is being set all the time which threatens ISPs with liability for illegal activities on their network that they are "made aware of", DMCA notice-and-takedown letter or not.
It's much scarier in the UK and in Canada. Canada just passed new legislation which will make ISPs liable for distribution of child pornography on their network. SOCAN Copyright Tariff 22 just made it through the Court of Appeal and makes ISPs liable for infringing material stored on their "cache servers".
P2P technology may bypass these, but it is only a matter of time before some powerful organization convinces some judges that ISPs should be held liable for allowing P2P on their network. Blocking of ports, account terminations, and worse are all coming if the rights-holders have their way... and technology such as Swarmcast, which acts to distribute free software, will likely get lumped in with the Napster/Kazaa/Foo P2P technologies.
Re:such a good move? (Score:1)
it is only a matter of time before some powerful organization convinces some judges that ISPs should be held liable for allowing P2P on their network... technology such as Swarmcast, which acts to distribute free software, will likely get lumped in with the Napster/Kazaa/Foo P2P technologies.
Surely an important weapon against knee-jerk blanket bans would be for there to be a well-known, respectable, law-abiding P2P network?
Re:such a good move? (Score:2)
Re:such a good move? (Score:2)
Nonsense. The original purpose of both Napster and Gnutella was to enable the sharing of copyrighted music. That's about all they've ever been used for (well, Gnutella and Gnutella-like networks have since branched out into other forms of mostly copyrighted content).
This, OTOH, appears to be primarily designed to let people pool bandwidth, which is both legal and useful, and since bandwidth costs are a big problem for the distribution of free content, it's entirely a good thing.
--
Benjamin Coates
Re:such a good move? (Score:2)
Re:such a good move? (Score:1)
What if Sun had had the same worries about JAXP?
This is a redundant argument, made many times before: all technology can be used for good and bad. How this got modded up so far I have no idea.
I would bet that in two years, the majority of popular downloads will be delivered with P2P.
Eliminate the slashdot effect? bah! (Score:4, Insightful)
Some will argue that this is in essence what a P2P network is, but why not do it right, using technology we already have that everyone can use (Squid)?
Other users' comments regarding the cumulative effects of NAT on P2P networks are incredibly apropos.
But realistically, there's nothing I love more than when the story submitter posts a link to a Google-cached version of the content he's posting. We're an aggressive bunch, and that calls for aggressive measures.
Re:Eliminate the slashdot effect? bah! (Score:3, Informative)
Who pays for all that equipment and bandwidth? The idea here is not to solve problems by throwing resources at a problem, but rather to solve them by using existing resources as effectively as possible. The technology involved can be applied to any resource base. The technology-intensive approach using almost-zero-cost resources might well make significant headway against the Slashdot Effect, even if you still think your capital-intensive approach based on older technology is even better.
Another factor you seem to've overlooked is that software like CAW or BitTorrent is distributed for reasons beyond scalability. For example, consider the inherent attack-resistance characteristics of a highly distributed P2P network vs. your centrally-administered servers. There are other goals as well, such as avoiding legal culpability or financial dependence on corporate benefactors to provide the systems and bandwidth. Whether you agree or disagree with those goals, the fact remains that many people believe in them. Networks like you describe are old hat, dozens have been deployed already, and yet a lot of people still want something different. You've proposed a solution to a different problem than the one Onion Networks et al. seek to solve. There's a term for that; we call it missing the point.
Re:Eliminate the slashdot effect? bah! (Score:2)
I started a project to help distribute the load. So far I've written code to pull out all the links from each Slashdot story. What's left is to cache those links, then transform the Slashdot main page HTML so that it points to the cached versions.
If anyone's interested in taking this further, you can find the Perl code here [thingone.info] .
Then just set up some machines (say, 3 to start just to test it), and cache the main page to all three machines, generating different HTML pages for each machine. When a new request comes in, round-robin it to the 3 machines. (Better algorithms can come later.)
I agree with what you say about geographically distributing them, though, which would need to be handled by an entity with money (i.e., VA?).
Where's the code? (Score:2, Insightful)
I'm sorry, I wish I could say I'm excited; it's certainly a VERY good idea, and one in desperate need of realising, but until I see the code, it's just more hype.
It'd certainly be a great way for non-coders to contribute, though. So many times my Linux friends say "oh, I'd love to contribute to open source but I can't code"; this would definitely be one way, and one which requires very little effort too.
p2p meets mirror (Score:1)
This is why the file-sharing system only works well with audio and video files and not software files.
Hence, such an open content network should include an advanced file recognition system with some sort of checksum or whatever: a blend of p2p and mirrors.
Re:p2p meets mirror (Score:1)
From the website:
It seems that files will be referenced by their hash, thus ensuring that data has not been corrupted; this also eliminates the "renaming files changes contents" thing that many P2P networks seem to believe in.
Of course, Freenet [freenetproject.org] does this and more -- and already works -- so why not use it? Integrity checking, intelligent caching, and high anonymity to boot.
Eliminating the Slashdot Effect (Score:1, Informative)
Content Distribution Networks (CDNs), such as Akamai, have shown that significant improvements can be made in throughput, latency, and scalability when content is distributed throughout the network and delivered from the edge. Likewise, peer-to-peer systems such as Napster and Gnutella have shown that normal desktop PCs can serve up enormous amounts of content with zero administration. And more recently, systems like Swarmcast have been introduced that combine the CDN and peer-to-peer concepts to gain the benefits of both. The goal of the Content-Addressable Web is to enable these advanced content location and distribution services with standard web servers, caches, and browsers.
The main benefits of the Content-Addressable Web are:
Throughput - Browsers will be able to download content from multiple sources in parallel
Bandwidth Savings - Browsers will automatically discover and select the closest mirror for a piece of content.
Fault Tolerance - Even if a site goes down in the middle of a download, browsers will automatically locate another mirror and continue downloading.
Scalability - Any number of machines may be added to the network, creating a CDN ad hoc, with very little administration.
Security - Browsers will be able to safely download content from untrusted mirrors without risk of corruption or viruses.
The full paper describing the "HTTP Extensions for a Content-Addressable Web" is available here.
The goal of the Content-Addressable Web (CAW) is to enable the creation of advanced content location and distribution services over HTTP. The use of content addressing allows advanced caching techniques to be employed, and sets the foundation for creating ad hoc Content Distribution Networks (CDNs). This document specifies HTTP extensions that bridge the current location-based Web with the Content-Addressable Web.
1. Introduction
Content Distribution Networks (CDNs), such as Akamai, have shown that significant improvements can be made in throughput, latency, and scalability when content is distributed throughout the network and delivered from the edge. Likewise, peer-to-peer systems such as Napster and Gnutella have shown that normal desktop PCs can serve up enormous amounts of content with zero administration. And more recently, systems like Swarmcast have been introduced that combine the CDN and peer-to-peer concepts to gain the benefits of both. The goal of the Content-Addressable Web is to enable these advanced content location and distribution services with standard web servers, caches, and browsers.
There are a number of shortcomings of the current web architecture that the Content-Addressable Web aims to overcome. These include discovering optimal replicas, downloading from untrusted caches, and distributing content across the Transient Web.
1.1 Optimal Replicas
There are currently no mechanisms within HTTP that allow a user-agent to discover an optimal replica for a piece of content. This problem is due to the fact that HTTP caching practice assumes a hierarchical caching structure where each user has a single parent cache. Thus, while one can discover an object's source URI from a cached copy, there is no mechanism to discover a list of replica locations from the source. This problem is evidenced by the fact that users must manually select the closest mirrors when downloading from Tucows, FilePlanet, or the various Linux distributions. The CAW solves this problem by providing distributed URI resolvers that user-agents can query to find an optimal replica.
1.2 Untrusted Caches
It is currently unsafe to download web objects from an untrusted cache or mirror because they can modify/corrupt the content at will. This becomes particularly problematic when trying to create public cooperative caching systems. This isn't a problem for private CDNs, like Akamai, where all of their servers are under Akamai's control and are assumed to be secure. But for a public CDN, the goal is to allow user-agents to retrieve content from completely untrusted hosts but be assured that they are receiving the content intact. The CAW solves this problem by using content addressing that includes integrity checking information.
1.3 Transient Web
The Transient Web is a relatively new phenomenon that is growing in size and importance. It is embodied by peer-to-peer systems such as Gnutella, and is characterized by unreliable hosts with rapidly changing locations and content. These characteristics make location-based addresses within the Transient Web quite brittle. Even if traditional HTTP caching was widely leveraged within the Transient Web, the situation wouldn't be helped much. This is because a single piece of content will often be available under many different URIs, which creates disjoint and inefficient caching hierarchies.
This multiplicity of URIs occurs for a couple of reasons:
The original source for a piece of content will often cease to exist or the source's URI will change.
Multiple independent sources often introduce the same content into the network.
Most applications and file manipulation tools will tend to "forget" the source URI of a piece of content.
This URI multiplicity can also occur in the normal web, although it is RECOMMENDED that caching semantics be used when an authoritative source is known. The CAW solves the above problems by providing content-specific URIs that are location-independent and can be independently generated by any host. Additionally, various URI resolution services work in coordination to resolve issues associated with having multiple URIs for a web object.
2. Scope
The HTTP extensions for CAW are intended to be used in the above scenarios where HTTP is currently lacking. This technology is focused on mostly static content that can benefit from advanced content distribution services. The extensions are intended to be hidden under the hood of web servers, caches, and browsers, and should change nothing as far as end users are concerned. So even though a new URN scheme is introduced, there are very few situations where a human will ever interact with those URNs.
One of the more interesting applications of the Content-Addressable Web is the creation of ad hoc Content Distribution Networks. In such networks, receivers can crawl across the network, searching for optimal replicas, and then downloading content from multiple replicas in parallel. After a host has downloaded the content, it then advertises itself as a replica, automatically becoming a part of the CDN.
3. Content Addressing
This specification introduces a URI scheme with many interesting capabilities for solving the problems discussed earlier. A particularly useful class of URI schemes are "Self-Verifiable URIs". These are URIs with which the URI itself can be used to verify that the content has been received intact. We also want URIs that are content-specific and can be independently generated by any host with the content. Finally, to show the intent that these addresses are location-independent, a URN scheme will be used.
Cryptographic hashes of the content provide the capabilities that we are looking for. For example we can take the SHA-1 hash of a piece of content and then encode it using Base32 to provide the following URN.
urn:sha1:RMUVHIRSGUU3VU7FJWRAKW3YWG2S2RFB
Implementations MUST support SHA-1 URNs at a minimum. (A future version of this document will also specify a URN format for performing streaming and random-access verification using Merkle Hash Trees.)
Receivers MUST verify self-verifiable URIs if any part of the content is retrieved from a potentially untrusted source.
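[As a concrete illustration, not part of the quoted specification: any host holding the content can generate the same URN independently. A minimal Python sketch, with a made-up filename:]

import base64
import hashlib

def sha1_urn(path):
    # Hash the file and Base32-encode the 20-byte digest, giving the
    # 32-character form shown above (urn:sha1:RMUV...).
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return "urn:sha1:" + base64.b32encode(h.digest()).decode("ascii")

print(sha1_urn("distro.iso"))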
4. HTTP Extensions
In order to provide a bridge between the location-based Web and the Content-Addressable Web, a few HTTP extensions must be introduced. The nature of these extensions is that they need not be widely deployed in order to be useful. They are specifically designed to allow for proxying for hosts that are not CAW-aware.
The following HTTP extensions are based on the conventions defined in RFC 2169. It is RECOMMENDED that implementers of this specification also implement RFC 2169.
The HTTP headers defined in this specification are all response headers. No additional request headers are specified by this document.
It is RECOMMENDED that implementers of this specification use an HTTP/1.1 implementation compliant with RFC 2616.
4.1 X-Content-URN
The X-Content-URN entity-header field provides one or more URNs that uniquely identify the entity-body. The URN is based on the content of the entity-body and any content-coding that has been applied, but not including any transfer-encoding applied to the message-body. For example:
X-Content-URN: urn:sha1:RMUVHIRSGUU3VU7FJWRAKW3YWG2S2RFB
4.2 X-URI-RES
The X-URI-RES header is based on conventions defined in RFC 2169 and provides a number of flexible URI resolution services. These headers provide various ways of locating other content replicas, including additional sources for a multiple-source download. One can also build an application that crawls across the resolution services searching for an optimal replica. Many other uses can be imagined beyond those given in this specification. The general form of the header is as follows:
X-URI-RES: <service uri>; <service type> [; <target uri>]
The service URI specifies the URI of the resolution service. It is not necessary for the service URI to conform to the "/uri-res/<service>?<uri>" convention specified in RFC 2169.
The service type identifies what type of resolution is being performed and how to interpret the results from the service URI. The types are those defined in RFC 2169 and include "N2L", "N2Ls", "N2R", "N2Rs", "N2C", "N2Cs", "N2Ns", "L2Ns", "L2Ls", and "L2C".
The target URI is the URI upon which the resolution service will be performed. The target URI can be any URI and is specifically not limited to the URI specified by the X-Content-URN header. If there is only a single X-Content-URN value, the target URI can be left off to imply that the X-Content-URN value is to be resolved.
It is RECOMMENDED that receivers assume that the URI resolver services are potentially untrusted and should verify all content retrieved using a resolver's services.
It is believed that N2R, N2L, and N2Ls will be the most useful services for the Content-Addressable Web, so we will cover examples of those explicitly.
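[Again as an editor's illustration rather than normative text: a client-side helper that splits the header into its parts might look like this in Python. The function name is hypothetical:]

def parse_x_uri_res(value):
    # Split "service uri; service type [; target uri]" into its parts.
    parts = [p.strip() for p in value.split(";")]
    service_uri = parts[0]
    service_type = parts[1] if len(parts) > 1 else None
    # A missing target URI implies the X-Content-URN value is to be resolved.
    target_uri = parts[2] if len(parts) > 2 else None
    return service_uri, service_type, target_uri

print(parse_x_uri_res("http://untrustedmirror.com/pub/file.zip; N2R"))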
4.3 N2R
The N2R URIs directly specify mirrors for the content addressed by the URN and can be useful for multi-source downloads. For example:
X-URI-RES: http://urnresolver.com/uri-res/N2R?urn:sha1:<hash>; N2R
or
X-URI-RES: http://untrustedmirror.com/pub/file.zip; N2R
The key difference between these headers and something like the Location header is that the URIs specified by this header should be assumed to be untrusted.
4.4 N2L and N2Ls
These headers are used when other hosts provide URLs where the content is mirrored. This is most useful in ad hoc CDNs where mirrors may maintain lists of other mirrors. Browsers can simply crawl across the networks, recursively dereferencing N2L(s). For example:
X-URI-RES: http://urnresolver.com/uri-res/N2L?urn:sha1:<hash>; N2L
and
X-URI-RES: http://untrustedmirror.com/pub/file-mirrors.list; N2Ls; urn:sha1:<hash>
For the N2Ls service, it is RECOMMENDED that the result conform to the text/uri-list media type specified in RFC 2169.
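[A small illustrative helper for that list format, with a hypothetical function name; the text/uri-list format is one URI per line, with "#" lines as comments:]

from urllib.request import urlopen

def fetch_uri_list(n2ls_url):
    # Dereference an N2Ls service and return the mirror URIs it lists.
    with urlopen(n2ls_url) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.startswith("#")]

A crawling client would then check each returned URI, recursing into any that are themselves mirror lists.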
4.5 Proxies and Redirectors
It is useful to allow CAW-aware proxies that provide content-addressing information without modifying the original web server. This allows CAW-aware user-agents to take advantage of the headers, while simply redirecting user-agents that don't understand the Content-Addressable Web. It would be inappropriate to return an X-Content-URN header during a redirect, because HTTP 3xx responses often still include a message-body that explains that a redirect is taking place. Instead it is preferred to return a result of the text/uri-list media type that includes one or more URNs that would normally reside in the X-Content-URN header.
4.6 Example Application
The above HTTP extensions are deceptively simple and it may not be readily apparent how powerful they are. We will discuss an example application that will take advantage of a few of the features provided by the extensions.
In this example we will look at how the CAW could help at linuxiso.org, where ISO CD-ROM images of the various Linux distributions are kept. The first step is to issue a GET request for the content:
GET
Host: www.linuxiso.org
The abbreviated response:
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Length: 662072345
X-Content-URN: urn:sha1:RMUVHIRSGUU3VU7FJWRAKW3YWG2S2RFB
X-URI-RES: http://linuxmirrors.com/<mirror path>; N2R
X-URI-RES: http://123.24.24.21:8080/uri-res/N2R?urn:sha1:<hash>; N2R
X-URI-RES: http://123.24.24.21:8080/uri-res/N2Ls?urn:sha1:<hash>; N2Ls
With this response, a CAW-aware browser can immediately begin downloading the content from www.linuxiso.org, linuxmirrors.com, and 123.24.24.21, all in parallel. At the same time, the browser can be dereferencing the N2Ls service at 123.24.24.21 to discover more mirrors for the content.
The existence of the 123...21 host is meant to represent a member of an ad hoc CDN, perhaps the personal computer of a Linux advocate who just downloaded the ISO and wants to share their bandwidth with others. By dereferencing the N2Ls, even more ad hoc nodes could be discovered.
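[A toy client built on these headers might look like the following Python sketch. The mirror list, file size, and URN are assumed to have come from the response above; this is an illustration, not a normative algorithm:]

import base64
import hashlib
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen

def fetch_range(url, start, end):
    # Pull one byte range from one mirror with a standard HTTP Range request.
    req = Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urlopen(req) as resp:
        return resp.read()

def swarm_download(mirrors, size, urn, chunk=1 << 20):
    # Stripe the byte ranges across the mirrors and fetch them in parallel.
    ranges = [(i, min(i + chunk, size) - 1) for i in range(0, size, chunk)]
    with ThreadPoolExecutor(max_workers=len(mirrors)) as pool:
        futures = [pool.submit(fetch_range, mirrors[n % len(mirrors)], s, e)
                   for n, (s, e) in enumerate(ranges)]
        data = b"".join(f.result() for f in futures)
    # Mirrors are untrusted: verify against the self-verifying URN.
    digest = base64.b32encode(hashlib.sha1(data).digest()).decode("ascii")
    if "urn:sha1:" + digest != urn:
        raise ValueError("integrity check failed; discard and retry elsewhere")
    return data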
4.7 Replica Advertisement
The URI-RES framework provides a significant amount of flexibility in how replica advertisement and discovery can be implemented. One example implementation will be provided in a future specification.
4.8 Acknowledgements
Gordon Mohr (gojomo@bitzi.com), Tony Kimball (alk@pobox.com), Mark Baker (distobj@acm.org)
I'm dumb (Score:1)
"Now, why in the hell didn't I think of that?"
Why not just use existing P2P networks (Score:1)
Re:Why not just use existing P2P networks (Score:1)
Because most people who download from P2P don't bother to check GPG signatures (or checksums, if you prefer). If people start using P2P networks to download executable code (either binary or source code), it's going to make Outlook Express look like a securely designed e-mail client. All existing P2P networks are designed to deliver content where security is not a large concern. This proposes a system to deliver code, which requires a tie-in to a master server that verifies the authenticity of files on behalf of the clients.
MD5 sum (Score:2)
A feature to take detached/attached MD5 sums, GPG signatures or the like could be pretty easily added in.
You're right, it is needed.
split, cat, gnutella (Score:2)
split -b 65m filename.iso filename.iso.
breaking the 650+ MB ISO into about ten 65 MB chunks with the suffixes aa, ab, and so on.
Share them on Gnutella, KaZaA and any other P2P services.
Once downloaded, cat all the files together into one and check the MD5 sum (also downloaded, or embedded the way Red Hat 7.3 does).
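The reassemble-and-verify step might look like this in Python (filenames are the ones from the example above; a shell one-liner with cat and md5sum would do the same job):

import glob
import hashlib

# Collect the chunks; split's default suffixes (aa, ab, ...) sort correctly.
parts = sorted(p for p in glob.glob("filename.iso.*")
               if not p.endswith(".md5"))

md5 = hashlib.md5()
with open("filename.iso", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                out.write(chunk)
                md5.update(chunk)

print(md5.hexdigest())  # compare against the separately downloaded .md5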
Re:split, cat, gnutella (Score:1)
The perfect task for Freenet (Score:2)
Re:The perfect task for Freenet (Score:1)
-glenn
Re:The perfect task for Freenet (Score:2, Informative)
Right on target. Freenet [freenetproject.org] accomplishes these goals, and actually works right now. Freenet is essentially an anonymous, distributed caching system into which anyone can insert data and retrieve it later. It supports locating information both by content hashes and by human-readable redirects, as well as lots of really cool features like anonymous websites ("freesites"). So... what are you waiting for? Install Freenet today!
</plug>
Re:The perfect task for Freenet (Score:1)
Freenet has intentional features that an open content network would see as bugs. It has too much freedom-without-accountability.
Freenet users don't have control over what content they mirror. So while you download file x and want to share it and reduce the load on other servers, the system in its inscrutable wisdom actually has you sharing file y, just because it's something that a lot of other people want.
If the network is used in a manner that either The Man with the guns or your own conscience doesn't like, and it is decided that you must stop contributing to the abuse, the only thing you can do is completely drop your Freenet node. For example, you can't selectively mirror GPLed stuff and not mirror (pick your apocalyptic horseman) warez, kiddie porn, mafia bookie records, terrorist communications, Microsoft ads, etc.
I'm not saying that's bad (well, actually, yes I do think that's bad, but that's beside the point), but it's not quite what this other project is for.
Re:The perfect task for Freenet (Score:1)
And you are correct -- the only way to be sure you're not helping out the child porn sickos is to shut down your node. But then again, that would prevent the people in China from learning about things their government denied access to or to help distribute the latest kernel tarball. Besides, if you could control (or even knew) what you were mirroring, other people could figure it out, and you could get into legal headaches. As is, there's no way to prove that data is actually on your node without possibly helping that data spread.
Correct, the open content network and Freenet do have different philosophies -- however, they share similar technical goals and Freenet (unlike the other project) actually does something.
so, if this covers all open content, maybe (Score:3, Interesting)
Is "Open Content Network" just a PR thing? (Score:2)
I'm just not sure that I buy the description of the "Open Content Network":
"the OCN will allow users to download open source and public domain content from multiple peers and mirrors in parallel."
I presume that it could just as easily be used for copyrighted material and is in no sense different from Napster etc. in its restrictions and potential (read: probable) use.
Sounds a bit like a PR thing: our network is for Open Source material, if people use it for other things - well that's none of our business.
Personally, I agree that they should have the right to focus and brand themselves however they want. I also agree that they shouldn't be held liable for the type of files users actually submit (unless they're either actively screening them or branding themselves as the "Illegal Warez Network" or something). I'm just not sure that this approach will help to limit their liability (although I sure hope it does). Or did I miss something and they are proposing some method of ensuring the content meets some guidelines, thus avoiding any of the Napsteresque controversy?
A Law-Abiding Distribution Network (Score:3, Interesting)
The GDN furthermore offers a scalable solution to the problem of finding the nearest replica (i.e., a scalable URI resolver service in OCN terms), and facilities for dynamically replicating content in areas with many downloaders.
Publications on the GDN, the underlying Globe middleware, and its initial implementation (BSD license) can be found at http://www.cs.vu.nl/globe [cs.vu.nl]. The best description of the anti-abuse measures of the GDN is found in the paper titled "A Law-Abiding Peer-to-Peer Network for Free-Software Distribution", published at the IEEE NCA'01 conference.
No way! (Score:2)
I was thinking about this only 3 or 4 days ago.....
hehe. Ah well, I'm glad somebody else is doing it really, I have more than enough on my plate right now. Perhaps they should check out the Creative Commons? [creativecommons.org]
p2p apt (Score:1)
machines near me can get these files faster
A local Debian mirror, except it's done by apt, and I still get my packages list from the Debian servers, so I know they have the correct md5sums and such.
However, I suspect someone is already doing this.
'Piracy' is killed off, as the only packages that are mirrored are the ones listed at the Debian servers (someone could create a pkg containing the files and host it on their own server, but then they are the point of access). The servers still hold the originals, but it means that ordinary users can contribute to Debian (a fantastic dist).
Something like mnet? (Score:1)
When you "publish" something to the mnet it splits the file in X parts and puts those parts on Y servers. Your download is swarmed from these servers. The file stays on the net as long as there are atleast 1 server per block. Those servers also check what blocks are more popular and "purchase" those blocks from other servers to make the file more easily accessable.
Or so I'm told anyway.
I think the idea is quite nice.
You can read more about it here [sourceforge.net].
wbr
.haeger
edonkey (Score:1)
Files are represented by MD5 checksums (no filename confusion). It is free, fast, reliable, secure. Files can be uploaded while being downloaded; this ensures that a rare file that is wanted by many people will be distributed as quickly as possible. It supports multiple servers. This file-sharing network is primarily used for sharing movies and CD images (appz, games,
Check out feature list: here [edonkey2000.com]
Unicast Replica Meshes (Score:1)
"We propose centralized algorithm of dat distribution in the unicast p2p network. Good example of such networks are meshes of WWW and FTP mirrors. Simulation of data propogation for different network topologies is performed and it is shown that proposed method performs up to 200% better then common apporaches".
BitTorrent (Score:3, Informative)
We've had several large deployments of files which are a couple hundred megabytes and up, getting sustained downloads by a couple hundred downloaders at once, serving off a DSL line, and it's worked well.
By the way, BitTorrent, Swarmcast, and OCN all check secure hashes under the hood, so data integrity isn't an issue.
This already exists.... it's called USENET (Score:2)
Set up an alt.binaries.geektoys and post all the Videogame Demos, Distros, Open Source Software, Movie Trailers and the like that we're all interested in. You can use RARs, PARs, SFVs, etc to make sure the file is downloaded properly. Then the only issue is making sure the checksum matches up with what you got off the original web site.
ISPs already carry Usenet, so the infrastructure is set up, and this is definitely a useful, non-infringing use of Usenet.
"What was I downloading? Why, the latest version of Mandrake!" Sounds good to me...
Finally!! Thank you! (Score:2)
Haiku! (Score:1)
Peer to peer downloading of
Open source software