Networking Software IT

Use BitTorrent To Verify, Clean Up Files

jweatherley writes "I found a new (for me at least) use for BitTorrent. I had been trying to download beta 4 of the iPhone SDK for the last few days. First I downloaded the 1.5GB file from Apple's site. The download completed, but the disk image would not verify. I tried to install it anyway, but it fell over on the gcc4.2 package. Many things are cheap in India, but bandwidth is not one of them. I can't just download files > 1GB without worrying about reaching my monthly cap, and there are Doctor Who episodes to be watched. Fortunately we have uncapped hours in the night, so I downloaded it again. md5sum confirmed that the disk image differed from the previous one, but it still wouldn't verify, and fell over on gcc4.2 once more. Damn." That's not the end of the story, though — read on for a quick description of how BitTorrent saved the day in jweatherley's case.


jweatherley continues: "I wasn't having much success with Apple, so I headed off to the resurgent Demonoid. Sure enough they had a torrent of the SDK. I was going to set it up to download during the uncapped night hours, but then I had an idea. BitTorrent would be able to identify the bad chunks in the disk image I had downloaded from Apple, so I replaced the placeholder file that Azureus had created with a corrupt SDK disk image, and then reimported the torrent file. Sure enough it checked the file and declared it 99.7% complete. A few minutes later I had a valid disk image and installed the SDK. Verification and repair of corrupt files is a new use of BitTorrent for me; I thought I would share a useful way of repairing large, corrupt, but widely available, files."
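For the curious, the check Azureus ran boils down to hashing the file piece by piece and comparing each digest against the SHA-1 piece hashes stored in the .torrent's info dictionary. Below is a minimal sketch of that idea; it assumes the piece length and the list of 20-byte piece hashes have already been extracted from the .torrent (bencode parsing is omitted), and the function and file names are made up for illustration, not any client's actual API.

    import hashlib

    def verify_pieces(path, piece_length, piece_hashes):
        """Return the indices of pieces whose SHA-1 digest doesn't match."""
        bad = []
        with open(path, "rb") as f:
            for index, expected in enumerate(piece_hashes):
                piece = f.read(piece_length)  # the final piece may be shorter
                if hashlib.sha1(piece).digest() != expected:
                    bad.append(index)
        return bad

    # Hypothetical usage; the values would normally come from the .torrent file:
    # piece_hashes = [raw_hashes[i:i + 20] for i in range(0, len(raw_hashes), 20)]
    # bad = verify_pieces("iphone_sdk_beta4.dmg", 4 * 1024 * 1024, piece_hashes)
    # print(f"{100 * (1 - len(bad) / len(piece_hashes)):.1f}% of pieces verified")

Only the pieces that fail the check need to be fetched from the swarm, which is why a 99.7%-complete image takes just a few minutes to finish.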
  • Anonymous Coward (Score:3, Informative)

    by Anonymous Coward on Sunday May 04, 2008 @07:37PM (#23295468)
    Those of us who use BitTorrent for *ahem* illegal purposes have been doing this since the beginning. The only way to get rare and complete downloads was to take the files to other trackers and match them against another md5 to finish the download.

    It's like getting parity files over on usenet to fix that damned .r23 file which is just a bit too short for some reason :)
  • Scheduling (Score:4, Informative)

    by FiestaFan ( 1258734 ) on Sunday May 04, 2008 @07:38PM (#23295476) Homepage

    Many things are cheap in India, but bandwidth is not one of them. I can't just download files > 1GB without worrying about reaching my monthly cap, and there are Doctor Who episodes to be watched. Fortunately we have uncapped hours in the night
    I don't know about other bittorrent clients, but uTorrent lets you set download speed caps by hour (like 0 during the day and unlimited at night).
  • by DiSKiLLeR ( 17651 ) on Sunday May 04, 2008 @07:46PM (#23295552) Homepage Journal
    I've used bittorrent for this purpose many times in years gone by.

    Especially with our slow links, or worse yet, on dialup (if I go enough years back) in Australia.

    Before bittorrent I would use rsync. That required me to download the large file to a server in the US on a fast connection, then rsync my local copy against the server's copy to fix whatever was corrupt in mine.

    It works beautifully. :)
  • Re:Nice (Score:5, Informative)

    by gomiam ( 587421 ) on Sunday May 04, 2008 @08:11PM (#23295742)
    It should be quite simple. Let's say torrentA leaves you with a corrupt/incomplete filesetA (one or more files, it doesn't really matter). Let's suppose torrentB contains the files in filesetA, perhaps with different names in its own filesetB.

    Ok, you load torrentB in your favorite BitTorrent client and start it up. It will automatically create 0-sized files with the names in filesetB (at least, all clients I know do that). Stop the transfer of torrentB, and substitute the 0-sized files in filesetB with the corresponding files in filesetA (this may require some renaming). When you restart torrentB, your BitTorrent client will recheck the whole filesetB, keeping the valid parts in order to avoid downloading them again. Voilà! You have migrated files from one torrent to another.

    Note: You should make sure that the files you are substituting in are the same files you want to download through torrentB or, at least, keep a copy around until you see that the restart check accepts most of their contents.

  • by Anonymous Coward on Sunday May 04, 2008 @08:16PM (#23295762)
    Yes, there are, though most of the newer ones are SHA-1 digests. They're not usually seen in the "public front page" download areas and aren't universal, but they are generally present for update and security-patch downloads linked from the tech literature and developer sections.
  • by complete loony ( 663508 ) <Jeremy@Lakeman.gmail@com> on Sunday May 04, 2008 @08:28PM (#23295826)
    The TCP checksum offloading on nForce 4 motherboards (I have one) was notorious for corrupting TCP packets and letting them be passed up to the application. That's the most likely kind of failure that could reproduce this problem.
  • by Anonymous Coward on Sunday May 04, 2008 @08:33PM (#23295868)
    It's obvious you have no clue how the Internet actually works. Shit happens, but the Internet is designed for it. Dropped packets cause retransmission, not corrupted data; the Internet drops packets *by design* and the entire system is designed around that. Flipped bits happen, but they are detected by multiple checksums which make it astronomically unlikely for corrupt data to remain undetected. Nope; if you receive corrupt data, the blame is squarely on some piece of software fiddling with your packets and changing the checksums to match. Maybe it's the crappy cheap NAT router, or the ISP's deep-packet-inspection P2P filter, or their (not so) transparent HTTP proxy. But whatever the cause, it's almost certain that software is to blame.

    I'd bet $100 that if he did the same download over HTTPS, thus preventing software from meddling with the packet contents, it would come out perfect.
  • Re:Scheduling (Score:3, Informative)

    by urbanriot ( 924981 ) on Sunday May 04, 2008 @08:39PM (#23295910)
    I don't know about other bittorrent clients, but uTorrent lets you set download speed caps by hour (like 0 during the day and unlimited at night).
    Azureus also has an excellent scheduling plugin written for it - http://students.cs.byu.edu/~djsmith/azureus/index.php [byu.edu]
  • by BobPaul ( 710574 ) * on Sunday May 04, 2008 @08:44PM (#23295942) Journal

    As per the topic, Bittorrent fixed the problems - didn't cause them - so a failing router is not likely the problem.
    You misunderstood his comment; please read it again. In his story, bittorrent didn't cause any problem either; it identified a problem via the same mechanism (hash checks of file pieces) that solved the problem in the OP.

    While I agree that bad RAM is most likely the issue, it's still possible that bad RAM in a router, or even something goofy going on in a router such as the firmware bug described, could have caused problems. The bits were mangled before they were written to the disk. They could have been mangled by anything that processed them on the way from Apple's website to his HD, including Apple's website and the HD itself. That embedded devices tend to be more reliable does not mean they don't break and do weird things sometimes.
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Sunday May 04, 2008 @08:51PM (#23295988) Journal

    It's obvious you have no clue how the Internet actually works. Shit happens, but the Internet is designed for it... Maybe it's the crappy cheap NAT router
    I'm fairly sure that's what GP meant.

    Oh, and TCP checksumming isn't perfect.
  • by CastrTroy ( 595695 ) on Sunday May 04, 2008 @08:56PM (#23296028)
    I had the same problem. What's really terrible is that I don't think they ever fixed the problem. That drove me nuts for a few weeks trying to figure out why all my downloads were corrupted.
  • by zippthorne ( 748122 ) on Sunday May 04, 2008 @09:08PM (#23296096) Journal
    To be fair, very few British cops know how to use guns. At least, if the gun control advocates on my side of the pond can be believed.
  • by Skapare ( 16644 ) on Sunday May 04, 2008 @09:17PM (#23296170) Homepage

    Flipped bits happen, but they are detected by multiple checksums which make it astronomically unlikely for corrupt data to remain undetected.

    I actually saw this happen once ... the astronomically unlikely [1]. TCP accepted the corrupt packet. I'm sure it will never happen again. Fortunately, rsync caught it in the next run.

    One problem I ran into once with a certain Intel NIC was that a certain data pattern was always being corrupted. TCP always caught it and dropped the packet. There was no progress beyond that point because the hardware defect always corrupted that data pattern. It turned out to be a run of zeros followed by a certain data byte (I tried different data bytes and different run lengths, and those never got corrupted). What the NIC did was drop 4 bytes and put 4 bytes of garbage at the end. I suspect it was a clock synchronization error. I got around the problem by adding the -z option to rsync (which I normally would not have done with an ISO of mostly compressed files). Another way would have been to do the rsync through ssh, either as a session agent (as rsync itself can do) or over a forwarded port (how I do it now for a lot of things).

    [1] ... approximately 1 in 2^31-1 chance that the TCP checksum will happen to match when the data is wrong (variance depending on what causes the error in the first place) ... which approaches astronomically unlikely. Take 1 Terabyte of random bits. Calculate the CRC-32 checksum for each 256 byte block. Sort all these checksums. You will find 2 (or more) data blocks with the same checksum (or a repeating pattern in your RNG). Why? Because CRC-32 has 2^32-1 possible states, and you have 2^32 random checksums.
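    For what it's worth, the footnote's thought experiment can be shrunk to something you can actually run: with random 256-byte blocks and CRC-32, the birthday effect typically produces the first collision after only on the order of 2^16 blocks (roughly 16-25 MB of data), nowhere near a terabyte. An illustrative sketch, nothing more:

        import os
        import zlib

        # CRC-32 random 256-byte blocks until two different blocks share a CRC.
        # With 2^32 possible CRC values, the birthday bound predicts a collision
        # after roughly sqrt(2^32) = 65,536 blocks, i.e. ~16 MB of random data.
        seen = {}
        count = 0
        while True:
            block = os.urandom(256)
            crc = zlib.crc32(block)
            if crc in seen and seen[crc] != block:
                print(f"collision after {count} blocks (~{count * 256 // 2**20} MB)")
                break
            seen[crc] = block
            count += 1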

    But whatever the cause, it's almost certain that software is to blame.

    Agreed. Since it's software's responsibility to detect and fix such errors, when the problem happens the famous finger of fault points at the software.

    I'd bet $100 that if he did the same download over HTTPS, thus preventing software meddling of the packet contents, it would come out perfect.

    Your $100 is safe.

  • Re:!new (Score:5, Informative)

    by Anonymous Coward on Sunday May 04, 2008 @10:09PM (#23296454)

    I don't think this tactic is very common, though, as most people seem to have no fucking clue how BitTorrent works. I've seen torrents with gigantic multipart RARs, with an SFV of those. Let's see... so, my torrent software is already checksumming everything, and RAR has a builtin checksum too, or at least, acts like it does (it says "ok" or not) -- and on top of that, there's an SFV checksum (crappy CRC32), too. Never mind that RAR saves you at most a few megabytes (video is already compressed), and, given the size of these files, you'll spend more time unpacking the RAR than you would have spent downloading the extra couple of megs. Or that, once you unpack and throw away the RAR, you can't seed that torrent from the working video. Or that multipart anything is retarded on BitTorrent, as the torrent is splitting it into 512k-4meg chunks anyway.
    People who aren't aware of the full situation often make this complaint. These multipart rar files are "scene releases".

    First of all, scene releases are _never_ compressed; it's always done with the -0 argument, which makes it basically equivalent to the unix split program. If a file is to be compressed, it is done with a zip archive, and the zip archive is placed inside the rar archive. This is because rar archives can be created/extracted easily with FOSS software, but cannot easily be compressed/decompressed. This was more of an issue before Alexander Roshal released source code (note: not FOSS) to decompress rar archives.

    Second, people often have parts of, or complete, scene releases and are unwilling to unrar them (often because they sit on an intermediary, like a shell account somewhere where the law isn't a problem).

    Third, people follow "the scene" and try to download the exact releases that are chosen by the social customs of the scene (I am not going to detail those here); thus, "breaking up" (i.e., altering) the original scene release is seen as rude.

    Fourth, the archives are split into precise sizes so that fitting them onto physical media works better; typically the individual archive size is a rough divisor of 698 MB (CD), ~4698 MB (DVD) or ~8500 MB (dual-layer DVD).

    Fifth, archives are split due to poor data integrity on some transfer protocols (though this is largely historical nowadays); redownloading a corrupted 14.3 MB archive is easier than redownloading a 350 MB file.

    Sixth, traffic at this scale is measured in terabytes, with some releases being tens, or sometimes hundreds, of gigabytes in size. Thus there are efficiency arguments for archive splitting: effective use of connections, limited efficiency of software (sftp scales remarkably poorly, though that is beginning to change - not that sftp is used everywhere), use of multiple coordinated machines, and so on. This is an incomplete list of reasons; it is almost as though every time a new challenge is presented to the scene, splitting in some way helps to solve it.

    AC because I'm not stupid enough to expose my knowledge of this either to law enforcement, or to the scene (who might just hand me over for telling you this - it has been done). Suffice to say that this is more complex than you understand, and that even this level of incomplete explanation is rare.
  • by Tawnos ( 1030370 ) on Sunday May 04, 2008 @10:43PM (#23296642)
    TCP has a 16-bit checksum, which means there's a 1 in 2^16 chance of an error getting by it. Let's assume, for a moment, that the packets were sent 1 KB at a time (the Ethernet maximum is larger than this, but it's an easy number). For a 1.5 GB file (assuming base 10 throughout for simplicity), this means a total of 1,500,000 packets must be transmitted. If every one of those packets arrived corrupted, the TCP checksum alone would let roughly 22 of them through undetected. Even though there are additional checks at layer 2, the fact is that when dealing with large amounts of data, relying on TCP for data integrity is not enough.
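    To make the hidden assumption explicit, here is the same back-of-the-envelope in code; the per-packet error rate is a made-up knob, since the ~22-packet figure only holds if literally every packet arrives damaged:

        FILE_SIZE = 1_500_000_000   # 1.5 GB, base 10 as in the comment
        PACKET_SIZE = 1_000         # ~1 KB per packet for easy numbers
        CHECKSUM_BITS = 16          # width of the TCP checksum

        packets = FILE_SIZE // PACKET_SIZE      # 1,500,000 packets
        miss_rate = 1 / 2**CHECKSUM_BITS        # chance a corrupted packet passes the checksum

        for p_error in (1.0, 0.01, 0.0001):     # fraction of packets corrupted in transit
            undetected = packets * p_error * miss_rate
            print(f"error rate {p_error:>8}: ~{undetected:.3f} corrupt packets slip through")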
  • Re:Nice (Score:2, Informative)

    by X0563511 ( 793323 ) on Sunday May 04, 2008 @10:52PM (#23296706) Homepage Journal
    And then, Par2 came along, and allowed more flexibility.

    We still use them, on usenet anyways.
  • Re:Nice (Score:2, Informative)

    by Christophotron ( 812632 ) on Sunday May 04, 2008 @11:02PM (#23296758)
    No, actually that's how the old par system worked. In the newer, more advanced .par2 system, the individual .rar files are divided into "blocks" and each par2 file can recover a certain number of blocks. It's much more flexible than the old par files you are referring to, and it's similar in spirit to the per-piece hashing in BitTorrent. I haven't seen any of the old-style pars in a long time.

    For example, if you are missing a total of 3 blocks (one block from 3 different files) you only need to download a very small par2 file that says "+3 blocks" and it will repair the three missing blocks. Of course, if you are missing a lot more data, even entire files, you can get several of the larger "+128" par files and it'll repair everything (assuming there is enough parity data). Often you can even request additional parity blocks, but that's only necessary if you have a *really* crappy nntp provider.
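    PAR2's real math is Reed-Solomon coding over 16-bit words, which is what lets any N recovery blocks repair any N missing blocks; as a much-simplified illustration of the idea, here is the single-missing-block case done with plain XOR (this is not how par2 is actually implemented):

        from functools import reduce

        def xor_blocks(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        # Split some data into equal-sized blocks, padding the last one.
        data = b"the quick brown fox jumps over the lazy dog " * 10
        BLOCK = 64
        blocks = [data[i:i + BLOCK].ljust(BLOCK, b"\0") for i in range(0, len(data), BLOCK)]

        # One "recovery block": the XOR of all data blocks.
        parity = reduce(xor_blocks, blocks)

        # Lose any single block...
        missing = 3
        survivors = blocks[:missing] + blocks[missing + 1:]

        # ...and rebuild it by XORing the parity block with the survivors.
        recovered = reduce(xor_blocks, survivors, parity)
        assert recovered == blocks[missing]
        print("recovered block", missing)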

  • The first rule (Score:5, Informative)

    by tux0r ( 604835 ) <magicfingers+sla ... m minus math_god> on Monday May 05, 2008 @12:33AM (#23297282) Homepage
    The first rule of Usenet: don't talk about Usenet.
  • Re:Nice (Score:3, Informative)

    by maskedbishounen ( 772174 ) on Monday May 05, 2008 @12:39AM (#23297292)
    Usenet [wikipedia.org]. All "files" (posts) are stored server-side, and folks generally have a fast pipe to their ISP (or other provider).

    With multipart binary posts, a single file is split up across many posts; between fifteen and fifty, let's say. It's common for usenet providers not to receive all the posts, so folks are sometimes left with incomplete/corrupt files. Enter the small, spanned archive formats. It's quite common to see up to 10% parity per usenet posting, especially for large files. Small split-set sizes make for easy reposting, as well.

    In regard to the grandparent, this likely relates to why said torrents are healthier: folks can bypass the leeching process and go straight to seeding. The only other means of really sharing on such binary groups would be posting (or reposting) stuff for folks. Due to limited server retention, a lot of the binary groups look down on heavy posting.

    I gave up on usenet years ago, though; well, not really. My ISP gave up on it, and I was too much of a bum to pay someone for decent service. I would encourage anyone to check if their ISP offers usenet access if they're into P2P and don't like the "2P" part that much.
  • by tucuxi ( 1146347 ) on Monday May 05, 2008 @03:28AM (#23297968)

    First, as rdebath argues, TCP only gives you a 16-bit checksum (a simple ones'-complement sum, not even a CRC).

    And furthermore, if you start calculating checksums over random data, the chances are better than 50% that you will get a collision (two chunks of data with the same checksum) somewhere around the 300th try (this is known as the "birthday paradox" in cryptography). Of course, to be absolutely sure of a collision you would need at most 65537 values, but you reach a very high probability of a clash much sooner than intuition suggests.

    See birthday attack [wikipedia.org] for the math.
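    The bound is easy to compute directly for a 16-bit check; a small sketch using the usual approximation (so the numbers are not exact): the 50% point lands nearer 300 samples, while 256 samples already give roughly a 39% chance, so the comment's point stands either way.

        import math

        N = 2**16   # possible values of a 16-bit checksum

        def collision_probability(n, space=N):
            # P(at least one collision among n uniformly random values), birthday approximation
            return 1 - math.exp(-n * (n - 1) / (2 * space))

        for n in (100, 256, 302, 1000):
            print(f"{n:>5} samples: {collision_probability(n):.1%} chance of a collision")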
  • Re:Nice (Score:4, Informative)

    by meza ( 414214 ) on Monday May 05, 2008 @04:58AM (#23298322)

    I can't be bothered to do the math of how many possibilities that would be at this time of night, but it'll certainly go faster than continuing to download it from nobody.
    Not quite so. Because if you do the math (and if mine is correct at this late hour) you would see that it actually takes a pretty long time (tm). Imagine that only 1 MB was missing. You would have to calculate the hash of every single possible 1 MB file, and that is 2^8000000, or roughly 10^2400000, files. If you had a computer that could, quite unrealistically, calculate one hash each clock cycle at 1 GHz, that would still take you roughly 10^2400000 seconds; shaving nine orders of magnitude off a number that size doesn't help. As a reference, the universe according to Wikipedia is roughly 10^17 seconds old.

    Besides that, there is an information theory problem too. If the hash is 128 bits long then, on average, one in every 2^128 files will have the same hash. This might seem unlikely if you only compare a few files (such as all the files ever created by man), but against the 2^8000000 candidates we were going to hash it is actually quite substantial.
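    A quick back-of-the-envelope in code, just to put the exponents in one place (the one-hash-per-cycle, 1 GHz machine is of course imaginary):

        import math

        missing_bytes = 1_000_000                             # one missing megabyte
        candidates_log10 = missing_bytes * 8 * math.log10(2)  # log10 of 2^(8,000,000)
        hash_rate_log10 = 9                                   # 10^9 hashes per second at 1 GHz

        print(f"candidate 1 MB files: ~10^{candidates_log10:,.0f}")
        print(f"seconds to hash them: ~10^{candidates_log10 - hash_rate_log10:,.0f}")
        print("age of the universe:   ~10^17 seconds")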
  • Re:!new (Score:5, Informative)

    by dk.r*nger ( 460754 ) on Monday May 05, 2008 @07:07AM (#23298764)

    First of all, scene releases are _never_ compressed; it's always done with the -0 argument, [...] This was more of an issue before Alexader Roshal released source code (note:not FOSS) to decompress rar archives.
    So: historical, and pointless. And anyway, it's only an excuse if there's any point in using RAR in the first place. Let's see...

    Second, people often have parts of, or complete, scene releases and they are unwilling to unrar them (often because it's an intermediary, like a shell account somewhere where law isn't a problem).
    So they should use BitTorrent. Run a seed on your [strike]compromised windows host[/strike] "shell account".

    Third, [....] social customs of the scene (I am not going to detail those here), thus, "breaking up" (ie, altering) the original scene release is seen as rude.
    Oh, I think we're at the core of the problem. Pale teenagers in their mothers' basements getting hurt feelings. I appreciate that someone will rip the Lost episodes in HD pretty much as they are being broadcast, and I actually look for some "group names" in the torrents I get - because they provide one file, not a RAR. In other words, provide what people want, and they will respect you for that. Make their life hard, and they will not care about your 1998 social customs. Like anything else in life.

    Fourth, [...]fitting the archives onto physical media works better
    Yawn. 1998 called; they want their infrastructure back. Hard drives are cheaper than dirt. Five years ago "the scene" at my college was exchanging 250 GB hard drives.

    Fifth, archives are split due to poor data integrity on some transfer protocols
    SO USE BITTORRENT! It's easier and faster and better and more fun, but of course less 'leet than using [strike]compromised windows hosts[/strike] "shell accounts"

    Sixth, [...] Thus, there become efficiency arguments for archive splitting;[...]it is almost as though every time a new challenge is presented to the scene, splitting in some way helps to solve it.
    No, BitTorrent does ALL this for you. ALL of it.

    AC because I'm not stupid enough to expose my knowledge of this either to law enforcement, or to the scene (who might just hand me over for telling you this - it has been done).
    Badass gangster!

    Suffice to say that this is more complex than you understand, and that even this level of incomplete explanation is rare.
    What? Moving files around on the internet is "more complex" than we understand? It's probably the simplest fucking thing there is. Let me put it very simply for you: 1) Multi-file RARs made sense back when people got their stuff from FTPs and newsgroups. 2) It's the past. It's pure nostalgia. Get over it. If you're not using your "scene" FTP servers as torrent seeds instead, you're wasting your resources.
  • Re:Nice (Score:2, Informative)

    by sexconker ( 1179573 ) on Monday May 05, 2008 @12:08PM (#23301654)
    No, you can't.

    A checksum is not unique.
    A 32 MB file is 2^28 bits, so an astronomical number of different files (on the order of 2^(2^28 - 32) of them) map to any given 32-bit checksum.

    Using "given" data to help narrow the search is a bad idea as well - there is no guarantee that the given data is correct, unless you have individual checksums for them. Bittorrent does do checksumming on each individual chunk (I believe), so you could narrow your search space to the size of the incomplete and missing chunks only. The existing data in incomplete chunks would be almost useless, since you don't know if that data is correct. But you can start your search assuming it's correct, (it probably is, mostly) and speed things up.

    But the bottom line is that checksums are smaller than the data they verify. Much smaller. Consider a simple example of a 2-bit checksum on an 8-bit chunk of data. Our checksum simply counts the ones, and rolls over.

    00000000 : 00 (0 ones)
    00011010 : 11 (3 ones)
    10110111 : 10 (6 ones - 110 is truncated to 10)

    There are, on average, 64 ways to get any particular checksum.
    2^8 (data length) / 2^2 (checksum length) = 2^6 = 64. And that's with 25% of our data duplicated in checksums.
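    Counting the toy checksum's preimages directly (a throwaway sketch) shows that the 64 is an average; if I've counted right, the four checksum values collect 72, 64, 56 and 64 of the 256 possible bytes:

        from collections import Counter

        def toy_checksum(byte):
            # Count the ones in an 8-bit value and keep only the low 2 bits ("rolls over").
            return bin(byte).count("1") & 0b11

        counts = Counter(toy_checksum(b) for b in range(256))
        for value, n in sorted(counts.items()):
            print(f"checksum {value:02b}: {n} of 256 inputs")
        print("average preimages per checksum value:", 256 // 4)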

    A checksum is a check. It is not a guarantee nor is it a blueprint from which you can reconstruct the original data. In certain cases it would be feasible - if you're downloading a thesis about Skittles, and it's corrupted, you could perform a brute force search (like you described) on the (small - it's just text) data, and then sort the matches by # of times "Skittles" is present, then by the % of data that is ASCII. You then hand verify the top 20 results or so, and you'll probably have it.

    The same could theoretically be applied to AVIs by enforcing the AVI frame structure (throw out checksum matches that don't generate valid AVI files), attempting to grab audio out of the generated files, and then doing a frequency analysis of the audio - rank the results in terms of % of audio that falls within normal listening ranges (since it's almost guaranteed that audio in an AVI will be compressed in a lossy format).

    You could do analysis of the video frames and such too. But the bottom line is that it's a HUGE undertaking - just redownload the damned thing, pay for it, or write your own paper. If it's vital data then go ahead, brute force it and waste your life away.

    PAR2 files are neat - they give you chunkettes of parity data at different offsets. This allows you to potentially patch holes in data and reconstruct the original files. WinRAR (and other programs) do give you the option to create a recovery record that's placed in the original RAR (or whatever format) files. The problem is that you then have to download the recovery data. With PAR files, you don't download them unless you need them. The downside is that availability then becomes a problem.
