IETF draft on different IPv4 addressing scheme

skuzbunny writes "The IETF draft "The Mathematical Reality of IP Addressing in IPv4 Questions the Need for Another IP System of Addressing" has some really interesting comments on IPv6. Quote: 'I was indeed successful in the elimination of the problems associated with IP Address Flooding inherent in IPv4 and the complexities of IPv6. In short, small business and single family dwellings can now have the option of having their own private IP Addressing Scheme.'" Interesting, particularly if I understand the math correctly. Can anyone who's actually qualified to comment on this do so below?
  • First let's consider this:

    All gripes about the poor writing style aside, this comes across as mostly nonsense. He's missed the point of classing: a class A address has a netmask of 255.0.0.0, which means that if you take 100.x.x.x as a class A address, then 100.2.0.30 falls under it. You can't use 255.255.0.0 and say "oh, I have 100.2.0.30 and it's on a different network"; no, it isn't, because it's still in the original class A network.


    Now, take the obvious fact that regardless of what is done, at some point we will *HAVE* to move to a new IP scheme anyhow. This is simply a poorly planned idea to delay the inevitable. My advice: everyone start learning IPv6 and how it works, so that 20 years from now when we're doing this again, I don't have to read another article this poorly thought out.
  • The server seems to be a bit /.ed at the moment, so I haven't read the whole thing. I don't think there's any maths in the first few hundred lines though. In my experience you can tell a reasonable proof even if the language is poor. I couldn't see any coherent logic flow here.
  • I couldn't stand reading the whole thing, but here's what I got out of it: he's saying that, first, Class D and E aren't being used, and we could simply use those addresses. That's actually not a bad idea. His second idea is idiotic. He's saying that the binary addresses don't need to use 8-bit octets. Right now, every octet is 8 bits, even if the address is 32.32.32.32 (0010 0000.0010 0000.0010 0000.0010 0000 in binary). He's saying there's an alternate address of 100000.100000.100000.100000. Unfortunately, we'd have to rewrite how TCP/IP works in order to do that (so why not just implement IPv6, is my question). If you didn't update, you'd choke when trying to get to the binary 100000.100000.100000.100000 website.
  • I wish it was so.

    Cisco pretends to do this, but is not entirely successful. When setting up a router for the first time, you still have to figure out which network class your IP network is, and subtract subnet bits from that. (If you use the setup dialogue. It saves time otherwise)

    i.e. subnet bits: 8 != CIDR /8

    This can sometimes be annoying. (every time I forget, it's annoying. :-)

    We still use CIDR for anything else, including IP address delegation for the customers of the company where I work.

    Oh, well.
  • I have never read such incoherent gibberish in my life. Every sentence is tortured beyond measure.

    Here is his first sentence:

    "This paper was necessitated by an overwhelming desire; an attempt to end the apparent disparity in the dissemination of information absent of the logical and thoroughness in rendering an explanation of the IP Addressing Scheme."

    I _think_ that this is a rough translation:

    "I wrote this paper because I found all existing descriptions of the IP Addressing Scheme to be inadequate."

    I suggest that either Mr. Terrell has ensnared us in some rather bizarre joke, or that English is not a language he has ever learned.

    I honestly don't know whether to vote for the former or the latter.

  • English -> German -> English

    This really improves the spelling too!

    ABSTRACT:

    This paper was required by an overwhelming desire; an attempt to terminate the
    obvious difference in the data communication of the information absent from the
    logical and from the Thoroughness, if an assertion of the turning design will
    transfer IP. in order to transfer a more pointed fact, I had to lead a check of
    certificate of Cisco. However this can be never completed, if the information,
    which is used necessarily and in the preparation of it, lacks passage and
    reproduces the errors, which concern fundamental information. To say
    unnecessarily, my efforts were not in the vein. That is, like a direct result of this
    transfer, I the unterstreichenen errors recovered, calculates a possible alternative
    approximation to the turning design IPv4 and does not extend its category system
    (that any longer inside use is).

    http://babelfish.altavista.com/cgi-bin/translate?
  • The fact that you can't subdivide gold beyond the atom level has nothing to do with the axiom of choice. I could challenge you to carve a Mandelbrot set out of gold, and you couldn't do that, for the same reason: atoms provide a maximum resolution.

    The axiom of choice is only required to distinguish between some of the things that atomic structure prevents you carving out of gold, and other things that atomic structure prevents you carving out of gold. It isn't that the AoC is inapplicable to real life in this instance: it's that real life doesn't let you get as far as the point where you find out whether you can apply the AoC in practice or not.

    The AoC deals with uncountable things, but that isn't why it gets bad press as an axiom: lots of Cantor's stuff is far more widely accepted than the AoC even though it deals with uncountable things at least as much.

    Mathematical pedantry over. Sorry.

  • There should be a best of /. award for this!! Were I a moderator (of course), this would off the real number scale be.
  • Hmm, actually, in a class B it would be (256 * 256) - 2 = 65,534. You _can_ have 'all zeroes' or 'all ones' in the final _octet_, as long as the host portion of the address in its entirety isn't 'all zeroes' or 'all ones'.

    e.g.
    192.168.0.[1-255] are all valid
    192.168.[1-254].x are all valid
    192.168.255.[0-254] are all valid

    Thus, the only addresses which aren't valid in this instance are 192.168.0.0 and 192.168.255.255.

    (I think this is buried somewhere in RFC1812.)
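
    A minimal sketch of that rule, for anyone who wants to check it by hand (the 192.168.0.0/16 addresses and the bit-twiddling are my own illustration, not anything quoted from the RFC):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint32_t mask     = 0xFFFF0000u;    // 255.255.0.0, i.e. a /16
        const uint32_t hostbits = ~mask;          // 0x0000FFFF
        const uint32_t addrs[]  = { 0xC0A80001u,  // 192.168.0.1
                                    0xC0A8FF00u,  // 192.168.255.0
                                    0xC0A80000u,  // 192.168.0.0     (network)
                                    0xC0A8FFFFu };// 192.168.255.255 (broadcast)

        for (uint32_t a : addrs) {
            uint32_t host = a & hostbits;         // host portion only
            bool usable = (host != 0) && (host != hostbits);
            std::printf("%u.%u.%u.%u: %s\n",
                        (unsigned)(a >> 24), (unsigned)((a >> 16) & 0xFF),
                        (unsigned)((a >> 8) & 0xFF), (unsigned)(a & 0xFF),
                        usable ? "usable host" : "reserved (all-zeroes/all-ones)");
        }
        return 0;
    }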

    "What do you want to boot today?"
  • The main problem with IPv4 that IPv6 is trying to solve is a lack of address space. By using IP masquerading, that problem can be alleviated indefinitely, at the cost of increasing the lag time. You get one IP address, which you then use IP masquerading to get up to 2^32 (minus oddballs like 127.*) addresses internally.

    What if the masquerading machine fails?

    Masquerading introduces a single point of failure, as there is no way to balance the load between different masquerading servers. Since every node uses the masquerading machine's IP-address, you can't tell it to switch to another machine without losing all established connections. The only solution is to use different subnets with different masquerading gateways, so only half of your network loses its net-connection when a masquerading machine goes down.

    And if that's not enough addressing for you, you can run IP masquerading on each machine of your internal network, increasing the layers indefinitely.

    It doesn't make any sense to masquerade as a private IP. Just use a private class A network.

    IPv6 is way too scary to actually work :)

    Scary? It's actually meant to be invisible :-)

  • > ummm....fermat's last theorem has been proven.

    Yes.
    Which is why in the third edition of D. E. Knuth's "The Art of Computer Programming" it has been downgraded to a difficulty rating of 45 instead of 50. The references given in TAOCP are:
    - A. Wiles, Annals of Mathematics 141 (1995), 443-551
    - P. Ribenboim, "13 Lectures on Fermat's Last Theorem" (New York: Springer, 1979)
    - W. J. LeVeque, "Topics in Number Theory 2" (Reading, MA: Addison-Wesley, 1956), Chapter 3

    I seem to recall Wiles being credited with the proof.
  • When I first started reading this draft, I thought to myself, "I could help this guy redraft this in a more readable manner. It's a pity to have good ideas made inaccessible by poor writing".

    This inclination quickly vanished when it became clear that good ideas were in short supply.

    Up until a year ago, I was employed by an ISP. One of the major parts of my job was assessing and making IP assignments for customers. As a European ISP, we worked through RIPE and followed the RIPE procedures.

    There is, in principle, plenty of address space. IPv4 allows more than four billion theoretical addresses. That's plenty. The problem is that there are limits on how efficiently you can assign that space to end users. You can't realistically have 192.133.50.5 on a network in the UK and 192.133.50.6 in Australia, because it would mean having a separate routing table entry for each individual IP address. Imagine a routing table 8GB in size.

    So space is allocated in blocks. ISPs are allocated large blocks (typically 32x255 or 64x255 addresses at a time) and in turn assign small sub blocks to their customers. There only needs to be a small number (often one) of routing entries for the entire ISP and all its customers. Instead of having 4 billion routing entries, you have a few tens of thousands.

    For those who don't know, the old "classful" system of fixed-size networks (i.e. enormous, very big, or just big) is long gone. The Real Live Internet today runs "Classless Inter-Domain Routing" (CIDR), which allows any number of bits in the netmask, rather than just the traditional 8, 16 or 24 which characterised classes A, B and C respectively.

    So when a customer with three staff members and four computers purchased a leased line, you'd assign them perhaps a /29 (that is, 29 bits in the subnet mask, allowing six hosts + broadcast + network base) or a /28. Making smaller assignments makes more efficient use of the available address space. In the "bad old days" you'd in practice have had to assign a class C (254 hosts) as a minimum.

    CIDR is vastly more efficient, and it's what has kept IPv4 running up 'til now. For a random distribution of network sizes, it's around 75% efficient (that's not a real-world figure, for various reasons). The old classful system, by contrast, was only around 38% efficient. For both of these numbers you have to assume that everyone is being conscientious and using the smallest network they sensibly can.

    Now, back to the draft.

    One of the (several, and mutually exclusive) things he seems to be proposing is a 64-bit address space with the old 8-bit boundaries back on netmasks. This would give more space, but would be exactly as efficient as the classful addressing scheme - i.e. much less efficient than what we do now.

    He seems to want to get the extra 32 bits from the netmask. This reveals thinking which is muddled beyond all hope of salvation. He quite clearly doesn't understand how IP routing works in even the most fundamental sense. You see, fundamentally, the netmask is *not* carried around with the IP address. It's a setting on your host only.
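
    To make that concrete, here is roughly what an IPv4 header carries, per RFC 791. Note that there is no netmask field anywhere in it; the struct below is only an illustrative sketch of the field layout, not production parsing code:

    #include <cstdint>

    // IPv4 header fields, in order (RFC 791). Real code must still deal with
    // network byte order, the variable-length options, and alignment.
    struct Ipv4Header {
        uint8_t  version_ihl;        // 4-bit version, 4-bit header length
        uint8_t  type_of_service;
        uint16_t total_length;
        uint16_t identification;
        uint16_t flags_frag_offset;  // 3-bit flags, 13-bit fragment offset
        uint8_t  ttl;
        uint8_t  protocol;
        uint16_t header_checksum;
        uint32_t source_address;     // 32 bits, no mask attached
        uint32_t dest_address;       // 32 bits, no mask attached
        // ...followed by optional options/padding. No subnet mask, anywhere.
    };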

    His other big idea seems to be that if we do the calculations in decimal instead of in binary, we will get different (and better, no less) answers. He does seem a little confused on this point, but of course it's all the fault of those bad ol' IP designers for being deliberately obtuse:

    "Nonetheless, it should be emphasized, that the authoritative community as a whole; i.e. Authors of IP Addressing or Internetworking Fundamentals, have shown a lack of continuity and consistency regarding the actual methods, determination and or actual explanation of the processes involved in these calculations. Where by, it has been a consistent error regarding the confusion or inability to differentiate between the calculation of the Decimal Number and the Binary Number for their individual determination. Which, to say the very least, has rendered the understanding of the most significant part of the concept of Internetworking ( that of IP Addressing ) almost an impossible undertaking."

    (Indeed, an actually impossible one in his case).

    He also comments at one point that the failure of the IP architects to consider extending the address range is probably the cause of the Y2K problem.

    Oh well. The man's an idiot. He has exactly the understanding of IP addressing you'd expect from someone who attended (and slept through) two lectures on the subject, glanced briefly at RFC791, again at RFC1550, understands neither and thinks he's an expert. The sort of person who tells you about the font size in his .txt text/plain document. The sort of person who cites himself twice, and then cites the same RFC twice too.

    In short, an idiot. Ignore him.
  • People couldn't care less which is more practical and efficient, NATing with IPv4 or plain old IPv6; they'll want whichever is the latest release and has all of the fancy new bells and whistles (IPv6), most of which are nice extras but unnecessary.
  • I attempted to read it. It was unreadable.
    --
  • Actually, classes A, B and C are things of the past. Almost everything, everywhere uses CIDR these days.

    --

  • An entertaining little rant.

    In case anyone was wondering whether the guy is a crank, Reference [1] contains Mathematical Proof:


    1. E. Terrell ( not published notarized, 1979 ) " The Proof of
    Fermat's Last Theorem: The Revolution in Mathematical Thought "
    Outlines the significance of the need for a thorough understanding
    of the Concept of Quantification and the Concept of the Common
    Coefficient. These principles, as well many others, were found to
    maintain an unyielding importance in the Logical Analysis of
    Exponential Equations in Number Theory.


    To complete the Proof, simply use the corollary of the Taniyama-Shimura Conjecture, which is that if you have proved FLT and your name is not Andrew Wiles, you are a crank.

  • If this guy doesn't pass his Cisco exam, will he turn into a pool of iridescent goo and be claimed by Nyarlathotep? It would almost make up for his horrible writing.
  • by gsfprez ( 27403 ) on Wednesday August 25, 1999 @11:51AM (#1725345)
    The problem that many of my detractors (who Should be Obvious to you by now). Is that They have more problems with, ( of course ) the subnet of my presentation ( table 1 ). Needles to say, Nevertheless. That they more than Likely do not comprehend ( of course ) the Fundamentals of the I'm a Fucking Retard Rule ( Needless to say, similar to my Octet rule ).

    Never the less, it should be Obvious why I didn't ( or should i say, Couldn't ). Needless to say, pass the fucking Cisco exam because my head ( or never the less, what is on top of my head ) is so far.

    Just imagine! Shoved up my ass, that this paper should be my addmitance paperwork out of computer ( or network ). Consutlting/IT Professional, and into scooping M&M's for Dary Queen.


    if you read this hampsters paper all the way thru.. take off two points. Take off 3 if you printed it out to read it later.
  • Have you seen what an IPv6 address looks like? I believe it's 8 groups of 4 hexadecimal digits.
    So take the number of IPs you just mentioned, and raise it to the fourth power.
    --
  • That's a terrible idea. There are private network ranges set aside for private internal networks. Use them!

    There's absolutely no situation where having a clashing namespace is better in any way!

    --

  • Here it is:

    "All of those *.FF and *.00 addresses are wasted! Let's assign them to hosts! This won't break anything! I can prove it mathematically! It's all because nobody realizes that binary numbers are magical!"

    In other words, the whole paper is essentially garbage.

    Interestingly enough, the author mentions that all of this incoherent rambling is the result of studying for a Cisco Certification examination. Someone should contact Cisco, and inform them that the brain-eating forces of Yog-Sothoth have taken over their textbook editing department.

  • This paper reminds me of an article I read a long time ago (1988?) in Scientific American. The author (Professor Arlo Lipof) claimed to have invented a mathematical equation that allowed him to cut a 1"x5"x8" block of gold, and reassemble it into a 1"x8"x8" block (which resulted in a volume increase of about 1.5%.) The article was complete with diagrams and went on for 3 pages on the topic, very much like this paper.

    (The SA article immediately activated my BS meter, but I got about 1/3 of the way through before realizing that it was published in the April edition. Try to make an anagram of "Arlo Lipof" and see what you get :o). If this is a joke, he's a little out of season.)
  • exactly..

    if you're going to muck up IPv4 addressing, then why not just do it the _Right Way_ and do IPv6?

    Besides, we're just going to continue to see more and more firewalls and more and more NAT to solve the problem, anyway.
  • The main problem with IPv4 that IPv6 is trying to solve is a lack of address space. By using IP masquerading, that problem can be alleviated indefinitely, at the cost of increasing the lag time. You get one IP address, which you then use IP masquerading to get up to 2^32 (minus oddballs like 127.*) addresses internally. And if that's not enough addressing for you, you can run IP masquerading on each machine of your internal network, increasing the layers indefinitely.


    IPv6 is way too scary to actually work :)

  • Firstly, this isn't an official draft of an IETF working group - anyone can submit a draft, even if it's this lousy. (Working group drafts are of the form draft-ietf-working_group_name-*)

    Secondly, IPv6 isn't really that complex, especially considering this proposal isn't exactly simple (would it really be easier to roll this out instead?!). An excellent starting point is the Internet Architecture Board Case for IPv6 [ietf.org]. You can also get some good information and links at the IPv6 Information Page [ipv6.org]. I have to say I don't like the way this guy slates IPv6 without explanation; maybe he needs to read up a bit more on the subject.

    Finally, although one day we may run out of IPv4 addressing, that's not the immediate addressing problem - the problem is the uneven distribution of addresses. While the USA might be all right, where every corporation that could shout "Me Too!" got a class A, there are other places in the world that are very short on addresses. I've heard it said that Madagascar has just 200 global IPv4 addresses! A whole country run through NAT! *Shudder* (I reserve the right for this to be an urban legend ;)

    Anyway, there's loads of other really neat stuff in IPv6 aside from extending the address space to keep us all happy....
  • 0.0.0.0 is also used as a source address by machines attempting to acquire an IP address from their local BOOTP/DHCP server on initialization, IIRC.

    "What do you want to boot today?"
  • It's pretty easy to recognize an official IETF draft; the filename looks something like draft-ietf-rap-cops-07.txt, with "ietf" as the second word. Not like the draft we are currently talking about, draft-terrell-math-ipaddr-ipv4-00.txt, which was written by some guy called Terrell.
    Just to make sure there are no misunderstandings: an IETF draft is NOT a standard. Not even all the RFCs are standards. When someone makes a reference to an RFC, check to see if it is Standards Track or Informational or whatever.
  • er...I stand corrected. Nevertheless, I'm sure it was posted as a Sokal-like hoax for the author's jollies.

  • Let me give this a run through my Slashdot Translator

    >The Subnetting features of IPv4 did not offer much through options and
    >choice regarding IP Address assignment, allocation, or Networking in
    >general. And while Subnetting the Network ( The sub-division of the
    >Parent Network IP Address ) did relieve congestion, provided
    >performance gains, and improved management. Needless to say, these
    >were indeed significant benefits for the groping beginnings. Still,
    >it did nothing to increase the number of IP Addresses for allocation
    >to establish a new Network, that is, offer another outside connection:
    >the Parent Network. However, it did provide the IETF with a foundation,
    >if exploited, would have avoided the necessity of an urgency fostered
    >by explosive growth, to implement a new IP Addressing Scheme.

    Processing for translation.....
    ...
    ...
    Translation successful.....outputting

    Oh man...IPV4 was like so uncool...I mean...it brings me down....I can't deal. This subnet junk was like....suck....but it was cool...I think...for a while dude. And like were runnin out of ip's man...its like..damn...I can't deal....er...just forgot that...think of it like this dude...if you take small hits from the bong...you can buzz for a lot longer because you will like save some smoke for later or somethin.
    Oh man...my head is spinnin..

    End Translation....

    Hmmm...back to the drawing board...guess I was doing something special during that night of coding


  • I don't blame people who didn't make it that far through, but it's the first reference to the author's own, unpublished, 1979 proof of Fermat's Last Theorem that *finally* convinced me of the merits of the proposal.
  • but I figured I could fit it in because it had something to do with random phrase generation...

    go check this out, it's called The Jedi Training Generator [brunching.com]. Click on "yodify" and it makes cool, randomly generated (I believe) Yoda-like statements.
  • by DrZiplok ( 1213 ) on Wednesday August 25, 1999 @11:22AM (#1725363)
    If his math is anything like his grammar, you can basically write it off straight away. And if it's not, it's still impossible to work out what he's really trying to say since he's not communicating with any sort of precision.
  • Jon Postel's ghost is haunting this fool. Presumably whoever allowed him to graduate from primary school is too (or has at least quit education in disgust).
  • I dunno if anyone is still paying attention to this sub-thread but you can read more about the WEPT at http://iml.umkc.edu/english/wept001.html [umkc.edu].

    I mention that cause I've received a couple of emails about it.
  • Sorry, his main premise is that you can use the netmask as part of the IP address and ... blammo ... you get 64 bits to use. Of course, datagrams don't USE the netmask when they are moving around.

    And the bits about binary being different from decimal... those are actually pretty funny. "Mathematically," if you have two different sets that map 1:1 for all values (0-255, 00000000-11111111, 00-ff) then they are the same. The only way they would be "different" is if you encoded the "2", then encoded the "5", then encoded the "5", which of course is wrong (and is what got us into the Y2K problem in the first place).

    Here's an example: http://3235320074/ [3235320074]

    Which comes from this program:

    #include <iostream>

    int main() {
        // 192.215.17.10 packed into a single 32-bit number. The UL suffixes
        // keep the arithmetic unsigned, so 192 << 24 doesn't overflow a
        // signed int.
        unsigned long a = (192UL << 24) + (215UL << 16) + (17UL << 8) + 10;

        std::cout << a << std::endl;    // prints 3235320074
        return 0;
    }

  • This guy is a BLITHERING IDIOT! This is either a hoax, or the standards for IETF submissions have drastically dropped while I was vacationing on LV426.

    I believe what he is trying to wrap his puny brain around is Classless Inter-Domain Routing, or CIDR. This is the big workaround that was put in place a while back to extend the life of the IPv4 protocol. CIDR basically allows for variable-length subnet masks from 8 to 30 bits in length. The old classful method, 'IPv4 Classic', had only three possible subnet masks: 8, 16, or 24 bits, which correspond to Class A (255.0.0.0), Class B (255.255.0.0), and Class C (255.255.255.0). Since there were only these three netmask options, and since most people have a hard time with hex numbers, the 'dotted quad' IP number notation was invented. This made it easy for a person to 'mask' off the network portion of an IP address in their head, i.e. 87.129.44.66 is network 87, host 129.44.66. Or more likely, network 87, subnet 129.44, host 66. While this worked great for binary-impaired humans, it really brutalized the 32-bit IP address space. The smallest network that could be created could support 254 hosts, so if your donut store only has four servers and twenty workstations, then you just wasted 230 IP addresses.

    With CIDR, you can use a 27-bit netmask that will support up to 30 hosts. The relation between X and Y, where X is the number of netmask bits and Y is the number of usable IP addresses, is: Y = 2^(32-X) - 2

    • 8 bit netmask => 2^24-2 => 16777214 hosts (Class A)
    • 9 => 2^23-2 => 8388606
    • 10 => 2^22-2 => 4194302
    • 11 => 2^21-2 => 2097150
    • 12 => 2^20-2 => 1048574
    • 13 => 2^19-2 => 524286
    • 14 => 2^18-2 => 262142
    • 15 => 2^17-2 => 131070
    • 16 => 2^16-2 => 65534 (Class B)
    • 17 => 2^15-2 => 32766
    • 18 => 2^14-2 => 16382
    • 19 => 2^13-2 => 8190
    • 20 => 2^12-2 => 4094
    • 21 => 2^11-2 => 2046
    • 22 => 2^10-2 => 1022
    • 23 => 2^9-2 => 510
    • 24 => 2^8-2 => 254 (Class C)
    • 25 => 2^7-2 => 126
    • 26 => 2^6-2 => 62
    • 27 => 2^5-2 => 30
    • 28 => 2^4-2 => 14
    • 29 => 2^3-2 => 6
    • 30 => 2^2-2 => 2

    Thus, using CIDR, your ISP can allocate just enough IP addresses to suit each customer's needs, at least to within the next highest power of two.
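
    For anyone who wants to play with the formula, here is a minimal sketch that regenerates the table above; it's just the arithmetic, nothing Cisco-specific:

    #include <cstdio>

    int main() {
        // Usable hosts for each CIDR prefix length: 2^(32 - bits) - 2,
        // after subtracting the all-zeroes (network) and all-ones
        // (broadcast) addresses.
        for (int bits = 8; bits <= 30; ++bits) {
            unsigned long hosts = (1UL << (32 - bits)) - 2;
            std::printf("/%d => %lu hosts\n", bits, hosts);
        }
        return 0;
    }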

    The other technique that is widely used to preserve IP addresses is the use of Network Address Translation, NAT, a.k.a. masquerading, in conjunction with private IP network addresses. Using this scheme allows one to use a very minimal external IP range, i.e. 27 or 28 bit netmask, to support any number of internal hosts.

  • The grammar in that paper is appalling. I hope that English is not the author's native language.
  • The only real purpose of having a subnet number is for multicasting to all machines in a subnet.
    Err...no. The purpose of the subnet mask is to let you determine, given an arbitrary IP address, whether the host is on a local network (and thus you send the packet to it directly) or not (in which case you send the packet to a router or gateway that will forward it for you).
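
    Something like this, as a rough sketch of the decision every host makes when it sends a packet (the addresses and mask are made up for illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint32_t my_addr = 0xC0A80105u;   // 192.168.1.5   (example only)
        const uint32_t netmask = 0xFFFFFF00u;   // 255.255.255.0
        const uint32_t dest    = 0xC0A80203u;   // 192.168.2.3   (example only)

        // Same network part under the mask? Deliver directly; otherwise
        // hand the packet to the configured gateway/router.
        if ((dest & netmask) == (my_addr & netmask))
            std::printf("send directly on the local network\n");
        else
            std::printf("send via the default gateway\n");
        return 0;
    }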

    cjs

  • It was better in the original Yiddish.
  • Same passage, English to Portuguese then Portuguese to English:


    ==
    The features of Subnetting de IPv4 had not very offered with the options and the
    choice regarding the attribution of the IP address, the allocation, or networking in
    the general. E when Subnetting the network (the subdivision of

    The IP address of the network of the father) alliviated congestion, since that profits
    of the performance, and improved management. Needless to say, these were
    certainly significant benefits for the starts groping. Still,

    fêz nothing not to magnify the number of addresses of the IP for the allocation to
    establish a new network, that is, offers one another exterior connection: the
    network of the father. However, it supplied the IETF with a foundation, if explored,
    would prevent the one necessity urgency promoted by the explosive growth, to
    execute a project directing itself new of the IP.

    ==
    Hmm, that's not really all that different from the original ...

    --
  • I just can't understand complicated stuff. What's so bad about dumbing things down to plain old English?
  • >Just look at RISC computers which use a 32-bit
    >opcode. They sure as heck don't implement 4
    >billion different instructions.

    Being in the throes of PowerPC assembler, let me tell you that it sure as heck does *feel* like 4 billion different instructions. There are, to pick one random example, something like 24 variants of the integer 'add' instruction...

    don't mind me, I haven't had any coffee today.

    -Mars
  • This guy might have something worthwhile to say but I find it almost impossible to follow his reasoning. The author's communication skills are sorely lacking.

    -josh
  • With a reference like this in the document, it's gotta be a hoax.

    1. E. Terrell ( not published notarized, 1979 ) " The Proof of Fermat's Last Theorem: The Revolution in Mathematical Thought " Outlines the significance of the need for a thorough understanding of the Concept of Quantification and the Concept of the Common Coefficient. These principles, as well many others, were found to maintain an unyielding importance in the Logical Analysis of Exponential Equations in Number Theory.
  • Exccceeelllent. If that's what they want us to do, I'll be sharpening up my consulting pencil. It's what I already do, and I've set it up for people before. It's really a good option; keeps your network private. But it's technically more complicated; time to hang out the networking admin contracting shingle again...
  • I didn't know that joke RFCs had drafts. Note that the bibliography contains an unpublished "proof" of Fermat's Last Theorem.

    Near as I can tell (given the grammar) he assumes:

    1. Numbers in different bases are "really" in some sense different. In other words, 192(decimal), C0(hex), and 11000000(binary) are all "different numbers". So he has no problem with getting more than 2^32 addresses in 32 bits, because he is working in decimal, not binary.[1]

    2. The "dots" in the standard notation for Internet addresses really mean something. (In reality, they're just placeholders, like the commas in "100,000,000".)

    3. A "subnet" is magically attached to an IP address. (He seems to catch this error later on.)

    What he is doing is letting his addresses overlap.

    Now, since he has ambiguous addresses, he has to carry the subnet mask around in the IP packet. Instead of having a 32 bit address, we have a 32 bit address and a 32 bit subnet mask that we have to carry around in our packet. For some reason, he thinks this is better than a 64 bit address. (This would also make routing essentially impossible, but that's another story.)

    If you're going to mess with the address length (no matter how you define address!), you might as well go straight to IPv6. The whole point of IPv6 was that *any* change at all in IP was going to be such a pain that we might as well go to a whole new IP protocol.

    If you need local addressing, use address masquerading. It works, and the protocol wonks hate it.

    --
    [1] Q: Why can't Real Programmers tell the difference betwen Halloween and Christmas?

    A: It's obvious that 31(oct) = 25(dec)
  • by Anonymous Coward
    true dat. did you see one of the references was his unpublished "proof" of Fermat's Last Theorem? i think that indicates his authority on the subject. Kewlio
  • He ought to read Strunk's The Elements of Style [columbia.edu].

    Erik

    Has it ever occurred to you that God might be a committee?
  • by syNaK ( 7216 )
    And this guy wants to be CCIE qualified!?!?!?!

    *ROFL*
  • Two Points:
    This is a draft of an informational nature that didn't come from a working group in the IETF. Anyone can do this. And it's not on a standards track.
    This would require rewriting TCP/IP, which raises the question: why not just do IPv6?
  • I tried to read it and now my head hurts.

    I'm not sure that class A blocks were the problem. The shortage seemed to be with the class B blocks. Class As were too big, class Cs were too small for many organizations. The way these blocks were allocated led to routing table bloat in the core IP routers.

    There was a nice paper (Mockapetris?) that asserted that "a name is not an address is not a route", they should be three distinct entities. Unfortunately, addresses are becoming blurred with routes.

  • by PimpBot ( 32046 ) on Wednesday August 25, 1999 @12:00PM (#1725385) Homepage
    My translation:

    Jen stepped over to the couch, slowly rocking her hips with each step, accenting the graceful curves of her body. She quickly moved in next to him, noting the warmth coming from her lover. His warm hands started at her thighs, and crept up until they were under her red sweater. He moved his lips next to her face, giving a quick nibble on her ear, and losing himself in the scent of her soft blond hair. She moaned softly, and brought her face closer to her man's ear.

    "Rob," she moaned, "show me your Commander Taco."


    How's my translation?
    --------------------------
  • I can't believe someone could actually write prose like that. My best guess right now is that the author wanted to post an encrypted message and used a high-order Markov model to encode the ciphertext as a plausible English document.

    The training set for the model might be real RFC's, or possibly the U.S. Congressional Record ;-).
  • ...not "spelling".

    Though both the poster's and the author's spelling are poor.

  • Blech, I haven't read such bad prose since I took technical writing in college. And no, it wasn't from other students, but the postmodernist drivel the teacher forced us to read as part of the class.

    Which gives me an idea... what if this article is in fact a hoax, à la Alan Sokal, but directed toward the Internet community by some spiteful English lit student? Take some bogus mathematics, sprinkle in some jargon with a rudimentary understanding of network architecture, and mix it together in a dense, grammatically flawed style. "Ha! Those nerds will never know the difference! Now the joke's on them! *cackle* *cackle*"
  • This article has a flawed premise: that the only need (or even pressing need) for IPv6 is a lack of address space in IPv4.

    For a quick introduction to some of the issues in the design of IPv6, I recommend RFC 1752 [isi.edu] "The Recommendation for the IP Next Generation Protocol". Also peruse the RFC Index [isi.edu] for some of the whitepapers submitted as input to the IPng process, which led to the current IPv6 [isi.edu] Proposed Standard.

  • Pedantically, you're right.

    I was playing fast and loose with words in my post because I am trying to explain highly mathematical concepts to readers in a way they can understand. A friend of mine once said that sometimes the incorrect explanation is just clearer.

    You're right of course. Strictly speaking the problem is that real life doesn't let you apply the axiom of choice. And uncountability is not the feature of the axiom of choice that leads to the BT paradox.

    But if I sit here and try to explain all the finer mathematical details, my post becomes 3 times as long and 1/100 as clear.

  • I discovered quite by accident that fish have a natural propensity to route IPv4. Cisco have naturally chosen to suppress this information, which could be of practical benefit to millions of netsurfers and pet-shops alike.

    I must submit this to the IETF before it is too late...
  • [Warning, extremely off-topic, though this does relate to IP-on-Linux issues]

    Madwand wrote:
    (NATs, which the Linux weenies renamed "IP Masquerading" for no good reason)

    Erm, sort of.

    Having just tried to set up a GW/FW on a RH 6.0 machine with two NICs, I can tell you that IP Masquerading in the full Linux sense != NAT (masquerading maps all internal IP addresses to the GW's IP address, which isn't really NATing at all - NAT lets you map addresses in both directions). There are, in fact, NAT patches for the kernel that you can compile in and which, to all appearances work fine. The control software for the one I used is ipnatadm and it's modeled on ipfwadm.

    My problem wasn't with ipnatadm or with ipchains, however, it was with the fact that Linux's arp is basically broken (when compared with BSD 4.4 or System V arp) in that you can't arp a second IP address onto a MAC address physically in the machine (at least, not properly), which makes it hard to NAT. (Yes, I know I could have used a VIF and done the assignment of the IP to the MAC address through ifconfig, but then I would have had to do some kind of virtual NATing which none of the Linux NAT packages are capable of doing yet.)

    The basic schtick is that I threw in the towel and went back to getting OpenBSD to cooperate with the SCSI card in the GW/FW machine.

    I'd dig up some links for all this stuff, but I need to run to lunch. I'll try to come back and reply to my own message with urls after that.
  • Oh, let's not stop there.

    There's also the minor problem that it continues to propagate the horrific notion that each and every end-to-end flow on the public Internet ought to flow across a dozen different addressing realms just because that's the only way we can keep our router tables from mushrooming to the size of the Encyclopaedia Galactica.

    Gack. Slashdot has been trolled by the IETF.


  • if you read this hampsters paper all the way thru..

    Hey! Don't insult hamsters. My hamsters produce much better papers than this!

  • I think you are exactly right. He does seem to be using the same addresses in "different" networks. Does he really believe it, or is he having a good laugh?
  • Whew! Now I know why I took a few writing courses.

    I think the author is trying to say something fairly simple:

    if the "subnet identifier" (not subnet mask) for an IP address is known, the IP address can refer to different hosts depending on the subnet identifier.
    True enough, simple in concept. BUT, where are routers going to get the subnet identifier? We will have to (1) modify the IPv4 packet header, (2) change the DNS A record, (3) modify the network system calls, (4) modify the programs that use the system calls, (5) et cetera.

    Tacking on a few extra octets and calling it IPv4.1 would probably be simpler.

    Time for a reality check on this one.

  • by gr ( 4059 )
    Linux Network Address Translation [linas.org] is a really good explanation of what's available for Linux and how NAT works in general. (Or, at least, links to those things.)

    I think the package offered at Linux IP NAT Forum [tu-chemnitz.de] is the one I tried to use. There's nothing wrong with it, but Linux's arp is inherently broken to my eye and it had become too great an irritation to make Linux do what I knew I could do in an hour in...

    OpenBSD [openbsd.org], using ipf and ipnat (the real and original way to do this, also available on Solaris, I believe).
  • I don't buy the idea that it's a hoax. The comparison with Sokal seems bogus: Sokal's paper, while deliberately opaque in the manner of the field it was parodying, was literate. If it had been, littered with, spurious punctuation and Random Capitalisation, it wouldn't have worked.

    ISTM that a would-be Internet Draft hoaxer would not use a semi-literate style guaranteed to cause 90% of their readers to dismiss them as a flake within a few pages.

    An effective hoax would elicit one of two responses: amused admiration if you 'get it', or 'hey, this might work!' if you don't. If this paper *is* a hoax, it has failed.

    I think the guy has a tinfoil helmet.
  • I'd agree with the tinfoil helmet perspective IFF he actually wrote a "Proof of Fermat's Last Theorem" (or so he believes). In any case, his goal may have been merely to "get it published" (same as Sokal), which succeeded. I don't know the review process for Internet Drafts; he may be attempting to make the point that he can get drivel published in tech circles. After all, Sokal pulled the hoax on the Social Text editors; there's no info on whether its readers bought it.
  • Especially considering that your primary sources are cited first according to the CBE documentation style (1). Now we know that this person is obviously no English major, but he's _citing_ _himself_. That's pretty lame, since neither of the works he uses as his primary reference is publicly available. He doesn't even say where we could find them to discredit them.

    1. Fowler HR, Aaron JM. The Little, Brown Handbook. 7th ed. New York: Longman; 1998. 882 p.

    (which, btw, is the correct CBE style)
  • I take exception to your unfair characterisation of the B.E.F.o.Y.-S. The nit who calls himself "Eugene Terrell" has no connection with our organisation, and is in fact probably from Connecticut.
  • If someone finds a kernel of truth or reason in this article, please speak up. But don't go in there without your brain firmly strapped in.


    I've made it through most of the article. AFAICT, he postulates adding bits as a method of getting around the number-of-addresses problem, and proposes a different way of organizing subnets.


    The *one* (1) saving grace that his article has is that his proposed organizational scheme makes it relatively painless to increase the number of bits down the road, without having to reassign addresses. OTOH, it's easy enough to do that with the present system too (treat your 4-byte IP as the _least_significant_ part of a larger address).


    The article was poorly organized and incredibly obfuscated. I really do hope that this person isn't really a member of any decision-making organization. I could give a summary containing all of the useful information on it in a tenth the space, and more clearly.


    In fact, I'm seriously considering doing this just so that nobody has to wade through this monstrosity in its original form.

  • This is, of course, why the 10.x.x.x and 192.168.x.x networks are there to begin with; they're specifically set up as non-routable addresses for firewalls (NAT/IPMasq or otherwise).
    ---
    "'Is not a quine' is not a quine" is a quine.
  • Grammar and spelling are usually considered two separate entities. One can have impeccable grammar and abhorrent spelling concurrently.
  • Replicated IPs with logically AND'ed subnet masks... My head hurts, if not from the bad grammar and English, then from the contemplation of an actual address space that big. While his conclusions on expansion of IPv4 appear correct, I am terribly afraid to check his extrapolation of the current scheme. skull.technos.com had a score of trouble during the first period of numeric contemplation, and I fear another will force my brain-kernel to panic. (Or bluescreen. On some days I seem to be running NT in there.)
  • that's not real connectivity however.. :-)
    If I want my toaster at work, to talk with my coffee maker at home... I can't reasonably use IP masq... unless I have some weird port forwarding stuff... which.. while I'm sure it exists.. isn't that great of a solution.

    end of rambling.
  • This paper was necessitated by an overwhelming desire; an attempt to end the apparent disparity in the dissemination of information absent of the logical and thoroughness in rendering an explanation of the IP Addressing Scheme.
    I did this because the FAQs were all different...

    To render a more pointed fact, I needed to pass a CISCO Certification Examination.
    ... and for a paper.

    However, this can never be accomplished, if the information that is needed and used in the preparation thereof, lacks continuity and propagates errors pertaining to foundational information.
    ...
    Needless to say, my endeavors were not in vein.
    I am cool.
    That is,as a direct result of this undertaking, I corrected the underlining errors,
    I fixed it.
    derived a possible alternative approach to the IPv4 Addressing Scheme,
    I fixed it.
    and expanded its Class system ( that is no longer in use ).
    I fixed it.
    In other words, I was indeed successful in the elimination of the problems associated with IP Address Flooding inherent in IPv4 and the complexities of IPv6.
    I fixed it.
    In short, small business and single family dwellings can now have the option of having their own private IP Addressing Scheme, without the disparity resulting from the steep learning curve presented in IPv6. You can all have warez sitez at home. Easy.
    While the Internet Community at large, will not suffer a shortage of the availability IP Addresses for assigned distribution. Especially since, while the number available IP Addresses do not exceed the amount reported to be provided, if IPv6 is implemented.
    IPv6 does it better.
    It does indeed, provide enough IP Addresses to cover their continued issuance for at least another 100 years or so. Which is dependent upon the adoption of an adequate scheme for its allocation and distribution. But it's hard. So there.
  • Here's a try:

    Subnetting under IPv4 is a good idea not fully realized. While it is useful in its present form, it is flawed in that it allows the IP address space to be divided up wastefully. Use of a superset of the current subnetting functions will be needed to remedy address waste.

    At least, I think that's what he's trying to say.

    (Hey, isn't there a term used in OS-theory circles regarding overly-general division of resources? I recall it from the memory management chapter...can't recall the term....)

  • NATs are the bane of net existence; they break the end-to-end model that the Internet is based on. IP security won't work with NATs, and neither will many apps these days (lots of games don't work through a NAT).


    I happen to like NATs - they are a good way of making sure that the network inside my workplace or home isn't visible to the outside world. As far as the ISP is concerned, my house consists of the firewall machine, and my workplace consists of a firewall and a mail server, which IMO is as it should be.


    I readily agree that using NATs as a means of packing more machines into the address space is a Bad Idea - I'd like to have the potential for more than a few billion world-visible boxes. They're also a bad idea on an internal network that has to be able to see all parts of itself from all parts of itself, and for cell phone networks. However, I don't see why they're intrinsically evil.


    I haven't had a problem running games behind a masquerading firewall. Tribes 'net play works fine. Quake 'net play works fine.

  • Look at table 4, and you'll see he's getting the extra combinations from the subnet bits. He seems to be operating under the delusional assumption that the subnet mask somehow floats alongside the IP address. Combine that with his delusional ramblings about decimal vs. binary vs. hexadecimal. (Dude, those are just representations, just like 90 degrees is PI/2 radians is a right angle! I could write the numbers in octal if I wanted to and it wouldn't change their values.)

    The reality, of course, is that we can get at most 2**32 (~4.3 billion) globally unique addresses if we completely remove any artificial partitioning and special encodings that would use up encoding space. This guy's "mathematical proof" reads like some of those "random data compression" patents that Jean-loup Gailly (of Info-Zip/zlib fame) likes to discredit on his homepage.

    Of course, having partitions and special values does simplify things a lot, which is why we don't get all ~4.3 billion addresses. Just look at RISC computers, which use a 32-bit opcode. They sure as heck don't implement 4 billion different instructions.

    --Joe

    --
  • by Gleef ( 86 ) on Wednesday August 25, 1999 @12:40PM (#1725456) Homepage
    It looks to me like this draft is saying:
    A) The author feels nobody explains IP Addressing well;
    B) There is some discrepancy between the standard decimal representation of an IP address and the standard binary representation of it;
    C) The original class A/B/C method of assigning IP addresses is obsolete;
    D) The 32 bit IPv4 system could be used for another hundred years without upgrading to IPv6 if you use some obscure addressing scheme that appears to depend on B, above, and hiding some of the address in the subnet mask;
    E) Adopting this scheme will be easier than teaching people how to use IPv6.

    Well, point A is obvious; if he considers this draft to be a "logical...explanation", then no previous documentation would quite pass muster.

    He provides no clear evidence for point B. The number 119 is the same whether you represent it in decimal (119) or binary (01110111). If this is not the case, I want to hear it from a mathematician, not an IETF draft.
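
    The point is easy to check; the literals below are just different spellings of the same int (a minimal sketch, nothing taken from the draft):

    #include <cstdio>

    int main() {
        // 119 written four ways: decimal, hexadecimal, octal, and built up
        // from the binary digits 1110111.
        int a = 119;
        int b = 0x77;
        int c = 0167;
        int d = 64 + 32 + 16 + 4 + 2 + 1;
        std::printf("%d %d %d %d\n", a, b, c, d);   // prints 119 four times
        return 0;
    }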

    Point C is true, that's why we no longer use it. He apparently has either not read or not understood RFC 950 [isi.edu], which describes how to get away from the unnecessarily coarse class A/B/C system, without using his equally coarse class A-1/A-2/A-3/B-1/... system.

    Point D is not adequately documented to be of any use to anyone. The current IPv4 address allocation scheme still has a lot of wasted addresses, which could extend its life if tapped. I can't even tell if this scheme taps them, or if it just pushes big words around on the page.

    Point E is false in this instance, since fully grokking this draft is much harder than understanding and implementing IPv6. Even if it is translated and better explained, I doubt any scheme to tap a significant number of wasted IPv4 addresses would be easier than just upgrading to IPv6. This is because most of the waste is considered "expansion space" by the owners of the network addresses. Any use of these addresses would require not only reprogramming many routers, but spending a lot more time maintaining the resulting routing tables as addresses here and there get used.

    The bottom line: IPv4's not dead yet, but IPv6 is still inevitable, and this paper proposes nothing coherent.

    ----
  • The main problem with IPv4 that IPv6 is trying to solve is a lack of address space. By using IP masquerading, that problem can be alleviated indefinitely, at the cost of increasing the lag time.


    Not quite. If we only want to have an arbitrarily large number of user machines that aren't serving anything to the world at large, that works, but if we want to have an arbitrarily large number of world-visible servers, it breaks down.


    Also, you only have 65536 ports on your masquerading firewall. If you put that at, say, the top of a class A private subnet and more than 65536 machines try to access the world at a time, congestion becomes a problem.


    Though I'll admit that congestion won't be *much* of a problem under real conditions, for the next little while (Fermi 100 trillion users maximum).

  • Do you actually have any evidence that space is necessarily infinitely subdividable? If not, don't be so sure that it is.

    There was actually a Discover magazine article a few years back that talked about the possibility of space being quantized. While I haven't heard anything about the subject since, I assume that's because the question is still open, not because it's been settled one way or the other.

    Energy and matter are quantized, so it is certainly conceivable that space and time are also quantized. Again, unless you have clear evidence to the contrary, I don't think the possibility can be dismissed.

    In any case, there is an additional difficulty with applying the Banach-Tarski paradox in real life: you do have to make an uncountable number of very exacting, precise choices at once. Considering that there are only countably many seconds (and possibly even finitely many) in the lifespan of the universe, it seems like it would be difficult to pull that off.

    Just because something is out there mathematically doesn't mean we'll ever see it in the real world. For instance, the decimal expansion of pi is infinite nonrepeating. We will never see all the digits of pi laid out in sequence, since there are only finitely many atoms in the known universe, and hence only finitely many sheets of paper to write it on. The B-T paradox is the same kind of thing. I'm quite confident that you will never be able to achieve it in real life.

  • It stands for Classless Inter-Domain Routing. The CIDR FAQ [rain.net] offers a pretty good explanation.

    --

  • I think everyone is forgetting that IPv6 does more than give us ~2^128 IP addresses. IPv6 also tries to make performance improvements. For example, in IPv4 any router is allowed to fragment packets to squeeze them through the hardware's MTU. In IPv6, fragmentation is only allowed at the source of the packet. This means that the MTU for the entire path must be determined ahead of time and packets fragmented accordingly. This will lighten the load on the routers in between the source and destination, because fragmentation will already have been done and packets won't need to be broken up/reassembled. There are other improvements as well, but the point I'm trying to make is that IPv6 is the result of years of learning experience with the current IP protocol and is much more than simply solving an address space problem.
  • by David Jao ( 2759 ) <djao@dominia.org> on Wednesday August 25, 1999 @12:45PM (#1725483) Homepage
    This kind of thing is very hard for non-mathematicians to understand, but ... it actually is possible to cut up one sphere into pieces and rearrange the pieces into two spheres, each the same size as the original sphere. This particular result is famous enough to have its own name: the Banach-Tarski paradox.

    The catch (of course there is one!) is that you need to accept the axiom of choice, which basically allows you to make arbitrary choices even if those choices are too many to count. The cuts you have to make along the sphere involve choosing an uncountable number of unknown real numbers in each of the three spatial coordinates all at once.

    In real life you could not make such choices, since you are constrained to splitting a gold bar along gold atoms, which are discrete units. This lack of applicability of the axiom of choice to real life has led many in the field to reject the axiom of choice as invalid ... but that's a whole other story.

  • The netmask is a per-computer thing. It basically controls the broadcast address your computer will use for that subnet.

    Example: 255.255.255.128
    means that 192.168.0.1 through 192.168.0.126 will be valid and fine, but .127 will be the broadcast for that subnet, and it will not see the packets in the other half of the range (.129 through .254) unless there is an explicit connection. It has to do with logical hierarchies, etc.

    If this man is saying he can use Netmasks as extra address bytes, he has clearly pointed his ass at the computer and spewed forth bullshit.

    Class C subnets are of the form net.net.net.node (and have a netmask of 255.255.255.0). There are more defined in the applicable RFCs, like class A and B.

    The problem with IPv4 was that a class C was 256 addresses, a class B was 65,536 addresses, and a class A was 16,777,216 addresses. If your corporate network had more than 65,000 PCs (possible if you had many servers, and happened to be a huge accounting firm), you basically had to take 16 million addresses away from the global pool, because that's how IANA assigned IPv4 numbers.

    Ludicrous! But logical, and in fact proper. This is why IPv6 is good. We *can* piss away IP addresses easily :-)
  • That's a good insight because you were able to relate to the author. I see how the concepts might be confused. I can tell you that the subnet number, even if it were transmitted, cannot be used to augment the address. The only real purpose of having a subnet number is for multicasting to all machines in a subnet. Think of each machine as having two IP addresses, one being the multicast address. If a machine has the address 10.20.30.40 and its subnet number is 255.255.0.0, its multicast address is 10.20.255.255. When it wants to broadcast to all machines on the subnet, it simply sends to 10.20.255.255. All machines on the same subnet will listen.

    An example: I like to use class A addresses (10.x.x.x) in my masqueraded network. Within the little network, I set up Samba to communicate with my laptop. Initially, I set the subnet number of the Linux box as 255.0.0.0 while I set the laptop to use subnet 255.255.255.0. Samba has to use multicasting to perform some of its functions. When broadcasting, the Linux box was broadcasting to the address 10.255.255.255 while the laptop was listening for broadcasts on the address 10.0.0.255. Thus Samba did not work.
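
    The mismatch is just the broadcast formula applied with two different masks. A quick sketch, using a made-up 10.0.0.5 host address with each of the two masks from my example:

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint32_t addr = 0x0A000005u;                      // 10.0.0.5 (hypothetical)
        const uint32_t masks[] = { 0xFF000000u, 0xFFFFFF00u };  // 255.0.0.0, 255.255.255.0

        // Broadcast address = network part with all the host bits set to 1.
        for (uint32_t mask : masks) {
            uint32_t bcast = (addr & mask) | ~mask;
            std::printf("broadcast: %u.%u.%u.%u\n",
                        (unsigned)(bcast >> 24), (unsigned)((bcast >> 16) & 0xFF),
                        (unsigned)((bcast >> 8) & 0xFF), (unsigned)(bcast & 0xFF));
        }
        return 0;   // prints 10.255.255.255 and then 10.0.0.255
    }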

    On the other hand, when I did not understand the subnet number, I set up many computers that should have been 255.255.255.0 as 255.255.0.0. Nothing ever went wrong! The computers were able to browse anywhere on the Internet and log in to the IPX-based Novell network, which was all that seemed to matter.
  • Does that say that families and businesses can have their own subnets? That's what it sounds like to me, but then again, I'm ignorant. :)

    With adequate equipment, an otherwise monolithic candidic legume may be segmented vertically, or horizontally, into smaller, more easily manipulated fragments.
  • ...but that is some profoundly lousy writing.

    In fact it sucked so much that I was suspicious of it being a genuine IETF draft. I couldn't imagine releasing a "professional paper" to the public with language that horrific in it.

    Silly me...

    My university has a thing called the WEPT (Written English Proficiency Test) that ALL undergrads must pass before receiving a degree. I used to think it was foolish...

    This guy would have failed.
  • by Rupert ( 28001 ) on Wednesday August 25, 1999 @11:31AM (#1725499) Homepage Journal
    If I understand it correctly (and I'm not sure that I do, due to the incredibly obfuscated language) he is claiming some expansion of the IPv4 address space by using multiple instances of the same IP address, differentiated by subnet mask.

    I gave up after Chapter 3, as my head was starting to hurt.

    His mathematics is extremely suspect, both in his calculations and in his apparent amazement that binary and decimal notations do not coincide. Competent mathematicians writing for a technical audience do not generally point this out three times a paragraph.
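
    (For the record, the two notations name exactly the same values; one line of Python, my own example, makes the point:)

    print(0b11000000, int("11000000", 2) == 192)   # prints: 192 True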

    If someone finds a kernel of truth or reason in this article, please speak up. But don't go in there without your brain firmly strapped in.

    Rupert
  • Nearly everyone has remarked how extremely awful his writing is, so I won't add to the pile here. People have also noticed his startling revelation that "The distinction [between decimal and binary] is that, this is a Logical expression, that has no Equivalence. [LOL]"

    If you actually want to read his paper, just skip to the bottom where he displays his amusing tables. Any ideas what those small numbers in the last column mean (the 1, 10, and 110 ones)?

    However, if you were at all like me and dissected his "paper" for what he was really trying to say, you may have noticed (if you were successful) that he considers the subnet mask part of the address (look at Table 1 in his "appendix"). Since TCP/IP fundamentally routes an IP datagram using only the destination IP address, this won't work at all. Datagrams don't carry a subnet mask with them; masks are purely local, per-node configuration. His scheme would actually yield several thousand hosts with the same IP address, which definitely won't work.
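
    To make that concrete, here is a rough transcription (mine, not the draft's) of the fixed RFC 791 header fields, options omitted -- there is simply nowhere for a netmask to ride along:

    # (field, width in bits) of the 20-byte fixed IPv4 header
    IPV4_HEADER_FIELDS = [
        ("version + IHL",            8),
        ("type of service",          8),
        ("total length",            16),
        ("identification",          16),
        ("flags + fragment offset", 16),
        ("time to live",             8),
        ("protocol",                 8),
        ("header checksum",         16),
        ("source address",          32),
        ("destination address",     32),
    ]
    assert sum(bits for _, bits in IPV4_HEADER_FIELDS) == 160   # 20 bytes, and no netmask anywhere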

    Oh and I love : "To render a more pointed fact, I needed to pass a CISCO Certification Examination." That says it all :).

  • by agravaine ( 66629 ) on Wednesday August 25, 1999 @12:58PM (#1725521)
    His math reminds me of something I saw about 10 years ago - there was some stir in the comp.compression newsgroups over a press release by a company called WEB something-or-other (short for Wider Electronic Bandwidth). Anyway, this company claimed they had 'almost perfected' a breakthrough compression algorithm that could losslessly compress any file by a ratio of exactly 16:1. They claimed you could even apply it recursively to the output of their compressor, until you reached a size of 'about 1k'. Imagine it! They actually believed that they could take absolutely any n-byte file and map it one-to-one with some file of m bytes, where m is 1024 or so.

    You could argue, I suppose, that with godlike foreknowledge you could 'number' all the files humanity will ever produce, so that the serial number for any document ever produced could fit into under 1k, but of course your decompression table would be *enormous*. -- oh, and I guess that table would be a file, so it would need a new serial number, and thus a new table, ad infinitum. :^)

    As I recall, they even issued press releases announcing they had received VC, and were about to release a product as soon as they figured out how to solve the 'highly unusual situation when four identical numbers are at the corner of a matrix' -- they never explained this cryptic gobbledygook, and never released any details of their scheme.

    But the really amazing thing was how many yoyos in the newsgroups bought it, hook, line, and sinker, and spouted nonsense such as: "people thought Galileo was crazy, too, but it turned out he was right! Maybe there are things about your precious number theory that we haven't discovered yet!"

    Some poor soul tried to explain that there is no "advanced number theory" involved, just plain counting - there is no way to map all possible 16-byte files one-to-one onto 1-byte files, because there are 2^128 of the former and only 2^8 of the latter. You would think a reasonable person could generalize this principle to see that you also can't map all 16 KB files one-to-one onto 1 KB files, but alas, many pundits wrote back, calling the first guy an idiot for not 'noticing' that the company had 'already admitted' you could only carry out the process down to a size of 1k.
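
    The counting, spelled out in two lines of Python:

    print(256 ** 16)   # 340282366920938463463374607431768211456 possible 16-byte files
    print(256 ** 1)    # 256 possible 1-byte files -- nowhere near enough distinct outputs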

    The whole thing was pretty funny, but rather pathetic at the same time...
  • Ghod, that was one of the most unreadable pieces of crap I've seen in a while. I hope it didn't say anything important, I couldn't finish reading it.

    The guy needs to go back to grade school and, relearn basic. Rules of English punctuation. He sprinkles commas. At random with, no apparent clue about where periods belong ( to say nothing of the strange spaces around parens ) .

    I don't trust his math, either.

    IPv6 is coming, anyway. Doesn't almost everything that counts already support it?
    A plan to squeeze a few more IPs out of IPv4 is simply a quick and dirty solution that, given the exponential growth of the Internet, would only last a few years. (I have no idea how he thinks this will last another 100 years - I assume his math is as bad as his grammar.)


    He actually did propose extending the number of bits in IP addresses. The main point of the new subnetting scheme, AFAICT, is to make it easier to add these bits while keeping older addresses valid. However, his new scheme isn't necessary for that (click on "user info" to see my previous response).

  • You're right, the netmask is not transmitted in the datagram. I've tried very hard to include the diagram from RFC 791 [ietf.org] here but Slashdot does not allow me to do so in a legible way :-(

    See p.10 of the RFC for the table.

    Erik

    Has it ever occurred to you that God might be a committee?
  • I got this from the comp.compression FAQ:


    9.3 The WEB 16:1 compressor

    9.3.1 What the press says

    April 20, 1992 Byte Week Vol 4. No. 25:

    "In an announcement that has generated high interest - and more than a bit of skepticism - WEB Technologies
    (Smyrna, GA) says it has developed a utility that will compress files of greater than 64KB in size to about 1/16th
    their original length. Furthermore, WEB says its DataFiles/16 program can shrink files it has already compressed."
    [...]
    "A week after our preliminary test, WEB showed us the program successfully compressing a file without losing
    any data. But we have not been able to test this latest beta release ourselves."
    [...]
    "WEB, in fact, says that virtually any amount of data can be squeezed to under 1024 bytes by using DataFiles/16
    to compress its own output multiple times."

    June 1992 Byte, Vol 17 No 6:

    [...] According to Earl Bradley, WEB Technologies' vice president of sales and marketing, the compression
    algorithm used by DataFiles/16 is not subject to the laws of information theory. [...]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    9.3.2 First details, by John Wallace

    I called WEB at (404)514-8000 and they sent me some product
    literature as well as chatting for a few minutes with me on the phone. Their product is called DataFiles/16, and their
    claims for it are roughly those heard on the net.

    According to their flier:

    "DataFiles/16 will compress all types of binary files to approximately one-sixteenth of their original size ... regardless
    of the type of file (word processing document, spreadsheet file, image file,
    executable file, etc.), NO DATA WILL BE LOST by DataFiles/16." (Their capitalizations; 16:1 compression only
    promised for files >64K bytes in length.)

    "Performed on a 386/25 machine, the program can complete a
    compression/decompression cycle on one megabyte of data in less than thirty seconds"

    "The compressed output file created by DataFiles/16 can be used as the input file to subsequent executions of the
    program. This feature of the utility is known as recursive or iterative compression, and will enable you to compress
    your data files to a tiny fraction of the original size. In fact, virtually any amount of computer data can be compressed
    to under 1024 bytes using DataFiles/16 to compress its own output files muliple times. Then, by repeating in reverse
    the steps taken to perform the recusive compression, all original data can be decompressed to its original form
    without the loss of a single bit."

    Their flier also claims:

    "Constant levels of compression across ALL TYPES of FILES" "Convenient, single floppy DATA TRANSPORTATION"

    From my telephone conversation, I was assured that this is an
    actual compression program. Decompression is done by using only the data in the compressed file; there are no
    hidden or extra files.


    9.3.3 More information, by Rafael Ramirez :

    Today (Tuesday, 28th) I got a call from Earl Bradley of Web
    who now says that they have put off releasing a software version of the algorithm because they are close to signing a
    major contract with a big company to put the algorithm in silicon. He said he could not name the company due to
    non-disclosure agreements, but that they had run extensive independent tests of their own and verified that the
    algorithm works. [...]

    He said the algorithm is so simple that he doesn't want anybody
    getting their hands on it and copying it even though he said they have filed a patent on it. [...] Mr. Bradley said the
    silicon version would hold up much better to patent enforcement and be harder to copy.

    He claimed that the algorithm takes up about 4K of code, uses only integer math, and the current software
    implementation only uses a 65K buffer. He said the silicon version would likely use a parallel version and work in
    real-time. [...]


    9.3.4 No software version

    Appeared on BIX, reposted by Bruce Hoult :

    tojerry/chaos #673, from abailey, 562 chars, Tue Jun 16 20:40:34 1992 Comment(s).
    ----------
    TITLE: WEB Technology
    I promised everyone a report when I finally got the poop on WEB's 16:1 data compression. After talking back and
    forth for a year
    and being put off for the past month by un-returned phone calls, I finally got hold of Marc Spindler who is their sales
    manager.

    _No_ software product is forth coming, period!

    He began talking about hardware they are designing for delivery
    at the end of the year. [...]


    9.3.5 Product cancelled

    Posted by John Toebes on Aug 10th, 1992:

    [Long story omitted, confirming the reports made above about the original WEB claims.]

    10JUL92 - Called to Check Status. Was told that testing had uncovered a new problem where 'four numbers
    in a matrix were the same value' and that the programmers were off attempting to code a preprocessor
    to eliminate this rare case. I indicated that he had told me this story before. He told me that the
    programmers were still working on the problem.

    31JUL92 - Final Call to Check Status. Called Earl in the morning and was told that he still had not heard from
    the programmers. [...] Stated that if they could not resolve the problem then there would probably not
    be a product.

    03AUG92 - Final Call. Earl claims that the programmers are unable to resolve the problem. I asked if this
    meant that there would not be a product as a result and he said yes.


    9.3.6 Byte's final report

    Extract from the Nov. 95 issue of Byte, page 42:

    "Not suprisingly, the beta version of DataFiles/16 that reporter Russ Schnapp tested didn't work. DataFiles/16
    compressed files, but when decompressed, those files bore no resemblance to their originals. WEB said it would
    send us a version of the program that worked, but we never received it."

    "When we attempted to follow up on the story about three months later, the company's phone had been disconnected.
    Attempts to reach company officers were also unsuccessful. [...]"


    --
    Why are there so many Unix-using Star Trek fans?
    When was the last time Picard said, "Computer, bring
  • by mattdm ( 1931 ) on Wednesday August 25, 1999 @11:40AM (#1725540) Homepage
    Ok, I must admit I'm struggling with the grammar and punctuation. But the part I'm really confused by is where the author says (as far as I can tell) that the binary representations of numbers are not equal to their decimal representations, and that if you do a calculation in decimal, you'll get a different result than you would doing the same calculation in binary. What?

    Also, I was surprised not to find any mention of CIDR [rain.net] in the entire document. The IP class system has been obsolete for nearly five years....
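
    For anyone who hasn't met it, CIDR lets an allocation track actual need instead of class boundaries; a quick sketch with made-up numbers:

    import ipaddress

    net = ipaddress.ip_network("192.168.0.0/22")   # need ~1000 hosts? take a /22, not a whole class B
    print(net.num_addresses)   # 1024
    print(net.netmask)         # 255.255.252.0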

    --

  • by Anonymous Coward
    "While the expansion given be Table 4, renders the number of available IP Addresses as being approximately 5.46 * 10^9. Which, to say te very least, is nearly double the original value, while the Address Range remained Constant; i.e. 32 Bits."
    This guy is a total wacko if he thinks 32 bits can represent more than approximately 4.3 x 10^9 unique values. He seems to think he can get twice as many, then lose a bunch to various reservations and still come out with over 5 gigahosts.
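
    A one-liner settles the ceiling:

    print(2 ** 32)   # 4294967296 -- about 4.3 * 10^9, and that is every value 32 bits can ever hold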
  • by schon ( 31600 )
    I always thought that 0.0.0.0 was the loopback address

    No: 127.0.0.1 is the loopback address (localhost), 127.0.0.0 is the loopback network, and 0.0.0.0 in the routing table is the default route (the entry that hands everything else to your default gateway).

    $ route -n
    Destination      Gateway          Genmask          Flags  Metric  Ref  Use  Iface
    192.168.20.18    0.0.0.0          255.255.255.255  UH     0       0      0  eth0
    127.0.0.0        0.0.0.0          255.0.0.0        U      0       0      1  lo
    0.0.0.0          192.168.20.1     0.0.0.0          UG     0       0    138  eth0
  • by pb ( 1020 )
    Is jwz testing dadadodo again, or did someone else write their own dissociator?

    ...or perhaps someone was feeding zippy the pinhead and emacs doctor too many RFC's...

    ...and I have a feeling they never passed that Cisco exam.
  • So does anyone have a list of names of people who were involved with WEB? It would be interesting to track down where they work now.

    I wonder what other scams they've run to defraud venture capitalists?
  • I don't think he knows what he is talking about:

    "There yet remains a value in the IPv4 addressing Scheme, which surpasses the promises of IPv6, and could conceivably satisfy our needs indefinitely without an expansion beyond the 32 Bit address range. That is, if it were distributed with country and or state codes as its prefix."
