Microsoft Leaks Details of 128-bit Windows 8 581
Barence writes "Microsoft is planning to make Windows 8 a 128-bit operating system, according to details leaked from the software giant's Research department. The discovery came to light after Microsoft Research employee Robert Morgan carelessly left details of his work on the social-networking site LinkedIn. His page read: 'Working in high-security department for research and development involving strategic planning for medium and long-term projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM.' It has since been removed."
More information (Score:5, Informative)
Re:Not really (Score:5, Informative)
Either we're not reading the same article, or I suspect you didn't read it at all. At no point is a filesystem mentioned.
Re:Not really (Score:3, Informative)
Noooooo! I want to be able to say I have a 23488102-bit OS if that's the size of my bzImage! And once I have 1TB of porn I can call it an 8.79609302*10^12-bit operating system!
Seriously - it's one thing for some IT marketing types not to know that a 128-bit OS would need a 128-bit processor (which would be a Big Thing, especially if HP were getting back into the market of CPU design and manufacture), but for the submitter and eds not to point it out makes it look a little daft.
Re:Not really (Score:5, Informative)
It refers to a 128-bit filesystem a la ZFS, not the whole OS.
Either we're not reading the same article, or I suspect you didn't read it at all. At no point is a filesystem mentioned.
I'm with you, I don't know where he got filesystem from:
The senior researcher's profile said he was: "Working in high security department for research and development involving strategic planning for medium and longterm projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM."
Clearly says architecture.
Filesystem, or FPU... not processor or memory (Score:5, Informative)
This has been discussed on OSNews and it is most likely about the filesystem or FPU and not memory addressing.
http://www.osnews.com/story/22301/128-Bit_Support_in_Windows_8_9_ [osnews.com]
Re:128, 64, 32, 16, 8 (Score:5, Informative)
If we start using PCRAM then we are likely to want to use byte-addressable filesystems, rather than keep relying on blocks, which cuts what you can address with 64 bits down to 16EB - a lot less; there are almost certainly already people with datasets larger than that. Because PCRAM has similar characteristics to DRAM, the most convenient way of addressing it is likely to be mapping it directly into the CPU's address space, rather than treating it as a device. You could use paging tricks and only map accessed files, but having two MMUs doesn't make life very simple for operating system writers, so ideally you're going to want to have all of your persistent storage in your address space (like MULTICS: everything old is new again). If you do this, then you may well want more than a 64-bit address space within ten years. And when I say 'you' I mean 'companies with a lot of spare money to spend on IT infrastructure'.
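The map-everything model the parent describes can be prototyped today with ordinary file mapping; here is a minimal Python sketch (the temp file, its 4 KB size, and the probe string are all arbitrary choices of mine):

```python
import mmap
import os
import tempfile

# A small file stands in for "persistent storage".
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)  # one page of backing store

# Map it directly into our address space: loads and stores now go
# straight to the backing file, with no read()/write() calls.
mem = mmap.mmap(fd, 4096)
mem[0:5] = b"hello"          # an ordinary memory store...
roundtrip = bytes(mem[0:5])  # ...that is also a durable write

mem.close()
os.close(fd)
os.remove(path)
```

This is exactly the MULTICS-style model, just capped today by the 64-bit (in practice 48-bit) virtual address space.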
Re:Not really (Score:5, Informative)
Why is that important? Because it does not mean that Windows 8 will necessarily be 128-bit, just capable of being 128-bit - for all we know, his entire role is ensuring that the teams code to a set standard which allows easy porting to 128-bit in the future.
Re:Fuck Everything (Score:5, Informative)
I almost wet my pants during the Fusion ads in the Superbowl. Because they did go to 5 (+1) blades.
http://www.theonion.com/content/node/33930 [theonion.com]
Re:Not really (Score:5, Informative)
I'm still confused.
What's the point of having 128-bit compatibility? 128-bit CPUs don't even exist yet. Heck, most of us are still just using 32, and haven't even visited the 64-bit generation yet.
PAE doesn't hide mem, just can't use all at once (Score:4, Informative)
PAE doesn't "hide" memory, really. You can only address 4GB (i.e. a 32-bit address space) of virtual memory at once but that can be *anywhere* across the 36-bit physical address space. As long as no individual app needs more than 4GB of memory you're (mostly) OK. The kernel can alter the mappings as it needs to poke at anywhere interesting in all of physical RAM. It's less efficient than mapping it all in at once but you can manage quite well.
Re:PAE hides that memory (Score:5, Informative)
Let me guess: you've never written any ring 0 code for x86. PAE doesn't hide the memory. It modifies the page table structure slightly (so does 64-bit, by the way, it makes the page tables deeper which makes every TLB fault slower). You have a 32-bit virtual address space and a 36-bit physical address space. No process can see more than 4GB of RAM, but if you have two processes then they can each see a different 4GB of physical RAM. None of my processes currently uses more than 760MB of address space, but I have 3GB of RAM and 3GB of swap used, so with a PAE system and 8GB of RAM each process would be using physical memory and I'd have 2GB for filesystem cache.
Oh, and when people talk about PAE, they also often mean PAE or PSE. PSE just makes pages bigger (up to 4MB), which can be used to address 64GB of RAM without changing the size of the page tables. This is better in some situations, because it involves smaller page tables and fewer TLB faults, but it means that you are swapping 4MB at a time, which can be very slow if you are swapping a lot.
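The address-space arithmetic behind PAE and PSE-36 in the posts above is easy to verify; a quick Python sketch:

```python
GiB = 2**30

# Plain 32-bit paging: 32-bit virtual and physical addresses.
assert 2**32 == 4 * GiB   # each process sees at most 4 GiB

# PAE: virtual addresses stay 32-bit, physical grow to 36 bits.
assert 2**36 == 64 * GiB  # total physical RAM addressable

# PSE-36 large pages: 4 MiB pages, same 64 GiB physical ceiling.
large_pages = 2**36 // (4 * 2**20)
assert large_pages == 16384  # 16 Ki large pages cover 64 GiB
```

Which is why "more than 4GB total" and "more than 4GB per process" are entirely different problems.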
Re:128, 64, 32, 16, 8 (Score:3, Informative)
Re:Not really (Score:3, Informative)
Not too long ago (15-20 years, maybe?) 64-bit processors would have been unheard of on the desktop. I see 64-bit being stretched as we put more high-definition video into our datasets. And then we'll have the next "ultra high def" format that will stretch it even more. And then you have a small (in terms of units shipped), but very profitable business in supercomputing. Protein folding and subatomic research folks would probably jump at the chance to rerun their simulations with a higher resolution.
Just to put this into perspective, the forthcoming IBM Sequoia [wikipedia.org] supercomputer will have 1.6 petabytes of RAM, and only a very small fraction of this can be accessed by a single compute node. The total amount of RAM in this machine is still 4 orders of magnitude smaller than what can be addressed with a single 64-bit pointer.
Re:Not really (Score:3, Informative)
None of the linked articles say that the 128 bits is for the filesystem only, but I still believe you're right:
Making the entire OS 128-bit would simply waste a _lot_ of memory, for zero real gain. (Rather the opposite: a larger working set always leads to slower code.)
Right. There's no widely-used 128-bit-native processor architecture, and no reason to have a 128-bit address bus either.
I don't think there are 2^128 bytes of DRAM on the planet, even. Lessee... that's 2^98 GiB. Which is almost 10^20 GiB of RAM for every single person on the planet. I think that I personally can account for 10 GiB or so. Maybe 100 GiB if my parents have a secret DRAM trust fund for me that I don't know about. So yeah, 128-bit memory addresses are waaaaay off. I believe current 64-bit processors are limited to 40-bit external address buses... that'd be 1 TiB of RAM.
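For what it's worth, the back-of-envelope numbers above check out; a quick Python verification (the 6.8 billion world-population figure is my own rough assumption for the era):

```python
GiB = 2**30
population = 6.8e9  # rough 2009 world population (assumption)

# 2^128 bytes expressed in GiB: shift out 30 of the 128 bits.
assert 2**128 // GiB == 2**98

# GiB of RAM per person if 2^128 bytes of DRAM existed:
# about 4.7 * 10^19, i.e. "almost 10^20" as claimed.
per_person = 2**98 / population
assert 1e19 < per_person < 1e20

# A 40-bit external address bus reaches exactly 1 TiB.
assert 2**40 // GiB == 1024
```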
Re:Not really (Score:5, Informative)
Re:Not really (Score:3, Informative)
Itanium is not unsuccessful for VMS machines (you cannot put VMS on an x86-based chip, 64-bit or no), and VMS is used in mainframe and other ultra-high availability applications. The Itanium just didn't pan out for any sort of Windows-based operating system, because Windows is so tied to its x86 legacy.
I believe they also have a successor that will be compatible with Itanium as well, though I'm not sure; I mainly only looked at Itanium from the VMS point of view. They certainly have a future there, though; their only competitor is the Alpha by HP, and these tend to be used for very, very expensive applications.
Re:Not really (Score:5, Informative)
No, IBM never produced an "OS 500". The branding went from OS/400 to i5/OS to today's "IBM i".
No, the system never had a 128-bit address space. The address space of OS/400 went from 48-bit to 64-bit when IBM started using 64-bit Power-based processors in those systems.
Yes, the instruction set uses 128-bit pointers, but only the rightmost 64 bits of the pointer are used in the current system.
Yes, The 64-bit address space covers both system memory and disk storage.
This Wikipedia article about IBM System i [wikipedia.org] is a pretty good reference about this kind of stuff.
Re:Ha ha (Score:3, Informative)
Even the netbook processors (Intel Atom and VIA Nano) have full 64-bit support.
Educate yourself [intel.com]. Only two shipping Atom models have x64 support - 330 and 230 - and I'm not aware of any netbooks in production using either one (Intel itself positions them for "nettops", and the rest of the model line for "netbooks"). Most certainly, all popular netbooks are not x64-capable.
Re:CPU? (Score:3, Informative)
The x86 line permits chaining of basic binary arithmetic operations to any level of complexity. But why would we want 128-bit operands? Double-precision arithmetic is 64 bits, and there isn't a significant clamor for more precision in scientific circles. (More speed = yes, vector operations = yes, more precision = no.)
Computer hardware has supported data buses wider than the CPU word for some time now. Wide data buses are useful for vector operations, and to quickly fill CPU caches. Nvidia has a GPU with a 512-bit memory bus. I think IBM has at least experimented with 512 bits for the Power platform. Currently, an external data bus wider than 128 bits remains expensive. However, internal to the CPU, the Core i7 processor uses cache line widths of 128 and 256 bits, so someone might argue the Core i7 is a 256-bit processor. In the past, Intel has adopted misleading marketing practices regarding data bus sizes.
Programmers care about the word size for key operations. 64 bits is likely to be sufficient for all practical uses for some time to come, particularly for PC usage. Essentially, a 64-bit processor can directly address 18 exabytes of storage to the byte level. Barring massive breakthroughs, multi-exabyte supercomputers/compute clusters will be scarce for the near future.
Additionally, 1 exabyte of storage is only useful in a cluster. At 10 GB/sec (80 Gb/sec), which is faster than pretty much any single storage device currently in existence, it takes 3 years to move 1 exabyte of data. That's a long time to back up a hard drive. Even DDR3-2000 RAM requires multiple devices to reach 10 GB/sec transfer rates, and who wants 3 years' worth of data sitting in RAM? As such, 64-bit addressing is only useful in the context of supercomputers/compute clusters with the massive parallelism required to read and write exabytes of data quickly.
If Microsoft expects serious personal computer uses for 128-bit addressing by the time Windows 9 ships, Microsoft must be planning on Windows 9 shipping sometime next century.
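The three-year transfer figure above checks out; a quick Python sketch (decimal units and the sustained 10 GB/sec rate are the post's own assumptions):

```python
EB = 10**18        # 1 exabyte, decimal
rate = 10 * 10**9  # 10 GB/sec sustained (the post's assumption)

seconds = EB / rate              # 10^8 seconds
years = seconds / (365 * 86400)  # about 3.17 years
assert 3.0 < years < 3.3         # roughly three years, as claimed
```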
Re:Ha ha (Score:3, Informative)
What is Windows missing in terms of 64 bit migration, and what else can Microsoft do about it?
Make long 64 bits. On Win64, int and long are 4 bytes; long long and void* are 8. A huge amount of legacy code assumed that you can always store a void* in a long without truncation. On pretty much every mainstream or near-mainstream platform that assumption is valid... except for Win64.
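You can see the LLP64-vs-LP64 split from Python via ctypes: on Win64, c_long stays at 4 bytes while c_void_p is 8, which is exactly where the long-holds-a-pointer assumption breaks; on 64-bit Linux/macOS (LP64) both are 8 and the old code limps along:

```python
import ctypes

# C type sizes on whatever platform this runs on.
sizes = {
    "int": ctypes.sizeof(ctypes.c_int),
    "long": ctypes.sizeof(ctypes.c_long),
    "long long": ctypes.sizeof(ctypes.c_longlong),
    "void*": ctypes.sizeof(ctypes.c_void_p),
}
print(sizes)

# The legacy assumption: a pointer fits in a long without truncation.
# True on LP64 (Linux, macOS) and ILP32; false on LLP64 (Win64).
long_holds_pointer = sizes["long"] >= sizes["void*"]
```

The portable fix in C, of course, is intptr_t/uintptr_t rather than long.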
Re:128, 64, 32, 16, 8 (Score:3, Informative)
Software people get this wrong all the time... leave it to a hardware guy to straighten it out. :)
It's not the bus size, it's the size of the ALU inside the CPU (the ALU actually performs the operations). The 68000 was a 16-bit processor NOT because of the 16-bit bus, but because the ALU was only 16 bits. The 68000 had a full 32-bit architecture, but because the ALU was 16-bit, it took two operations to perform 32-bit instructions. It wasn't until the 68020 that the M68K family had its first 32-bit processor. The 386SX may have had a 16-bit bus, but internally it had a 32-bit ALU, so it was still a 32-bit processor.
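To make "the ALU was only 16 bits" concrete: a 32-bit add has to be sequenced as two 16-bit adds with a carry between them, which is why it cost the 68000 an extra internal operation. A Python sketch of that sequencing:

```python
def add32_on_16bit_alu(a, b):
    """Add two 32-bit values using only 16-bit operations,
    the way a 16-bit ALU (e.g. the 68000's) would sequence it."""
    lo = (a & 0xFFFF) + (b & 0xFFFF)  # first 16-bit add
    carry = lo >> 16                  # carry out of the low half
    hi = ((a >> 16) + (b >> 16) + carry) & 0xFFFF  # second add + carry in
    return (hi << 16) | (lo & 0xFFFF)

# Carry propagates from the low half into the high half:
assert add32_on_16bit_alu(0x0001FFFF, 0x00000001) == 0x00020000
```

Two trips through the ALU instead of one, exactly as the post says.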
Re:128 bit C data type? (Score:3, Informative)
Because MS Visual Studio STILL doesn't support it, methinks.
Re:April fools! (Score:3, Informative)