Microsoft Leaks Details of 128-bit Windows 8

Barence writes "Microsoft is planning to make Windows 8 a 128-bit operating system, according to details leaked from the software giant's Research department. The discovery came to light after Microsoft Research employee Robert Morgan carelessly left details of his work on the social-networking site LinkedIn. His page read: 'Working in high-security department for research and development involving strategic planning for medium and long-term projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM.' It has since been removed."
This discussion has been archived. No new comments can be posted.

  • Ha ha (Score:2, Interesting)

    by DavidR1991 ( 1047748 ) on Thursday October 08, 2009 @12:01PM (#29682083) Homepage

    They haven't even got the 64-bit migration done and smooth for their users, and they're already looking at 128-bit? Is this some kind of joke? Or have they realised they've missed the 64-bit boat and are going to work seriously on the next step up instead?

  • Re:Not really (Score:5, Interesting)

    by Chris Burke ( 6130 ) on Thursday October 08, 2009 @12:22PM (#29682373) Homepage

    Clearly says architecture.

    Okay, but the question is what that means. If it just means 128-bit operations or registers, then that's been around since the original SSE. If it means 128-bit addressing (like it usually does), then who the fuck is making those chips and why? Very few 64-bit chips actually support the full 64 bits of address space (certainly not Intel's or AMD's), simply because there's no need. You could make every computer on earth part of a huge shared-memory system and have room to spare, not that you'd ever do such a thing: once systems get far enough apart, shared memory stops making sense, because maintaining coherence/consistency becomes too much of an overhead. If you were building a cluster as a shared-memory system and each node had 1 TB of RAM, you could fit over sixteen million nodes in before you started to have address space problems (quick math below). Even the most wasteful of Stupid Virtual Memory Tricks aren't going to put a lot of pressure on 64-bit addressing any time soon.
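
    The back-of-the-envelope check in C, taking the 1 TiB-per-node figure above as the assumption:

        /* How many 1 TiB nodes fit in a flat 64-bit address space?
           1 TiB = 2^40 bytes, so the answer is 2^(64-40) = 2^24. */
        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            const unsigned addr_bits = 64;
            const unsigned node_bits = 40;   /* 1 TiB per node */
            uint64_t nodes = UINT64_C(1) << (addr_bits - node_bits);
            printf("1 TiB nodes per 64-bit space: %llu\n",
                   (unsigned long long)nodes);   /* 16,777,216 */
            return 0;
        }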

    I mean, I guess I can see the point for the distant future, and hey, who the hell knows when Windows 9 is planned for, much less when it will actually arrive, so it can't hurt to make sure it's 'compatible'... What surprises me more is that any of the partners listed would have 128-bit on even their far-reaching roadmaps.

  • Re:Not really (Score:5, Interesting)

    by JSBiff ( 87824 ) on Thursday October 08, 2009 @12:23PM (#29682399) Journal

    Someone else posted a link to an Ars Technica article about this. It had more info from the LinkedIn post, which indicated that the work targets the IA-128 instruction set (currently available only as a simulator, with no actual silicon *yet*). But since Intel hasn't abandoned Itanium, and is targeting it at enterprise and high-performance computing, I could totally see Intel evolving the Itanium architecture from 64 bits to 128 bits. After all, there are a few servers in the world that handle truly epic amounts of data and really might be able to use more than 64 bits.

    They're probably laying the groundwork now for a release 5 or 10 years down the road.

  • by gr8_phk ( 621180 ) on Thursday October 08, 2009 @12:27PM (#29682447)
    When I had a computer with 16K of memory and the new 32-bit processors were becoming available, 4 gig seemed unimaginable. Now I have 1GB in my old system and imagine the next will have a bit more - if only because DDR2 & DDR3 seem to come in larger module sizes. Normally I don't see a need for 64-bit addressing, but I have actually thought about using 48-bit or larger addresses for a large sparse data structure. If you've got a sparse binary tree, for example, you could lay it out mathematically, wasting a huge (mostly empty) stretch of address space but getting rid of the pointers. It's crazy, but you push the work onto the cache and virtual memory system - I never tested whether it performs well in my application. Anyway, if it worked, it would be a use for 64-bit addressing on the desktop (sketch below). 128 bits seems a stretch, but my mind is at least open to such things these days.
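
    A minimal, Linux-flavored sketch of that pointerless layout (the struct, the sizes, and the MAP_NORESERVE lazy backing are illustrative assumptions, not the poster's actual code): children of node i live at indices 2i+1 and 2i+2, so no pointers are stored and untouched pages never get physical RAM.

        #include <stdio.h>
        #include <sys/mman.h>

        typedef struct { long key, value; } Node;

        int main(void) {
            size_t span = (size_t)1 << 40;   /* reserve 1 TiB of address space */
            Node *tree = mmap(NULL, span, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (tree == MAP_FAILED) { perror("mmap"); return 1; }

            /* children of node i are at 2i+1 and 2i+2 -- pure index math */
            tree[0].key = 42;                /* root; only touched pages get RAM */
            tree[2 * 0 + 1].key = 7;         /* left child of the root */
            printf("root=%ld left=%ld\n", tree[0].key, tree[1].key);
            munmap(tree, span);
            return 0;
        }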
  • Re:Ha ha (Score:4, Interesting)

    by dave420 ( 699308 ) on Thursday October 08, 2009 @12:43PM (#29682645)
    64-bit is smooth as silk in Windows 7. But I guess I'm a M$ shill, so ignore this comment.
  • Re:Also (Score:4, Interesting)

    by saider ( 177166 ) on Thursday October 08, 2009 @12:50PM (#29682755)

    How quickly we forget!

    The original 8086 could address 1 megabyte of memory (20 address bits) even though it was a 16-bit processor. It used two 16-bit registers (one shifted left by four bits) to form each address.

    A 64-bit processor could trivially access a 128-bit address space using the same segment:offset method (worked example below).
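
    For reference, the 8086 arithmetic in question (a small illustration; a 64-bit analogue would just shift by more):

        /* 8086 real-mode addressing: physical = (segment << 4) + offset,
           giving 20 usable bits from two 16-bit registers. */
        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint16_t seg = 0xB800, off = 0x0010;     /* classic text-mode segment */
            uint32_t physical = ((uint32_t)seg << 4) + off;
            printf("%04X:%04X -> %05X\n", seg, off, physical);  /* B800:0010 -> B8010 */
            return 0;
        }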

  • SSE2+? (Score:2, Interesting)

    by dmesg0 ( 1342071 ) on Thursday October 08, 2009 @01:03PM (#29682929)
    Maybe they mean optimizing the Windows kernel using SSE2 instructions (xmm registers are 128 bits wide)? That improves the performance of many memory transfers and currently makes sense in quite a lot of applications - even the kernel itself.
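
    Something like this sketch, for instance: the core of a 128-bit-at-a-time copy using SSE2 intrinsics (alignment and tail handling omitted; assumes len is a multiple of 16):

        #include <emmintrin.h>   /* SSE2 intrinsics */
        #include <stddef.h>

        void copy128(void *dst, const void *src, size_t len) {
            __m128i *d = (__m128i *)dst;
            const __m128i *s = (const __m128i *)src;
            for (size_t i = 0; i < len / 16; i++) {
                /* one 128-bit load and store per iteration via xmm registers */
                _mm_storeu_si128(&d[i], _mm_loadu_si128(&s[i]));
            }
        }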
  • Re:Not really (Score:5, Interesting)

    by frank_adrian314159 ( 469671 ) on Thursday October 08, 2009 @01:03PM (#29682935) Homepage

    The original IBM System/38 and its descendants, such as the AS/400 running OS/400, had a 128-bit address space. In these architectures, the large number of address bits was used to provide an address space that spanned both memory and disk, and to provide processor-level protection for objects stored there. Using large address spaces to ensure hardware protection of system objects is a good start on a highly secure OS, and is probably where this is going.

    And Intel is no stranger to hardware object protection, either. The iAPX-432 chipset, although not a commercial success, showed that hardware-level protection of objects is feasible, and that it can support more complex access-control schemes at reasonable performance than software implementations can (note I said complex - one of the reasons the chip failed commercially is that, besides having a braindead two-chip implementation and instruction lengths that varied at the bit level, it could not run simple protection schemes as quickly as software could). Intel is looking for things to do with the extra transistors that feature shrinks provide - adding better protection at the hardware level might be a win.

  • Because with huge servers and clustering you can get some insanely huge numbers when it comes to HDD space. But as I'm sure we all know, when it comes to a 128-bit OS we are usually talking about addressing, à la 16-bit, 32-bit, 64-bit.

    You don't need 128 bits for addressing. 2^32 is "only" 4 gigabytes, which was always achievable in theory and actually achieved in practice over a decade ago.

    Having memory — RAM or disk — above 2^64 bytes, however, is not achievable even in theory... 2^64 is only 100 times less, for example, than the estimated number of sand grains on Earth [wolframalpha.com].

    Being able to process as much as 128 bits in one CPU instruction is nice, and the SSE extensions allow that. But neither size_t nor off_t needs to exceed 64 bits. Ever... In fact, in the amd64 instruction set [wikipedia.org], only 48 bits can currently be used to address memory — the upper 16 bits of a virtual address must be a sign-extension of bit 47 (the "canonical form"; quick check below). The amd64 architecture is thus "limited" to 256 TB of virtual address space — that's the most an amd64 machine can map at once and the largest file it can mmap [wikipedia.org].

    64-bit systems were truly useful because, by making size_t and off_t the same size, they let software stop segmenting its access to files that could potentially be too large to memory-map in their entirety (many legacy mmap implementations are still limited to 2 or 4 GB files). 128-bit systems don't add that kind of benefit...

    (And, of course, most systems — including even the most modern Linux and BSD — still have rather poor mmap implementations compared to their highly optimized read and write calls... But that's another topic...)
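
    A quick check of the canonical-form rule mentioned above (the sample addresses are made up; the shift trick assumes arithmetic right shift, which all mainstream compilers provide):

        #include <stdint.h>
        #include <stdio.h>

        /* bits 48..63 must be copies of bit 47: sign-extend through bit 47
           and compare against the original address */
        static int is_canonical(uint64_t va) {
            return (uint64_t)((int64_t)(va << 16) >> 16) == va;
        }

        int main(void) {
            printf("%d\n", is_canonical(0x00007FFFFFFFFFFF));  /* 1: top of lower half */
            printf("%d\n", is_canonical(0xFFFF800000000000));  /* 1: bottom of upper half */
            printf("%d\n", is_canonical(0x0000800000000000));  /* 0: non-canonical hole */
            return 0;
        }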

  • Re:Not really (Score:5, Interesting)

    by Tawnos ( 1030370 ) on Thursday October 08, 2009 @01:22PM (#29683165)
    He's right, there's no Robert Morgan here (well, there's a v-, but no Robert Morgan in R&D).
  • Re:Also (Score:3, Interesting)

    by inviolet ( 797804 ) <slashdot@@@ideasmatter...org> on Thursday October 08, 2009 @01:31PM (#29683285) Journal

    How quickly we forget!

    The original 8086 could address 1 megabyte of memory (20 address bits) even though it was a 16-bit processor. It used two 16-bit registers (one shifted left by four bits) to form each address.

    A 64-bit processor could trivially access a 128-bit address space using the same segment:offset method.

    How quickly we forget!

    Writing code to use 'near' and 'far' pointers was a constant headache, of the same magnitude as C++'s requirement that you be constantly aware of character width when manipulating strings.

  • Re:Not really (Score:3, Interesting)

    by sexconker ( 1179573 ) on Thursday October 08, 2009 @01:41PM (#29683403)

    No, it means a 128-bit architecture will still be able to run Windows 8.

    That is, the architecture would support a mode that the Windows 8 kernel already includes support for.

    Knowing the history of teh bits, this simply means Windows 8 will be available in both 32-bit and 64-bit versions, and 128-bit processors will be able to run in 32-bit mode, but not 64-bit mode.

    So yet again, we will be stuck without 64-bit drivers or optimization, let alone 128-bit drivers or optimization.

    32 bits should be more than enough for anybody.

    (I HOPE Windows 8 is 64/128, and 128-bit processors are 32/64/128, but I know better than to expect anything sensible).

  • by Orne ( 144925 ) on Thursday October 08, 2009 @01:41PM (#29683411) Homepage

    Somehow, we all managed to survive from 1984 to 1995 by swapping 64KB chunks in Expanded Memory [wikipedia.org]. I remember writing assembly to do it, and I personally do not miss that headache. That being said, old ideas die hard, and if we can get some larger page sizes (how about swapping that 4th GB of address space to point at a 5th, 6th, etc.?), almost all reasonable applications (by today's standards) could fit in the expanded memory space (toy model below).
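
    A toy model of that bank-switching idea in plain C (names and sizes are illustrative assumptions; this is not the real EMS interface): a small fixed window is repointed at different 64 KB banks of a larger store.

        #include <stdio.h>
        #include <stdlib.h>

        #define BANK_SIZE (64 * 1024)
        #define NUM_BANKS 16                     /* 1 MB of "expanded" memory */

        static unsigned char *expanded;          /* the big store */
        static unsigned char *window;            /* what the program "sees" */

        static void map_bank(int bank) {         /* the EMS-style page flip */
            window = expanded + (size_t)bank * BANK_SIZE;
        }

        int main(void) {
            expanded = malloc((size_t)NUM_BANKS * BANK_SIZE);
            map_bank(3);  window[0] = 42;        /* write through the window */
            map_bank(7);  window[0] = 99;
            map_bank(3);  printf("%d\n", window[0]);  /* 42: bank 3 kept its data */
            free(expanded);
            return 0;
        }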

  • Re:Not really (Score:4, Interesting)

    by nabsltd ( 1313397 ) on Thursday October 08, 2009 @01:42PM (#29683419)

    In these architectures, the large number of address bits was used to provide an address space that spanned both memory and disk, and to provide processor-level protection for objects stored there. Using large address spaces to ensure hardware protection of system objects is a good start on a highly secure OS, and is probably where this is going.

    But even 64 bits is enough for that for a long time.

    Since you can address over 16 million terabytes with 64 bits, even with storage density doubling every couple of years (faster than it's really happening), we have roughly twenty years before arrays of a couple thousand disks would start to reach the limit (rough math below).

    By then, there will be 128-bit CPUs. So unless Windows 8 is targeted for 2020, it really doesn't need any 128-bit features.
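
    Rough math behind that timeline, assuming (hypothetically) an array of 2,000 disks at 2 TB each as the 2009 starting point:

        /* How long until disk arrays exhaust a 64-bit byte-address space? */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double limit  = ldexp(1.0, 64);      /* 2^64 bytes ~= 18.4 million TB */
            double array  = 2000.0 * 2e12;       /* 2,000 disks x 2 TB */
            double factor = limit / array;       /* remaining headroom, ~4600x */
            printf("2^64 bytes = %.1f million TB\n", limit / 1e18);
            printf("doubling yearly:      %.0f years\n", log2(factor));
            printf("doubling every 2 yrs: %.0f years\n", 2 * log2(factor));
            return 0;
        }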
