Microsoft Leaks Details of 128-bit Windows 8 581
Barence writes "Microsoft is planning to make Windows 8 a 128-bit operating system, according to details leaked from the software giant's Research department. The discovery came to light after Microsoft Research employee Robert Morgan carelessly left details of his work on the social-networking site LinkedIn. His page read: 'Working in high-security department for research and development involving strategic planning for medium and long-term projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM.' It has since been removed."
128, 64, 32, 16, 8 (Score:5, Funny)
Who needs 128? I haven't even used all 64 of my current bits yet.
-l
220... 221... whatever it takes (Score:3, Funny)
Yeah, well I'm working on an OS that'll be 129 bits!
Re:128, 64, 32, 16, 8 (Score:5, Informative)
If we start using PCRAM then we are likely to want to use byte-addressable filesystems, rather than keep relying on blocks, which reduces the size you can address with 64 bits to 16EB, which is a lot less; there are almost certainly already people with datasets larger than this. Because PCRAM has similar characteristics to DRAM, the most convenient way of addressing it is likely to be mapping it directly into the CPU's address space, rather than treating it as a device. You could use paging tricks and only map accessed files, but having two MMUs doesn't make life very simple for operating system writers, so ideally you're going to want to have all of your persistent storage in your address space (like MULTICS: everything old is new again). If you do this, then you may well want to have more than a 64-bit address space within ten years. And, when I say 'you' I mean 'companies with a lot of spare money to spend on IT infrastructure'.
PAE doesn't hide mem, just can't use all at once (Score:4, Informative)
PAE doesn't "hide" memory, really. You can only address 4GB (i.e. a 32-bit address space) of virtual memory at once but that can be *anywhere* across the 36-bit physical address space. As long as no individual app needs more than 4GB of memory you're (mostly) OK. The kernel can alter the mappings as it needs to poke at anywhere interesting in all of physical RAM. It's less efficient than mapping it all in at once but you can manage quite well.
Re: (Score:3, Interesting)
Somehow, we all managed to survive from 1984 to 1995 by swapping 64k chunks in Expanded Memory [wikipedia.org]. I remember writing assembly to do it, and I personally do not miss that headache. That being said, old ideas die hard, and if we can get some larger page sizes (how about swapping that 4th GB in address space to point at a 5th, 6th, etc?), almost all reasonable applications (by today's standards) could fit in the expanded memory space.
Re:PAE hides that memory (Score:5, Informative)
Let me guess: you've never written any ring 0 code for x86. PAE doesn't hide the memory. It modifies the page table structure slightly (so does 64-bit, by the way, it makes the page tables deeper which makes every TLB fault slower). You have a 32-bit virtual address space and a 36-bit physical address space. No process can see more than 4GB of RAM, but if you have two processes then they can each see a different 4GB of physical RAM. None of my processes currently uses more than 760MB of address space, but I have 3GB of RAM and 3GB of swap used, so with a PAE system and 8GB of RAM each process would be using physical memory and I'd have 2GB for filesystem cache.
Oh, and when people talk about PAE, they also often mean PAE or PSE. PSE just makes pages bigger (up to 4MB), which can be used to address 64GB of RAM without changing the size of the page tables. This is better in some situations, because it involves smaller page tables and fewer TLB faults, but it means that you are swapping 4MB at a time, which can be very slow if you are swapping a lot.
Re: (Score:3, Informative)
Software people get this wrong all the time... leave it to a hardware guy to straighten it out. :)
It's not the bus size, it's the size of the ALU inside the CPU (the ALU actually performs the operations). The 68000 was a 16-bit processor NOT because of the 16-bit bus, but because the ALU was only 16 bits. The 68000 has a full 32-bit architecture, but because the ALU was 16-bit, it took two operations to perform 32-bit instructions. It wasn't until the 68020 that the M68K family got its first fully 32-bit processor.
Re:128, 64, 32, 16, 8 (Score:5, Funny)
With that uid, it's because your pr0n is ASCII art.
Re:128, 64, 32, 16, 8 (Score:5, Funny)
I'd make you a little ASCII lawn to get off of but I'm still looking for my dentures.
-l
Re:128, 64, 32, 16, 8 (Score:5, Insightful)
Lameness filter encountered. Post aborted!
Filter error: Please use fewer 'junk' characters.
It's a sad day when you can no longer post ASCII art onto a forum. Have we come so far that we've forgotten where we come from?
Volume: 11 (Score:5, Funny)
Re:Volume: 11 (Score:4, Funny)
In my 128-bit OS, the volume goes all the way to 340,282,366,920,938,463,463,374,607,431,768,211,455.
Re:Volume: 11 (Score:5, Funny)
In my 128-bit OS, the volume does nothing because SoundMax hasn't released drivers yet.
Re:Volume: 11 (Score:5, Funny)
bare with me
*Shudder*
Re:Volume: 11 (Score:5, Funny)
Well, his (her?) user name IS "no undies".
Fuck Everything (Score:5, Funny)
We're doing five blades.
Re:Fuck Everything (Score:5, Funny)
WE'LL DO IT LIVE!!!
Re:Fuck Everything (Score:5, Informative)
I almost wet my pants during the Fusion ads in the Super Bowl. Because they did go to 5 (+1) blades.
http://www.theonion.com/content/node/33930 [theonion.com]
Where's Windows 7? (Score:3, Funny)
That would make... (Score:5, Funny)
Re:That would make... (Score:4, Funny)
That would also make Windows a 0x10000000bit wrapper around a 0x1000000bit implementation of a 0x100000bit extension for a 0x10000bit patch to an 0x1000bit operating system, originally coded for a 0x100bit microprocessor, written by a 0x10bit company, that can't stand 0x1bit of competition.
Putting numbers in hex (0x notation) doesn't make this a "programmer joke". Especially if you get your base wrong since you're obviously trying to use binary in which case the value 128 would be represented as 0b10000000 or 10000000b, not 0x10000000 (which is actually 256M or 268,435,456 in decimal).
Here's a programmer joke (which sounds better spoken than read): To be or not to be... equals 0xff.
When will MS learn (Score:4, Insightful)
Windows 7 isn't even officially released and already nonsense is leaking about the next release with promises they can't keep.
First let them release WinFS.
Re: (Score:3, Insightful)
Nice baseless assertions fanboi.
Re:When will MS learn (Score:4, Insightful)
This first came up for me a couple years ago running Stalker: Shadow of Chernobyl, a mainstream Windows video game. The default Windows user address space limit on a 32 bit system is 2 gigs and Stalker wanted to use about 1700 megs of RAM. Problem is, video memory is also mapped into the user address space and I had a 512 meg video card.
In that case I was able to fix the problem by using a tool to hack the binary to make Windows give 3 gigs to the user process instead of the default 2 gigs (the OS needs to keep a big chunk of address space for the kernel).
In any case, the moral of the story is that 32 bit address spaces have been cramped - for common applications, in practice - for a while now. Any application using more than a gig of RAM would be better off on a 64-bit machine. It's possible to work around this with silly hacks, and there's a lot of that going on, but it won't be too long before 32-bit users are a small enough minority to ignore for RAM-intensive apps.
128 bit OS? (Score:5, Funny)
16.8 million terabytes of RAM should be enough for anyone.
Upgrade paths (Score:5, Funny)
Well, that settles it, then! Why on earth would I buy a paltry 64-bit Windows 7 when a much shinier and newer 128-bit Windows 8 is right around the corner? I'd best hold off until then! Thanks, Microsoft!
Re: (Score:3, Funny)
You see, they can no longer produce an OS which is very much better than the last so they've started releasing really sucky ones so the _next_ one looks so much better.
Didn't they start doing that about 10 years ago?
Filesystem, or FPU... not processor or memory (Score:5, Informative)
This has been discussed on OSNews and it is most likely about the filesystem or FPU and not memory addressing.
http://www.osnews.com/story/22301/128-Bit_Support_in_Windows_8_9_ [osnews.com]
For security (Score:5, Funny)
Quoth Ballmer, "Let's see hackers find our security holes in this address space!"
Why they need 128 bits? (Score:5, Funny)
- That's what Security Essentials requires to keep a string sample in memory of every Windows virus/trojan from before 2006
- Bill Gates finally agreed that 640K wasn't enough for everyone.
- Codenamed Windows TNG, where no bit has gone before
- You actually will need all that memory to not require swapping (unless you load more than 3 apps)
Surely this is a bit early (Score:4, Insightful)
And it's not like there's been much perception of a need for 128-bit CPUs. 64-bit processors have been around since the 1960s, with fairly mainstream CPUs since the early 90s. I don't think this is like RAM. I think there's a limit to how many bits we can use.
idiot (Score:3, Funny)
Yeah right. Gob like the Mersey Tunnel.
128 bit C data type? (Score:5, Funny)
long long long?
really long long?
Re: (Score:3, Funny)
It'll actually be called the John Holmes.
Re:128 bit C data type? (Score:5, Insightful)
int128_t?
It blows my mind how few people use stdint.h when it makes a lot more sense to use that these days.
Re: (Score:3, Funny)
typedef UNSIGNED_JOHNHOLMES
Glendale University (Score:4, Funny)
According to Wikipedia
http://en.wikipedia.org/wiki/Glenndale_University [wikipedia.org]
this University isn't even accredited!!
Re:Not really (Score:5, Informative)
Either we're not reading the same article, or I suspect you didn't read it at all. At no point is a filesystem mentioned.
Re:Not really (Score:5, Informative)
It refers to a 128 bit filesystem ala ZFS, not the whole OS.
Either we're not reading the same article, or I suspect you didn't read it at all. At no point is a filesystem mentioned.
I'm with you, I don't know where he got filesystem from:
The senior researcher's profile said he was: "Working in high security department for research and development involving strategic planning for medium and longterm projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM."
Clearly says architecture.
Re:Not really (Score:5, Informative)
Why is that important? Because it does not mean that Windows 8 will necessarily be 128bit, just capable of being 128bit - for all we know, his entire role is ensuring that the teams code to a set standard which allows ease of porting to 128bit in future.
Re:Not really (Score:5, Informative)
I'm still confused.
What's the point of having 128 bit compatibility? 128 bit CPUs don't even exist yet. Heck most of us are still just using 32, and haven't even visited the 64 generation yet.
Re: (Score:3, Insightful)
I'm still confused.
What's the point of having 128 bit compatibility? 128 bit CPUs don't even exist yet. Heck most of us are still just using 32, and haven't even visited the 64 generation yet.
Maybe because it's easier to include now the ability to extend compatibility to 128-bit processors instead of trying to bolt it on later? Who knows, maybe Microsoft really did learn something from their experience with Windows security.
Re: (Score:3, Informative)
Itanium is not unsuccessful for VMS machines (you cannot put VMS on an x86-based chip, 64-bit or not), and VMS is used in mainframe and other ultra-high-availability applications. The Itanium just didn't pan out for any sort of Windows-based operating system, because Windows is so tied to its x86 legacy.
I believe they also have a successor planned that will be compatible with Itanium as well, though I'm not sure; I mainly looked at Itanium from the VMS point of view. They certainly have a future there, though.
Re: (Score:3, Interesting)
No, it means a 128-bit architecture will still be able to run Windows 8.
That is, the architecture supports a different mode that the Windows 8 kernel includes.
Knowing the history of teh bits, this simply means Windows 8 will be available in both 32-bit and 64-bit versions, and 128-bit processors will be able to run in 32-bit mode, but not 64-bit mode.
So yet again, we will be stuck without 64-bit drivers or optimization, let alone 128-bit drivers or optimization.
32-bits should be more than enough for anybody
Re:Not really (Score:5, Interesting)
Clearly says architecture.
Okay, but the question is what does that mean? If it just means 128-bit operations or registers, then that's been around since the original SSE. If it means 128-bit addressing (like it usually does), then who the fuck is making those chips and why? Very few 64-bit chips actually support the full 64-bits of address space (certainly not Intel or AMD), simply because there's no need. You could make every computer on earth part of a huge shared-memory system and have room to spare, not that you'd ever do such a thing. Once systems get far enough apart, shared memory stops making sense as maintaining coherence/consistency becomes too much of an overhead. If you were building a cluster as a shared memory system, and each node had 1 TB of RAM, you could fit ten million nodes in before you started to have address space problems. Even the most wasteful of Stupid Virtual Memory Tricks aren't going to put a lot of pressure on 64-bit addressing any time soon.
I mean I guess I can see the point for the distant future, and hey who the hell knows when Windows 9 is planned for much less will actually arrive, so it can't hurt to make sure it's 'compatible'... I'm just more surprised that any of the partners listed would have 128-bit on even far-reaching roadmaps.
Re:Not really (Score:5, Interesting)
The original IBM System 38 and its descendants, such as OS/400, OS/500, etc., had a 128-bit address space. In these architectures, the large number of address bits were used to provide an address space that spanned both memory and disks and was used to provide processor-level protection for objects stored there. Using large address spaces to ensure hardware protection of system objects is a good start on a highly secure OS and is probably where this is going.
And Intel is no stranger to hardware object protection, either. The iAPX-432 chipset, although not a commercial success, showed that hardware-level protection of objects is feasible, with more complex access-control schemes provided at reasonable performance than software implementations can manage (note I said complex - one of the reasons the chip failed commercially is that, besides having a braindead two-chip implementation and instruction lengths that varied at the bit level, it could not support simple protection schemes as quickly as software could). Intel is looking for what to do with the extra transistors that feature shrinks provide - adding better protection at the hardware level might be a win.
Re:Not really (Score:4, Interesting)
In these architectures, the large number of address bits were used to provide an address space that spanned both memory and disks and was used to provide processor-level protection for objects stored there. Using large address spaces to ensure hardware protection of system objects is a good start on a highly secure OS and is probably where this is going.
But, even 64 bits is enough for that for a long time.
Since you can address over 18 million terabytes (16 EB) with 64 bits, then even with storage density doubling every year (which is much faster than things are really happening), we have well over a decade before arrays of a couple thousand disks would start to reach the limit.
By then, there will be 128-bit CPUs. So, unless Windows 8 is targeted for 2020, it really doesn't need any 128-bit features.
Re:Not really (Score:5, Informative)
No, IBM never produced an "OS 500". The branding went from OS/400 to i5/OS to today's "IBM i".
No, the system never had a 128-bit address space. The address space of OS/400 went from 48-bit to 64-bit when IBM started using 64-bit POWER-based processors in those systems.
Yes, the instruction set uses 128-bit pointers, but only the rightmost 64 bits of the pointer are used in the current system.
Yes, The 64-bit address space covers both system memory and disk storage.
This Wikipedia article about IBM System i [wikipedia.org] is a pretty good reference about this kind of stuff.
Re:Not really (Score:5, Funny)
Even the most wasteful of Stupid Virtual Memory Tricks aren't going to put a lot of pressure on 64-bit addressing any time soon.
You heard it here first, folks: 64-bit ought to be enough for anybody.
128 Bit Architecture = cloud computing (Score:5, Insightful)
Shared memory space among lots of computers, using IP (possibly IPv6) as a protocol.
That's probably what they are referring to if they mean 128 bit address space (not datapath).
You don't need 128 bits for addressing (Score:5, Interesting)
You don't need 128 bits for addressing. 2^32 is "only" 4 gigabytes, which was always achievable in theory and actually achieved in practice over a decade ago.
Having a memory — RAM or disk — above 2^64, however, is not achievable even in theory... 2^64 is only 100 times less, for example, than the estimated number of sand-grains on Earth [wolframalpha.com].
Being able to process as much as 128 bits in one CPU instruction is nice, and SSE extensions already allow that. But neither size_t nor off_t need to exceed 64 bits. Ever... In fact, in the amd64 instruction set [wikipedia.org], only 48 bits of a virtual address are currently implemented — the upper bits must be copies of bit 47 (so-called canonical addresses). The amd64 architecture is thus "limited" to 256 TB — that's the largest virtual address space an amd64 machine can offer and the largest file an amd64 machine can mmap [wikipedia.org].
64-bit systems were truly useful because, by making size_t and off_t the same width, they allowed software to be rid of segmented access to files, which could, potentially, be too large to memory-map in their entirety (many legacy mmap implementations are still limited to 2 or 4 GB files). 128-bit systems don't add that benefit...
(And, of course, most systems — including even the most modern Linux and BSD — still have rather poor mmap-implementations, compared to their highly-optimized read and write calls... But that's another topic...)
Re: (Score:3, Insightful)
"Having a memory — RAM or disk — above 2^64, however, is not achievable even in theory..."
Why?
Just 5 mins ago I bought 32GB of RAM for $240. So 2^48 bytes of RAM is just about $2 million. A lot of money, but certainly within the realm of possibility. In 10 years (two iterations of Windows) $2 million will buy you 2^53 bytes of RAM. And that is also uncomfortably close to the upper limit of 2^64.
If you look at hard drives, 4Tb (2^42) of space is about $500 now. In 10 years that'll be 4Tb for $15, so 2
You misspelled "640K". (Score:4, Insightful)
Having a memory — RAM or disk — above 2^64, however, is not achievable in even in theory... 2^64 is only 100 times less, for example, than the estimated number of sand-grains on Earth [wolframalpha.com]
So? There are more efficient encodings than one byte per sand-grain, you know.
As it turns out, 2^64 is much smaller than Avogadro's Number, the number of molecules in a mole of a chemical compound. If you could find a way to encode information in a 3D hunk of silicon, such that you needed slightly more than 1000 atoms to store each byte, 2^64 bytes of storage would amount to a bit less than one ounce of bulk silicon, occupying less than one cubic inch.
I FULLY expect to see secondary storage approaching this density within the next few decades, and I fully expect that there will be good reasons to support it in a flat address space.
Re: (Score:3, Insightful)
A grain of sand is pretty big. If a single bit was the size of a grain of sand, then by conservative estimates, 4 GB of memory would weigh about 40 kilos.
Re: (Score:3, Informative)
Noooooo! I want to be able to say I have a 23488102 bit OS if that's the size of my bzImage! And once I have 1TB of porn I can call it a 8.79609302*10^12 bit operating system!
Seriously - it's one thing for some IT marketing types not to know that a 128bit OS would need a 128bit processor (which would be a Big Thing, especially if HP were getting back into the market of CPU design and manufacture), but for the submitter and eds to not point it out makes it look a little daft.
Re:Not really (Score:5, Interesting)
Someone else posted a link to an ArsTechnica article about this. They had more info from the LinkedIn post, which indicated that the work was being done to target the IA-128 instruction set (which is currently only available as a simulator, no actual silicon, *yet*). But, since Intel hasn't abandoned Itanium yet, and they are targeting it at Enterprise and High Performance Computing, I could totally see Intel evolving the Itanium architecture from 64-bits to 128-bits. After all, there are a few servers in the world that handle truly epic amounts of data, and really might be able to use more than 64-bits.
It's probably that they are laying the groundwork now, for release 5 or 10 years down the road.
Re:Not really (Score:5, Insightful)
None of the linked articles say that the 128 bits is for the filesystem only, but I still believe you're right:
Making the entire os 128-bit would simply waste a _lot_ of memory, for zero real gain. (Rather the opposite: A larger working set always leads to slower code.)
Having 128 bits available for filesystem/storage makes it quite feasible to have globally unique addresses for everything, across huge populations of machines.
This has been done before, afair IBM has used a 128 (or 129!) bit address space for their AS400 platform, where everything is memory mapped.
I.e. there is no visible file system, you just access objects by address (which is really a handle).
I believe Amazon's cloud storage is similar, in that the only way to access a blob of data is via a 128-bit handle.
Terje
Re:Not really (Score:5, Insightful)
None of the linked articles say that the 128 bits is for the filesystem only, but I still believe you're right:
Making the entire os 128-bit would simply waste a _lot_ of memory, for zero real gain. (Rather the opposite: A larger working set always leads to slower code.)
Having 128 bits available for filesystem/storage makes it quite feasible to have globally unique addresses for everything, across huge populations of machines.
This has been done before, afair IBM has used a 128 (or 129!) bit address space for their AS400 platform, where everything is memory mapped.
I.e. there is no visible file system, you just access objects by address (which is really a handle).
I believe Amazon's cloud storage is similar, in that the only way to access a blob of data is via a 128-bit handle.
Terje
Since Win8 / Win9 won't be out for 5/10 years...
Why am I getting flashbacks to a discussion that people had back in the 8 bit days?
"Making the entire os 32-bit would simply waste a _lot_ of memory, for zero real gain. (Rather the opposite: A larger working set always leads to slower code.) ... Having 32 bits available for filesystem/storage makes it quite feasible to have globally unique addresses for everything, across huge populations of machines."
I never heard this discussion, but you know it happened. Probably almost verbatim.
Re: (Score:3, Informative)
None of the linked articles say that the 128 bits is for the filesystem only, but I still believe you're right:
Making the entire os 128-bit would simply waste a _lot_ of memory, for zero real gain. (Rather the opposite: A larger working set always leads to slower code.)
Right. There's no widely-used 128-bit-native processor architecture either. And there is no reason to have 128-bit address bus either.
I don't think there are 2^128 bytes of DRAM on the planet, even. Lessee... that's 2^98 GiB. Which is almost 10^20 GiB of RAM for every single person on the planet. I think that I personally can account for 10 GiB or so. Maybe 100 GiB if my parents have a secret DRAM trust fund for me that I don't know about. So yeah, 128-bit memory addresses are waaaaay off. I believe
Re:Also (Score:4, Interesting)
How quickly we forget!
The original 8086 processor could address 1 megabyte of memory (20 bits) with a 16 bit processor. It used two registers (one shifted left by four bits) to address memory.
A 64 bit processor could trivially access a 128-bit address space by using the same segment:offset method.
Re: (Score:3, Interesting)
How quickly we forget!
Writing code to use 'near' and 'far' pointers was a constant headache, of the same magnitude as C++'s requirement that you be constantly aware of character width when manipulating strings.
Re: (Score:3, Insightful)
The original 8086 processor could address 1 megabyte of memory (20 bits) with a 16 bit processor. It used two registers (one shifted left by four bits) to address memory.
Have you ever programmed in that model?
Having pointers split into segment:offset pairs meant that you couldn't (easily) have a single array span more than 64kB. Any program that needed to access arrays above that had to split its arrays into arrays of arrays, each of which was smaller. Fun, fun, fun.
A 64 bit processor could trivially access a 128-bit address space by using the same segment:offset method.
I'll let you do the programming, this time. I'll stick to the flat memory model of today's architectures, if that's all right with you.
Re: (Score:3)
Microsoft has been aiming towards high-performance computing recently, working with companies like Nvidia. If you are going to have racks and racks of CPU's/GPU's, it would make sense to have everything accessible using a single memory space. [tgdaily.com]
Re:Not really (Score:4, Funny)
It refers to a 128 bit filesystem ala ZFS, not the whole OS.
Oh. I thought they pulled a Vista again and the 16 exabytes of RAM provided by 64-bit was not enough for their latest crime against humanity.
Re:Not really (Score:5, Funny)
There is no Robert Morgan that works at Microsoft. Not sure who this guy is, but if he does work at MS it's not his real name.
Well, not anymore, anyway. :-)
Re:Not really (Score:5, Funny)
Just like that other chap who was always making wild statements about what Microsoft was going to do next.
They let him go too. What was his name again? Will? Billy? ...something.
Re:Not really (Score:5, Funny)
Yeah, I think I remember him. He thought the Internet was a passing fad, claimed he'd single-handedly defeat spam, promised Microsoft was taking security seriously in 2000, all kinds of nonsense like that. Had a really dorky haircut too.
Re:Not really (Score:4, Insightful)
There is no Robert Morgan that works at Microsoft. Not sure who this guy is, but if he does work at MS it's not his real name.
Well, we don't know who you are, either, so why should your input on this be paid any attention?
Re:Not really (Score:5, Insightful)
The senior researcher's profile said he was: "Working in high security department [emphasis mine] for research and development involving strategic planning for medium and longterm projects. Research & Development projects including 128-bit architecture compatibility with the Windows 8 kernel and Windows 9 project plan. Forming relationships with major partners: Intel, AMD, HP and IBM."
My first reaction was that if you can't fix the security problems in the people, you surely can't expect to fix the security problems in the software. But that might be a little hasty.
My guess is that the actual security gaffe here was little or nothing. He mentioned he worked in this department, and that they have future plans that exceed today's capabilities. Meh. So what. If he had posted the details of what he was doing, then it would have been newsworthy. As it is, this is barely notable. Any one of us here could probably guess that MS likely has people looking into the progression beyond 64-bit technology.
It is reasonable to believe that at some point in the next several years the hardware companies he mentions will have some plan to start building 128-bit CPUs. My guess is that this guy's job is to make sure that MS has input into the design process where it can, and to provide feedback to the MS dev teams so MS can start planning to include compatibility features relatively early on, to hopefully be the OS of choice when this hardware someday becomes available.

I'm guessing that Windows 8 probably won't be seen for a long time. The article mentions 2012, but given MS's rush to push out 7 to stem the bleeding caused by Vista, they may rely on it for longer than normal, much like they did with XP after the ME debacle.

If I were writing an OS that would likely debut in 4 to 8 years, I would probably want a heads-up from the hardware vendors about how to write an OS for their next-gen proc. Also, if MS were planning a future move to a fully 128-bit OS, they might start by inserting 128-bit code into a 64-bit OS.
Re:Not really (Score:4, Funny)
So, that's right on schedule for Windows 8 then.
Re:Not really (Score:4, Insightful)
Not too long ago (15-20 years, maybe?) 64-bit processors would have been unheard of on the desktop. I see 64-bit being stretched as we put more high-definition video into our datasets. And then we'll have the next "ultra high def" format that will stretch it even more. And then you have a small (in terms of units shipped), but very profitable business in supercomputing. Protein folding and subatomic research folks would probably jump at the chance to rerun their simulations with a higher resolution.
Re: (Score:3, Informative)
Not too long ago (15-20 years, maybe?) 64-bit processors would have been unheard of on the desktop. I see 64-bit being stretched as we put more high-definition video into our datasets. And then we'll have the next "ultra high def" format that will stretch it even more. And then you have a small (in terms of units shipped), but very profitable business in supercomputing. Protein folding and subatomic research folks would probably jump at the chance to rerun their simulations with a higher resolution.
Just to put this into perspective, the forthcoming IBM Sequoia [wikipedia.org] supercomputer will have 1.6 petabytes of RAM, and only a very small fraction of this can be accessed by a single compute node. The total amount of RAM in this machine is still 4 orders of magnitude smaller than what can be addressed with a single 64-bit pointer.
Re: (Score:3, Insightful)
No, you address bytes, not bits. So 2^64 = 18,446,744,073,709,551,616 bytes, not bits.
Not sure... (Score:5, Funny)
But I'll tell you how it will end.
The final architecture EVER will be 640-bit. And that WILL be enough for everyone.
Re: (Score:3, Insightful)
Besides, Windows 7 is just about to be released; it only makes sense that they start planning the next big thing. Remember they hav
Re: (Score:3, Insightful)
Any leading tech company can and should have resources dedicated to the bleeding edge of their industry. It takes a long time and a lot of work to figure out how to turn ideas into products. It takes a lot of support from vendors and customers to be able to produce something reliable and profitable. This is difficult to do, even for a company with Microsoft's resources.
I get to work in R&D at my company. The stuff I've been working on for the past two years won't see full production for another two
Re: (Score:3, Informative)
What is Windows missing in terms of 64 bit migration, and what else can Microsoft do about it?
Make long 64 bits. On Win64, int and long are 4 bytes; long long and void* are 8. A huge amount of legacy code assumed that you can always store a void* in a long without truncation. On pretty much every mainstream or near-mainstream platform that assumption is valid... except for Win64.
Re: (Score:3, Informative)
Even the netbook processors (Intel Atom and VIA Nano) have full 64-bit support.
Educate yourself [intel.com]. Only two shipping Atom models have x64 support - 330 and 230 - and I'm not aware of any netbooks in production using either one (Intel itself positions them for "nettops", and the rest of the model line for "netbooks"). Most certainly, all popular netbooks are not x64-capable.
Re: (Score:3, Informative)
The x86 line permits chaining of basic binary arithmetic operations to any level of complexity. However, why would we want 128-bit operands? Double precision arithmetic is 64-bits, and there isn't a significant clamor for more precision in scientific circles. (More speed = yes, Vector Op