
Microsoft Lifts XP Mode Hardware Requirement 205

An anonymous reader writes "This week, Microsoft published a patch that allows Windows XP Mode to run on PCs without hardware-assisted virtualization. Which begs the question: Why the bizarro requirement in the first place? Was it an honest attempt to deliver an 'optimal' user experience? Or simply a concession to the company's jilted lover, Intel Corporation — 'a kind of apology for royally screwing up with the whole Windows Vista “too fat to fit” debacle,' as the blog post puts it."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • begs the question (Score:2, Insightful)

    by Anonymous Coward on Friday March 19, 2010 @11:52AM (#31538982)

    This doesn't mean what you think it means. Why do some people (mostly nerds) insist on using this term, when it is obvious they don't know what it means or how to use it?

    This must end.

  • by ColdWetDog ( 752185 ) on Friday March 19, 2010 @11:54AM (#31539012) Homepage
    Not only does this not 'beg the question' it's by our good friend Randal C. Kennedy [slashdot.org] of Devil Mountain Software fame. Come on guys, this is in the frigging Ars Technica [arstechnica.com].

    The problem with Slashdot these days is that it has no taste, absolutely no taste.
  • by Anonymous Coward on Friday March 19, 2010 @11:56AM (#31539042)

    Could it not just be that they wrote the code initially to rely on the hardware functions, in order to (I assume) get decent performance relatively quickly and with (again making assumptions) less overhead, and now they have managed to write a software-only version with a similar decent level of performance? It seems to me TFA is just assuming that anything Microsoft does is cynical and evil, rather than a practical decision.

  • by Anonymous Coward on Friday March 19, 2010 @11:56AM (#31539050)

    The "weird" hardware requirements are probably due to the fact that they expected AMD and Intel only to produce CPUs with hardware support for virtualization enabled.

    I think it's more a case of "If you want to use legacy apps you need to upgrade hardware".

    Microsoft gets the hardware vendors to agree to all their crazy demands by promising a bump in sales when a new version of Windows is released.

  • by shutdown -p now ( 807394 ) on Friday March 19, 2010 @11:58AM (#31539082) Journal

    On one hand, XP Mode in 7 actually uses Virtual PC, which has been around for a long time - but didn't require hardware virtualization support until the most recent version.

    On the other hand, while I can't find that link now, I do recall seeing one of the VPC guys explaining that they went hardware-only for the new release simply because it allowed them to cut out a lot of code, which then no longer needed to be supported. In which case it may be that they have realized that it's a bad idea from a business perspective, and put that code back in (and brought it up to date) with that patch.

  • by ShadowRangerRIT ( 1301549 ) on Friday March 19, 2010 @11:58AM (#31539108)
    Tag the article as randalckennedy. At least we can identify obvious FUD for what it is.
  • by tlhIngan ( 30335 ) <[ten.frow] [ta] [todhsals]> on Friday March 19, 2010 @11:58AM (#31539110)

    The "weird" hardware requirements are probably due to the fact that they expected AMD and Intel only to produce CPUs with hardware support for virtualization enabled. The fact that one of the major CPU manufacturers didn't is most likely what bit Microsoft in the ass. Still, some OEMs are also at fault, I think: Just recently I got to look after a defective laptop (RAM module was broken...) and I looked in the BIOS. The CPU could do hardware virtualization, but by default it was disabled in the BIOS. Why? I have no idea...

    AMD did. Intel just makes chips for whatever their customer wants. Like how all Intel Macs have VT support, but it's iffy elsewhere (if an OEM can get a discount over chips with no VT, they'll take it).

    As for disabling VT support, it's probably to prevent "blue-pill" type malware from hitching a ride underneath the OS. At least, that's a reasonable explanation if you have the setting. Sonys don't (at least, they didn't used to), which was more of an OEM thing of trying to be more Apple-like in control. (After all, Apple doesn't give you any control in the matter. Except, well, they see the need for VT and have it enabled.)

    Ironic, too, since Sony and Apple use EFI firmware from the same company (Insyde).
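For anyone wondering whether their own CPU advertises the virtualization extensions discussed above, a minimal Linux-only sketch: on Linux the kernel exposes CPU feature flags in /proc/cpuinfo, where Intel VT-x shows up as "vmx" and AMD-V as "svm". (Note the caveat from the parent comment still applies: firmware can lock the feature out even when the CPU itself supports it, so this only shows capability, not that a hypervisor can actually use it.)

```python
# Check whether a Linux machine's CPU advertises hardware virtualization.
# Looks for the Intel (vmx) or AMD (svm) flags in /proc/cpuinfo.
# A BIOS/firmware lock can still disable the feature in practice.

def vt_flags(cpuinfo_text: str) -> set:
    """Return the virtualization-related CPU flags found in cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update({"vmx", "svm"} & set(line.split()))
    return found

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        flags = vt_flags(f.read())
    print("VT flags reported:", ", ".join(sorted(flags)) or "none")
```

On Windows or macOS the equivalent check goes through different interfaces (e.g. systeminfo, sysctl), so this sketch is Linux-specific.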

  • by Anonymous Coward on Friday March 19, 2010 @12:00PM (#31539144)

    No twisting required.

    It appears Microsoft intentionally restricted this feature in an attempt to make customers purchase hardware they did not need.

    Is that bashing Microsoft? No, it's just stating a fact. If the fact bashes Microsoft, that's their fault.

  • by ShadowRangerRIT ( 1301549 ) on Friday March 19, 2010 @12:01PM (#31539170)
    That was my initial thought as well. Yes, well-written software-only virtualization can perform at a similar level, but if they can get it out the door faster with hardware supported virtualization, they may have decided the software-only approach wasn't worth the resources at the time. It's called prioritization; if they put the resources into software-only virtualization, they neglect something else.
  • by zero_out ( 1705074 ) on Friday March 19, 2010 @12:12PM (#31539360)

    It's very common for Intel to make 6 different CPU chips that are exactly the same, but use a laser to cut a single connection on the chip to make sure it runs at a certain speed. Sure, you can overclock the CPU, but it still has one of its throttles turned down. These chips are then sold for 10-50% less, depending on which connection is cut.

    It costs them exactly the same amount to design and manufacture a 2.4 GHz model as a 3.0 GHz model. In fact, it ultimately costs more to gimp these chips, because they have to pay an engineer to design this mechanism, and buy/maintain the machine that does the gimping. Although, this does mean that I can get a gimped chip for slightly less than the cost to produce it by having someone else pay for the overpriced ungimped version. Still, I think I would rather pay $200 for a 3.0 GHz model, and not have the option of a slower model, than to pay $180 for a 2.4 GHz model while enthusiasts pay $900 for the 3.0 GHz model.

    Not only are you right that this has been done for a long time, but it's also become common practice.

  • by jawtheshark ( 198669 ) * <{moc.krahsehtwaj} {ta} {todhsals}> on Friday March 19, 2010 @12:12PM (#31539372) Homepage Journal

    As for disabling VT support, it's probably to avoid "blue-pill" type malware from hitching a ride underneath the OS. At least, that's a reasonable explanation if you have the setting

    Yes, I've heard that lame excuse before too. With so many higher-end machines being sold having it enabled (heck, my el cheapo Turion X2 bought in January 2007 has it!), malware should be using it already. I haven't heard of a "blue-pill" type infection in the wild yet. Does it exist?

  • by TrancePhreak ( 576593 ) on Friday March 19, 2010 @12:14PM (#31539426)
    These days a lot of that happens to keep yields higher. They take chips where one part doesn't meet their standards and disable it, then sell it for a discount. Reduce, reuse, recycle or something. Where do you think AMD tri-cores came from?
  • by stonecypher ( 118140 ) <stonecypher@noSpam.gmail.com> on Friday March 19, 2010 @12:17PM (#31539478) Homepage Journal

    There are many, many other possibilities than the two derogatory ones offered. The one which seems most likely to me is that Microsoft thought "well, it's in all new hardware, it probably isn't worth the time and cost of implementing a software solution," only to find out that market demand existed.

    Another possibility is that it took them time to produce an implementation of sufficient quality.

  • by amliebsch ( 724858 ) on Friday March 19, 2010 @12:18PM (#31539490) Journal

    Is it bashing Apple to point out that they do the same thing to OS X?

  • by Blakey Rat ( 99501 ) on Friday March 19, 2010 @12:25PM (#31539592)

    If the editors paid a tenth as much attention to the story as the commentators did, I'd be pissed that they're giving this bozo even more traffic as a reward for passing bullshit off as fact.

    Since they don't, I'll instead just have to remain pissed that the editors don't pay a tenth as much attention as the commentators do.

    Hint for Slashdotters: anything posted by the Exo Performance Network is pure bullshit. Don't believe a syllable without independent verification.

  • by tepples ( 727027 ) <tepples.gmail@com> on Friday March 19, 2010 @12:25PM (#31539604) Homepage Journal

    Nobody ever turned [the speedup button] off, so why have the button in the first place?

    Nowadays, CPU speed settings are most useful for battery-powered computers to let the user trade off performance against battery life.

  • by StuartHankins ( 1020819 ) on Friday March 19, 2010 @01:21PM (#31540584)

    Why have programs grown so bloated?

    My guess is these factors all play a role:

    • the "higher level" style of programming -- where you lose some control but can supposedly create more apps in the same unit time. I'm not convinced that this is a win for anyone. I'm a bit of a control freak though.
    • the insistence on cramming every conceivable option into a program. Yes, most software allows you to check and uncheck options, but it's a tedious process and most people just do the "typical" install, resulting in GB of extra crap they will never use.
    • Programmers today have vastly improved storage, processors, bandwidth, etc and they aren't from the generation where you had to be crafty to shave time off the compilation / execution / memory requirements. I blame many of the current IDEs for this, see 1st point.
    • Programmers today don't regard their jobs as engineers -- and before I start a flame war let me say that I consider software engineers different from the programmers you see at the majority of places today. Software engineers thoroughly understand the implications of their decisions and aren't tapping in code chunks they found on the web.
    • The industry as a whole (not real-time OS's or some portable / embedded device programming -- most of them still "get it") has adopted the view that programmer time is expensive and hardware is cheap.

    <rant>
    What's the solution? I say to start, give the programmers a VM with very limited CPU / memory and let them feel the pain. They will understand better, because their top-of-the-line machine helps hide so much, including timing bugs. They will spend more time tweaking performance and this will prompt them to make some different programming decisions. I still see people coding all communications in XML because they can't imagine why you'd want the data in a compact format... if it's in-house and you control all interfaces, my view is to make things compact even if it sacrifices the tool- or language- or communication-protocol-of-the-day. If you're an Amazon and many people need to communicate with you in an open way, XML makes more sense.
    </rant>
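The XML-versus-compact-format point above is easy to see with a toy comparison: the same record encoded once as XML and once as a fixed binary layout via struct. The field names and values here are hypothetical, purely for illustration:

```python
import struct
import xml.etree.ElementTree as ET

# Hypothetical in-house message: a sensor id, a Unix timestamp, a reading.
record = (42, 1268998800, 21.5)

# XML encoding: self-describing and tool-friendly, but verbose.
elem = ET.Element("reading", id=str(record[0]),
                  ts=str(record[1]), value=str(record[2]))
xml_bytes = ET.tostring(elem)

# Compact fixed-layout encoding: two unsigned 32-bit ints plus a double,
# little-endian -- 16 bytes total, but both ends must agree on the layout.
packed = struct.pack("<IId", *record)

print(f"XML: {len(xml_bytes)} bytes, packed: {len(packed)} bytes")
```

The binary form is a small fraction of the XML size, at the cost of the schema living in code on both ends rather than in the message itself, which is exactly the in-house-versus-open-interface tradeoff the comment describes.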
