A Hardware-Software Symbiosis 120

Posted by ScuttleMonkey
from the your-job-will-be-replaced-by-robots-soon dept.
Roland Piquepaille writes "We all want smaller and faster computers. Of course, this increases the complexity of the work of computer designers. But now, computer scientists from the University of Virginia are coming up with a radical new idea which may revolutionize computer design. They've developed Tortola, a virtual interface that enables hardware and software to communicate and to solve problems together. This approach can be applied to get better performance from a specific system. It also can be used in the areas of security or power consumption. And it soon could be commercialized with the help of IBM and Intel."
  • by Anonymous Coward
    "Reinventing the wheel for profit!"
    • > Computer engineers have long tried to optimize computer systems ... blah blah ... "This middle layer would allow software to adapt to the hardware it's running on, something engineers have not been able to do in the past," she says. Yes they have. It's called Java and .NET, not that Tortola thing. Keep reading > "We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," explains Hazelwood. Java and .net ca
  • by tuffy (10202) on Monday June 04, 2007 @01:28PM (#19385019) Homepage Journal
    A middle layer between hardware and software sounds a whole lot like an operating system - the sort of thing that "would allow software to adapt to the hardware it's running on". I can't figure out from the article what makes this thing so special.
    • Well, something that allows software to adapt to the hardware it's running on sounds to me a lot like an optimizing compiler, but only if the software is distributed in source form.
      • Actually, it specifically mentions communication, so I'd say the OP is closer to correct, though a more specific answer would be "driver"
    • by N3WBI3 (595976)
      I was thinking the same thing; I had to read over the article several times to make sure I was not missing something. In the end, I think, she is after some uber-firmware. I mean, we have the OS, we have firmware; what else does she intend to do? Sure, you could make your hardware components more programmable, but the cost in terms of speed lost in operations and complexity of design far outweighs any benefit.
    • I was thinking it seemed almost like what LLVM is trying to do by optimising at runtime.
    • What makes it special is that she's using Transmeta's marketing literature.

      I'd recommend she hire Linus and go head-to-head against Intel. (Or try to be bought out by them.)

      It's scary to think what would have happened if the Cold Fusion professors had been as pretty as she is.
    • by nurb432 (527695)
      Sounds more like a BIOS to me.
    • by MoxFulder (159829)
      Yeah... hardware/software working together. Not exactly new. In fact, I believe it is impossible to build a useful stored-program computer without it :-)

      This is the most content-free article I've ever read. It's basically a press release with a female professor thrown in to boot. Yay.
    • by kebes (861706) on Monday June 04, 2007 @02:23PM (#19385793) Journal
      The linked article doesn't really explain the work very well. The project homepage [tortolaproject.com] has quite a bit more information. What they are trying to accomplish is indeed a middle-layer between applications and hardware (e.g. OS functions or drivers), but the point is to solve a particular optimization problem (speed, low power usage, security) by optimizing software and hardware together.

      So, for example, if low power usage is the goal, then instead of fine-tuning the hardware for low power usage, and then also tuning the software for low power usage (e.g. getting rid of unnecessary loops, checks, etc.), the strategy would be to create specific hooks in the hardware system (accessible via the OS and/or a driver, of course) to allow this fine-tuning. Nowadays we have chips that can regulate themselves so that they don't use excessive power. But it could be even more efficient if the software were able to signal the hardware, for instance differentiating between "I'm just waking up to do a quick, boring check--no need to scale the CPU up to full power" versus "I'm about to start a complex task and would like the CPU ramped up to full power."
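The "signal the hardware" idea above can be made concrete with a toy model. Everything here is invented for illustration (the hint names, the frequency and power numbers); a real system would route such hints through the OS's DVFS governor rather than a direct API:

```python
# Toy model of explicit software-to-hardware power hints (hypothetical API).
# Tasks declare their intent up front; a simulated CPU picks a DVFS state
# from the hint instead of guessing from recent utilization.

FREQ_MHZ = {"light": 600, "heavy": 2400}   # invented operating points
POWER_MW = {600: 150, 2400: 1200}          # invented power draw per state

def run(tasks):
    """tasks = [(intent, cycles)]; returns total energy in millijoules."""
    energy = 0.0
    for intent, cycles in tasks:
        freq = FREQ_MHZ[intent]
        seconds = cycles / (freq * 1e6)     # time to retire the cycles
        energy += POWER_MW[freq] * seconds  # mW * s = mJ
    return energy

# A quick wake-up check, one heavy burst, another quick check.
tasks = [("light", 1_000_000), ("heavy", 50_000_000), ("light", 1_000_000)]
hinted = run(tasks)
always_max = run([("heavy", c) for _, c in tasks])
print(f"hinted: {hinted:.2f} mJ  always-max: {always_max:.2f} mJ")
```

In this toy model the hinted schedule spends less energy on the wake-up checks, because the low state burns less power even though each check takes longer to finish.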

      They claim to have some encouraging results. From one of the abstracts:

      We have demonstrated the effectiveness of our approach on the well-known dI/dt problem, where we successfully stabilized the voltage fluctuations of the CPU's power supply by transforming the source code of the executing application using feedback from hardware.
      Obviously the idea of having software and hardware interact is not new (that's what computers do, after all)... but the intent of this project is, apparently, to push much more aggressively into a realm where optimizations are realized by using explicit signaling between hardware and software systems. (Rather than leaving the hardware or OS to guess what hardware state is best suited to the task at hand.)
      • Thanks for the insight. I really wish the summaries would link to project pages when they're available. I sure don't see a link to the project page from the article - and that's par for the course with news publications anymore. I mean really, right under the spot with the reporter's email address would be a great place for it (besides the obvious spot somewhere within the body of TFA...)
      • by imgod2u (812837)
        Software-controlled hardware scaling already exists, IIRC. It's existed in a primitive form since Intel first introduced its "Speedstep" feature on its mobile Pentiums. The OS would control the clock rate of the microprocessor based on CPU utilization.

        Embedded systems (PDA's and cell phones) have had finer and more sophisticated grades of software-controlled frequency and voltage scaling and even software-controlled sleep-states.

        I'm not sure why this research project is so special. I suppose since sh
      • I'm still not amazingly impressed with what you told me... just kind of a "meh". But what bothers me is just how amazingly bad the article is:

        a middle layer between hardware and software that can translate and communicate between software and hardware, allowing for cooperative problem solving.

        Wow, sounds just like firmware. That or drivers, depending on your definition.

        This middle layer would allow software to adapt to the hardware it's running on, something engineers have not been able to do in the past,

      • From TFA: Hazelwood already has collaborative ties with researchers at Intel and IBM that place her in an ideal position to eventually commercialize the technology her lab develops.

        IBM's iSeries computers use a lot of microcode to mediate between OS and hardware, promoting both software and hardware independence. Sounds like the current project is in the same vein.
    • These days the OS components are getting less and less tied to specific hardware. Even "drivers" are tending towards generic code that is adapted with hardware adaptation layers (HALs).

      What is more interesting, however, is that hardware capability is getting richer. Gate arrays etc. allow you to build far more intelligent hardware requiring less software control from the main CPU. That makes for more efficient processing.

  • by AKAImBatman (238306) * <akaimbatman AT gmail DOT com> on Monday June 04, 2007 @01:29PM (#19385037) Homepage Journal
    TFA is amazingly short on details. All it says is:

    ...a middle layer between hardware and software that can translate and communicate between software and hardware, allowing for cooperative problem solving. "This middle layer would allow software to adapt to the hardware it's running on, something engineers have not been able to do in the past," she says.


    That doesn't really say much. In fact, without further details it sounds like dynamic tuning in virtual machines. Which can't be the case here, as that would be reinventing what has already been invented. (Seriously, her professor wouldn't approve a project like that, would he?)

    Anyone have any more details?
  • HOT! (Score:1, Interesting)

    by Anonymous Coward
    Damn! That professor is Hot!!!! And she teaches Compilers!!!!

    http://www.cs.virginia.edu/kim/ [virginia.edu]
  • Heh (Score:4, Funny)

    by NeoTerra (986979) on Monday June 04, 2007 @01:30PM (#19385049)

    "We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," explains Hazelwood.
    Looks like we have a winner!
  • They invented firmware
  • and it is just a smarter BIOS
  • hmm (Score:1, Insightful)

    by Anonymous Coward
    I remember something like this being talked about by my teacher 3 years ago. About how software could shut down parts of the CPU to save power. It could also change the way the CPU worked on the fly.

    "We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," translation -> Hardware companies can produce shit and if someone happens to notice a flaw we can create a patch instead of testing our products first. Will
    • by misleb (129952)
      Wasn't this already dealt with in cases like the Pentium fdiv bug? I remember the Linux kernel detecting and "patching" known problems with hardware. Also happened with certain accelerated IDE controllers, IIRC.

      But you're right, it sounds like a license for hardware manufacturers to be more careless and expect software people to pick up the slack. As if software didn't have enough bugs... soon we won't even be able to trust that the hardware is reliable? WTF? In what world is this a good thing?

      -matthew
  • by LighterShadeOfBlack (1011407) on Monday June 04, 2007 @01:36PM (#19385159) Homepage

    Hazelwood cites a famous Intel mishap where microprocessors were distributed before a flaw in their fine mathematics function was detected, resulting in a massive recall. A system like Tortola could prevent such expensive glitches in the future. "We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," explains Hazelwood.
    Oh great. So not only do the public get to be unwitting beta testers for software but we'll soon be able to do it for hardware too.

    I can't wait to pay £400 for a Beta CPU and then get to endure 6 months of crashing until it gets patched.
    • by kebes (861706)

      I can't wait to pay £400 for a Beta CPU and then get to endure 6 months of crashing until it gets patched.

      That's a fair worry. But, on the other hand, chips already ship with plenty of bugs. There are thousands of documented bugs in every chip you've ever used. The expense of redesigning is too high, so they will never fix those bugs. Instead they usually just publish the list of known bugs, and tell the compiler writers: "don't ever use that particular instruction--it doesn't work" or "avoid this s
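The "tell the compiler writers to avoid that instruction" workaround can be sketched as a table-driven substitution pass in a code generator. The opcode names and the errata list below are invented for illustration, not taken from any real chip's errata sheet:

```python
# Hedged sketch of an errata workaround: a code generator consults a
# per-chip errata list and substitutes a safe sequence for any
# blacklisted opcode. All names here are made up.

ERRATA = {"FSQRT"}  # pretend this chip's errata sheet forbids FSQRT

SAFE_EXPANSION = {
    "FSQRT": ["CALL __soft_sqrt"],  # fall back to a software routine
}

def select(instructions):
    """Emit the instruction stream, routing around known-buggy opcodes."""
    out = []
    for insn in instructions:
        if insn in ERRATA:
            out.extend(SAFE_EXPANSION[insn])  # avoid the buggy opcode
        else:
            out.append(insn)
    return out

print(select(["FLD", "FSQRT", "FST"]))
```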

    • by ray-auch (454705)
      I can't wait to pay £400 for a Beta CPU and then get to endure 6 months of crashing until it gets patched

      I can't wait to do months of simulation work on it, and then find out I have to redo it because my results were invalid due to a hardware bug.

      Oh, wait, been there done that. Years ago. FDIV.

      Nothing new here, move along...

    • by Fyz (581804)
      I understand your frustration with the idea and I even share it to a point.

      But it is my understanding that in the future, the parallel processing microchips will be so incredibly complex that "getting it right the first time" is an ideal that just isn't realistic.
  • This sounds a lot like a virtual machine on top of a FPGA board. Would be neat to store a VM or OS on a separate layer, and allow the OS to reflash the FPGA to optimize the hardware to a specific task.
  • Should I ask you to notice she's an intelligent, pretty, woman who's into CS? Nah.
  • URLs (Score:4, Informative)

    by mattr (78516) <mattr&telebody,com> on Monday June 04, 2007 @01:42PM (#19385237) Homepage Journal
    Her homepage [virginia.edu],
    tortola [tortolaproject.com] and
    possibly unrelated paper [acm.org]

  • BS again.... (Score:5, Insightful)

    by gweihir (88907) on Monday June 04, 2007 @01:42PM (#19385241)
    Sorry, but hardware and software do not solve problems together. That is straight from the "computers are magic" faction. Hardware solves problems under software control. Hardware alone can do nothing and software alone cannot run.

    Using interface layers to get more portable and easier-to-use interfaces is an old and well-established technique.

    These people are looking for money, right? Why does /. provide free advertising to them?
    • Obviously written by a software guy...

      Hardware alone can do lots of things (albeit hard to change) but
      software alone can do nothing :-)

      Seriously, hardware/software partitioning is key in product design,
      and it affects everything. I'm curious as to what they are proposing,
      and how it will affect product cost and development schedules. TFA is
      completely uninformative.
      • by gweihir (88907)
        Hardware alone can do lots of things (albeit hard to change) but
        software alone can do nothing :-)


        Hehe, right. I have some elecronics experience (in fact a lot) and you are right of course. But computer hardware is entriely useless without software...
    • by 2short (466733)
      "These people are looking for money, right?"

      No, it appears they're doing something cool but fairly technical. A description of it so simplified that it misses the point has been written, and this has then been summarized to remove any shred of meaningful information, leaving "hardware and software solve problems together".

      This is pretty par for the course when you apply a couple dumbing-down passes to the description of something that is fundamentally only interesting for its non-dumbed-down technical details.
      • by gweihir (88907)
        No, it appears they're doing something cool but fairly technical. A description of it so simplified that it misses the point has been written, and this has then been summarized to remove any shred of meaningful information, leaving "hardware and software solve problems together".

        This is pretty par for the course when you apply a couple dumbing-down passes to the description of something that is fundamentally only interesting for its non-dumbed-down technical details.


        Hmm. Could be right. Makes the article a
  • WTF, again?! (Score:5, Interesting)

    by vivaoporto (1064484) on Monday June 04, 2007 @01:43PM (#19385253)
    Although this is not a dupe, it practically is. Check this other story, New Way to Patch Defective Hardware [slashdot.org], less than two months old. Basically, both approaches suck in the same way, they allow hardware manufacturers to be sloppy in order to rush the product out as fast as possible while allowing them to try to correct the errors that will appear later in the process. In short, they reinvented the FPGA.

    Two non-stories. But makes one think, cui bono? Who is benefiting from these articles? Roland for sure, being such a click whore. But other than him, who else? Weird, very weird indeed.
    • by aldheorte (162967)
      Sorry, semantic digression: Will people stop it with the cui bono!? Just say it in English, which you have to do anyway in repetition since some people don't know what cui bono means. For someone who reads both Latin and English it reads like this. "But makes one think, who is benefitting? Who is benefitting from these articles?"
      • For someone who reads both Latin and English it reads like this. "But makes one think, who is benefitting? Who is benefitting from these articles?"
        And that was exactly my intention, it's a figure of speech. Look up anadiplosis [wikipedia.org]. Anyway, I shouldn't be explaining semantics; people who know what cui bono means would be OK, people who didn't could simply skip it and go on, or look it up on the internet.

  • Transmeta Crusoe (Score:3, Interesting)

    by dduardo (592868) on Monday June 04, 2007 @01:47PM (#19385299)
    How is this any different than what Transmeta has already done?
    • by bprice20 (709357)
      I thought the same thing.
    • My understanding is, Transmeta actually ran an entirely different chip under the hood, and emulated x86 on the fly (with help from hardware).

      This is entirely different -- it's about having the software be able to more tightly communicate with the hardware. To paraphrase someone else's post: It's so the hardware can know the difference between "I'm just waking up to poll something, keep everything low-power" and "OMG ramp it up to full lap-burning power NOW!!!"
  • Links and a comment (Score:5, Informative)

    by martyb (196687) on Monday June 04, 2007 @01:49PM (#19385325)

    Some Links:

    And a comment:

    I'm not entirely thrilled with this idea of dynamically communicating between hardware and software. From what I got from TFA, the hardware would change dynamically based on feedback from the software. It seems to me that we already have plenty of trouble writing programs that work correctly when the hardware does not change... imagine trying to debug a program when the computer hardware is adapting to the changes in your code. (IOW: heisenbugs [wikipedia.org].)

    Also, I've got some unease when I think about what mal-ware authors could come up with using this technology. Sure, we'll come up with something else to counteract that... but I think it'll bring up another order of magnitude's worth of challenge in this cat and mouse game we already have.

    • by pavon (30274)

      From what I got from TFA, the hardware would change dynamically based on feedback from the software.

      Based on their abstract in the link you post, it is the other way around - the hardware provides more information about the state of the processor than it normally would, and then the software uses this information to perform run-time optimizations taking these factors into account. Considering that we are already employing run-time optimization in languages such as Java and C#, providing more information to assist in these optimizations, and to allow them to optimize for things that they couldn't in the p

  • CCS (Score:5, Insightful)

    by diablovision (83618) on Monday June 04, 2007 @01:50PM (#19385333)
    This article is an example of CCS: "Cute Chick Science". The article has about as much fluff as a popcorn kernel. I am not exactly sure to what they are referring here--FPGAs? There seem to be a number of statements that are overly categorical and seemingly not well informed such as "This middle layer would allow software to adapt to the hardware it's running on, something engineers have not been able to do in the past," and "to engineer software that can communicate between the two layers, [hardware and software]".

    If there wasn't a pic of a cute professor involved, would anyone care?
    • I'm an ME with very limited programming experience who nonetheless hangs around slashdot, and even I've read enough to think this is fluff crap.

      If there wasn't a pic of a cute professor involved, would anyone care?

      Pretty sure you nailed it right there. On the one hand, this kind of crap isn't going to help her be taken seriously at all. On the other hand, she is very, very cute.

      It's entirely possible that Prof Hottie ^H^H^H Hazelwood discovered some new arcana in the field and the reporter can't even come close to understanding 3/4 of it, so she just went with the easy stuff.

      • It's entirely possible that Prof. Hazelwood discovered some new arcana in the field and the reporter can't even come close to understanding 3/4 of it, so she just went with the easy stuff.

        I think this is the correct explanation. I actually understand what Tortola is, and it's not bogus nor is it a reinvention of previous work. Unfortunately, the Web site isn't very detailed; the one example given (di/dt) is a pretty obscure problem to solve. As it stands now, there is no way to explain Tortola to a regular
      • by ebichete (223210)

        Pretty sure you nailed it right there. On the one hand, this kind of crap isn't going to help her be taken seriously at all. On the other hand, she is very, very cute.

        It's entirely possible that Prof Hottie ^H^H^H Hazelwood discovered some new arcana in the field and the reporter can't even come close to understanding 3/4 of it, so she just went with the easy stuff. More likely, Hazelwood rediscovered the FPGA.

        Lord, preserve us from slashidiots...

        Cute Chick factor can only get you so far. This is not CS Undergrad Hazelwood we are talking about, it is Professor Hazelwood.

        • by Raenex (947668)

          This is not CS Undergrad Hazelwood we are talking about, it is Professor Hazelwood.
          Assistant Professor.
  • Tortola Project [tortolaproject.com]

    Seems like an interesting research project. The research seems new (I see no published papers on Tortola, although I do see some slides and an extended abstract), so it will be interesting to see how it develops. I am very interested in seeing how an operating system would interact with Tortola.

  • by Wicko (977078)

    "We could use the software to hide flaws in the hardware, which would allow designers to release products sooner because problems could be fixed later," explains Hazelwood.

    How about let's not encourage companies to rush out unfinished products any more than they already do?

  • Finally, the two arch-nemeses, software and hardware, will live together in symbiosis. Never before have you seen software and hardware working together.

    Now this article demonstrates that what was before unthinkable may tomorrow be a commodity, and we will finally be able to run software on our hardware.

  • by Animats (122034) on Monday June 04, 2007 @02:49PM (#19386095) Homepage

    First off, it's a Roland the Plogger story, so you know it's clueless. Roland the Plogger is just regurgitating a press release.

    Here's an actual paper [virginia.edu] about the thing. Even that's kind of vague. The general idea, though, seems to be to insert a layer of code-patching middleware between the application and the hardware. The middleware has access to CPU instrumentation info about cache misses, power management activity, and CPU temperature. When it detects that the program is doing things that are causing problems at the CPU level, it tries to tweak the code to make it not do so much bad stuff. See Power Virus [wikipedia.org] in Wikipedia for an explanation of "bad stuff". The paper reports results on a simulated CPU with a simulated test program, not real programs on real hardware.

    Some CPUs now power down sections of the CPU, like the floating point unit, when they haven't been used for a while. A program which uses the FPU periodically, but with intervals longer than the power-off timer, is apparently troublesome, because the thing keeps cycling on and off, causing voltage regulation problems. This technique patches the code to make that stop happening. That's what they've actually done so far.
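The power-cycling behavior described above can be illustrated with a toy simulation. The idle timeout and the access traces are made up; the point is just that periodic use with gaps longer than the power-off timer thrashes the power gate, while batching the same work does not:

```python
# Toy model of a power-gated unit (e.g. an FPU) that turns off after
# IDLE_TIMEOUT idle cycles. An access to a powered-down unit forces a
# power-up, and every on/off transition is a current step (dI/dt stress).

IDLE_TIMEOUT = 100  # invented number of idle cycles before gating off

def count_transitions(access_cycles):
    """Count power-up/down transitions for a sorted list of access times."""
    transitions = 0
    powered = False
    last = None
    for t in access_cycles:
        if powered and last is not None and t - last > IDLE_TIMEOUT:
            transitions += 1   # unit was gated off during the long gap...
            powered = False
        if not powered:
            transitions += 1   # ...and powered back up for this access
            powered = True
        last = t
    return transitions

# Unit used every 150 cycles: every gap exceeds the timeout -> thrashing.
periodic = list(range(0, 1500, 150))
print("unpatched:", count_transitions(periodic))

# A code-patching layer could batch the same work into one dense burst.
batched = list(range(0, 100, 10))
print("patched:  ", count_transitions(batched))
```

The periodic trace pays a transition on nearly every access; the batched trace pays a single power-up. That is roughly the kind of behavior a code transformation could target.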

    Intel's interest seems to be because this was a problem with some Centrino parts. [intel.com] So this is something of a specialized fix. It's a software workaround for some problems with power management.

    It's probably too much software machinery for that problem. On-the-fly patching of code is an iffy proposition. Some code doesn't work well when patched - game code being checked for cheats, DRM code, code being used by multiple CPUs, code being debugged, and Microsoft Vista with its "tilt bits". Making everything compatible with an on the fly patcher would take some work. A profiling tool to detect program sections that have this problem might be more useful.

    It's a reasonable piece of work on an annoying problem in chip design. The real technical paper is titled "Eliminating voltage emergencies via microarchitectural voltage control feedback and dynamic optimization." (International Symposium on Low-Power Electronics and Design, August 2004). If you're really into this, see this paper on detecting the problem during chip design [computer.org], from the Indian Institute of Technology Madras. Intel also funded that work.

    On the thermal front, back in 2000, at the Intel Developer Forum the keynote speaker after Intel's CEO spoke [intel.com], discussing whether CPUs should be designed for the thermal worst case or for something between the worst case and the average case: "Now, when you design a system, what you typically want to do is make sure the thermal of the system are okay, so even at the most power-hungry application, you will contain -- so the heat of the system will be okay. So this is called thermal design power, the maximum, which is all the way to your right. A lot of people, most people design to that because something like a power virus will cause the system to operate at very, very maximum power. It doesn't do any work, but that's -- you know, occasionally, you could run into that. The other one is, probably a little more reasonable, is you don't have the power virus, but what the most -- the most power consuming application would run, and that's what you put the TDP typical."

    From that talk, you can kind of see how Intel got into this hole. They knew it was a problem, though, so they put in temperature detection to slow down the CPU when it gets too hot. This prevents damage,

    • by ukillaSS (1111389) on Monday June 04, 2007 @05:54PM (#19388611)
      There has been a TON of work on creating uniform OS and middleware layers that are accessible to both SW and HW.

      * EPFL - Miljan Vuletic's PhD Thesis
      * University of Paderborn's ReconOS
      * University of Kansas's HybridThreads
      * etc. etc.

      This work is becoming very influential in areas of HW/SW co-design, computer architecture, and embedded & real-time systems due to its importance to both research-oriented and commercial computing.

      Additionally, this is now becoming a major thrust for many chip-makers, who have realized that serial programs running on superscalar machines really aren't getting any faster. Multicore systems are now available, and are still showing no significant speedups due to a lack of proper parallel programming models. In the past, developing HW was considered "hard/difficult" and developing SW was "easy", usually because HW design involved parallel processes, synchronization, communication, etc. while SW involved a serial list of execution steps. Now that we have multiple cores, SW developers are realizing that not only are most programmers horrible at writing code that interacts with hardware (an object of concurrency in most systems), but they are even worse at writing code that interacts with concurrent pieces of SW. The HW/SW boundary is only a small glimpse of how badly parallelism is managed in today's systems - we need to focus on how to describe massive amounts of coarse-grained parallelism in such a way that one can reason about parallel systems.
  • This smells like trying to open the back door to Treacherous Computing to me, just by way of wrapping it up in buzzwords that'll make otherwise wary geeks accept it thinking it'll make their computers somehow go faster.
  • Of course she's hot, but she has a PhD from Harvard, and she's published a lot in major conferences. I'm pretty sure you can't get a PhD in CS, or papers accepted at major conferences (which are double-blind reviewed, btw) on looks alone.

    As for this work, the article summary and the article itself are severely lacking in details. Go to the project page. And yes, people have been doing dynamic translation/optimization for years (Transmeta, Dynamo from HP - which she worked on actually, - rePLAY from UIU

  • "hardware and software to communicate and to solve problems together"

    This is freaking slashdot - could we get something a little more technical in the summaries?
  • Aargh! I accidentally clicked on the link without noticing the submitter- most of Roland's trash gets tagged very quickly, why the delay with this one?
  • A device to let software communicate with hardware? Cool! Why don't we call it a "computer"?
  • yeah... we call that embedded software engineering at my company.
  • ... smaller computers. I'm very happy with the form factors beginning at Extended ATX down to picoITX/SBC. Sizes for all my needs. But faster and cheaper, especially the small SBCs and pico/miniITX/flexATX, would be nice.
  • I'm sure it seemed like a good idea at the time... pieces of junk. My old ISA bus modem (with real hardware on it) ran circles around those PCI bus beauties.
  • http://www.cs.virginia.edu/~hazelwood/tortola/papers/islped04.pdf [virginia.edu]

    When the hardware detects a problem it signals the software. The software knows the location of the problematic code by checking a "last executed branch" register. A dynamic optimizer (software) then re-orders the code in that region and caches it to be used in future passes through that section.

    The trick will be getting the dynamic optimizer light-weight enough that it doesn't induce performance hits in and of itself. Also, as an above po
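The detect-optimize-cache loop described above can be sketched in a few lines. The hardware event and the "optimization" are both stubbed out here; the region names and the transformation are invented for illustration:

```python
# Minimal sketch of a hardware-feedback dynamic optimizer: the "hardware"
# raises an event naming the last-executed region, and the software
# rewrites that region once, caching the result for future passes.

code_cache = {}  # region id -> optimized version

def optimize(region):
    # Stand-in transformation; a real system would re-order instructions
    # to smooth the current profile of the offending region.
    return f"smoothed({region})"

def execute(region, hw_flags_problem):
    """Run a code region, consulting the code cache and hardware feedback."""
    if region in code_cache:
        return code_cache[region]       # fast path: reuse patched code
    if hw_flags_problem:
        code_cache[region] = optimize(region)
        return code_cache[region]
    return region                       # no problem seen: run original code

print(execute("loop_17", hw_flags_problem=True))   # first pass: optimize
print(execute("loop_17", hw_flags_problem=False))  # later passes: cached
```

The cache is what keeps the optimizer lightweight on the common path: once a region is patched, later executions skip straight to the cached version.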

  • .. but the idea is so old the patent has expired.

    Back in the 70s, when they replaced the super successful 360 series, they used "microcode" to translate the old 360 instructions into instructions understood by the new hardware -- and they are still doing this today for the zSeries.

    This was a very good thing, as the 360 instruction set and associated tooling was a work of art.
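That microcode translation layer can be pictured as a table-driven expansion: each legacy instruction maps to a sequence of native micro-ops, so old binaries keep running on new hardware. The opcode names below are invented placeholders, not real 360 instructions:

```python
# Toy illustration of microcode translation: legacy instructions are
# expanded into native micro-op sequences via a lookup table.
# All opcode and register names here are made up.

MICROCODE = {
    "ADD_MEM": ["LOAD r1, [addr]", "ADD r0, r1", "STORE [addr], r0"],
    "MVC":     ["LOAD r1, [src]", "STORE [dst], r1"],
}

def translate(program):
    """Expand each legacy instruction into its micro-op sequence."""
    micro_ops = []
    for insn in program:
        micro_ops.extend(MICROCODE[insn])
    return micro_ops

print(translate(["MVC", "ADD_MEM"]))
```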

     
  • Programs could be accelerated if computers had FPGAs as standard components.
  • Yeah, this sounds like an idea to enhance the concept of a hypervisor; develop functionality for better "error handling". Initially, I thought this was a rather impressive idea, most likely since embedded systems intrigue me. ...yet after more thought, the only way I could see this as beneficial would be to create a similar magic to the sheer awe IBM revealed with microcode. Though the issue of ensuring that this sort of implementation doesn't "hinder" performance right from the get-go perplexes me. In o
