A Hardware-Software Symbiosis 120

Roland Piquepaille writes "We all want smaller and faster computers. Of course, this increases the complexity of the work of computer designers. But now, computer scientists from the University of Virginia are coming up with a radical new idea which may revolutionize computer design. They've developed Tortola, a virtual interface that enables hardware and software to communicate and to solve problems together. This approach can be applied to get better performance from a specific system. It can also be used in areas such as security and power consumption. And it could soon be commercialized with the help of IBM and Intel."
  • by tuffy ( 10202 ) on Monday June 04, 2007 @02:28PM (#19385019) Homepage Journal
    A middle layer between hardware and software sounds a whole lot like an operating system - the sort of thing that "would allow software to adapt to the hardware it's running on". I can't figure out from the article what makes this thing so special.
  • URLs (Score:4, Informative)

    by mattr ( 78516 ) <mattr.telebody@com> on Monday June 04, 2007 @02:42PM (#19385237) Homepage Journal
    Her homepage [virginia.edu],
    tortola [tortolaproject.com] and
    possibly unrelated paper [acm.org]

  • Links and a comment (Score:5, Informative)

    by martyb ( 196687 ) on Monday June 04, 2007 @02:49PM (#19385325)

    Some Links:

    And a comment:

    I'm not entirely thrilled with this idea of dynamically communicating between hardware and software. From what I got from TFA, the hardware would change dynamically based on feedback from the software. It seems to me that we already have plenty of trouble writing programs that work correctly when the hardware does not change... imagine trying to debug a program when the computer hardware is adapting to the changes in your code. (IOW: heisenbugs [wikipedia.org].)

    Also, I've got some unease when I think about what malware authors could come up with using this technology. Sure, we'll come up with something else to counteract that... but I think it'll bring up another order of magnitude's worth of challenge in this cat-and-mouse game we already have.

  • by kebes ( 861706 ) on Monday June 04, 2007 @03:23PM (#19385793) Journal
    The linked article doesn't really explain the work very well. The project homepage [tortolaproject.com] has quite a bit more information. What they are trying to accomplish is indeed a middle layer between applications and hardware (e.g. OS functions or drivers), but the point is to solve a particular optimization problem (speed, low power usage, security) by optimizing software and hardware together.

    So, for example, if low power usage is the goal, then instead of fine-tuning the hardware for low power usage, and then also tuning the software for low power usage (e.g. getting rid of unnecessary loops, checks, etc.), the strategy would be to create specific hooks in the hardware system (accessible via the OS and/or a driver, of course) to allow this fine-tuning. Nowadays we have chips that can regulate themselves so that they don't use excessive power. But it could be even more efficient if the software were able to signal the hardware, for instance differentiating between "I'm just waking up to do a quick, boring check--no need to scale the CPU up to full power" versus "I'm about to start a complex task and would like the CPU ramped up to full power."
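
    (A rough present-day approximation of that kind of hint, sketched below, is to nudge the platform through Linux's cpufreq sysfs interface before and after a heavy phase. This only illustrates the signaling idea and is not Tortola's interface; the sysfs path, the available governors, and the required permissions depend on the kernel and cpufreq driver, and heavy_computation() is just a stand-in for the demanding task.)

        /* Sketch: tell the platform "real work is coming" before a heavy
         * phase, then drop back afterwards, via the cpufreq governor.
         * Assumes a Linux cpufreq sysfs layout and write permission;
         * it silently does nothing if the governor file can't be opened. */
        #include <stdio.h>

        static int set_governor(const char *gov)
        {
            FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
            if (!f)
                return -1;                  /* no cpufreq support or no permission */
            fprintf(f, "%s\n", gov);
            fclose(f);
            return 0;
        }

        static double heavy_computation(void)
        {
            double acc = 0.0;               /* stand-in for the demanding task */
            for (long i = 1; i < 50000000L; i++)
                acc += 1.0 / (double)i;
            return acc;
        }

        int main(void)
        {
            set_governor("performance");    /* "complex task coming, ramp up" */
            printf("result: %f\n", heavy_computation());
            set_governor("powersave");      /* "back to boring housekeeping" */
            return 0;
        }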

    They claim to have some encouraging results. From one of the abstracts:

    We have demonstrated the effectiveness of our approach on the well-known dI/dt problem, where we successfully stabilized the voltage fluctuations of the CPU's power supply by transforming the source code of the executing application using feedback from hardware.
    Obviously the idea of having software and hardware interact is not new (that's what computers do, after all)... but the intent of this project is, apparently, to push much more aggressively into a realm where optimizations are realized by using explicit signaling between hardware and software systems. (Rather than leaving the hardware or OS to guess what hardware state is best suited to the task at hand.)
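
    (For the other direction - hardware feedback steering software behavior - a crude user-space analogue on Linux is to poll the RAPL energy counter and back off when the measured power of the last chunk of work exceeds a budget. Again just an illustration of the feedback loop, not what Tortola does; the project transforms the running application's code, while this merely changes pacing at runtime. The sysfs path, its availability, and read permissions vary by CPU and kernel, and the 15 W budget is arbitrary.)

        /* Sketch of a hardware->software feedback loop: read the package
         * energy counter exposed by Linux's RAPL driver and briefly back
         * off whenever the measured power of the last work chunk exceeds
         * a budget. The feedback is effectively disabled (watts stays 0)
         * if RAPL is missing or unreadable. */
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        static long long read_energy_uj(void)
        {
            FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
            long long uj = -1;
            if (f) {
                if (fscanf(f, "%lld", &uj) != 1)
                    uj = -1;
                fclose(f);
            }
            return uj;
        }

        static double now_sec(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }

        int main(void)
        {
            const double budget_watts = 15.0;       /* arbitrary demo budget */
            volatile double acc = 0.0;

            for (int chunk = 0; chunk < 50; chunk++) {
                long long e0 = read_energy_uj();
                double t0 = now_sec();

                for (long i = 1; i < 20000000L; i++)    /* one chunk of work */
                    acc += 1.0 / (double)i;

                long long e1 = read_energy_uj();
                double watts = (e0 >= 0 && e1 >= e0)
                             ? (e1 - e0) / 1e6 / (now_sec() - t0)
                             : 0.0;
                if (watts > budget_watts)
                    usleep(100000);         /* hardware says "too much": back off */
            }
            printf("done (acc=%f)\n", (double)acc);
            return 0;
        }
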
  • by Animats ( 122034 ) on Monday June 04, 2007 @03:49PM (#19386095) Homepage

    First off, it's a Roland the Plogger story, so you know it's clueless. Roland the Plogger is just regurgitating a press release.

    Here's an actual paper [virginia.edu] about the thing. Even that's kind of vague. The general idea, though, seems to be to insert a layer of code-patching middleware between the application and the hardware. The middleware has access to CPU instrumentation info about cache misses, power management activity, and CPU temperature. When it detects that the program is doing things that are causing problems at the CPU level, it tries to tweak the code to make it not do so much bad stuff. See Power Virus [wikipedia.org] in Wikipedia for an explanation of "bad stuff". The paper reports results on a simulated CPU with a simulated test program, not real programs on real hardware.
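
    (For reference, the kind of instrumentation described here - cache misses and friends - is already exposed to user code on Linux through the perf_event_open(2) syscall; a minimal counter read looks roughly like the sketch below. This is Linux-specific, which events exist depends on the CPU/PMU, and access may be restricted by /proc/sys/kernel/perf_event_paranoid.)

        /* Minimal sketch: count hardware cache misses around a region of
         * code using Linux's perf_event_open(2). */
        #include <linux/perf_event.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <stdint.h>
        #include <string.h>
        #include <stdio.h>

        static long perf_open(struct perf_event_attr *attr)
        {
            /* pid=0: this process, cpu=-1: any CPU, no group, no flags */
            return syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
        }

        int main(void)
        {
            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.size = sizeof(attr);
            attr.type = PERF_TYPE_HARDWARE;
            attr.config = PERF_COUNT_HW_CACHE_MISSES;
            attr.disabled = 1;
            attr.exclude_kernel = 1;
            attr.exclude_hv = 1;

            int fd = perf_open(&attr);
            if (fd < 0) { perror("perf_event_open"); return 1; }

            static int buf[1 << 22];                /* 16 MB of ints to walk */
            long sum = 0;

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            for (int stride = 0; stride < 64; stride++)     /* monitored region */
                for (int i = stride; i < (1 << 22); i += 64)
                    sum += buf[i];
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

            uint64_t misses = 0;
            if (read(fd, &misses, sizeof(misses)) != (ssize_t)sizeof(misses))
                misses = 0;
            printf("cache misses: %llu (sum=%ld)\n",
                   (unsigned long long)misses, sum);
            close(fd);
            return 0;
        }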

    Some CPUs now power down sections of the CPU, like the floating point unit, when they haven't been used for a while. A program which uses the FPU periodically, but with intervals longer than the power-off timer, is apparently troublesome, because the thing keeps cycling on and off, causing voltage regulation problems. This technique patches the code to make that stop happening. That's what they've actually done so far.
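
    (A toy sketch of the shape of that fix: spread a trivial floating-point "keep-alive" through the long integer-only stretch so FP use stops arriving in widely spaced bursts. The bursty()/padded() functions below are invented for illustration; whether the FPU actually power-gates is up to the hardware, and the real system applies this kind of rewriting automatically, driven by hardware feedback, rather than asking the programmer to do it by hand.)

        #include <stdio.h>

        /* Original shape: a long integer-only stretch with an occasional FP
         * burst, i.e. FP use spaced further apart than a power-off timer. */
        static double bursty(long iters)
        {
            double fp = 0.0;
            long acc = 0;
            for (long i = 0; i < iters; i++) {
                acc += i ^ (i >> 3);                /* integer-only work */
                if (i % 1000000 == 0)
                    fp += (double)acc * 1e-9;       /* rare FP burst */
            }
            return fp + (double)acc;
        }

        /* "Patched" shape: a trivial FP keep-alive inside the integer
         * stretch, so FP activity is spread out instead of bursty. */
        static double padded(long iters)
        {
            double fp = 0.0, keepalive = 1.0;
            long acc = 0;
            for (long i = 0; i < iters; i++) {
                acc += i ^ (i >> 3);
                keepalive *= 1.0000001;             /* cheap FP op every iteration */
                if (i % 1000000 == 0)
                    fp += (double)acc * 1e-9;
            }
            return fp + (double)acc + keepalive * 0.0;  /* reference keepalive so it isn't optimized away */
        }

        int main(void)
        {
            printf("%f %f\n", bursty(10000000L), padded(10000000L));
            return 0;
        }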

    Intel's interest seems to be because this was a problem with some Centrino parts. [intel.com] So this is something of a specialized fix. It's a software workaround for some problems with power management.

    It's probably too much software machinery for that problem. On-the-fly patching of code is an iffy proposition. Some code doesn't work well when patched - game code being checked for cheats, DRM code, code being used by multiple CPUs, code being debugged, and Microsoft Vista with its "tilt bits". Making everything compatible with an on-the-fly patcher would take some work. A profiling tool to detect program sections that have this problem might be more useful.

    It's a reasonable piece of work on an annoying problem in chip design. The real technical paper is titled "Eliminating voltage emergencies via microarchitectural voltage control feedback and dynamic optimization" (International Symposium on Low-Power Electronics and Design, August 2004). If you're really into this, see this paper on detecting the problem during chip design [computer.org], from the Indian Institute of Technology Madras. Intel also funded that work.

    On the thermal front, back in 2000, at the Intel Developer Forum the keynote speaker after Intel's CEO spoke [intel.com], discussing whether CPUs should be designed for the thermal worst case or for something between the worst case and the average case: "Now, when you design a system, what you typically want to do is make sure the thermal of the system are okay, so even at the most power-hungry application, you will contain -- so the heat of the system will be okay. So this is called thermal design power, the maximum, which is all the way to your right. A lot of people, most people design to that because something like a power virus will cause the system to operate at very, very maximum power. It doesn't do any work, but that's -- you know, occasionally, you could run into that. The other one is, probably a little more reasonable, is you don't have the power virus, but what the most -- the most power consuming application would run, and that's what you put the TDP typical."

    From that talk, you can kind of see how Intel got into this hole. They knew it was a problem, though, so they put in temperature detection to slow down the CPU when it gets too hot. This prevents damage,

  • by ukillaSS ( 1111389 ) on Monday June 04, 2007 @06:54PM (#19388611)
    There has been a TON of work on creating uniform OS and middleware layers that are accessible to both SW and HW.

    * EPFL - Miljan Vuletic's PhD Thesis
    * University of Paderborn's ReconOS
    * University of Kansas's HybridThreads
    * etc. etc.

    This work is becoming very influential in areas of HW/SW co-design, computer architecture, and embedded & real-time systems due to its importance to both research-oriented and commercial computing.

    Additionally, this is now becoming a major thrust for many chip-makers who have realized that serial programs running on superscalar machines really aren't getting any faster. Multicore systems are now available, and are still showing no significant speedups due to a lack of proper parallel programming models. In the past, developing HW was considered "hard/difficult" and developing SW was "easy", usually because HW design involved parallel processes, synchronization, communication, etc., while SW involved a serial list of execution steps. Now that we have multiple cores, SW developers are realizing that not only are most programmers horrible at writing code that interacts with hardware (an inherently concurrent thing in most systems), but they are even worse at writing code that interacts with concurrent pieces of SW. The HW/SW boundary is only a small glimpse of how badly parallelism is managed in today's systems - we need to focus on how to describe massive amounts of coarse-grained parallelism in such a way that one can reason about parallel systems.
