Software / Operating Systems

Futuristic UC Berkeley OS Tessellation Controls Discrete 'Manycore' Resources

coondoggie writes "At the Design Automation Conference (DAC) this week, John Kubiatowicz, a professor in the UC Berkeley computer science division, offered a preview of Tessellation, describing it as an operating system for a future in which sensor-equipped surfaces, such as walls and tables in a room, could be used via touch or audio command to summon up multimedia and other applications. The UC Berkeley Tessellation website says Tessellation is targeted at existing and future so-called 'manycore' systems, which have large numbers of processors, or cores, on a single chip. Currently, the operating system runs on Intel multicore hardware as well as the Research Accelerator for Multiple Processors (RAMP) multicore emulation platform."
Comments:
  • by girlintraining ( 1395911 ) on Saturday June 08, 2013 @02:04AM (#43944217)

    I know I'm going to hell for this, but I read "futuristic" and "tessellation" in the summary and immediately thought of Loki from The Avengers. Terrible villain really, just went bad because he had daddy issues. *cough* Crap... going off topic and triggering a flame war from Marvel lovers. Yeah. I'm taking the special bus to hell now...

    • He was a nicely complicated villain in Thor - kept you guessing right to the end which side he was on, with his double-cross approach. He's a planner, not a fighter.

  • Sporadic scheduling (Score:5, Interesting)

    by Animats ( 122034 ) on Saturday June 08, 2013 @02:25AM (#43944283) Homepage

    Some of what they're doing with resource guarantees is like QNX's "sporadic scheduling" [qnx.com]. The idea is that you can guarantee a thread something like 1ms of CPU time every 10ms. This is useful for near-real-time tasks which need a bounded guarantee of responsiveness but don't need to preempt everything else immediately. Most UI activity is in this category. With lots of UI devices, including ones like vision systems that need serious compute resources, you need either something like this, or dedicated hardware for each device.

    On top of sporadic scheduling there should be a resource allocator which doesn't let you overcommit resources. So if something is allowed to run, it will run at the required speed.

    This is very useful in industrial process control and robotics. The use case for human interfaces is less convincing. (A sketch of such a sporadic-server reservation follows this thread.)

    • A bit like Penrose tiling https://en.wikipedia.org/wiki/Penrose_tiling [wikipedia.org] and aperiodic tiling https://en.wikipedia.org/wiki/Aperiodic_tiling [wikipedia.org].
    • Re: (Score:3, Interesting)

      by Anonymous Coward

      My thesis was about implementing a similar concept for Linux (http://saadi.sourceforge.net); it was working well in 2006, but I did not have the time to maintain it. The concept you describe has one flaw: for energy efficiency, most CPUs support dynamic voltage and frequency scaling, so a time-based CPU guarantee is useless. Instead, the guarantee should be about CPU cycles per time period.
      With that in place, it could also be used to increase energy efficiency, because the frequency can be lowered to just match the guaranteed cycle budget (see the conversion sketch after this thread).

    • Allocating 1ms in 10 to a process sounds pointless, because in an environment with that many cores, getting a CPU will not be a problem. It's a bit like implementing QoS for VoIP on a gigabit Ethernet link. Your VoIP channel is, what, a few kbps at most... a millionth of the available resource? What is the point of managing/scheduling it? The management overhead dwarfs the benefit in all but very rare cases.

      What I suspect they are really about is assigning (i.e. dedicating) groups of processors to particular applications.

    • I have a fast four-core computer with a GPU that has a gigabyte of memory. Even with all of that, Flash will bring it to a standstill. The worst is when the mouse pointer will not move. There are times when what I see lags behind what I am typing, and times when, after closing a program, it takes several seconds before it disappears from the screen. I get messages about closing scripts before they make my computer unresponsive. Most of those, if not all, have something to do with Flash.
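A minimal sketch of the 1 ms-per-10 ms reservation described in the sporadic-scheduling comment above, using the POSIX sporadic-server option (SCHED_SPORADIC), which QNX Neutrino implements but mainstream Linux does not. The priorities, budget values, and thread body are illustrative assumptions, not anything taken from Tessellation or QNX documentation.

/* Sporadic-server reservation: roughly 1 ms of CPU every 10 ms for a
 * near-real-time worker thread. Requires a system that implements the
 * POSIX _POSIX_SPORADIC_SERVER option (e.g. QNX Neutrino). */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *ui_worker(void *arg)
{
    (void)arg;
    /* ... poll input devices, update the display, etc. ... */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param sp = { 0 };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_SPORADIC);

    sp.sched_priority        = 20;   /* priority while budget remains     */
    sp.sched_ss_low_priority = 10;   /* priority once the budget is spent */
    sp.sched_ss_init_budget.tv_nsec = 1 * 1000 * 1000;   /* 1 ms budget   */
    sp.sched_ss_repl_period.tv_nsec = 10 * 1000 * 1000;  /* per 10 ms     */
    sp.sched_ss_max_repl = 4;        /* max queued replenishments         */
    pthread_attr_setschedparam(&attr, &sp);

    pthread_t tid;
    if (pthread_create(&tid, &attr, ui_worker, NULL) != 0)
        perror("pthread_create");
    else
        pthread_join(tid, NULL);
    return 0;
}

Once the 1 ms budget is spent, the thread drops to sched_ss_low_priority until the next replenishment, which is the bounded-but-not-preemptive behaviour the comment describes; an admission-control layer on top would refuse reservations that overcommit the cores.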
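The anonymous reply above argues the guarantee should be expressed in cycles per period rather than time, because frequency scaling changes how much work a millisecond buys. Here is a toy sketch of that conversion on Linux, assuming a hypothetical budget of two million cycles per 10 ms period; the cpufreq scaling_cur_freq file reports the current core frequency in kHz.

/* Convert a cycle budget into the time it takes at the current core
 * frequency (Linux cpufreq sysfs). The 2e6-cycle budget and 10 ms
 * period are illustrative values, not anything Tessellation specifies. */
#include <stdio.h>

int main(void)
{
    const double cycle_budget = 2e6;   /* cycles guaranteed per period */
    const double period_ms    = 10.0;  /* replenishment period         */

    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");
    if (!f) { perror("scaling_cur_freq"); return 1; }

    unsigned long khz = 0;
    if (fscanf(f, "%lu", &khz) != 1 || khz == 0) { fclose(f); return 1; }
    fclose(f);

    /* Time needed to deliver the cycle budget, expressed in ms. */
    double budget_ms = cycle_budget / (khz * 1000.0) * 1000.0;

    printf("at %lu kHz, %.0f cycles take %.3f ms of each %.1f ms period\n",
           khz, cycle_budget, budget_ms, period_ms);
    return 0;
}

A cycle-aware scheduler could run the calculation the other way: pick the lowest frequency at which the promised cycles still fit inside the period, which is the energy-efficiency point the reply is making.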
  • Tessellation means tiling a surface with polygons so there are no gaps or overlaps. That's it.

    Now whenever you google it, it turns up all this garbage from clueless gamers who think it's some kind of Crysis 3 feature. Now on top of that we have to deal with this?

  • by stox ( 131684 )

    "Is security an issue? Yes, Kubiatowicz acknowledges, suggesting cryptography, for one thing needs to be part of it."

    When are people going to learn? Unless you design an operating system to be secure from the very start, it is never going to be really secure.

    Some valuable lessons may be learned from this, but I don't see it having much of a future.

  • by TheRaven64 ( 641858 ) on Saturday June 08, 2013 @03:59AM (#43944523) Journal
    Well, it's definitely for nerds, but the Tessellation paper [acm.org] was published in 2009, so it's hardly news. For those who don't have ACM DL access, the paper [usenix.org] is interesting, but it suffers from many of the same problems as LibOS/Exokernel approaches.
