Futuristic UC Berkeley OS Tessellation Controls Discrete 'Manycore' Resources
coondoggie writes "At the Design Automation Conference (DAC) here this week, John Kubiatowicz, professor in the UC Berkeley computer science division, offered a preview of Tessellation, describing it as an operating system for the future in which sensor-equipped surfaces, such as the walls and tables of a room, could be used via touch or audio commands to summon multimedia and other applications. The UC Berkeley Tessellation website says Tessellation is targeted at existing and future so-called 'manycore' systems that have large numbers of processors, or cores, on a single chip. Currently, the operating system runs on Intel multicore hardware as well as the Research Accelerator for Multiple Processors (RAMP) multicore emulation platform."
Bad Marvel villain (Score:3)
I know I'm going to hell for this, but I read "futuristic" and "tessellation" in the summary and immediately thought of Loki from The Avengers. Terrible villain, really; he just went bad because he had daddy issues. *cough* Crap... going off topic and triggering a flame war from Marvel lovers. Yeah, I'm taking the special bus to hell now...
Re: (Score:2)
He was a nicely complicated villain in Thor - kept you guessing right to the end which side he was on, with his double-cross approach. He's a planner, not a fighter.
holy FAIL, batman! (Score:1)
So there's a new weird name for an OS, but how is it really different from any other OS? Every other modern OS can do the same and will be doing the same: allow many input devices, and leave it up to drivers to take care of what each type of device is allowed to do.
Wow, Roman, just wow. We knew you failed math, physics, chemistry, biology, and (your favorite lecture subject) economics while taking courses at one of the largest publicly funded research universities in the Western Hemisphere. But apparently you failed operating systems as well? What you just stated describes quite nearly every operating system ever made and would suggest that every OS is the same regardless of its lineage. I have never met a reasonable person who would suggest, for example, that DO
Re: (Score:2)
Based on the Slashdot summary, he does have a point. All the summary describes is a computer with multiple peripherals attached to it. Multiple monitors and sound devices and various input devices connected through who knows what buses. It sounds like they're just describing a home media server.
Sporadic scheduling (Score:5, Interesting)
Some of what they're doing with resource guarantees is like QNX's "sporadic scheduling" [qnx.com]. The idea is that you can guarantee a thread something like 1ms of CPU time every 10ms. This is useful for near-real-time tasks which need a bounded guarantee of responsiveness but don't need to preempt everything else immediately. Most UI activity is in this category. With lots of UI devices, including ones like vision systems that need serious compute resources, you need either something like this, or dedicated hardware for each device.
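For reference, a minimal sketch of that 1 ms / 10 ms guarantee through the POSIX sporadic-server interface, assuming a platform such as QNX that actually implements SCHED_SPORADIC (the priority numbers here are made up):

#include <pthread.h>
#include <sched.h>

static void *ui_task(void *arg) {
    /* near-real-time UI work goes here */
    return arg;
}

int spawn_ui_thread(pthread_t *tid)
{
    pthread_attr_t attr;
    struct sched_param sp = {0};

    sp.sched_priority        = 20;  /* priority while budget remains */
    sp.sched_ss_low_priority = 5;   /* dropped to this once budget is spent */
    sp.sched_ss_init_budget.tv_nsec = 1000000;   /* 1 ms of CPU... */
    sp.sched_ss_repl_period.tv_nsec = 10000000;  /* ...every 10 ms */
    sp.sched_ss_max_repl = 4;       /* cap on pending replenishments */

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_SPORADIC);
    pthread_attr_setschedparam(&attr, &sp);
    return pthread_create(tid, &attr, ui_task, NULL);
}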
On top of sporadic scheduling there should be a resource allocator which doesn't let you overcommit resources. So if something is allowed to run, it will run at the required speed.
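A sketch of what such an admission check could look like under the simple utilization model (all names hypothetical): a new (budget, period) reservation is accepted only if the summed utilization still fits on the available cores.

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    double budget_ms;   /* guaranteed CPU time per period */
    double period_ms;   /* replenishment period */
} reservation;

/* Accept the request only if total utilization (sum of budget/period)
 * stays at or below the number of cores -- i.e., never overcommit. */
bool admit(const reservation *held, size_t n,
           reservation req, double num_cores)
{
    double u = req.budget_ms / req.period_ms;
    for (size_t i = 0; i < n; i++)
        u += held[i].budget_ms / held[i].period_ms;
    return u <= num_cores;
}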
This is very useful in industrial process control and robotics. The use case for human interfaces is less convincing.
Re: (Score:2)
Re: (Score:3, Interesting)
My thesis was about implementing a similar concept for Linux (http://saadi.sourceforge.net); it was working well in 2006, but I did not have the time to maintain it. The concept you describe has one flaw: for energy efficiency, most CPUs support dynamic voltage scaling, so a time-based CPU guarantee is useless. Instead, the guarantee should be about CPU cycles per time period.
With that in place, it could be used to increase energy efficiency as well because the frequency can be optimized to just match t
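To make that concrete, a small sketch (hypothetical names, overflow ignored) of a cycles-per-period guarantee: the time slice follows from the current DVFS frequency, and the lowest frequency that still fits the budget inside the period is where the energy saving comes from.

#include <stdint.h>

typedef struct {
    uint64_t cycles;    /* guaranteed CPU cycles per period */
    uint64_t period_ns; /* replenishment period in nanoseconds */
} cycle_guarantee;

/* Time slice needed at the current core frequency to honor the
 * guarantee; it grows automatically when the core is clocked down. */
static uint64_t slice_ns(const cycle_guarantee *g, uint64_t freq_hz)
{
    return g->cycles * 1000000000ULL / freq_hz;
}

/* Lowest frequency that still fits the slice inside the period. */
static uint64_t min_freq_hz(const cycle_guarantee *g)
{
    return g->cycles * 1000000000ULL / g->period_ns;
}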
Re: (Score:2)
What I suspect they are really about is assigning (ie. dedicating) groups of proces
Re: (Score:2)
stop abusing the word Tessellation (Score:2)
Tessellation means to cover a polygon with smaller polygons. That's it.
Now whenever you google it, it turns up all this garbage from clueless gamers who think it's some kind of Crysis 3 feature. And on top of that we have to deal with this?
Re: (Score:3)
Tessellation means to cover a polygon with smaller polygons.
Actually, no. Tiles may be polygons, but they aren't always.
From wikipedia:
More formally, a tessellation or tiling is a partition of the Euclidean plane into a countable number of closed sets called tiles, such that the tiles intersect only on their boundaries.
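Or, transcribing that quote into symbols: the tiles \(T_i\) are closed sets with

\[
\bigcup_{i \in \mathbb{N}} T_i = \mathbb{R}^2,
\qquad
T_i \cap T_j \subseteq \partial T_i \cap \partial T_j \quad (i \neq j),
\]

i.e. they cover the plane and meet only along their boundaries.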
Re: (Score:2)
Look! http://www.wikipaintings.org/en/m-c-escher/horseman-1 [wikipaintings.org]
A tessellation that is not made up of polygons. Dang.
Re: (Score:2)
So, those aren't many-sided figures? I suppose you think they're literally made up of infinitely many points, too.
Re: (Score:2)
Look! http://www.wikipaintings.org/en/m-c-escher/horseman-1 [wikipaintings.org]
A tessellation that is not made up of regular polygons. Dang.
There... fixed that for you...
Re: (Score:2)
You do realize that what GPUs are doing in games like Crysis 3 is covering a polygon with smaller polygons?
Huh? (Score:2)
"Is security an issue? Yes, Kubiatowicz acknowledges, suggesting cryptography, for one thing needs to be part of it."
When are people going to learn? Unless you design an operating system to be secure from the very start, it is never going to be really secure.
Some valuable lessons may be learned from this, but I don't see it having much of a future.
News for Nerds? (Score:3)
Re: (Score:2)
Isn't that like using a computer through a secretary?
It's inefficient for most tasks.
Anyhow, buy an Xbox One when it comes out. At least then you can get the NBC news by asking it.
Re: (Score:2)
Call me old-fashioned, but I like having an interface between me and the machine. Even if it becomes some haptic setup, I find it somewhat comforting to have a little something between me and it.
I hope my need to have congress with a machine never becomes so great that I have to do away with the interface.
Of course, we could skip the whole neural interface and go straight to the autonomous machines...