
Toward An FSF-Endorsable Embedded Processor

Posted by Unknown Lamer
from the building-rms-a-new-computer dept.
lkcl writes about his effort to go further than others have and actually get a processor designed for Free Software manufactured: "A new processor is being put together — one that is FSF-Endorsable and contains no proprietary hardware engines, yet an 800MHz 8-core version would, at 38 GFLOPS, be powerful enough on raw GFLOPS figures to take on the 3GHz AMD Phenom II X4 940, the 3GHz Intel i7 920, and other respectable mid-range 100-watt CPUs. The difference: power consumption for an 8-core version in 40nm would be under 3 watts. The core design has been proven in 65nm and is based on a hybrid approach, with a general-purpose instruction set designed from the ground up to help accelerate 3D graphics and video encode and decode. An 8-core 800MHz version would be capable of 1080p30 H.264 decode, with peak 3D rates of 320 million triangles/sec and a peak fill rate of 1600 million pixels/sec. The unusual step, for the processor world, is being taken of soliciting input from the Free Software community at large before going ahead with putting the chip together. So have at it: if given carte blanche, what interfaces and what features would you like an FSF-Endorsable mass-volume processor to have? (Please don't say 'DRM' or 'built-in spyware')." There's some discussion on arm-netbook. This is the guy behind the first EOMA-68 card (currently nearing production). As a heads up, we'll be interviewing him live, similar in style to the Woz interview (although intentionally this time), next Tuesday.


  • DRM (Score:5, Interesting)

    by queazocotal (915608) on Tuesday December 04, 2012 @02:29PM (#42181975)

    DRM, in some aspects - trusted computing - can be a positive thing.
    My ideal system would have a root key that I can set: without software signed by that key, the machine is a rock.

  • Scientific Computing (Score:5, Interesting)

    by simonbp (412489) on Tuesday December 04, 2012 @02:30PM (#42181991) Homepage

    IMHO, they really need to push this for scientific computing initially, as those customers tend to buy in bulk and are not very binary-dependent. They are claiming it is so low-power (2.7W) that it would be easy to put an array of, say, eight of them on a 1U motherboard for 64 cores.

  • No thanks (Score:5, Interesting)

    by betterunixthanunix (980855) on Tuesday December 04, 2012 @02:30PM (#42182005)
    Can we please move away from x86? That architecture is horribly outdated, loaded down with things that sort-of made sense in the 1970s. Today's x86 CPUs are just dressed up RISC machines; let's free up some of that chip space and just use RISC.

    If you want to run x86 binaries, use a dynamic translation tool.
  • by muon-catalyzed (2483394) on Tuesday December 04, 2012 @02:50PM (#42182321)
    Hopefully the FSF also patents it, so no troll can extort license fees for using the technology. In fact the FSF should patent it all, make the blueprints available RFC-style, and not bother with anything else.
  • Re:No thanks (Score:5, Interesting)

    by lkcl (517947) <lkcl@lkcl.net> on Tuesday December 04, 2012 @02:54PM (#42182381) Homepage

    Can we please move away from x86?

    yes please!

    That architecture is horribly outdated, loaded down with things that sort-of made sense in the 1970s. Today's x86 CPUs are just dressed up RISC machines; let's free up some of that chip space and just use RISC.

    this team came from the perspective of what makes a good GPU, then turned it into a CPU. it's about as far from x86 as you can possibly get. luckily they've done the hard part of porting at least one OS (android), so they've proven the tools, the compiler, the kernel, everything.

    with linux now being the main OS it's hard for me to even remember that windows and x86 were relevant at one point. not that i'm ruling out the possibility of MS porting windows to this chip: if they want to, that's great: they'll just have to bear in mind that there will be no DRM, so they won't be able to lock everyone out.

    If you want to run x86 binaries, use a dynamic translation tool.

    who was it.... i think it was ICT who put 200 special instructions into the Loongson 2H, which allow it to accelerate-emulate the most common x86 instructions. they got 70% of the main processor's speed.

  • Vaporware? (Score:5, Interesting)

    by WoOS (28173) on Tuesday December 04, 2012 @02:55PM (#42182401)

    From TFA:

    >The deadline:
    > July 2013 for first mass-produced silicon
    >
    >The cost:
    > $USD 10 million

    This poster either has no idea or is dreaming. In six months he will not have an SoC through potentially several tape-outs, having first done system engineering, design, synthesis, layout, verification, validation, documentation, ... and seemingly all without an existing organization. Or are SoC manufacturers lately doing short-term build-to-order processors? And the $10 million is not going to cover the necessary cost for all of the above. The masks alone might be that expensive, depending on the number of tape-outs required (which, without an existing organization and working design flow, will be a lot).

  • by AdamHaun (43173) on Tuesday December 04, 2012 @03:41PM (#42183141) Journal

    Forget the performance numbers, the whole thing is bullshit:

    * The proposal is dated December 2, 2012 for an advanced kitchen sink SoC with silicon in July 2013? Really?

    * Their never-released-to-market CPU design that beats an ARM on one video-decoding benchmark is ready to go, except they need to move it to a new process, double the number of cores, and speed it up by 30%. Trivial, I'm sure.

    * This bit here:

    What's the next step?

    Find investors! We need to move quickly: there's an opportunity to hit
    Christmas sales if the processor is ready by July 2013. This should be
    possible to achieve if the engineers start NOW (because the design's
    already done and proven: it's a matter of bolting on the modern interfaces,
    compiling for FPGA to make sure it works, then running verification etc.
    No actual "design" work is needed).

    The design is done! They just have to, you know, grab their perfectly-working peripheral IPs from unstated sources, "bolt them on" to their heavily-modified CPU, and then compile for FPGA. And maybe some timing simulations for their new 40nm process, but I'm sure that won't turn up any problems. And "verification, etc." (aka the part where you actually make it work). And fixing any problems found in silicon. But no *actual* design work is needed.

    I have spent the last three months in my day job on a team of a dozen people writing design verification test cases for a new SoC. Fuck you for talking like that's nothing.

    * They're going to hit "Christmas sales"? So despite being a real, honest, for-profit, multi-million-selling product (we swear), they're still targeting a consumer shopping season. Hint: you want your chip to go into other products. Products sold at Christmas time are designed long before Christmas, probably more than six months before, i.e. July 2013. Oops.

    * No mention of post-silicon testing, reliability studies, or even whether they've got a test facility lined up, or what kind of resources they need for long-term support. I said it when OpenCores pulled this crap [slashdot.org], and I'll say it again. Hardware is not software. You have to think about this stuff. Yield and reliability are what determine whether other companies buy your stuff and whether you make money from it.

    Let me offer some advice to anyone who wants to change the semiconductor world overnight with the magic of open source: start small. Really small. Even Linus Torvalds didn't start out planning to conquer the world. Maybe you could start by trying to get open source IP blocks into commercial products. Once there's a bench of solid, field-tested designs, *then* we can talk about funding an attempt to put it all together. But coming out of nowhere and asking for $10 million is not the way to start. Just ask OpenCores -- their big donation drive got them a grand total of $20 thousand [opencores.org].

  • by lkcl (517947) <lkcl@lkcl.net> on Tuesday December 04, 2012 @04:10PM (#42183567) Homepage

    Forget the performance numbers, the whole thing is bullshit:

    * The proposal is dated December 2, 2012

    pay attention 007: we're aiming for mid-2013, not yesterday :) literally yesterday: today's the 4th, right? also, we're open to all kinds of investment opportunities. this article is a heads-up.

    also, bear in mind: the core design's already proven. mid-2013, whilst pretty aggressive, is doable *SO LONG AS* we *DO NOT* do any "design" work. just building-blocks, stack them together, run the verification tools, run it in FPGAs to check it works, run the verification tools again... etc. etc.

    the teams we're working with know what they're doing. me? i have no clue, and am quite happy not knowing: this is waaay beyond my expertise level and the time i have to learn. i'm quite happy to let other people do this. if you can provide some useful feedback or input, or have expertise that we can contract out to you, GREAT.

  • by lkcl (517947) <lkcl@lkcl.net> on Tuesday December 04, 2012 @05:24PM (#42184623) Homepage

    tell me about it. please share your concerns. this is not being sarcastic: i need to know. i need to know what the right questions to ask are, because i don't know.

  • Re:No thanks (Score:4, Interesting)

    by Richard_J_N (631241) on Tuesday December 04, 2012 @08:38PM (#42186725)

    How about implementing just a few of the most common C library functions in dedicated hardware? For example, atoi(), strlen(), or printf(). Although the software routines are highly optimised, they still take hundreds to thousands of cycles. Dedicated libc functions would require a significant amount of chip die space, BUT they would be really power-efficient: powered off most of the time, and simply used when needed. Imagine being able to use these functions as single-cycle instructions... even if the core ran at 100MHz, the performance would be amazing. Essentially it lets us trade a few hundred thousand transistors (now very cheap) for a few mW (still rather valuable).

