
Toward An FSF-Endorsable Embedded Processor

lkcl writes about his effort to go further than others have and actually get a processor designed for Free Software manufactured: "A new processor is being put together — one that is FSF-Endorsable, contains no proprietary hardware engines, yet an 800MHz 8-core version would, at 38 GFLOPS, be powerful enough on raw GFLOPS performance figures to take on the 3GHz AMD Phenom II X4 940, the 3GHz Intel i7 920 and other respectable mid-range 100 Watt CPUs. The difference: power consumption in 40nm for an 8-core version would be under 3 watts. The core design has been proven in 65nm, and is based on a hybrid approach, with its general-purpose instruction set designed from the ground up to help accelerate 3D graphics and video encode and decode. An 8-core 800MHz version would be capable of 1080p30 H.264 decode, and would have peak 3D rates of 320 million triangles/sec and a peak fill rate of 1600 million pixels/sec. The unusual step in the processor world is being taken to solicit input from the Free Software community at large before going ahead with putting the chip together. So have at it: if given carte blanche, what interfaces and what features would you like an FSF-Endorsable mass-volume processor to have? (Please don't say 'DRM' or 'built-in spyware')." There's some discussion on arm-netbook. This is the guy behind the first EOMA-68 card (currently nearing production). As a heads-up, we'll be interviewing him live, in a style similar to the Woz interview (although intentionally this time), next Tuesday.
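
As a quick back-of-the-envelope check on those headline numbers, here is what they imply per clock. This assumes the figures are theoretical peaks for the 8-core 800MHz part; the derived per-cycle rates below are the editor's arithmetic, not from any design document:

```c
/* What the claimed aggregate figures imply per cycle:
 * 38 GFLOPS across 8 cores at 800MHz, plus the quoted fill and
 * triangle rates for the whole chip. */
#include <stdio.h>

int main(void) {
    const double cores = 8.0, hz = 800e6;
    printf("FLOPS/cycle/core:        %.2f\n", 38e9 / cores / hz);  /* ~5.94 */
    printf("pixels/cycle, chip-wide: %.2f\n", 1600e6 / hz);        /*  2.00 */
    printf("triangles/cycle:         %.2f\n", 320e6 / hz);         /*  0.40 */
    return 0;
}
```
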
  • by dywolf ( 2673597 ) on Tuesday December 04, 2012 @02:30PM (#42182003)

    OK, more than a little.

  • Re:x86 - NOT!!!!! (Score:5, Insightful)

    by fnj ( 64210 ) on Tuesday December 04, 2012 @02:34PM (#42182063)

    I couldn't care less if it is x86 compatible (I assume it is emphatically not). I'm sure the FSF does not care, either. I would use this in a heartbeat for my main desktop, and since I haven't had any significant dealings with Windows in at least 8 years, all I need is a free POSIX OS (probably Linux) and a C/C++ compiler.

  • Re:No thanks (Score:4, Insightful)

    by CajunArson ( 465943 ) on Tuesday December 04, 2012 @02:34PM (#42182073) Journal

    Today's ARM architecture is just a dressed-up CISC architecture. Let's move away from ARM's lame attempts at copying AVX with NEON and just use the real thing!

    (You see how the door swings both ways there? Trust me, if any architecture designer from the early 1990s were frozen in a block of ice, thawed out today, and then shown the ARMv8 ISA, he would never in a million years call it "RISC".)

  • by godrik ( 1287354 ) on Tuesday December 04, 2012 @02:57PM (#42182445)

    Indeed, high gigaflops is easy; useful high gigaflops is hard. You can easily build a processor that supports only float addition and nothing else, with a 1024-bit SIMD register clocked at 4GHz. And voilà, you get 128 GFLOPS per core (the arithmetic is sketched after this comment). Problem is, it's useless.

    The question is not how many adds or muls you can do per second in an ideal application for your architecture. The question is how many adds or muls (or whatever you need to measure) you can do per second on a real application.

    For instance, the Top500 uses Linpack, which measures how fast one can solve dense linear systems. That problem is of interest only to a small number of people.
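
To make godrik's arithmetic explicit, here is a minimal sketch of how such a peak number is manufactured. The lane width, clock, and "add only" constraint are his hypothetical, not any real part:

```c
/* Peak GFLOPS = SIMD lanes x FP ops per lane per cycle x GHz.
 * The deliberately useless chip above: a 1024-bit register holds 32
 * single-precision floats, does 1 add per cycle, and runs at 4GHz. */
#include <stdio.h>

int main(void) {
    int lanes = 1024 / 32;   /* 32 fp32 lanes in a 1024-bit register */
    int ops_per_lane = 1;    /* float addition only                  */
    double ghz = 4.0;
    printf("peak: %.0f GFLOPS per core\n", lanes * ops_per_lane * ghz);
    return 0;
}
```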

  • Also (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Tuesday December 04, 2012 @03:01PM (#42182523)

    Compare it to a more modern processor. You want floating-point performance? Take a look at a Sandy/Ivy Bridge. My 2600K, which I have set to run at 4GHz, gets about 90 GFLOPS in Linpack (see the peak-vs-measured sketch after this comment). The reason is Intel's new AVX extension, which really is something to write home about for DP FP. Ivy Bridge is supposedly a bit more efficient per clock (I don't have one handy to test).

    If you are bringing out a processor at some point in the future, you need to compare to the latest products your competitors have, since that is realistically what you face. You can't look at something two generations old, as the 920 is, and say "Well we compete well with that!" because if I'm looking at buying your new product, it is competing against other new products.
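
To put Sycraft-fu's 90 GFLOPS figure in context, here is a minimal peak-vs-measured sketch. The per-core constants (one 256-bit FP add and one 256-bit FP multiply issued per cycle on Sandy Bridge) are the commonly cited figures, not something taken from the comment itself:

```c
/* Sandy Bridge DP peak: a 256-bit AVX op covers 4 doubles, and the
 * core can issue one FP add and one FP mul each cycle, so 8 DP
 * FLOPS/cycle/core.  A 4-core 2600K held at 4GHz then peaks at
 * 128 GFLOPS, making the measured 90 GFLOPS roughly 70% efficiency. */
#include <stdio.h>

int main(void) {
    double flops_per_cycle = 4.0 + 4.0;      /* add port + mul port */
    double peak = 4 /*cores*/ * 4.0 /*GHz*/ * flops_per_cycle;
    printf("peak %.0f GFLOPS, Linpack efficiency %.0f%%\n",
           peak, 100.0 * 90.0 / peak);
    return 0;
}
```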

  • by CajunArson ( 465943 ) on Tuesday December 04, 2012 @03:40PM (#42183127) Journal

    unless you consider 1333MHz 32-bit DDR3 not to be a real memory controller?

    Thanks for filling in that detail, since I didn't know the precise specs (and for proving me right). To reiterate: no, this thing does not have a real memory controller compared to the 128-bit (2-channel 64-bit) or 192-bit (3-channel 64-bit) memory controllers in the AMD and Intel chips, respectively, that are mentioned in TFS (the bandwidth arithmetic is sketched after this comment).

    You can go on and on about some busy-loop that you were able to code that gets all those gigaflops. I can get a 386 to tell me the result of 100 quadrillion quad-precision add-muls where the only operands are zero in less than a second, too... but it isn't useful work.

    Trust me, if a chip even remotely like the one you are describing could do all that useful computational work in less than 3 watts using a previous-generation process, then it would already have been deployed in supercomputers years ago and this wouldn't be some pie-in-the-sky FSF project.

    I have no problem with a hobby project to build a CPU with an open architecture, but frankly, hyperbole and outright dishonesty about performance expectations are not doing you or anyone else in the project any favors. Being "open" should include being honest & realistic first and foremost.
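
To put numbers on the memory-controller comparison above, here is the peak-bandwidth arithmetic, assuming DDR3-1333 on every configuration (the bus widths are the ones named in the comment):

```c
/* Peak DRAM bandwidth = transfer rate x bus width in bytes. */
#include <stdio.h>

int main(void) {
    double mts = 1333e6;                      /* DDR3-1333: 1333 MT/s */
    printf("32-bit, 1 channel: %5.1f GB/s\n", mts *  4.0 / 1e9);  /*  5.3 */
    printf("128-bit (2 x 64):  %5.1f GB/s\n", mts * 16.0 / 1e9);  /* 21.3 */
    printf("192-bit (3 x 64):  %5.1f GB/s\n", mts * 24.0 / 1e9);  /* 32.0 */
    return 0;
}
```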

  • by AdamHaun ( 43173 ) on Tuesday December 04, 2012 @04:56PM (#42184251) Journal

    pay attention 007: we're aiming for mid-2013

    Yes, that's what I said:

    * The proposal is dated December 2, 2012, for an advanced kitchen-sink SoC with silicon in July 2013? Really?

    Perhaps my phrasing was unclear. I am skeptical of a six-month development process.

    also, bear in mind: the core design's already proven.

    By whom? To what specs (temperature, voltage, operating life)? Using what methodology?

    mid-2013, whilst pretty aggressive, is doable *SO LONG AS* we *DO NOT* do any "design" work. just building-blocks, stack them together, run the verification tools, run it in FPGAs to check it works, run the verification tools again... etc. etc.

    You know you can't go straight from RTL to silicon, right? You need timing sims and physical layout. Those are not trivial and they cannot be totally automated.

    the teams we're working with know what they're doing. me? i have no clue, and am quite happy not knowing: this is waaay beyond my expertise level and time to learn.

    Okay, here's the part that confuses me. You came up with an idea, talked to other people with expertise about doing it, and it sounds like you know who's working on it. All of that is fine. What I don't understand is why you are acting as the leader/spokesman for a project you know almost nothing about. Who are these other groups? The link at the bottom of your proposal is to a no-name Chinese semiconductor company that formed last year and has no products listed. Are they doing the RTL, layout, and verification? Who's doing the silicon testing? What foundry will you use?

    The reason I'm being so harsh here is because you're asking for a lot of money with very little credibility. There is nothing in your proposal, your CV, or your comments to suggest that you are competent to work on a project like this. So who's doing the work? Why aren't their names on the proposal? Who has the experience and leadership to make sure the project actually gets done? Why are you "quite happy not knowing" what they're doing when you're the one trying to secure funding?

    If you come back here in 2013 with a working chip I'll be the first to apologize, but right now I see very little reason to take this seriously.

  • Re:No thanks (Score:5, Insightful)

    by Anonymous Coward on Tuesday December 04, 2012 @05:04PM (#42184367)

    Yes, we can move away from x86.
    No, it isn't a good idea.

    It's time to put this one to rest.
    It's been a few decades, and we've seen the argument play out in theory, in practice, and through to its conclusion today.

    x86 (and its, er, extensions/evolutions) IS the better general-purpose arch. But not for the reasons anyone conceived of. I think it's best put this way.

    1. RISC (for example) is very good at running good code.
    2. Most code is bad. (No, really, it's awful. Ask any programmer.)
    3. x86 processors, it turns out, are very good at running bad code.

    Many other arches were created under the premise that good code could be created for them automatically. Turns out that compilers that can do this are like unicorns: they don't exist. It's an NP-hard problem.

    It's what killed Itanium. The magic compilers never turned up. The amount of developer effort required to write good software by hand isn't worth it.

    *Why is most code bad, you ask? Easy. Programming, put crudely, is a bullshit art.
    Just ask Dijkstra (well, not anymore; he's dead now). Programs are math. Few programs, however, are proven "correct" mathematically; that's impractical for most applications. Sure, you have rules you call "Practices" that tend to generate better code. But everyone knows how code is really developed nowadays: lay it down, slap it around until the show-stoppers are reduced to a bearable frequency, and patch up anything you missed after it ships.

    I'm not saying this approach is necessarily bad. It has advantages. It's fast, and you can get a lot of useful work out of it. If your idea or application is good or novel or productive enough, you can put up with some bugs, and at the end of the day you'll end up ahead. If you set out to write a program that's mathematically provable from start to finish, your competitors will have buried you years before your first release.

  • by CajunArson ( 465943 ) on Tuesday December 04, 2012 @05:40PM (#42184847) Journal

    First of all: Lots of non-x86 high-performance computers have similar memory controller layouts. Look at high-end SPARC or Power architecture systems.

    Second of all: Thanks for proving me right with your screed about how ARM chips don't have good memory controllers. Guess what: you're right! They don't! And guess what: the Cortex-A15 is the first ARM chip capable of beating a 4-year-old Atom when clocked north of 1.5GHz! So that's the type of performance that even the supposedly miraculous ARM gets with its architecture and a similar memory controller! You are now claiming to be insanely smarter than everyone at ARM and Intel simultaneously... If chips could be designed and built based solely on arrogance & ego, you'd put ARM & Intel out of business by next Tuesday.

    So basically you have been trolling this thread, calling everybody who has pointed out flaws in the grandiose promises you have put forth "007" in a smarmy and condescending manner, while presenting zero facts to back up your arguments and contradicting yourself at every turn.

    From your annoying and repetitive use of "007", do you perchance speak with a British accent? Do you appear in infomercials at 2AM pushing whatever fake product of the day some insomniac can buy for $19.95? Because that's exactly how you come across in these discussions, and if you actually are associated with this project and aren't just a troll, then I'd highly recommend that the FSF immediately disavow this project before they end up getting sued when you make off with somebody's money.

  • by LordLimecat ( 1103839 ) on Tuesday December 04, 2012 @05:47PM (#42184943)

    well, tell you what, rather than accusing, why don't you ask me to ask them

    It's not a matter of asking. If someone could match even a two-generations-old i7 design on 3 watts, they would have done so by now, undercut Intel, and made zillions. They can't, because Intel processors are really good, their R&D budget dwarfs the budget of most US states, and, not to mention, they own their own fabs and are 1-2 generations ahead of literally everyone else in process scale.

    Even without deep technical knowledge, it doesn't pass the smell test.

  • Re:x86 (Score:2, Insightful)

    by Anonymous Coward on Wednesday December 05, 2012 @02:10AM (#42188833)

    But with various advances and lessons learned in chip and PC architecture, it makes sense that an 800MHz processor could take on a 3GHz processor and kick its butt.

    No, it doesn't, actually. Not when the 3GHz processor is one of the world's most efficient designs in existence (in terms of realized computational power per clock cycle). Not when the effort required to reach that pinnacle of complexity and performance is... astonishing. (Think: 5-year-long design projects.)

    Each discrete processor instruction in x86 land still takes several clock cycles to execute. (I know, pipelining and multiple instructions are being processed at all times assembly line style so the effective instructions per cycle is different.)

    You don't have any clue anyway. Throughput on common x86 instructions often exceeds one per cycle on modern advanced x86 core designs. It hardly matters that the start-to-finish latency for an individual instruction is many cycles due to pipelining; without pipelines, no design could operate even as fast as 800MHz.

    But if you combine current technology and design it from the ground up to do the kinds of things we do today, it would make sense that it would use less power and fewer cycles per instruction.

    This only makes "sense" if you're an armchair theorizer with no real insight into the problem space. In reality, the worst problem with x86 as of x86-64 is that its instruction encoding is somewhat difficult to decode efficiently, due to the variable-length instructions (see the decoder sketch after this comment). A nicer encoding (even one with variable length, so long as the total length were easy to determine from the first 2 bytes instead of having to scan the whole instruction as you do with x86) would save some power... but not a lot. There isn't any revolution lurking there, the way you naively want to believe.

    The reason we aren't doing all that well today is that x86 things are crippled into doing things the x86 way because they are still needed to run x86 software.

    So, while all other things are changing, why not take the opportunity and update the processor, OS and software along with the style of computing? I know Microsoft's answer is to adapt x86 Wintel into other forms. No one wants this other than Microsoft...

    Just how old are you? I'm thinking you can't possibly be very old if you aren't aware that all the way back in 1991 the Apple-IBM-Motorola consortium created PowerPC, which was going to eat the lunch of all of the x86 CPUs by being a clean new architecture designed on scientific RISC principles. Apple replaced 68K with PPC. Motorola ported Windows NT to PPC and touted how in the brave new world people would flock to PPC for far better performance which x86 could never provide.

    It took all of about 5 minutes for the NT-on-PPC plans to flop. There wasn't a real performance advantage there, so no third party software vendors were interested in porting application software, so no users wanted to buy PPC NT machines just so they could wank off about how pure and wonderful their CPUs were. The first PPC Macs shipped in 1994. Apple was reasonably successful with it for a while, because Mac users had little choice but to switch and PPC performance at least generally kept up with x86 for a while, but eventually PPC began falling behind and Apple was forced to switch.

    The problems with x86 weren't fatal in 1991, and they aren't fatal 20 years later either. A clean sheet design could indeed be slightly better, but not radically better.
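
A toy illustration of the decode problem described above: on x86 you cannot know where instruction N+1 begins until you have parsed enough of instruction N to learn its length, whereas a fixed-length ISA can decode every slot independently. This sketch handles only legacy prefixes; a real decoder must also cover opcode maps, ModRM, SIB, displacements, and immediates:

```c
/* Counting x86 legacy prefixes: the first, inherently serial step of
 * finding an instruction's length.  A fixed-length RISC decoder can
 * instead hand decoder k the bytes at pc + 4*k, with no dependence on
 * its neighbors. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static int is_legacy_prefix(uint8_t b) {
    switch (b) {
    case 0x66: case 0x67:                /* operand/address size       */
    case 0xF0: case 0xF2: case 0xF3:     /* LOCK, REPNE, REP           */
    case 0x26: case 0x2E: case 0x36:
    case 0x3E: case 0x64: case 0x65:     /* segment overrides          */
        return 1;
    default:
        return 0;
    }
}

static size_t count_prefix_bytes(const uint8_t *code, size_t max) {
    size_t n = 0;
    while (n < max && is_legacy_prefix(code[n]))
        n++;    /* only now can opcode/ModRM parsing even begin */
    return n;
}

int main(void) {
    const uint8_t code[] = { 0x66, 0xF3, 0x90 };  /* two prefixes, then an opcode */
    printf("prefix bytes before opcode: %zu\n",
           count_prefix_bytes(code, sizeof code));
    return 0;
}
```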

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Wednesday December 05, 2012 @03:28AM (#42189195) Homepage

    I made it this far down the page before saying it, but I can't hold back any more.

    You have absolutely no clue what you're doing

    that's right - i don't. that's why i'm asking for people's input.

    and because of that, if you're leading this project, I doubt any of it exists.

    that's right: it doesn't. the idea is to get it made, with as little risk as possible, using building blocks that have been proven as much as is possible.

    anything that's in the "planning" phase doesn't exist until it actually exists. what's wrong with that? if everyone followed the line you're proposing, nobody would ever make anything, would they?
