
Toward An FSF-Endorsable Embedded Processor

lkcl writes about his effort to go further than others have and actually get a processor designed for Free Software manufactured: "A new processor is being put together: one that is FSF-Endorsable, contains no proprietary hardware engines, yet whose 800MHz 8-core version would, at 38 GFLOPS, be powerful enough on raw GFLOPS performance figures to take on the 3GHz AMD Phenom II X4 940, the 3GHz Intel i7 920 and other respectable mid-range 100-watt CPUs. The difference: power consumption in 40nm for an 8-core version would be under 3 watts. The core design has been proven in 65nm and is based on a hybrid approach, with its general-purpose instruction set designed from the ground up to help accelerate 3D graphics and video encode and decode. An 8-core 800MHz version would be capable of 1080p30 H.264 decode, with peak 3D rates of 320 million triangles/sec and a peak fill rate of 1600 million pixels/sec. The unusual step, for the processor world, is being taken of soliciting input from the Free Software community at large before going ahead with putting the chip together. So have at it: if given carte blanche, what interfaces and what features would you like an FSF-Endorsable mass-volume processor to have? (Please don't say 'DRM' or 'built-in spyware.')" There's some discussion on arm-netbook. This is the guy behind the first EOMA-68 card (currently nearing production). As a heads-up, we'll be interviewing him live, similar to the Woz interview (intentionally this time), next Tuesday.
  • DRM (Score:5, Interesting)

    by queazocotal ( 915608 ) on Tuesday December 04, 2012 @02:29PM (#42181975)

    DRM, in some aspects - trusted computing - can be a positive thing.
    My ideal system would have a root key I can set; without software signed by that key, the machine is a rock.

    • A rock made of silicone. Oh, wait...

    • DRM, in some aspects - trusted computing - can be a positive thing. My ideal system would have a root key I can set; without software signed by that key, the machine is a rock.

      No, trusted computing is pointless. Let me explain: exploits are caused by bugs in your software. Even if it is signed and encrypted, bugs that allow stack smashing or heap pointer overwrites (buffer overruns) mean your signed and encrypted "trusted computing" can still end up being a remote code execution vulnerability. See also: return-oriented programming.

      Now perhaps you could brick your machine if the boot sector's been tampered with, but why would an exploit writer bother when they can jus

      • by Jartan ( 219704 )

        He's talking about CPU enforced no-execute rules. The thing you're getting upset about is a bios trick to stop certain root exploits.

        Two completely different things. Ironically the thing he's talking about stops the exploits you list.

  • Scientific Computing (Score:5, Interesting)

    by simonbp ( 412489 ) on Tuesday December 04, 2012 @02:30PM (#42181991) Homepage

    IMHO, they really need to push this for scientific computing initially, as those buyers tend to buy in bulk and are not very binary-dependent. They are claiming it is so low-power (2.7 W) that it would be easy to put an array of, say, eight of them on a 1U motherboard for 64 cores.

    • by korgitser ( 1809018 ) on Tuesday December 04, 2012 @03:26PM (#42182931)

      imagine a beowulf cluster of those!

    • by Durinia ( 72612 )

      And interconnect them with what?

      HDMI?

      • by lkcl ( 517947 )

        And interconnect them with what?

        HDMI?

        yeahhh... i hadn't really thought that one through. you kinda need 4 gigabit links, really (minimum)... hmmm....

    • As long as we're comparing mysterious numbers*, let's take a closer look.
      Future Chip:
      38 GFLOPS / 2.7W = ~14 GFLOPS/W
      Tesla K20x:
      3950 GFLOPS / 235W = ~16.8 GFLOPS/W
      Radeon 7970:
      3790 GFLOPS / 280W = ~13.5 GFLOPS/W

      So I'm not seeing a power advantage here. More questions: does the chip do double precision, and what's the rate? What's the memory bandwidth? Is there support for ECC/scrubbing, which is essential for Big Deal calculations? (The 7970 doesn't support ECC. The Tesla does, and it had better given the amo

      • by Durinia ( 72612 )

        For the record, the Tesla K20x TDP numbers include the memory (it's for the entire card).

        A comment below says that it uses DDR3 1333. Total bandwidth of that, being extremely generous and giving them 6 memory channels (unlikely) puts you in the neighborhood of about 1/10th the memory bandwidth of the K20.

        Combine that with the "how do you connect this to other things" problem, and this chip has no chance in scientific computing.

        • by Durinia ( 72612 )

          As a follow-up - it's one DDR3 channel - maybe 2. That puts it at about 1/30th of a K20.

          People have tried to creep into Scientific Computing with processors like this (tile-based perf-per-watt SoCs). They haven't succeeded (see: Adapteva, Tilera, etc.). And they have much bigger budgets. :)

  • by fnj ( 64210 ) on Tuesday December 04, 2012 @02:30PM (#42181997)

    I've always wondered why it's assumed that separate CPU and GPU are somehow the most efficient use of silicon. It just seemed counterintuitive to me. If the proposed processor is as efficient as claimed, it looks like I was right to wonder. It would absolutely annihilate Intel and AMD on a performance-per-watt basis.

  • by dywolf ( 2673597 ) on Tuesday December 04, 2012 @02:30PM (#42182003)

    ok more than a little.

  • Didn't see any mention of hardware floating point unit(s). Is that just a given these days?

    • by fnj ( 64210 )

      I would think so. Even a five-buck, embedded-oriented ARM like the Cortex-M4 (sans MMU, hence not suitable for a real OS) has one nowadays.

    • by lkcl ( 517947 )

      Didn't see any mention of hardware floating point unit(s). Is that just a given these days?

      i believe so, yes - that's why i mentioned the GFLOPS figure. apologies if it wasn't made clear.

    • Didn't see any mention of hardware floating point unit(s). Is that just a given these days?

      Kind of hard to be used for graphics if not... Hint: "designed from the ground up to help accelerate 3D Graphics".

  • by CajunArson ( 465943 ) on Tuesday December 04, 2012 @02:39PM (#42182121) Journal

    Those performance numbers are pure fantasy. First off, the 38 GFlops is undoubtedly referring to single precision operations while the x86 processors mentioned in TFS are doing that much in *double* precision mode. Second off, the 38 GFlop number is a simple arithmetic estimate of what the magic chip could do IFF every functional unit on the chip operated at 100% perfect efficiency. Guess what: a real memory controller that could keep the chip fed with data at that rate will use > 3 watts all by itself. This chip won't have a real memory controller though, so you can bet the 38 GFlop performance will remain a nice fairytale instead of a real product.

    • by godrik ( 1287354 ) on Tuesday December 04, 2012 @02:57PM (#42182445)

      Indeed, high gigaflops is easy; useful high gigaflops is hard. You can easily build a processor that supports only float addition and nothing else, with a 1024-bit SIMD register clocked at 4 GHz. And voila, you get 128 GFLOP/s per core. Problem is: it is useless.

      The question is not how many adds or muls you can do per second in an ideal application for your architecture. The question is how many adds or muls (or whatever you need to measure) you can do per second on a real application.

      For instance, the Top500 uses Linpack, which measures how fast one can multiply dense matrices. That problem is of interest to only a small number of people.

    • Also (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Tuesday December 04, 2012 @03:01PM (#42182523)

      Compare it to a more modern processor. You want floating point performance? Take a look at a Sandy/Ivy Bridge. My 2600k, which I have set to run at 4GHz, gets about 90GFlops in Linpack. The reason is Intel's new AVX extension, which really is something to write home about for DP FP. Ivy Bridge is supposedly a bit more efficient per clock (I don't have one handy to test).

      If you are bringing out a processor at some point in the future, you need to compare to the latest products your competitors have, since that is realistically what you face. You can't look at something two generations old, as the 920 is, and say "Well we compete well with that!" because if I'm looking at buying your new product, it is competing against other new products.

      • The summary is building expectations so much that I can't help feeling this is a massive flop (yup, I did that) waiting to happen.

        I'd be really impressed if they did match the performance of the 920, even if it'll probably be somewhere between 5-10 years old by the time this Free CPU sees production and gets into consumer hands. That's quite a complex, performant CPU right there to match. But the summary has so many holes, I really have a hard time believing they'll get anywhere near the 920 for general-pur

    • The one anecdotal piece I have to complement the above is that I was recently doing some work on an application in C to improve the performance of some legacy Fast Fourier Transform code compiled with GCC. The original code was doing a bunch of heavy lifting with double precision floats. I optimized the algorithm as far as I could without changing any data types and, as a last step I changed the doubles to pure 32bit integer arithmetic expecting at least twice the execution speed compared to the doubles on
    • by AdamHaun ( 43173 ) on Tuesday December 04, 2012 @03:41PM (#42183141) Journal

      Forget the performance numbers, the whole thing is bullshit:

      * The proposal is dated December 2, 2012 for an advanced kitchen sink SoC with silicon in July 2013? Really?

      * Their never-released-to-market CPU design that beats an ARM on one video decoding benchmark is ready to go, except they need to move it to a new process, double the number of cores, and speed it up by 30%. Trivial, I'm sure.

      * This bit here:

      What's the next step?

      Find investors! We need to move quickly: there's an opportunity to hit
      Christmas sales if the processor is ready by July 2013. This should be
      possible to achieve if the engineers start NOW (because the design's
      already done and proven: it's a matter of bolting on the modern interfaces,
      compiling for FPGA to make sure it works, then running verification etc.
      No actual "design" work is needed).

      The design is done! They just have to, you know, grab their perfectly-working peripheral IPs from unstated sources, "bolt them on" to their heavily-modified CPU, and then compile for FPGA. And maybe some timing simulations for their new 40nm process, but I'm sure that won't turn up any problems. And "verification, etc." (aka the part where you actually make it work). And fixing any problems found in silicon. But no *actual* design work is needed.

      I have spent the last three months in my day job on a team of a dozen people writing design verification test cases for a new SoC. Fuck you for talking like that's nothing.

      * They're going to hit "Christmas sales"? So despite being a real honest for-profit multi-million-selling product, we swear, they're still targeting a consumer shopping season. Hint: you want your chip to go into other products. Products sold at Christmas time are designed long before Christmas. Probably more than six months before, i.e. July 2013. Oops.

      * No mention of post-silicon testing, reliability studies, or even whether they've got a test facility lined up, or what kind of resources they need for long-term support. I said it when OpenCores pulled this crap [slashdot.org], and I'll say it again. Hardware is not software. You have to think about this stuff. Yield and reliability are what determine whether other companies buy your stuff and whether you make money from it.

      Let me offer some advice to anyone who wants to change the semiconductor world overnight with the magic of open source: start small. Really small. Even Linus Torvalds didn't start out planning to conquer the world. Maybe you could start by trying to get open source IP blocks into commercial products. Once there's a bench of solid, field-tested designs, *then* we can talk about funding an attempt to put it all together. But coming out of nowhere and asking for $10 million is not the way to start. Just ask OpenCores -- their big donation drive got them a grand total of $20 thousand [opencores.org].

      • Thanks for that post.. extremely informative and it's good to know that people who really have to deal with these issues on a daily basis are paying attention.

        As I said above: I have no problem with a project to build an "open" chip for education & hobbyists, but scam artists that know how to fool their marks with the correct buzzwords and hype are not doing anyone any favors.

      • by Durinia ( 72612 )

        Thanks for saving me a lot of typing. :-)

      • Re: (Score:2, Interesting)

        by lkcl ( 517947 )

        Forget the performance numbers, the whole thing is bullshit:

        * The proposal is dated December 2, 2012

        pay attention 007: we're aiming for mid-2013, not yesterday :) literally yesterday: today's the 4th, right? also, we're open to all kinds of investment opportunities. this article is a heads-up.

        also, bear in mind: the core design's already proven. mid-2013, whilst pretty aggressive, is doable *SO LONG AS* we *DO NOT* do any "design" work. just building-blocks, stack them together, run the verification tools, run it in FPGAs to check it works, run the verification tools again... etc. etc.

        the teams we're

        • by AdamHaun ( 43173 ) on Tuesday December 04, 2012 @04:56PM (#42184251) Journal

          pay attention 007: we're aiming for mid-2013

          Yes, that's what I said:

          * The proposal is dated December 2, 2012 for an advanced kitchen sink SoC with silicon in July 2013? Really?

          Perhaps my phrasing was unclear. I am skeptical of a six-month development process.

          also, bear in mind: the core design's already proven.

          By who? To what specs (temperature, voltage, operating life)? Using what methodology?

          mid-2013, whilst pretty aggressive, is doable *SO LONG AS* we *DO NOT* do any "design" work. just building-blocks, stack them together, run the verification tools, run it in FPGAs to check it works, run the verification tools again... etc. etc.

          You know you can't go straight from RTL to silicon, right? You need timing sims and physical layout. Those are not trivial and they cannot be totally automated.

          the teams we're working with know what they're doing. me? i have no clue, and am quite happy not knowing: this is waaay beyond my expertise level and time to learn.

          Okay, here's the part that confuses me. You came up with an idea, talked to other people with expertise about doing it, and it sounds like you know who's working on it. All of that is fine. What I don't understand is why you are acting as the leader/spokesman for a project you know almost nothing about. Who are these other groups? The link at the bottom of your proposal is to a no-name Chinese semiconductor company that formed last year and has no products listed. Are they doing the RTL, layout, and verification? Who's doing the silicon testing? What foundry will you use?

          The reason I'm being so harsh here is because you're asking for a lot of money with very little credibility. There is nothing in your proposal, your CV, or your comments to suggest that you are competent to work on a project like this. So who's doing the work? Why aren't their names on the proposal? Who has the experience and leadership to make sure the project actually gets done? Why are you "quite happy not knowing" what they're doing when you're the one trying to secure funding?

          If you come back here in 2013 with a working chip I'll be the first to apologize, but right now I see very little reason to take this seriously.

  • H.264 can't (legally) be encoded without paying for a license... interesting choice for an example. Yes, decoding is free at the moment, but these patents will be in effect until around 2020 or later and are part of the highly patented MPEG 4 standard.

    • by Splab ( 574204 )

      Why yes it can; maybe not in the US, but the rest of the world is somewhat more sensible (though arguably still stupid) about patents.

  • by lobiusmoop ( 305328 ) on Tuesday December 04, 2012 @02:42PM (#42182201) Homepage

    I know Allwinner did a separate version of their A10 chip without HDMI (A13) to avoid heavy licensing costs, would the HDMI push the cost of the chip up much?

    • would the HDMI push the cost of the chip up much?

      I doubt very much that the people who control the HDMI spec would allow an FSF-endorsed CPU to do this anyway -- the FSF has no interest in enforcing DRM, and HDCP pretty much requires you to implement it end to end.

      I'm not sure you could reconcile those two views.

      • by lkcl ( 517947 )

        would the HDMI push the cost of the chip up much?

        I doubt very much that the people who control the HDMI spec would allow an FSF-endorsed CPU to do this anyway -- the FSF has no interest in enforcing DRM, and HDCP pretty much requires you to implement it end to end.

        I'm not sure you could reconcile those two views.

        funny you should mention this. i raised it with Dr Stallman because the same sort of thing occurred to me: why support DRM?? well... his answer was: the DRM in HDMI is so utterly broken that it's as if it didn't matter. therefore, he's okay with it.

        which i find absolutely hilarious. DRM is okay, as long as the keys are available, one way or the other [thus making the DRM irrelevant, one way or the other]. this is primarily what the fuss over the GPLv3 is about, because of the endemic tivoisation that o

  • by WaffleMonster ( 969671 ) on Tuesday December 04, 2012 @02:44PM (#42182237)

    I want a REAL cryptographic quality random number generator based on thermal noise or some other quantum mumbo jumbo.

    https://www.eff.org/rng-bug [eff.org]

    Let's at least make the spooks have to work for a living :)

    • You could have a radioactive sample that emits particles randomly, and use that as the basis for your random number generator. The feature I would like to see in this chip, though, is virtualization acceleration similar to what the better x86 and x86_64 chips now have. Maybe throw in hardware decoding of open media formats like Ogg too.

    • I write software that requires randomness to seed some key generation routines, for inverse DRM -- Where the user can validate mods other users make, or that my dev patches are valid (security, a value add, not the "prevent game from running" sense). When I do need randomness, I simply ask for it. I require the user to pound on the keyboard and randomly shake the mouse about, using the inputs to generate a bit of randomness to generate state and bit selection of the other random inputs for constructing t

  • Vaporware? (Score:5, Interesting)

    by WoOS ( 28173 ) on Tuesday December 04, 2012 @02:55PM (#42182401)

    From TFA:

    >The deadline:
    > July 2013 for first mass-produced silicon
    >
    >The cost:
    > $USD 10 million

    This poster has either no idea or is dreaming. In 6 months he will not have an SoC through potentially several tape-outs, having first done System Engineering, Design, Synthesis, Layout, Verification, Validation, Documentation, ... and seemingly all without an existing organization. Or are SoC manufacturers lately doing short-term build-to-order processors? And the 10 million is not going to cover the necessary cost for all of the above. The masks alone might be that expensive, depending on the number of tape-outs necessary (which - without an existing organization and working design flow - will be a lot).

    • Re:Vaporware? (Score:4, Informative)

      by lkcl ( 517947 ) <lkcl@lkcl.net> on Tuesday December 04, 2012 @03:24PM (#42182907) Homepage

      From TFA:

      >The deadline:
      > July 2013 for first mass-produced silicon
      >
      >The cost:
      > $USD 10 million

      This poster has either no idea or is dreaming.

      both. i have no clue - that's why i posted this article online, as a way to solicit input and to double-check things - and i'm dreaming of success.

      In 6 months he will not have an SoC through potentially several tape-outs, having first done System Engineering, Design, Synthesis, Layout, Verification, Validation,

      what i haven't mentioned is that one of my associates (my mentor) used to work for LSI Logic, and he later went on to be Samsung's global head of R&D. he knows the ropes - i don't. we've been in constant communication, and also in touch with some people that he knows - long story but we have access to some of the best people who *have* done this sort of thing.

      Documentation,

      ahh, my old enemy: Documentation. [kung fu panda quote. sorry...] - yes, this is probably going to lag. at least there will be source code which we know already works. not having complete documentation has worked out quite well for the Allwinner A10 SoC, wouldn't you agree?

      also, because this is going to be a Rhombus Tech Project, the CPU will *not* be available for sale separately. it will *ONLY* be available as an EOMA-68 module. no arguments over the hardware design. no *need* to do complex hardware designs. the EVB Board will *be* the "Production Unit" - just in a case, instead.

      so by deploying that strategy, Documentation is minimised. heck, most factories in China have absolutely no clue what they're making. it might as well be shoes or handbags, for all they know. heck, many of the factories we've seen actually *make* shoes and handbags, and their owners have gone "i know, let's diversify, let's make tablets". you think they care about Documentation? :) ... ok, i know what you mean.

      ... and seemingly all without an existing organization.

      yeah. it's amazing what you can do if you're prepared to say "i don't know what i'm doing" and ask other people for help rather than try to keep everything secret, controlled and "in-house". my associates are tearing their hair out, i can tell you :)

      Or are SoC manufacturers lately doing short-term build-to-order processors. And the 10 million are not going to cover the necessary cost for all of the above. The masks alone might be that expensive depending on the number of tape-outs necessary (which - without an existing organization and working design flow - will be a lot).

      well, because i know nothing, i've asked people who do know and have a lot of experience. the procedure we'll be following is to get an independent 3rd party - one that partners with the foundry - and get them to do the verification, even if the designers themselves have run the exact same tools. if it then goes wrong, we can tell them to fix it... *without* the extra cost of another set of masks. a kind of insurance, if you will.

      but the other thing we are doing is: there will be *no* additional "design". it's a building-block exercise. the existing design is already proven in 65nm under the MVP Programme: USB-OTG works, DDR3/1333mhz works, RGB/TTL works, the core works, PWM works, I2S works, SD/MMC works and so on. all we're doing is asking them to dial up the macros to put down a few more cores, and surround it with additional well-proven hard macros (HDMI, USB3, SATA-II).

      does that sound like a strategy which would, in your opinion, minimise the costs and increase the chances of first time success?

      • by WoOS ( 28173 )

        > Yes, this is probably going to lag. at least there will be source code which we know already works.
        > not having complete documentation has worked out quite well for the Allwinner A10 SoC, wouldn't you agree?

        I don't know the A10 with the euphemistic name, but the typical SoC/MCU I'm familiar with has documentation running to thousands of pages. And most of it covers internal blocks, not external connections, which might see a reduced need by delivering it only on a board - although then you need to document t

  • by vlm ( 69642 ) on Tuesday December 04, 2012 @02:58PM (#42182459)

    hardware support for free formats, as opposed to non-free?

  • "So have at it: if given carte blanche, what interfaces and what features would you like an FSF-Endorseable mass-volume processor to have?"

    Standard-size chip socket, with adapter springs and guides for using off-the-shelf cooling implements (like Zalman fans and watercooling) made for other CPUs.

    Need PCI and PCI Express, preferably at least 24 lanes, hopefully as many as 48 lanes.

    Behind this, fast northbridge/southbridge buses to keep up with the following. I think AMD open-sourced HyperTransport, so front-side bussing should not be an issue.
    • by lkcl ( 517947 ) <lkcl@lkcl.net> on Tuesday December 04, 2012 @03:51PM (#42183269) Homepage

      "So have at it: if given carte blanche, what interfaces and what features would you like an FSF-Endorseable mass-volume processor to have?"

      thank you for taking me literally! really appreciated!

      Standard-size chip socket, with adapter springs and guides for using off-the-shelf cooling implements (like Zalman fans and watercooling) made for other CPUs.

      ah. this is going to be a 15mm x 15mm BGA with only around 320 pins. it's tiny. ok, that might have to be revisited now that i thought about doing an 8-core monster - 3 watts in a 15 x 15mm package is hellishly hot.
      i'm still debating whether it should have dual 32-bit DDR3 lanes. even so, that only adds an extra... 75 or so pins, bringing it up maybe to 19 x 19 mm.

      Need PCI and PCI Express, preferably at least 24 lanes, hopefully as many as 48 lanes.

      ahhh... PCI express is a bug-bear. that many lanes would, on their own, turn this into a 12 to 30 watt part: right now we're aiming for a different market. i'm happy to be steered in a different direction if it can be shown that it's a genuinely good idea, with a high chance of return on investment.

      Behind this, fast northbridge/southbridge buses to keep up with the following. I think AMD open-sourced HyperTransport, so front-side bussing should not be an issue.

      ah this is an embedded processor: they don't have northbridge/southbridge buses [at all]. those are reserved for CPUs in the 10+ watt market.

      If you're still mulling over the instruction set, a built-in crypto processing unit would ROCK. Implement Intel's AES-NI or something similar, plus more for Twofish, Serpent, and other fairly mainstream, modern, unbroken free/open encryption algorithms. Then add hash instructions for the entire SHA family of hashes, MD6, Whirlpool, Tiger, RIPEMD, and GOST.

      ok - this is a general-purpose processor that *happens* to have been designed to be capable of doing a GPU and a VPU's job. hmmm... i wonder whether their instruction set can do crypto primitives.. hmmm.... yeah, that's a great question to ask. i'll get back to you on that.

      Good USB 3 support, with legacy support for 1 and 2. Not only do I want some ports on the back, I want at least 3-4 banks of header pins on a theoretical motherboard for front-panel devices and ports. They should be USB 1, 2, 3. Solid high-speed memory controller at a premium.

      definitely going to have 1x USB-OTG, probably 2x USB2-HOST, and at least one USB-3.

      Universal SATA support for revisions 1, 2 and 3 (1.5 Gb/s, 3.0 Gb/s and 6.0 Gb/s respectively), built-in RAID controller. eSATA would help too.

      i'm reluctant to push this IC towards 6gb/sec - it'd be by far and above the fastest bit of I/O on the chip. RAID i'd be concerned about pushing up the cost for the mass-volume uses [which wouldn't use it]. eSATA is _great_. i'd forgotten about that.

      Scalable audio chipset capable of up to 8.1 surround, stereo input, S/PDIF and all the other great audio features.

      SPDIF - i'd not *entirely* forgotten about that - will remember to make a mental note. audio i would like to rely on the processor itself for that sort of thing (for basic audio - headphones and the like), otherwise handing off to a standard I2S/AC97 audio IC for cases where people really want more complex audio. there are 3 I2S interfaces i think.

      so, yeah - i want audio to be done more like the TI McBSP. DMA-driven, but use the main processor for audio handling. keep it simple.

      DDR3 RAM, or something comparable.

      already done. 1333mhz. bit concerned personally about the power consumption of 1333mhz, i know that 800mhz is about 0.3 watts for example: 1333mhz is starting to get to 1 maybe 1.5 watts all on its own!

      Unlocked bootloader with firmware m

    • by lkcl ( 517947 )

      Split the graphics chipset onto a separate PCI-E board, and sell it separately, so that it works with x86.

      in x86-land, yes. in ARM-land, yes. MIPS, funnily enough no: look up MIPS64-ASE-3D. ingenic jz4760 and below: no (look up X-Burst).

      this chip is more like MIPS-with-3D-ASE, or Ingenic-with-XBurst. you *can't* separate the GPU from the CPU: they're one and the same. ok, you could... but you'd end up with two identical processors connected by some sort of fast bus... why bother? why not just double the number of cores?

  • by gr8_phk ( 621180 ) on Tuesday December 04, 2012 @03:10PM (#42182667)
    Off the top of my head:

    0) A proper MMU and at least 1Meg of cache
    1) 64bit - If not, there will be a need for yet another version at some point. Just do this.
    2) Double precision floating point in hardware (for + - * / and preferably rsqrt)
    3) GCC support.
    4) LLVM support
    5) LLVM-Pipe for OpenGL support
    6) It would be nice if some instructions were optimized for running virtual machines.

    I haven't looked into what makes sense for #6, but with all the VMs around it would be nice to have them run efficiently.
    • by lkcl ( 517947 )

      Off the top of my head:

      always the best way :)

      0) A proper MMU and at least 1Meg of cache

      it's got 64k I & D 1st level, yes to the proper MMU, and the dual-core version has 256k 2nd-level (just enough). they reckon for 8-core that'll have to be increased.

      1) 64bit - If not, there will be a need for yet another version at some point. Just do this.

      yes. wellll aware of this :) have to be scheduled for the next version unfortunately.

      2) Double precision floating point in hardware (for + - * / and preferably rsqrt)

      it must have. i'll ask though.

      3) GCC support.

      ah no. this design is too different for gcc to handle. their compiler expert - someone with over 15 continuous years expertise in compiler design - chose open64 instead (which used gcc's front-end at

  • by Dishwasha ( 125561 ) on Tuesday December 04, 2012 @03:10PM (#42182679)

    So will this 100% free processor follow a 100% free fabrication process? What is the use of worrying about dependencies on proprietary vendors' architectures when replacing a 3rd- or 4th-generation processor with an equivalent part still requires production through a proprietary vendor's manufacturing process?

    • by lkcl ( 517947 )

      So will this 100% free processor follow a 100% free fabrication process?

      interesting question! if it became an issue, i'd get quite pissed and would, if forced to, look for alternative processor designs. that's the whole point of the EOMA-68 and the Rhombus-Tech strategy: the products are *not* dependent on one particular CPU - processors are on *modules* that are completely interchangeable. but... i like the idea. i'll have to think how to handle this one - it's not actually our design.

      • I made it this far down the page before saying it, but I can't hold back any more.

        You have absolutely no clue what you're doing and because of that, if you're leading this project, I doubt any of it exists.

        You're trying to sell vaporware.

        Go sod off you damn troll.

        • by lkcl ( 517947 ) <lkcl@lkcl.net> on Wednesday December 05, 2012 @03:28AM (#42189195) Homepage

          I made it this far down the page before saying it, but I can't hold back any more.

          You have absolutely no clue what you're doing

          that's right - i don't. that's why i'm asking peoples' input.

          and because of that, if you're leading this project, I doubt any of it exists.

          that's right: it doesn't. the idea is to get it made, with as little risk as possible, using building blocks that have been proven as much as is possible.

          anything that's in the "planning" phase doesn't exist until it actually exists. what's wrong with that? if everyone followed the line you're proposing, nobody would ever make anything, would they?

  • So what is this to be attached to? A virtual motherboard with non-Nvidia / Intel / Marvel / Broadcom... virtual chipsets? This will be quite a long march to the desktop....
    • by lkcl ( 517947 )

      So what is this to be attached to? A virtual motherboard with non-Nvidia / Intel / Marvel / Broadcom... virtual chipsets? This will be quite a long march to the desktop....

      not really. the plan is to release it exclusively as an EOMA-68 module, which itself will be both the EVB *and* the mass-production PCB (just in a metal case). what we'll do is the same thing as done with the Allwinner A10 card: make the module power-able from the USB-OTG as a stand-alone computer, that also has an HDMI output. so it'd be a larger version of these USB-dongle-computers like the MK-802, except with more "oomph" and the option of being able to plug it directly into desktop chassis', tablet

  • I'd like to see support for a small chunk of FPGA fabric to allow for specialized instructions. I have no idea if FPGAs can be implemented free of patents.

  • First, a boatload of cores. 8 is a good start, but I want 128 or 1024. One idea would be to have variable cores: a handful of crazy-powerful cores, and then another 1000 lightweight cores for all kinds of lightweight stuff.

    But the difficult question is how compatible with existing things to make it. If you venture too far into the land of cool, you might end up with only a tiny bunch of hardcore followers, like Lisp and Erlang presently have. I am not saying that Lisp or Erlang are good or bad, but wh
  • I would recommend MMIX (thanks, Knuth) with a GPU alongside.
  • Remember OpenMoko [openmoko.com], the open source cell phone? By the time it shipped, it was obsolete. And they didn't even have to do IC design.

    There are parts, such as the Allwinner family, which have no US intellectual property. That's how they can ship a rather impressive ARM SoC for $7.
