
MIPS Technologies Joins RISC-V, Moves To Open-Source ISA Standard (tuxphones.com)

MIPS Technologies, the company that had been synonymous with the MIPS processor architecture, will now be developing processors based on the RISC-V architecture. TuxPhones reports: The MIPS silicon manufacturer is one of the oldest RISC chip designers, and its processors have been used in numerous systems since the late '80s. Characterized by clean and efficient designs that allowed adaptation to varied applications, the company was considered one of the most innovative in the market during its golden age - to the point that Windows had a MIPS port in the early '90s. However, the company had been struggling with an ever-shrinking market share and risked bankruptcy in recent years, ultimately leading to its acquisition by start-up Wave Computing, which itself faced bankruptcy last year.

How the company re-emerged from bankruptcy just weeks ago is surprising, but not at all irrational: in its official statement, (the new) MIPS announced it has become a member of RISC-V International, the non-profit organization that manages the fully open ISA, essentially replacing its current architecture with the de facto open chip standard in its entirety. Licensing of the original MIPS architecture to third parties will probably be managed as before, so that the "old" architecture will remain available as needed. This is officially known as the "8th generation" of MIPS chips, marking a complete architectural break from the previous seven iterations - leaving the old architecture behind and fully embracing the new one.
The Electronic Engineering Journal says it's likely that the new MIPS will continue to honor pre-existing licensing agreements, but it's unclear what level of support the company will offer for older MIPS-based chip designs.


Comments:
  • by battingly ( 5065477 ) on Monday March 08, 2021 @08:55PM (#61139080)
    Let's hear from the old guys with comments like "I remember using a MIPS processor....".
    • MIPS? Heck I had motherboards with ISA bus slots on them; the ISA bus [wikipedia.org] predates MIPS architecture by several years [wikipedia.org].

      In other words thanks for making me feel old, sonny. Now kindly get off my lawn.
    • Re:Cue the old guys (Score:5, Informative)

      by fred6666 ( 4718031 ) on Monday March 08, 2021 @09:18PM (#61139122)

      MIPS is still quite popular, you just don't know you are using it. It powers many devices including routers, thermostats, etc.

      • It was in the PlayStation 2 too, if I recall correctly!
        • by Z80a ( 971949 )

          SGI, PS1 and N64 too.
          They were the original people doing the ARM strategy of just licensing the core to everyone.

          • Re:Cue the old guys (Score:5, Informative)

            by Bert64 ( 520050 ) <.moc.eeznerif.todhsals. .ta. .treb.> on Monday March 08, 2021 @09:48PM (#61139194) Homepage

            They were also 64-bit capable years before other architectures, especially ARM.
            When ARMv8 was new and untested, MIPS64 had 15+ years of experience and existing toolchain support. They squandered the advantage they had.

            • Re:Cue the old guys (Score:5, Informative)

              by ChunderDownunder ( 709234 ) on Monday March 08, 2021 @10:01PM (#61139226)

              Blame SGI. They bet the farm on Itanium, which also claimed the lives of DEC Alpha and PA-RISC.

              • SGI was always a bit elitist; they seemed to want above-premium pricing, though of course the N64 was the exception.
                • by Viol8 ( 599362 )

                  Unfortunately, like all of the Unix workstation vendors, they still tried to charge premium prices into the late 90s, when for far less you could have bought a PC with a top-of-the-range graphics card that had the same performance (albeit perhaps not the same reliability).

                  • by teg ( 97890 )

                    Unfortunately, like all of the Unix workstation vendors, they still tried to charge premium prices into the late 90s, when for far less you could have bought a PC with a top-of-the-range graphics card that had the same performance (albeit perhaps not the same reliability).

                    They had to.

                    While paying more is always unwelcome, obviously, the fixed costs for their CPUs weren't that much lower than Intel's - but they had only a tiny volume of sales to spread that cost over. Also, Intel was far ahead of anyone in process technology - beating their performance by being smarter and more targeted got harder and harder. It's only in the last five years or so that Intel has really lost its way there, unable to progress from 14 nm.

                    This is also why moving to Itanic was so widespread... Intel were

              • by m5shiv ( 877079 ) * on Tuesday March 09, 2021 @01:34AM (#61139558) Homepage
                Let's get this straight - there are only two guys to blame for squandering everything we had at SGI: 1) Ed McCracken, for fighting with Jim Clark and thus forcing him to leave and start Netscape; 2) Ed McCracken, for refusing to sell PC graphics cards because "SGI is a system company," selling the Odyssey graphics team to Nvidia, and cancelling the next-gen Bali graphics; 3) Rick Beluzzo, who just about knew how to sell printers, cancelled all future MIPS development, and decided that Itanium/Windows was the future, squandering everything else.
              • SGI had no chance to survive (make your time)

                PC processors became too good for small companies to compete with. SGI simply could not continue to develop and maintain MIPS. They were always doomed.

                Remember, SGI didn't bet the farm just on Itanic, they made PCs too. But PC video cards were getting too good too fast, and they couldn't compete with commodity GPUs either.

                TL;DR: SGI was always doomed, and MIPS with it.

                • by teg ( 97890 )

                  SGI had no chance to survive (make your time)

                  PC processors became too good for small companies to compete with. SGI simply could not continue to develop and maintain MIPS. They were always doomed.

                  Remember, SGI didn't bet the farm just on Itanic, they made PCs too. But PC video cards were getting too good too fast, and they couldn't compete with commodity GPUs either.

                  TL;DR: SGI was always doomed, and MIPS with it.

                  SGI - and other workstation manufacturers - are good examples of The Innovator's Dilemma [wikipedia.org]. An excellent book, read it if you haven't already.

      • Re:Cue the old guys (Score:4, Informative)

        by ShanghaiBill ( 739463 ) on Monday March 08, 2021 @09:39PM (#61139176)

        MIPS and RISC-V are very similar.

        They were both designed by some of the same people.

        • I hate that aspect of RISC-V. It enshrines the C-grade idea of never bothering to check for integer overflow. With just a few more transistors the check comes for free and occasionally traps serious errors.

          But C does not do it. So Java does not. Nor does anything else new. So the bean counters saw it was not used. And now it never will be used, because testing for integer overflow is very expensive on RISC-V. Self-fulfilling.

          • Checking for integer overflows in the stack? Using an overflow flag in assembly defeated the point of C and assembly. Overflows really became a problem when heap operations in large-memory methods would trip the stack run-up - problems caused by new methods that did not need the granular control of the C/assembly they were running on top of. I hate how it's the fashion to criticize the old standard languages in favor of the new, and to do so while expecting the old standard to continue to be used and ac
          • by Misagon ( 1135 )

            Indeed. That is my main gripe with RISC-V. It makes it difficult to do arbitrary-precision arithmetic (bignums) ... or even 64-bit arithmetic on 32-bit RISC-V -- which most existing RISC-V implementations are because they cater to the embedded market.

            Rust and Ada/SPARK also require checks for integer overflow, and their popularity is rising, especially for embedded systems.
            Python is also rising as an embedded language, especially for teaching, and its integers are arbitrary-precision integers (promote

          • I did not realize Rust did some integer-overflow checking; good for them.

              C# can do it, but usually does not. It does it because VB6 always did it, and that caught several nasty errors.

              The overhead should be zero. It is just testing the sign bits at the end with a few gates. And there should be a hardware trap, like an illegal memory reference. And it is an error condition, so the state does not have to be perfectly defined; compiler optimizations are OK.
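
              As a concrete illustration of the overhead this sub-thread is arguing about, below is a minimal C sketch (mine, not from any of the posters) of the two idioms involved: a checked add using the GCC/Clang __builtin_add_overflow builtin, and the portable carry test for multi-word (bignum-style) addition that a compiler falls back on when the ISA exposes no overflow or carry flag. The function names are made up for the example.

                #include <stdint.h>
                #include <stdio.h>
                #include <stdlib.h>

                /* Checked addition: on a flag-based ISA this is an add plus a
                   jump on the overflow flag; on RISC-V, which has no flags
                   register, the compiler must emit extra compare/branch
                   instructions to detect the overflow. */
                static int64_t checked_add(int64_t a, int64_t b)
                {
                    int64_t sum;
                    if (__builtin_add_overflow(a, b, &sum)) {
                        fprintf(stderr, "integer overflow\n");
                        abort();
                    }
                    return sum;
                }

                /* 128-bit addition written portably: without a carry flag, the
                   carry out of the low word is recovered with an extra unsigned
                   compare (lo < a_lo), which is exactly the overhead the bignum
                   complaint above refers to. */
                static void add128(uint64_t a_lo, uint64_t a_hi,
                                   uint64_t b_lo, uint64_t b_hi,
                                   uint64_t *r_lo, uint64_t *r_hi)
                {
                    uint64_t lo = a_lo + b_lo;
                    uint64_t carry = (lo < a_lo);   /* 1 iff the low word wrapped */
                    *r_lo = lo;
                    *r_hi = a_hi + b_hi + carry;
                }

                int main(void)
                {
                    uint64_t lo, hi;
                    add128(~0ULL, 0, 1, 0, &lo, &hi);   /* (2^64 - 1) + 1 = 2^64 */
                    printf("lo=%llu hi=%llu\n", (unsigned long long)lo,
                                                (unsigned long long)hi);
                    printf("%lld\n", (long long)checked_add(1, 2));
                    return 0;
                }

              Compiled for RISC-V or MIPS, that carry test typically shows up as an sltu instruction; flag-based ISAs can often use an add-with-carry instead.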

      • by Tool Man ( 9826 )

        Yeah, I have it in some ISP routers lying around here, and in a consumer Western Digital net-backup device in my closet. That last one runs a flavour of Debian, which no longer supports MIPS either. (Nor have I ever had patches from WD, come to think of it.)

    • I do; I even had an SGI Indy at home. It had a webcam before the web even properly existed. It ran the coolest yet most insecure, buggiest operating system in the world. The cool thing was it ran FSN - the File System Navigator - which many people may remember from the original Jurassic Park movie.

    • by AReilly ( 9339 )

      I've used MIPS (both 32-bit and 64-bit) in DEC, SGI and Sony workstations. Wonderful pieces of kit. Those were the days!

      Now, get off my lawn!

    • Let's hear from the old guys with comments like "I remember using a MIPS processor....".

      The year was 1994. I was at my first internship, working for a CAD company. Windows 3.11 was the cutting edge of the PC world (ignoring Archimedes, Amiga, etc...), ugly beige boxes abounded and 640x480 with 16 colours seemed good as a screen resolution.

      Day 2 and the machine arrived (teal Indigo2 IIRC). It was a top of the line Indy? Indigo, something of that sort with a boat load of RAM, a webcam, high resolution graphic

      • Even putting aside SGIs, basically everything else was more advanced than a PC, even a Macintosh.

        Of course, everything else was spectacularly more expensive than a PC, even a Macintosh...

          • I mean, yeah, the Suns etc. running CDE were nice and all. But the SGI was just so far ahead.

            I mean, yeah, the Suns etc. running CDE were nice and all. But the SGI was just so far ahead.

            Too bad they weren't far ahead in security. Putting xhost + into the default X session was probably a bad idea.

            I had an Indigo R3000 and later an Indy R4400SC, both with entry graphics unfortunately :)

    • Most inane first post ever? Slashdot needs a small test at login to weed out the users with little to no brainwave activity.
    • by teg ( 97890 )

      Let's hear from the old guys with comments like "I remember using a MIPS processor....".

      The first Unix I remember using was IRIX. The university I started at had a full lab of SGI Indys [wikipedia.org], which were MIPS-powered. Getting Linux at home a year and a half later was just a logical progression.

    • I've decorated my lawn with colorful 90's-era SGI machines. Indigo, Octane, etc.
      And when the neighborhood children climb and play on them like it's a jungle gym, I run outside shouting: GET OFF MY LAWN YOU PESKY KIDS! *fistshake*

    • by thsths ( 31372 )

      Exactly. Of course I have used MIPS. I think it was at uni, where the only colour UNIX system available was an SGI Indigo. It was obscure even then, and getting software to compile was not always easy.

      To be honest, I do not miss the age of UNIX. Those workstations were well built and well engineered, but the sheer amount of incompatibility between different UNIX flavours was ridiculous. Linux rightfully put an end to a lot of that.

  • We need a viable open source GPU project. All the ones that exist are half assed.

    • Re: (Score:3, Insightful)

      What possible use case is there for an open source GPU?

      Anything less demanding than the newest games is handled fine by AMD or Intel embedded GPUs.

      If you want a gaming GPU you'll need tens to hundreds of millions of dollars to get it manufactured. It will take years to get it through all the HW verification and driver writing. By the time that happens a $50 off-the-shelf GPU will outclass it.

      There are low-end GPUs out there (VideoCore, Mali) for embedded stuff, but obviously none of them are open. Altho

      • You're completely missing the point. Unfortunately.

        It's about being patent-free.

        But I think nowadays GPUs are just vector processors anyway, so some RISC-V vector additions, designed with GPUs in mind, should cover it.
        GPUs are all about the hardware design anyway (as in: massive parallelism, optimized for certain operations), not the architecture / commands you can send them.

        • I haven't missed the point. Everything is already patented; you can't make and sell them without violating patents. There's a reason the big players haven't open-sourced all their stuff: it shows just how the hardware is built, leaving them open to lawsuits.
      • What possible use case is there for an open source GPU?

        For use in an open source computer.

        Anything less demanding than the newest games is handled fine by AMD or Intel embedded GPUs.

        It's not about handled fine, it's about source code.

        Also, that stuff is NOT handled fine. Putting Intel embedded GPUs aside (because they suck rocks) even AMD's embedded GPUs have their problems in the driver department. And AMD doesn't care about compatibility except for new software.

      • Comment removed based on user account deletion
        • Nobody is going to build an open source desktop CPU either.

          A CPU for a small embedded task, sure. It'll cost 100x an off the shelf ARM device but knock yourself out.

    • Re:GPU (Score:5, Interesting)

      by AReilly ( 9339 ) on Monday March 08, 2021 @10:41PM (#61139326)

      There are some folk working on building a GPU out of an array of RISC-V cores...

      Couldn't comment on how "viable" that is.

      Personally, I like to think that the wheel of reincarnation might finally be turning back around to CPUs. I really don't _want_ a GPU. I want a chip with 100+ beefy cores, all with vector engines, and a dumb frame buffer (and heaps of memory bus bandwidth of course) and let an open-source software renderer stack do all the work. I don't _want_ to shuffle shaders off to some opaque, badly documented secondary card. I want to be able to write and debug that code just like any other piece of code. I sometimes dream of making a workstation out of an Altra Max and a DMA-pipe connected to an HDMI port...

      • by sjames ( 1099 )

        That would be a natural evolution. Kind of like how early PCs had an often-empty socket for an 8087 floating-point co-processor, and by the time the 80386 came out the instructions and execution units were built into the CPU. Floating-point-intensive software either came with different executables (with or without floating-point emulation) or occasionally would auto-detect which version of the floating-point functions to call.

        We're getting closer now that some CPUs have a (generally low end) GPU built in.

        • So far iGPU / APU is still some distance away from the likes of the 8087. We still don't have a driver-free ABI to instruct all graphical operations in a standardized, forward-or-backward-compatible way. Not even for GPU chips designed by the same company. Every time I read GPU driver release news / changelogs and see bend-over-backwards things like "this version of the driver fixes a bug in game XXX," I think the design of the GPU software interface must be seriously messed up. In an ideal world, a game that can run correctly i
          • by sjames ( 1099 )

            Agreed that there's a way to go yet. The problem now is that we have multiple high level specs that each GPU /driver is free to implement any way it cares to. What we need is a low-level spec. Operations that the GPU is to perform and an ABI for them, THEN build the high level specs/libraries on top of those elements.

        • by teg ( 97890 )

          That would be a natural evolution. Kind of like how early PCs had an often-empty socket for an 8087 floating-point co-processor, and by the time the 80386 came out the instructions and execution units were built into the CPU. Floating-point-intensive software either came with different executables (with or without floating-point emulation) or occasionally would auto-detect which version of the floating-point functions to call.

          We're getting closer now that some CPUs have a (generally low end) GPU built in.

          You're confusing the generations slightly. The 80386 was still upgradeable with a math coprocessor, the 80387 [wikipedia.org]. The first generation of Intel mainstream CPUs with an onboard FPU was the 80486. You also had a version of this with a missing/disabled FPU - the 486SX. The fun part about that one was that the math coprocessor, the i487SX [wikipedia.org], was actually a full 486DX instead of just an FPU.

          • by sjames ( 1099 )

            You are correct, I was off by a generation.

            IIRC, the 486SX started as a way to be able to sell chips where a manufacturing defect affected the FP operations, much as the Celeron was basically a regular CPU that had a defect in one bank of cache.

            • by teg ( 97890 )

              You are correct, I was off by a generation.

              IIRC, the 486SX started as a way to be able to sell chips where a manufacturing defect affected the FP operations, much as the Celeron was basically a regular CPU that had a defect in one bank of cache.

              Indeed. On the topic of 486DX... I remember buying a DX33 back in 93. With a whopping 8 MB of RAM, a 15" CRT screen, an S3 card and a hard drive it cost more - before adjusting for inflation - than a top of the line iMac today. Turbo Pascal was so much fun!

              • by sjames ( 1099 )

                I do remember that era. The various Turbo compilers really leveled the playing field. I did some Turbo Pascal but mostly Turbo C. Definitely the most affordable decent compiler until gcc became common. Way back in the Computer Shopper days

      • by AmiMoJo ( 196126 )

        A bunch of CPUs will never be as fast for rendering as a GPU, at least not within the same power/thermal envelope.

        • A bunch of CPUs will never be as fast for rendering as a GPU, at least not within the same power/thermal envelope.

          Depends on the processors: the AMD graphics cards are essentially a wide array of SIMD RISC processors with some threading stuff. They do of course have special instructions and functional blocks for things like texture sampling.

          What I suspect the OP wants is a wide array of full-strength CPUs with the fast single threading.

      • by lkcl ( 517947 )

        I really don't _want_ a GPU. I want a chip with 100+ beefy cores, all with vector engines, and a dumb frame buffer (and heaps of memory bus bandwidth of course) and let an open-source software renderer stack do all the work. I don't _want_ to shuffle shaders off to some opaque, badly documented secondary card. I want to be able to write and debug that code just like any other piece of code.

        funny you should say that because this is what we're doing with Libre-SOC. what you describe is called a "Hybrid" or "Unified" GPGPU. ICubeCorp did one (their demo IC was the IC3128), all proprietary. this is entirely libre http://libre-soc.org/ [libre-soc.org]

        we've a software-style-but-going-to-have-3D-opcodes-added 3D MESA driver being developed: http://lists.libre-soc.org/pip... [libre-soc.org]

      • That was done. It's called a 'transputer'.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

      • Personally, I like to think that the wheel of reincarnation might finally be turning back around to CPUs. I really don't _want_ a GPU. I want a chip with 100+ beefy cores, all with vector engines, and a dumb frame buffer (and heaps of memory bus bandwidth of course) and let an open-source software renderer stack do all the work. I don't _want_ to shuffle shaders off to some opaque, badly documented secondary card. I want to be able to write and debug that code just like any other piece of code.

        That's a very reasonable dream. Unfortunately the modern impediment to it is heat.

        AMD's upcoming Threadripper is rumored to consume a whopping 320 watts with all the cores it has jammed into it. Dissipating that much heat is a serious problem. AMD is only getting away with it because of their chiplet design, which allows them to have multiple smaller widely separated hotspots in their package instead of having all that circuitry right up against its neighbors, pushing heat across the entire die until the

      • by timq ( 240600 )

        Amen, brother.

  • Pff... That's been a long time coming. But open-sourcing PCI would be more useful.

  • Teaching the new guys MIPS assembly and not telling them about the branch delay slot was always good for a laugh.
    • by twosat ( 1414337 )

      The name MIPS came from Microprocessor without Interlocked Pipeline Stages, so assembly code was not straightforward to write, from what I understood.

      • MIPS assembly wasn't bad. It had its quirks, but they were well documented.

        For those that don't know, early MIPS processors had a quirk that the instruction after a branch was always executed. If you didn't know that, debugging assembly made no sense.

        • by Viol8 ( 599362 )

          Presumably by the time the core realised it had to branch, the pipeline had already moved on to the next instruction in the cache?

        • So pushing instructions straight to something like the L1 cache automatically?
        • Current-day processors still kind of do that, except that now we have speculative-execution logic that undoes the effects of the road not taken, along with branch prediction that tries to guess whether the instruction will branch or not and works on the most likely path rather than always working on the non-branch case.
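
          To make the delay-slot behaviour described above concrete, here is a minimal sketch written as GCC inline assembly inside a C function (assumptions: a pre-R6 MIPS toolchain such as mips-linux-gnu-gcc, and a made-up function name). The addu sits in the branch's delay slot and executes whether or not the branch is taken - exactly the surprise that trips up new MIPS programmers.

            /* delay_slot.c - sketch only; build e.g. mips-linux-gnu-gcc -O2 -c delay_slot.c */
            int add_if_nonzero(int x, int y)
            {
                int r;
                __asm__ volatile(
                    ".set noreorder   \n\t"  /* keep the assembler from filling the slot with a nop */
                    "beqz  %1, 1f     \n\t"  /* branch if x == 0 ...                                */
                    "addu  %0, %1, %2 \n\t"  /* ...delay slot: executes whether or not we branch    */
                    "1:               \n\t"
                    ".set reorder"
                    : "=&r"(r)
                    : "r"(x), "r"(y));
                return r;   /* x + y when x != 0, and 0 + y == y when x == 0:
                               the delay-slot addu ran in both cases */
            }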
  • Anyone? Something like pagetable.com but for MIPS or RISC-V?
  • Let's see why Slashdot will hate it anyway. ;)

  • There are many open, royalty-free ISAs these days, including OpenRISC, OpenPOWER and OpenSPARC. The nice thing about OpenRISC in particular is that it's not just an ISA, but also an entire architecture with freely available cores, suitable for commercial designs.

    OpenPOWER has the open MicroWatt core and a few open sourced IBM POWER cores. Anyone can use the POWER ISA as well and not pay a cent, even in commercial designs.

    Fun thing about RISC-V is that it is essentially a MIPS clone, with the same odditi
    • They are meaningless if you are waiting for someone else to create the next, hottest new arch to run your software on. But what if you are creating those standards yourself?
    • by Pimpy ( 143938 )

      Excluding the SGI debacle, one of the final nails in the MIPS coffin was their stupid lawsuit with Lexra over the then patent-encumbered LL/SC instructions, and betting too much on patent-encumbered instruction-set extensions as a viable revenue source without adequately considering the chilling effect this would have on adoption. They (MIPS Technologies) fell into the same trap that many semiconductor companies ran into when transitioning into the SoC space - trying to figure out whether they ar

  • Similar to how most browsers dumped their independent engines and became Chromium skins.
    • There is nothing wrong with this. There were just too many RISC architectures out there. 25 years ago there were Alpha, SPARC, POWER, HP PA-RISC, and ARM. This was bewildering. It seems today the trend is to standardize on ARM, but it would be nice if someone kept propping up RISC-V, because we need one viable, open-source alternative architecture.

      • It doesn't matter how many architectures are out there, what matters is instruction sets.

        People talk about the x86 ISA all the time as if there were such a thing. There isn't. There's only the x86 IS, and lots of different ways to actually execute those instructions, although literally everyone today decomposes x86 instructions into RISC micro-ops.

        I presume most other instruction sets are the same today, but don't know as much about 'em

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...