
ARM Announces 64-Bit Cortex-A50 Architecture

MojoKid writes "ARM debuted its new 64-bit microarchitecture today and announced the upcoming launch of a new set of Cortex processors, due in 2014. The two new chip architectures, dubbed the Cortex-A53 and Cortex-A57, are the most advanced CPUs the British company has ever built, and are integral to AMD's plans to drive dense server applications beginning in 2014. The new ARMv8 architecture adds 64-bit memory addressing, increases the number of general purpose registers to 31, and increases the size of the vector registers for NEON/SIMD operations. The Cortex-A57 and Cortex-A53 are both aimed at the mobile market. Partners that have already signed on to build ARMv8-based hardware include Samsung, AMD, Broadcom, Calxeda, and STMicro." The 64-bit ARM ISA is pretty interesting: it's more of a wholesale overhaul than a set of additions to the 32-bit ISA.


  • Relaunch (Score:2, Informative)

    by Anonymous Coward

    The 64-bit ARMv8 architecture became available over 12 months ago and no one is making chips yet.

    • Re:Relaunch (Score:5, Informative)

      by Anonymous Coward on Tuesday October 30, 2012 @09:48PM (#41826671)

      The 64-bit ARMv8 architecture became available over 12 months ago and no one is making chips yet.

      That was the instruction set. These are the chip designs.

    • Re:Relaunch (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Wednesday October 31, 2012 @04:15AM (#41828029) Journal

      The first drafts of the ARMv8 architecture became available to a few ARM partners about 4-5 years ago. They've since been working closely with these partners to produce their chips before releasing their own design. The aim was to have third-party silicon ready to ship before anyone started shipping ARM-designed parts to encourage more competition.

      ARM intentionally delayed releasing their own designs to give the first-mover advantage to the partners that design their own cores. In the first half of next year, there should be three almost totally independent[1] implementations of the ARMv8 architecture, with the Cortex A50 appearing later in the year. This is part of ARM's plan to be more directly competitive with the likes of Intel. Intel is a couple of orders of magnitude bigger than ARM, and can afford to have half a dozen teams designing chips for different market segments, including some that never make it to production because that market segment didn't exist by the time the chip was ready. ARM basically has one design, plus a seriously cut-down variant. By encouraging other implementations, they get to have chips designed for everything from ultra-low-power embedded systems (e.g. the Cortex-M0, which ARM licenses for about one cent per chip), through smartphone and tablet processors, up to server chips. ARM will produce designs for some of these, and their design is quite modular, so it's relatively easy for SoC makers with the slightly more expensive licenses to tweak it a bit more to fit their use case, and companies like nVidia, TSMC and AMD will fill in the gaps.

      The fact that ARM is now releasing their own designs for licensing means that their partners are very close to releasing shipping silicon. We've seen a few pre-production chips from a couple of vendors, but it's nice to see that they're about to hit the market.

      [1] ARM engineers consulted on the designs, so there may be some common elements.

      • by pchan- ( 118053 )

        In the first half of next year, there should be three almost totally independent[1] implementations of the ARMv8 architecture, with the Cortex A50 appearing later in the year.

        Can you name the three vendors? Qualcomm for sure. Marvell seems likely. Nvidia says they will have a chip out, but I have serious doubts about their ability to deliver.

  • ARM builds chips? (Score:2, Redundant)

    by Gothmolly ( 148874 )

    So ARM is producing chips now?

  • Hey Guys,
    Don't you just love how it's 2010 and the Cortex A15 is already out on the market!

    http://www.electronics-eetimes.com/en/arm-in-servers-push-describes-the-cortex-a15-cpu.html?cmp_id=7&news_id=222903607 [electronics-eetimes.com]

    Oh wait.. the first real A15s just launched literally this month and except for Samsung they won't even be on sale from other manufacturers until next year.

    Now we're going to be hearing non-stop about how the 64-bit ARMs will be here next Tuesday and take over the world and put Big-Bad

    • Re: (Score:1, Insightful)

      by CajunArson ( 465943 )

      Look at the article in detail. Isn't it funny how the A-15 is the super-miracle chip that is going to stick it to Intel in the server world! Oh wait.. now that super-miracle chip is the 64-bit ARM Miracle Chip (TM) and the A-15 has been relegated to smartphones instead of taking over the server world.

      Fortunately, Intel is completely incapable of making any improvements to its chips whatsoever, so ARM's victory in 2014 is assured.

      • by rs79 ( 71822 )

        x86 is kind of at end of life. Evolve or die.

        • Re: (Score:2, Insightful)

          Comment removed based on user account deletion
          • The simple facts are 1.- ARM doesn't scale

            It hasn't yet. Perhaps it will. So far, it targets a different space than amd64.

            and 2.- Its a hell of a lot easier for Intel, who already has insane levels of IPC, to scale down and have low power chips than it is for ARM to scale up and not blow their power budgets

            But so far, this strategy has not permitted Intel to achieve as low a TDP, nor as much IPC per watt, as ARM. So you can say that it's easier, but so far ARM has not scaled up as far as Intel, nor has Intel scaled down as far as ARM. When Intel delivers a TDP as low as ARM's with the same performance, wake me up, and I'll care. Until then, ARM is working fine, and many people are pretty happy with existing ARM-based tablets, as evinced

          • Re: (Score:2, Insightful)

            by Anonymous Coward

            Its a hell of a lot easier for Intel, who already has insane levels of IPC, to scale down and have low power chips than it is for ARM to scale up and not blow their power budgets

            [Citation required]

            Intel have insane levels of IPC because they use insane amounts of power. IPC and power are correlated, yeah? It takes power to run all those parallel circuits that can look ahead in the instruction stream and provide high IPC. The laws of physics work the same way for Intel as they do for ARM. So, why is it e

          • ARM has YET to hit even the IPC of a Pentium 4

            Why would they do that? Do you plan to write single-threaded ARM applications for the next two decades?

        • by Bengie ( 1121981 )
          Next-gen Intel x86 has a lower idle power draw and better performance/watt than current ARM CPUs. I don't know what new ARM CPUs may be out next year, but that means Intel has closed the gap on cellphones.
        • x86 will linger like COBOL or Java or Win32.

      • Re: (Score:3, Insightful)

        by Hal_Porter ( 817932 )

        Smartphones with >4GB of RAM are not that far off. There are a couple of 2GB phones already.

        So it's likely that Android will have a 64-bit kernel and 32-bit userland before long.

        Though you can probably kludge it with something like PAE - i.e. a 32-bit kernel with more than 32 bits of physical address space.

        • The A15 supports LPAE, so you can have a 40-bit physical address space with a 32-bit virtual address space. This lets you have up to 1TB of physical RAM in your tablet (which might be interesting if you wanted to memory map the flash directly), as long as no single application uses more than 4GB of address space. Given that on my 64-bit laptop, the process with the most virtual address space at the moment is using 1.2GB, I think that's probably a safe assumption for a few years...
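
          To make that 4GB-per-process limit concrete: a 32-bit userland on an LPAE kernel can still work through data far larger than its own address space by mapping a sliding window instead of the whole object. Below is a minimal C sketch of that idiom; the file path, the 256 MiB window size and the checksum loop are arbitrary assumptions, and error handling is kept to a minimum.

            /* Sketch: walk a file much larger than 4 GiB from a 32-bit process
             * by mmap()ing one window at a time instead of the whole thing. */
            #define _FILE_OFFSET_BITS 64            /* 64-bit off_t even on a 32-bit build */
            #include <fcntl.h>
            #include <stdint.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <sys/stat.h>
            #include <unistd.h>

            #define WINDOW (256u * 1024 * 1024)     /* 256 MiB fits easily in a 32-bit VA space */

            int main(void)
            {
                int fd = open("/data/huge.img", O_RDONLY);   /* hypothetical path */
                if (fd < 0) { perror("open"); return 1; }

                struct stat st;
                if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

                uint64_t sum = 0;
                for (off_t off = 0; off < st.st_size; off += WINDOW) {
                    size_t len = (st.st_size - off > WINDOW) ? WINDOW : (size_t)(st.st_size - off);
                    unsigned char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
                    if (p == MAP_FAILED) { perror("mmap"); return 1; }
                    for (size_t i = 0; i < len; i++)         /* do something with the window */
                        sum += p[i];
                    munmap(p, len);                          /* give the address space back */
                }
                printf("checksum: %llu\n", (unsigned long long)sum);
                close(fd);
                return 0;
            }
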
    • by Pulzar ( 81031 ) on Tuesday October 30, 2012 @09:34PM (#41826593)

      Well, ARM designs the IPs that will go into those products... and they are ready to start selling the IP. It takes a couple of years to build SOCs around them, and then to build the devices.

      ARM is selling their product now, their customers will announce their products when they are ready. You can't expect them to keep quiet about what they're trying to sell until it's in an actual phone.

      • by Guppy ( 12314 )

        Well, ARM designs the IPs that will go into those products... and they are ready to start selling the IP. It takes a couple of years to build SOCs around them, and then to build the devices.

        I've been wondering about just how much lead time they gave their partners prior to this announcement. Given the rate at which AMD is burning cash and credibility, I doubt they can afford a lead-time that's too long.

        More likely, there was some development going on in parallel between ARM and their partners. If I had to guess, AMD started the move to ARM about the time they began discussions on purchasing Seamicro, and soon after lost a bunch of senior executives and engineers (at least some of who probabl

    • Apple & Samsung can sell us non-modifiable devices with locked-down hardware apparently this is supposed to make Linux take over

      The vast majority of Samsung ARM devices are modifiable & do not have locked-down hardware. Apple on the other hand does, but I have no idea why you think Apple's locked down devices are going to help Linux take over (wtf have you been smoking?).

  • by elashish14 ( 1302231 ) <profcalc4NO@SPAMgmail.com> on Tuesday October 30, 2012 @09:30PM (#41826557)

    Competition drives innovation.

    Who knows if this will be successful or not, but a world with AMD is a world with one more innovator bringing fresh, new ideas to the table and trying things that the members of a smaller oligopoly wouldn't.

    • by game kid ( 805301 ) on Tuesday October 30, 2012 @09:39PM (#41826615) Homepage

      A world with AMD is also one that uses ARM as a DRM bludgeon [slashdot.org]. I'm not sure we need that sort of competition.

    • Competition drives innovation.

      How is it innovation to take something that sucks and xerox it 64 times? AMD's chips consume more power and perform fewer computational operations than Intel's. They've been behind the curve and falling farther behind for a while, and it has everything to do with poor management. AMD is not an argument for competition driving innovation. Pick a better example.

        AMD are not very good engineers. But at the very least, they provide value chips which, though significantly behind the curve, still force Intel to keep their prices fairly honest. Yes, AMD is well behind, but at least they're trying new things and hopefully something will stick.

        And on an irrelevant side note, let's not forget how wildly successful Intel's anticompetitive actions were. They must have made 10x in profit what they got fined in the antitrust lawsuit....

        • by jkflying ( 2190798 ) on Wednesday October 31, 2012 @12:24AM (#41827339)

          Dunno, considering the budget they are working with, I'd say getting performance even in the same ballpark as Intel is pretty impressive, especially once you factor in that they are a full process node behind at the fabs. Their multi-threaded top-end speed is the same as or faster than Intel's; it's only in IPC that they are still behind. Their performance/$ is tied or better.

          • I suppose AMD can't do much about their situation with the fabs, and it would be interesting to see what Bulldozer derivatives can do when they get down to the ~22nm process. But I think IPC is more important than you're giving it credit for. Taking laptops, for example, it's unfortunate that we don't have any options that rival anything i5 or above. Trinity is decent on the low end, i.e. the i3/Celeron range, but imagine if they tried to put Bulldozer on a laptop - you'd have to settle for either Celeron-com

        • AMD are not very good engineers

          Man, why do you people keep referring to companies as multiple individuals? Why can't you just refer to the company as a single entity (which it is) and refer to the people who work for it as individuals? Why are the British so poor with English?

          AMD used to employ some excellent engineers, who designed the K7. Then they let them go for short-term financial gain. Now they're gone.

          • Man, why do you people keep referring to companies as multiple individuals?

            Because that's what they're made of. IIRC it's one of those annoying subtle differences between British and American English.

          • by Alioth ( 221270 )

            English is called English because it's from England. By definition, what English people speak is correct English (the hint is in the name), and what the US speak is incorrect. Otherwise it would be called American, not English.

            • by DarkOx ( 621550 )

              Wrong.

              English does not have an official body governing its use. It does not have an official dictionary. English is anything another English speaker is able to understand comfortably.

              The only thing even near what might be an official standard is "the Queen's English", and if you use that, then most of Great Britain does not use correct English.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        So you're posting this from an Itanium box, right?
        Wait, so you got a 64-bit x86 CPU... what was that ISA called again?

    • As much as I'd like to see AMD stay around, I don't think AMD will be important either way for the success of these chips.

      They probably will be successful, as the natural successor of 32 bit ARM chips today, in a similar way as x86 chips have evolved from 32 to 64 bit. And thus find their way into tablets, netbooks, and -why not- low power (or densely packed) servers too. With or without AMD packing those chips in there.

  • I always wonder, why change the ABI so often? After all, the instruction set is only the interface between the C compiler and the underlying VLIW CPU engine. That's why the first 64-bit processors were actually slower in 64-bit mode than in 32-bit mode, and even today they aren't much faster in 64-bit mode.
    I suspect it's all a patent game; that's why CPU designers keep modifying the ABI. Their patents are expiring all the time.

    • I always wonder, why change the ABI so often? After all, the instruction set is only the interface between the C compiler

      XXX compiler, for all values of XXX where machine code (rather than some virtual instruction set or code in some lower-level language) is generated, and also assembler.

      and the underlying VLIW CPU engine.

      For which instruction sets other than x86, in Pentium Pro and later, and z/Architecture, in z196 and later, is that the case?

      That's why the first 64-bit processors were actually slower in 64-bit mode than in 32-bit mode, and even today they aren't much faster in 64-bit mode.

      Unless you happen to be dealing with more data than fits into a 32-bit addressing space, in which case all the I/O, or memory mapping/unmapping, you're doing might slow you down a bit.

      If not, pointers will be larger,
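
      For a concrete sense of that pointer-size cost, the same pointer-heavy struct roughly doubles when recompiled for a 64-bit ABI. A minimal C sketch; the 12-byte/24-byte figures assume typical ILP32 vs LP64 rules:

        #include <stdio.h>

        /* A pointer-heavy node, typical of linked data structures. */
        struct node {
            struct node *next;
            struct node *prev;
            int          value;
        };

        int main(void)
        {
            /* Typically 12 bytes under ILP32 and 24 bytes under LP64
             * (two 8-byte pointers + a 4-byte int + 4 bytes of padding). */
            printf("sizeof(void *)      = %zu\n", sizeof(void *));
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;
        }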

    • Re:Patent move? (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Wednesday October 31, 2012 @05:48AM (#41828375) Journal

      First, don't conflate the ABI and the ISA. The ABI, the Application Binary Interface, describes things like calling conventions, the sizes of fundamental types, the layout of C++ classes, vtables, RTTI, and so on. It is usually defined on a per-platform (OS-architecture pair) basis. This changes quite infrequently because changing it would break all existing binaries.

      The ISA, Instruction Set Architecture, defines the set of operations that a CPU can execute and their encodings. These change quite frequently, but usually in a backwards-compatible way. For example, the latest AMD or Intel chips can still run early versions of MS DOS (although the BIOS and other hardware may be incompatible). ARM maintains backwards compatibility for userspace (unprivileged mode) code. You could run applications written for the ARM2 on a modern Cortex A15 if you had any. ARM does not, however, maintain compatibility for privileged mode operations between architectures. This means that kernels needed porting from ARMv5 to ARMv6, a little bit of porting from ARMv6 to ARMv7 and a fair bit more from ARMv7 to ARMv8. This means that they can fundamentally change the low-level parts of the system (for example, how it does virtual memory) but without breaking application compatibility. You may need a new kernel for the new CPU, but all of the rest of your code will keep working.

      Backwards compatible changes happen very frequently. For example, Intel adds new SSE instructions with every CPU generation, ARM added NEON and so on. This is because each new generation adds more transistors, and you may as well use them for something. Once you've identified a set of operations that are commonly used, it's generally a good use of spare silicon to add hardware for them. This is increasingly common now because of the dark silicon problem: as transistor densities increase, you have a smaller proportion of the die area that can actually be used at a time if you want to keep within your heat dissipation limits. This means that it's increasingly sensible to add more obscure hardware (e.g. ARMv8 adds AES instructions) because it's a big power saving when it is used and it's not costing anything when it isn't (you couldn't use the transistors for anything that needs to be constantly powered, or your CPU would catch fire).
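
      Those backwards-compatible extensions are also why runtime feature detection matters: a binary built for the base ISA can opt into AES or SHA instructions only when the CPU actually reports them. On Linux the kernel exposes this through the ELF auxiliary vector; a minimal sketch, assuming an AArch64 system where the HWCAP_AES/HWCAP_SHA2 bits are defined:

        /* Sketch: detect optional ARMv8 instruction-set extensions at runtime. */
        #include <stdio.h>
        #include <sys/auxv.h>    /* getauxval, AT_HWCAP */
        #include <asm/hwcap.h>   /* HWCAP_AES, HWCAP_SHA2, ... (AArch64 Linux) */

        int main(void)
        {
            unsigned long caps = getauxval(AT_HWCAP);

            printf("AES instructions:  %s\n", (caps & HWCAP_AES)  ? "yes" : "no");
            printf("SHA2 instructions: %s\n", (caps & HWCAP_SHA2) ? "yes" : "no");
            /* Real code would pick the accelerated path only when the bit is set
             * and fall back to a plain C implementation otherwise. */
            return 0;
        }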

  • http://wikipedia.org/wiki/Transmeta#Crusoe [wikipedia.org]

    Anyway, i welcome our new ARM-64 overlords.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The problem for Transmeta was that no manufacturers understood, or could market, a low-power laptop or tablet. According to popular magazines at the time, the sort with large Intel ad budgets, it was 40% slower than a Pentium 4 and therefore a complete and utterly miserable failure that you should avoid like the plague.

      The fact that it drew only 3 watts at max load was somehow glossed over.

  • by IYagami ( 136831 ) on Tuesday October 30, 2012 @10:29PM (#41826885)

    Anandtech has a better article:

    http://www.anandtech.com/show/6420/arms-cortex-a57-and-cortex-a53-the-first-64bit-armv8-cpu-cores [anandtech.com]

    According to them, the ARM Cortex A57 core is a tweaked ARM Cortex A15 core with 64-bit support, and the ARM Cortex A53 core is a tweaked ARM Cortex A7 core with 64-bit support. It is possible to mix A57 and A53 cores in the same die to improve efficiency.

    What I would like to see is this kind of approach in the x86 world. Imagine having an AMD processor with two fast cores (Piledriver's successor, Steamroller) for heavy processing and two low-power cores for longer battery life (Bobcat's successor, Jaguar).

    Or Intel with their future Haswell and Silvermont architectures...
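
    For what it's worth, software on Linux can already steer work between big and little cores explicitly by pinning itself to specific CPU numbers (normally the scheduler migrates threads on its own). A minimal sketch; which CPU numbers form the "big" cluster is SoC-specific, so the 0-1 used here is just an assumption:

      #define _GNU_SOURCE              /* CPU_ZERO, CPU_SET, sched_setaffinity */
      #include <sched.h>
      #include <stdio.h>

      int main(void)
      {
          cpu_set_t big;
          CPU_ZERO(&big);
          CPU_SET(0, &big);            /* assume CPUs 0-1 are the big cluster on this SoC */
          CPU_SET(1, &big);

          /* Restrict the calling process to the big cores only. */
          if (sched_setaffinity(0, sizeof(big), &big) != 0) {
              perror("sched_setaffinity");
              return 1;
          }
          printf("now restricted to the big cluster\n");
          return 0;
      }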

    • What I would like to see is this kind of approach in the x86 world. Imagine having an AMD processor with two fast cores (Piledriver's successor, Steamroller) for heavy processing and two low-power cores for longer battery life (Bobcat's successor, Jaguar).

      I'll bet you a dollar it would be more efficient to spend the die area creating more separate functional units that do the same work as the existing functional units, but which could be switched off when unneeded. You'd still have half the CPU idle, but you wouldn't have any cores that just sat around fondling themselves most of the time.

  • Let's hope this can keep AMD afloat.

    A Fortune 500 datacenter can easily cost up to $1 million a year in electricity! I/O, not CPU performance, is the bottleneck in most servers, so the slower ARM won't make that big a deal. Also, a kick-ass GPU can improve supercomputing a lot more than a tweaked-out Xeon, if AMD can pull it off with decent graphics for scientific workloads.

    • A Fortune 500 datacenter can easily cost up to $1 million a year in electricity!

      Yes but...

      Datacentres operate with an overhead of between 1.1 and 2 for cooling (i.e. cooling requires between 10% and 100% of the electricity used for compute).

      If you're buying $1e6 of electricity per year then you get quite decent rates.

      At those price points, the performance per watt, density and bang for buck go head to head.

      A quad socket Opteron system gives pretty much the best bang per buck, and some of the highest density

      • In order for the ARM machines to be competitive, they can't be super-expensive.

        The main selling point of ARM (even more important than power efficiency) that kept them alive all this time is price.

        • The main selling point of ARM (even more important than power efficiency) that kept them alive all this time is price.

          No.

          There are plenty of cheap embedded cores. AFAICT ARM actually command a bit of a premium.

          And also, the point was about price/performance. Basically as soon as the hardware gets exotic in some way, the price/performance usually goes down. At which point, you may as well go with commodity hardware unless you have very specialised needs.

  • ARM64 is a mess (Score:4, Interesting)

    by AaronW ( 33736 ) on Tuesday October 30, 2012 @11:10PM (#41827051) Homepage

    ARM64's ISA is radically different from ARM32's. All of the things that make Arm "ARM" are gone, such as conditional execution, having the program counter as a general purpose register, and more. Not only that, the binary encoding is totally different. The binary encoding for ARM64 is a totally confusing mess compared to ARM32. I wouldn't say that ARM64 is a well-designed ISA.

    Other processors made much cleaner transitions between 32-bit and 64-bit, such as MIPS, POWER/PowerPC and SPARC. Even i386 and x86-64 are much closer than ARM32 vs ARM64.

    -Aaron

    • Re:ARM64 is a mess (Score:5, Interesting)

      by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Wednesday October 31, 2012 @12:36AM (#41827367)

      ARM64's ISA is radically different from ARM32's. All of the things that make Arm "ARM" are gone, such as conditional execution, having the program counter as a general purpose register, and more. Not only that, the binary encoding is totally different. The binary encoding for ARM64 is a totally confusing mess compared to ARM32. I wouldn't say that ARM64 is a well-designed ISA.

      The binary encodings are a mess, yes, due mostly to the urge to adapt and produce some consistency with the AArch32 instructions. The ARM ABI has seriously evolved and the encoding possibilities are quite... nasty now if you look at ARMv7.

      Thankfully, the assembler takes care of that for us.

      Conditional execution is nice, but it really interferes with modern architectures. The ARMv8 core is a fully speculative, out-of-order implementation with register renaming. Conditional execution breaks this because the processor has to track more state, since any combination of instructions in the stream could have any combination of conditional execution.

      Ditto the PC - it was nice to be able to jump by simply writing to the PC, but man does it complicate internal design if any instruction can arbitrarily set the PC to any register value. In the end, the uses of conditional execution and of moving arbitrary values into the PC without a branch- or return-style instruction were probably so limited that there was no point keeping them.

      Oh, and there are 31 registers - X0 through X30. The 32nd register is special depending on the instruction - for ADD and SUB, "X31" means the stack pointer. For most other instructions, it means the zero register (reads as zero), something borrowed from MIPS, and allowing interesting register-only instruction forms to be used when the immediate value is zero. It does result in oddball uses though, like
            MOV SP, X0 ; set SP (an alias of ADD SP, X0, #0 - register 31 here means SP, not XZR)
      to play with the stack pointer.

      If you're a system level programmer, AArch64 is MUCH nicer (no more damned coprocessors). I know, I've done a fair bit of it.

    • Re:ARM64 is a mess (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Wednesday October 31, 2012 @04:49AM (#41828133) Journal

      All of the things that make Arm "ARM" are gone, such as conditional execution, having the program counter as a general purpose register, and more

      The advantage of conditional instructions is that you can eliminate branches. The conditional instructions are always executed, but they're only retired if the predicates held. ARMv8 still has predicated select instructions, so you can implement exactly the same functionality, just do an unconditional instruction and then select the result based on the condition. The only case when this doesn't work is for loads and stores, and having predicated loads and stores massively complicates pipeline stage interactions anyway, so isn't such a clear win (you get better code density and fewer branches, but at the cost of a much more complex pipeline).
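
      The predicated-select point is easy to see from the compiler's side: a simple ternary like the one below typically compiles to a single CSEL on AArch64 (no branch), where ARMv7 code might have used a predicated MOV instead. A minimal C sketch:

        #include <stdio.h>

        /* Branch-free maximum: an AArch64 compiler typically emits
         *     cmp  w0, w1
         *     csel w0, w0, w1, gt
         * i.e. both inputs are available unconditionally and CSEL picks one. */
        static int max_int(int a, int b)
        {
            return (a > b) ? a : b;
        }

        int main(void)
        {
            printf("%d\n", max_int(3, 7));   /* prints 7 */
            return 0;
        }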

      They also have the same set of conditional branches as ARMv7, but because the PC is not a GPR branch prediction becomes a lot easier. With ARMv7, any instruction can potentially be a branch and you need to know that the destination operand is the pc before you know whether it's a branch. This is great for software. You can do indirect branches with a load instruction, for example. Load with the pc as the target is insanely powerful and fun when writing ARM assembly, but it's a massive pain for branch prediction. This didn't matter on ARM6, because there was no branch predictor (and the pipeline was sufficiently short that it didn't matter), but it's a big problem on a Cortex A8 or newer. Now, the branch predictor only needs to get involved if the instruction has one of a small set of opcodes. This simplifies the interface between the decode unit and the branch predictor a lot. For example, it's easy to differentiate branches with a fixed offset from ones with a register target (which may go through completely different branch prediction mechanisms), just by the opcode. With ARMv7, an add with the pc as the destination takes two operands, a register and a flexible second operand, which may be a register, a register with the value shifted, or an immediate. If both registers are zero, then this is a fixed-destination branch. If one register is the pc, then it's a relative branch. Because pretty much any ARMv7 instruction can be a branch, the branch predictor interface to the decoder has two big disadvantages: it's very complex (not good for power) and it often doesn't get some of the information that it needs until a cycle later than one working just on branch and jump encodings would.

      Load and store multiple are gone as well, but they're replaced with load and store pair. These give slightly lower instruction density, but they have the advantage that they complete in a more predictable amount of time, once again simplifying the pipeline, which reduces power consumption and increases the upper bound on clock frequency (which is related to the complexity of each pipeline stage).

      They've also done quite a neat trick with the stack pointer. Register 31 reads as zero, like the zero register on most RISC architectures, but when used as the base address for a load or store it becomes the stack pointer in ARMv8, so they effectively get stack-relative addressing without having to introduce any extra opcodes (e.g. push and pop on x86) or make the stack pointer a GPR.

      ARMv8 also adds a very rich set of memory barriers, which map very cleanly onto the C[++]11 memory ordering model. This is a big win when it comes to reducing bus traffic for cache coherency, and a big win for power efficiency in multithreaded code, because it means it's easy to do the exact minimum of synchronisation that the algorithm requires.
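
      The C11 mapping described above looks like this in source: release/acquire ordering on an atomic flag, which an ARMv8 compiler can lower to the new store-release/load-acquire forms instead of full barriers. A minimal sketch of the producer/consumer idiom (single-threaded driver included only to keep it self-contained):

        #include <stdatomic.h>
        #include <stdio.h>

        int payload;                 /* ordinary data handed between threads */
        atomic_int ready = 0;

        void producer(void)
        {
            payload = 42;                                            /* plain store */
            atomic_store_explicit(&ready, 1, memory_order_release);  /* publish     */
        }

        void consumer(void)
        {
            while (!atomic_load_explicit(&ready, memory_order_acquire))
                ;                                                    /* spin until published */
            /* The acquire load pairs with the release store, so the
             * payload write above is guaranteed to be visible here. */
            printf("%d\n", payload);
        }

        int main(void)
        {
            /* In real code producer() and consumer() run on different threads. */
            producer();
            consumer();
            return 0;
        }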

      As an assembly programmer, I much prefer ARMv7, but as a compiler writer ARMv8 is a clear win. I spend a lot more time writing compilers than I spend writing assembly (and most people spend a lot more time using compilers than writing assembly). All of the things that they've removed are things that are hard to generate from a compiler (and hard to implement efficiently in silicon) and all of the things that they've added are useful for compilers. It's the first architecture I've seen where it looks like the architecture people actually talked to the compiler people before designing it.

  • "The 64-bit ARM ISA is pretty interesting: it's more of wholesale overhaul than a set of additions to the 32-bit ISA."

    OSes based on Linux and OSS just need GCC to support it, which it already does, and Microsoft will only need another 12 years of rewriting^w research until they can come up with Windows RT 64.

    (I stand corrected: the Linux kernel itself was up and running on ARM64 even before GCC officially supported it: http://www.phoronix.com/scan.php?page=news_item&px=MTIwNzU [phoronix.com] )

  • Will the new architecture run Dalvik any faster? You know, 'cause Android apps aren't native anyway.
  • Might AMD just license the instruction set and not the hardware design? Could they then bolt an ARM instruction decoder onto their core right next to the x86 decoder and run code for either architecture on mostly the same hardware?
