
Apple's New MacBook Pro Delivers 24-Hour Battery Life and Faster AI Processing (apple.com)

Apple unveiled a new 14-inch MacBook Pro on Wednesday that features the company's M5 chip and represents what Apple describes as the next major advancement in AI performance for its Mac lineup. The laptop delivers up to 3.5 times faster AI performance than the M4 chip and up to six times faster performance than the M1 chip through a redesigned 10-core GPU architecture that incorporates a Neural Accelerator in each core.

The improvements extend beyond AI processing to include graphics performance that runs up to 1.6 times faster than the previous generation and battery life that reaches up to 24 hours on a single charge. Apple also integrated faster storage technology that performs up to twice as fast as the prior generation and allows configurations up to 4TB. The 10-core CPU delivers up to 20% faster multithreaded performance compared to the M4.

The laptop runs macOS Tahoe and includes a Liquid Retina XDR display with an optional nano-texture finish, a 12MP Center Stage camera, and a six-speaker sound system. The 14-inch MacBook Pro is available for pre-order starting Wednesday in space black and silver finishes and begins shipping October 22. The base model costs $1,599.


Comments Filter:
  • I know that the main Apple motivation is to massage the investors, but are people really using their Macs to run local LLMs? (Genuine question, I have no idea.) Another question: is software that has real AI capabilities using those NPUs, or just exchanging data with company servers?
    • by NoMoreACs ( 6161580 ) on Wednesday October 15, 2025 @10:31AM (#65726400)

      I know that the main Apple motivation is to massage the investors, but are people really using their Macs to run local LLMs? (Genuine question, I have no idea.) Another question: is software that has real AI capabilities using those NPUs, or just exchanging data with company servers?

      Yes, limited local AI. They have built a privacy-focused, three-tiered AI architecture, unique (AFAICT) in the industry.

      Tom's Guide explains it fairly well:

      https://www.tomsguide.com/ai/a... [tomsguide.com]

      • by DamnOregonian ( 963763 ) on Wednesday October 15, 2025 @01:45PM (#65726896)
        They asked about LLMs.
        A Mac is, in fact, the best LLM inference machine you can realistically buy.
        You will need to spend ~$30k on GPUs in order to run models you can run on your Mac.

        You can also run those models on something like a Ryzen AI Max 395+, or one of those Broadwell mini computers, but the M4 Max is twice as fast, and the M3 Ultra even faster.
        The M3 Ultra comes with the additional benefit of being able to be loaded with 4x the VRAM of the M4 Max (already 128GB), making it worth six figures' worth of GPUs.

        Technically, certain stacks can use tensor parallelism (vLLM, LangChain), which means performance scales with the GPUs, not just the VRAM - but nothing in the open source ecosystem really supports those.
        That means, realistically, Macs are the best thing you can buy for local LLM inference. Nothing else really competes.
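The memory arithmetic behind that claim can be sketched in a few lines (the bytes-per-weight figures are typical quantization levels and the overhead fraction is a rough guess on my part, not measurements):

```python
def model_memory_gb(params_billion: float, bytes_per_weight: float,
                    overhead_frac: float = 0.2) -> float:
    """Rough RAM needed for a model: weights plus runtime overhead
    (KV cache, activations). overhead_frac is a guess, not a measurement."""
    weights_gb = params_billion * bytes_per_weight  # 1e9 params x bytes -> GB
    return weights_gb * (1 + overhead_frac)

# A 70B model at ~4-bit quantization (~0.5 bytes/weight) fits easily in
# 128GB of unified memory; at fp16 (2 bytes/weight) it overflows a 32GB GPU
# many times over.
print(f"70B @ 4-bit: ~{model_memory_gb(70, 0.5):.0f} GB")
print(f"70B @ fp16:  ~{model_memory_gb(70, 2.0):.0f} GB")
```

This is why unified-memory Macs punch above their weight here: the GPU can address the whole RAM pool, so model size is bounded by system memory rather than by a discrete card's VRAM.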
        • s/Broadwell/Blackwell/;
        • They asked about LLMs.

          A Mac is, in fact, the best LLM inference machine you can realistically buy.

          You will need to spend ~$30k on GPUs in order to run models you can run on your Mac.

          You can also run those models on something like a Ryzen AI Max 395+, or one of those Broadwell mini computers, but the M4 Max is twice as fast, and the M3 Ultra even faster.

          The M3 Ultra comes with the additional benefit of being able to be loaded with 4x the VRAM of the M4 Max (already 128GB), making it worth six figures' worth of GPUs.

          Technically, certain stacks can use tensor parallelism (vLLM, LangChain), which means performance scales with the GPUs, not just the VRAM - but nothing in the open source ecosystem really supports those.

          That means, realistically, Macs are the best thing you can buy for local LLM inference. Nothing else really competes.

          I realized the detail about using LLMs only after I Posted. (*Eyeroll* at my own laziness)

          Thanks muchly for your more Correct, and Learned, Reply!

          • No worries. "AI" is such a vague word, and the person you replied to mixed AI and LLMs, which are the same but also not the same. There wasn't a great correct answer :P
            • No worries. "AI" is such a vague word, and the person you replied to mixed AI and LLMs, which are the same but also not the same. There wasn't a great correct answer :P

              I'll freely admit I'm pretty stooooopid when it comes to the nuts and bolt-ons of AI Infrastructure.

              Thanks for digging me out of my own trench!

        • I am sure my dedicated 5080 with CUDA can run circles around your Mac.

    • Apple ones - yes. The unified memory architecture makes them the fastest things you can buy for running AI outside of $5000 nVidia AI GPUs. Every desktop GPU is going to run out of memory on any serious model, and nothing else is as fast as these.

      I mean, sure, you're likely wanting the M5 Ultra or M5 Max for doing that, rather than the base M5, but given that they're the same architecture, just with bits chopped off or doubled up, yeh... they're gonna need that.

    • by FictionPimp ( 712802 ) on Wednesday October 15, 2025 @11:09AM (#65726502) Homepage

      I do for my personal dev projects. It's getting to the point where I can see it being the new norm at my day job in the next 18 months.

    • Yeah, they absolutely are. There are some Local LLM reddit groups where people are doing some neat stuff.

      The M* hardware is very impressive.

    • by EvilSS ( 557649 )
      I am. M4 Max with 128GB RAM. It's one of the cheapest options to run large models (basically anything that won't fit on a 5090's 32GB).
      • How good are the results compared to the latest cloud-based models from the big players? Obviously not as good, but are local LLMs "good enough" for real purposes, and if so then which ones? I have thought about getting a powerful Mac for exactly this purpose, but except for tinkering around and experimenting I couldn't think of a real use for having a local LLM.

        • I use them for codegen.
          They don't benchmark as well as cloud services, but for the purposes I've been using them for, cloud services don't outperform them at all.
          I've tested extensively.

          Granted, my working model for them is to use them on a side monitor, basically building up a small custom application I would like but don't really consider "needed", while I continue with my main workload. codex, qwen-code, and bolt.diy are fantastic tools. opencode... good, but a bit weird. aider- m
        • by EvilSS ( 557649 )
          For coding Claude is still king but some LLMs like qwen-code are perfectly serviceable. For more general tasks they are getting pretty good. Not as good as the commercial models but you gain portability, offline use, privacy, and more flexibility (fewer guard rails to run into for example). As for real world, one use I have is a documentation script for cloud environments. I've added code to call out to a local LLM API endpoint to generate text blocks based on configuration sections. The config data can con
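A minimal sketch of that documentation-script pattern, assuming a local OpenAI-compatible endpoint (the URL, model name, and prompts here are placeholders I made up, not the poster's actual script):

```python
import json
import urllib.request

def describe_config(section_name: str, config: dict) -> dict:
    """Build an OpenAI-style chat payload asking a local model to turn a
    raw config section into documentation prose. Model name is a placeholder."""
    return {
        "model": "qwen2.5-coder",  # hypothetical local model name
        "messages": [
            {"role": "system",
             "content": "Summarize cloud configuration as documentation prose."},
            {"role": "user",
             "content": f"Section {section_name}:\n{json.dumps(config, indent=2)}"},
        ],
        "temperature": 0.2,  # keep documentation output fairly deterministic
    }

def post_to_local_llm(payload: dict,
                      url: str = "http://localhost:11434/v1/chat/completions") -> str:
    """Send the payload to a local endpoint; the config data never leaves
    the machine, which is the privacy win of running locally."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Build (but don't send) a payload for one hypothetical config section.
payload = describe_config("network", {"vpc": "10.0.0.0/16", "subnets": 4})
print(payload["messages"][1]["content"].splitlines()[0])
```

The same payload shape works against most local servers that expose an OpenAI-compatible API, which is what makes swapping models in a script like this cheap.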
        • There are open models that can trade blows with big commercial ones (like DeepSeek and Qwen), but those are too large to run on notebooks today. The models that you can run on a laptop tend to be in the 20b - 120b range. They are not as great for general knowledge, but can excel at purpose-specific tasks. Take a look at MedGemma, or Qwen3 Coder Flash. Qwen Coder is super fast on a good laptop, and can beat the original GPT-4 at most coding tasks. Amazingly, these smaller models are often as good at the stat
        • by Kisai ( 213879 )

          Results don't really matter when it comes to AI models; only the performance does. If you run something on the CPU it may take an hour, but it has 128GB to play with; when you run something on the GPU, it takes advantage of the parallelism, so that hour might be reduced to 1/40th of an hour if there are 40 parallel processes on the GPU. But the sad reality is that inference and training require two different setups.

          Inference wise, LLM, TTS, ASR, CV, etc are only faster as you increase RAM. You get more b

          • There is already 512GB available for the M series- the M3 Ultra, available in the Mac Studio.

            You can't really compare that to NVIDIA GPUs in a server. You need four of them, they're $30,000 a pop, and on conventional software stacks that $120,000 you just spent on GPUs doesn't even scale out for performance. It's just not the same game on that front.

            My M4 Max has 128GB, and can run most LLMs under the sun, minus a couple of the obscenely huge ones like DeepSeek R1/V3.
            But really, size doesn't correlat
    • 128GB of VRAM- ya. I'm using mine to do it. Love it.

      The only thing that really uses the NPU is the OS.
      You can build stable diffusion checkpoints to use it, but the only advantage is power efficiency- the GPU is faster... at many times the power draw.
    • Apple isn't a great platform for services. I have a simple AI model running on a webcam, and the webcam just stops returning images to OpenCV after a few hours, with no reason or errors; a reboot fixes it. And I think it goes into some sort of sleep mode every now and again: the webpage disappears, I go to log in to the machine, and then it starts working again without anything having changed.
    • by ceoyoyo ( 59147 )

      AI =/= LLM.

    • by allo ( 1728082 )

      NPUs are currently not very usable for LLMs, but if you want to segment the foreground of your webcam image, doing it on the NPU can save you battery. The LLM feature of Macs is the unified RAM; Nvidia wants a lot of money for a card with that much VRAM.

    • Another question: are software that have real AI capabilities using those NPUs or just exchanging data with company servers?

      Errr, yes, software with "real AI" capabilities is plentiful these days, insofar as "real AI" means running a certain workload with a certain algorithm trained on certain models using the hardware acceleration that various platforms provide. That said, many do so in a way similar to the old days of 2D video acceleration: they will fall back rather than just fail silently, and the only external indication you may have that it is working is the speed at which the task finishes, or in some cases the hardware which is being

    • Surely, for dev purposes, testing, and some document analysis. Apple also makes the Mac Studio and Mac Mini which, if you give them enough RAM, kick butt with AI. Expensive, but still the cheapest option for the performance.

      I have a MacBook Pro and a Ryzen with a 12GB RTX under my desk. The RTX is surely faster with AI, but the Mac can load much bigger models with 48GB of RAM. The secret sauce is that Apple shares memory between the GPU and the system, something Intel's old PCI architecture has problems with.


  • The base model costs $1,599.

    Journalism outlets should start calling these things what they truly are: desperation models. Apple ratchets up the price so much if you want to upgrade past the desperation model that it's practically comical.

    • Re:"base" model (Score:4, Insightful)

      by cmseagle ( 1195671 ) on Wednesday October 15, 2025 @11:47AM (#65726580)

      There was some validity to this sort of argument when their base models came with 8 GB of memory and 128/256 GB of storage. That was always pretty borderline and you needed to factor in ~$400 on top of the base price to get it to a reasonable spot.

      These have 16 GB memory and 512 GB storage. That's plenty for a large portion of the market.

      • These have 16 GB memory and 512 GB storage. That's plenty for a large portion of the market.

        16GB of RAM, I'll grant, is fine for standard use... but Apple really needs to come up with some sort of solution for storage expansion beyond "bag of USB accessories" or "2TB of iCloud storage"; most Mac owners end up with both.

        Sure, 512GB is fine for Apple Chromebooks, but video editors easily end up with either an external storage array or having to do "the project shuffle" of data management that is an absolute chore. There are more than a handful of PC laptops that offer multiple NVMe slots, so 8TB of in

      • 512 GB isn't much. Especially considering the fact that if the SSD goes, the Mac is effectively bricked because iBoot is stored on the SSD and that is required to boot from any media, internal or external.

        On the PC side, 1 TB is the minimum I'd provision, with 16 gigs of RAM, minimum. Ideally 32 gigs and 2 TB SSD to provide more sectors for wear leveling, perhaps overprovisioning the SSD by 5-10% to have more cells ready. With all the "AI" based apps, even the base EDR/XDR/MDR as well as the other admin

      • 512 GB of storage is a sick joke in 2025. Single programs sometimes require more than that. The most expensive possible configuration they announced that has 512 GB of storage costs about $3k. For $3k you can build a desktop server computer with more than 512 GB of RAM, with a 1-4TB SSD depending on what kind of GPU and quality of SSD you want. It's not a terribly useful configuration for most use cases, because you are spending almost all the money on an obsolete server platform and a ton of RAM, but the fa
        • 512 GB of storage is a sick joke in 2025. Single programs sometimes require more than that. The most expensive possible configuration they announced that has 512 GB of storage costs about $3k. For $3k you can build a desktop server computer with more than 512 GB of RAM, with a 1-4TB SSD depending on what kind of GPU and quality of SSD you want. It's not a terribly useful configuration for most use cases, because you are spending almost all the money on an obsolete server platform and a ton of RAM, but the fact you can do it is insane.

          With TB5, you can stick whatever amount of cheap aftermarket SSD on a short tether, with pretty much equivalent performance to the internal SSD; and since macOS (finally!) makes it dead simple to locate your Home Folder on an external drive with just a couple of clicks, who cares how much internal SSD there is?

      • These have 16 GB memory and 512 GB storage. That's plenty for a large portion of the market.

        While this is accurate, it's not plenty for people who want to run LLMs. 16GB is totally inadequate for that, since you have to run your OS and apps in it as well. Apple is advertising this hardware as being great for LLMs, but they don't offer enough memory in the base configuration: you get hardware that could be fast at running LLMs, if only you had enough RAM to load them.

  • by stealth_finger ( 1809752 ) on Wednesday October 15, 2025 @10:27AM (#65726386)

    The base model costs $1,599

    Fuck, I wish we could post gifs here, because that one from Spider-Man where he says "oh, you're serious" and then laughs even harder would be perfect.

    • What comparable laptop can you get for cheaper?
      • That's the wrong question. The right question is how much more this laptop really does for people in real-world use, and whether it's worth that price once all the AI hype is set aside.
        • by darkain ( 749283 ) on Wednesday October 15, 2025 @12:24PM (#65726668) Homepage

          Battery Life.

          Nuf said.

          PCs still cannot even remotely compete with Apple's actual real-world battery life on their laptops. This is their killer selling feature; everything else is just fluff. "I'll just sit at my desk all day plugged in" - well, good for you then, I guess? The rest of us will enjoy our portability.

          • by stooo ( 2202012 )

            >> PCs still cannot even remotely compete with Apple's actual real world battery life on their laptops.
            Nope.
            I use Lifebooks with a second battery in place of an optical drive. Beats Macbooks.

            • So basically, if you sacrifice hardware capacity, you get more battery life, as opposed to the pretty damned phenomenal Apple battery life out of the box.

              And you feel this is something to brag about...

              • So, getting back to the question: does that imply that the extra price of the Mac is worth it? Or how many people who buy them could get by with the smaller laptop and are just buying the Mac to say they did? This is the elephant in the room.
                • You made an unfair comparison and then declared victory. I mean, sure, if I plug my MacBook into a $20,000 battery-backup UPS I can probably get a few weeks out of the fucking thing, but that's not really any kind of rational comparison at all. Out of the box, MacBooks have some of the best battery performance of any laptop. Your comparison was the equivalent of "Oh sure, your Mustang can go 200mph, but if I take my Honda Accord and weld a rocket engine to the back, I can beat the sound barrier!"

                  • No actually I'm saying the opposite. I'm saying you could spend the money to weld a rocket to the mustang, but is it worth it if you just need to get to work at the speed limit? I'm asking you to justify the cost and practicality of the rocket Mustang.
          • Do people really use their laptops on battery that much? I tend to sit at desks and not 'Neath the oak tree at the top of the hill.
            • by tlhIngan ( 30335 )

              Do people really use their laptops on battery that much? I tend to sit at desks and not 'Neath the oak tree at the top of the hill.

              Yes, but do you bring an adapter with you?

              Long battery life can easily mean you just take the laptop home to do work from home, rather than the laptop and adapter. (Sure, you could buy an adapter and keep it at home...). Or if you're someone who visits offices to do presentations at client sites, or you travel to various offices you don't have to bring an adapter with you.

              Or it

    • by itsme1234 ( 199680 ) on Wednesday October 15, 2025 @12:12PM (#65726644)

      If you think that's bad, check out what Microsoft is doing: the absolute minimum-spec Intel Surface Laptop (lowest SKU, 16GB RAM / 256GB SSD, the small 13.8" one, and so on) is $1,499 (.99, sic!). And it's slower than the MacBook Air, which is $999 MSRP and more like $850 often on sale (while the Intel Surface Laptops get virtually no special sales).

      Yes, the world is mad. But not in the way you think.

    • If you're looking for a cheaper laptop, the MacBook Air starts at $999. The MacBook Pro line is the more expensive models for higher end uses. The least expensive model in the more expensive line is not a low end laptop.

  • For one thing, it's got questionable usability. For another, if I'm interested in AI, Apple isn't the name that immediately comes to mind...

    • by blackomegax ( 807080 ) on Wednesday October 15, 2025 @10:36AM (#65726418) Journal
      Apple is, shockingly, a strong name in local LLM use. Just not at the base model level... The Max and Pro chips have extremely fat memory bus speeds and up to 192GB of unified RAM, but even the 96GB or 64GB options can run hefty LLMs.
    • by beelsebob ( 529313 ) on Wednesday October 15, 2025 @11:06AM (#65726498)

      Wait... Apple doesn't come to mind for AI? You clearly know absolutely nothing about buying hardware for running AI.

      In order of speed, your options are:

      1. A server with $5000 nVidia GPUs with lots of VRAM on board.
      2. A Mac, preferably with their highest-end chip in it.
      3. A desktop PC with a high-end desktop graphics card.

      Thanks to their unified memory architecture, Apple's neural engines are the fastest thing that doesn't cost $5000 for the GPU alone and doesn't instantly run out of memory running any vaguely serious model.
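To make the "runs out of memory" point concrete, here is a toy comparison of roughly how large a ~4-bit-quantized model each class of machine could hold (the capacities are illustrative round numbers I chose, not benchmarks):

```python
# Usable memory for model weights (GB); illustrative round numbers.
OPTIONS = {
    "server node with 4x 80GB GPUs": 4 * 80,
    "Mac with 128GB unified memory": 128 - 16,  # reserve ~16GB for macOS/apps
    "desktop GPU with 24GB VRAM": 24,
}

def max_params_billion(usable_gb: float, bytes_per_weight: float = 0.5) -> float:
    """Largest model (billions of params) whose weights fit at ~4-bit."""
    return usable_gb / bytes_per_weight

# Print the options from most to least capacious.
for name, gb in sorted(OPTIONS.items(), key=lambda kv: -kv[1]):
    print(f"{name}: up to ~{max_params_billion(gb):.0f}B params")
```

The ordering matches the comment's ranking: the GPU server holds the biggest models, the unified-memory Mac is the next tier, and a single desktop card falls off a cliff well before "vaguely serious" model sizes.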

    • For one thing, it's got questionable usability. For another, if I'm interested in AI, Apple isn't the name that immediately comes to mind..

      Yeah, Siri sucks and Apple's models are behind, but Apple is doing some interesting research and the M chip architecture is very, very good for running local models.

      Even if you ignore Apple's own AI software, it's popping up in 3rd party software all over the place, including graphics and video editing.

    • I have an M1 MacBook Pro Max, and I've run gpt-oss-20b on it. It wouldn't support a dozen users, but it worked fine for me for experimental and dev testing.

      Considering I didn't buy this machine with AI as an intended use, that's pretty amazing.

      There's definitely some room for Apple in this space, while everybody thinks the only player is Nvidia.

    • Microsoft Surface laptop has had 24-hour battery life for quite some time now.

  • If you keep the lid closed and don't do anything with it.

  • by 0xG ( 712423 ) on Wednesday October 15, 2025 @11:34AM (#65726568)

    delivers up to 3.5 times faster AI performance
    up to 1.6 times faster than the previous generation
    up to 24 hours on a single charge
    up to 20% faster multithreaded performance compared to the M4

    Marketing weasel words.

  • > battery life that reaches up to 24 hours on a single charge.

    If you believe that, I have a bridge for sale.
  • by dremon ( 735466 ) on Wednesday October 15, 2025 @12:31PM (#65726682)
    As it requires 32 GB of RAM [slashdot.org].
      • Need to pay China $400 more to be able to run Calculator and find out how badly you are being ripped off.

  • A bigger screen should not be $800+ more for less CPU. Also, only 16GB of unified memory, with an upgrade to 32GB at $400 more??
