Apple Launches AirPods Max 2 With Better ANC, Live Translation (theverge.com)

Apple has quietly announced the AirPods Max 2, featuring improved active noise cancellation, an H2 chip, and new features like adaptive audio and AI-powered real-time translation. Like the original model, these headphones start at $549. The Verge reports: As noted by Apple, the AirPods Max 2 offer active noise cancellation that's 1.5 times more effective than the original model's. Transparency mode, which allows you to hear your surroundings while wearing the headphones, also sounds "more natural" with the AirPods Max 2, according to Apple.

The AirPods Max 2 support 24-bit, 48kHz lossless audio when connected with a USB-C cable, and offer up to 20 hours of listening time on a single charge. Other capabilities include loud sound reduction, a camera remote feature that works by pressing the Digital Crown to take a photo or start a recording, and a personalized volume feature that "automatically fine-tunes the listening experience" based on your preferences over time.
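
As a rough sense of scale, here is a back-of-the-envelope sketch (plain Python, nothing Apple-specific; the 20-hour figure is the battery claim above, not a storage spec) of what a 24-bit, 48kHz stereo PCM stream works out to:

    # Back-of-the-envelope numbers for the wired lossless mode described above.
    bit_depth = 24          # bits per sample
    sample_rate = 48_000    # samples per second, per channel
    channels = 2

    bitrate_bps = bit_depth * sample_rate * channels
    print(f"Uncompressed bitrate: {bitrate_bps / 1e6:.3f} Mbit/s")    # ~2.304 Mbit/s

    twenty_hours_bytes = bitrate_bps / 8 * 20 * 3600
    print(f"20 hours of raw PCM: {twenty_hours_bytes / 1e9:.1f} GB")  # ~20.7 GB

That 2.304 Mbit/s figure comes up again in the Bluetooth bandwidth discussion in the comments below.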

  • 1997 wants its sample rate back.

    At least it's lossless... just not wireless.

    • But it's been proven that 48kHz really whips the llama's ass

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Unless you can hear above 24kHz, this isn't a problem.

    • by ebunga ( 95613 )

      Higher sample rates and bit depths don't really do much outside of the recording, mixing, and mastering context other than making your file sizes larger than they need to be.

    • My ears function worse than they did in 1997, so I'm good with 48kHz.
    • by tlhIngan ( 30335 )

      Basically all music you listen to starts out as 48kHz. Even the "192kHz" tracks are just upscaled.

      When an artist records an album they generally record in 48kHz - and basically anything you listen to will have a 48kHz master.

      Now, some artists do record in 96kHz, usually in very specialized recording studios set up for it (most aren't). Generally you're not going to be listening to a 96kHz-originated recording because almost no studios have equipment to do so. AIX Studios and Skywalker Ranch are two examples that can do 96kHz and have tuned pipelines to work at 96kHz.

      • Basically all music you listen to starts out as 48kHz. Even the "192kHz" tracks are just upscaled.

        When an artist records an album they generally record in 48kHz - and basically anything you listen to will have a 48kHz master.

        Now, some artists do record in 96kHz, usually in very specialized recording studios set up for it (most aren't). Generally you're not going to be listening to a 96kHz-originated recording because almost no studios have equipment to do so. AIX Studios and Skywalker Ranch are two examples that can do 96kHz and have tuned pipelines to work at 96kHz. But AIX generally only does their own concert recordings of public domain works (e.g., classical pieces) and people who use Skywalker Ranch at 96kHz are very, very rare.

        99.9% of common music you can buy will be recorded at 48kHz, and the whole "high res" thing is to be able to sell you upscaled music at a higher price even though there's no added content.

        In an age when a lot of home studios record at 96k and up, much of this post seems out of touch. Commodity recording equipment has been capable of 192k and up for about twenty years now. I don't see how it's possible that professional studios wouldn't have this capability, even if they don't typically use it. I've been recording at home at 96k for about ten years or more. I'm sure you can argue it's pointless to do so, because a lot of people do, but disk space has been cheap enough and system perf

        • by ebunga ( 95613 )

          You need that on the studio side because you're manipulating the sound through all those shitty plugins. But if you think you can hear the sound difference, I have a $175,000 ethernet cable to sell you.

          • You need that on the studio side because you're manipulating the sound through all those shitty plugins. But if you think you can hear the sound difference, I have a $175,000 ethernet cable to sell you.

            Yeah, I doubt I do. My ears are shit since I've been in thrash and death metal bands since the 80s. But the argument that studios aren't capable of more than 48k? That sounded like somebody talking about a subject they were completely unfamiliar with.

            • Your ears couldn't when they were at peak.
              96kHz sample rate can faithfully replicate 48kHz waveforms, which is over double the range of human hearing.
              Even 48kHz replicated 24kHz waveforms, which are well above human hearing.

              Studios are not capable of it, because -why would they be-?
              That's like making a 4800K 36-bpc movie camera.
              192kHz is just fucking clownshoes.

              For the mastering process, there is value in working using upscaled PCM audio, so that you don't introduce additional aliasing where you have very little room to absorb it, but there is no value whatsoever in recording it, or playing it. (A short resampling sketch at the end of this sub-thread illustrates the Nyquist point numerically.)
              • Your ears couldn't when they were at peak. 96kHz sample rate can faithfully replicate 48kHz waveforms, which is over double the range of human hearing. Even 48kHz replicated 24kHz waveforms, which are well above human hearing. Studios are not capable of it, because -why would they be-? That's like making a 4800K 36-bpc movie camera. 192kHz is just fucking clownshoes. For the mastering process, there is value in working using upscaled PCM audio, so that you don't introduce additional aliasing where you have very little room to absorb it, but there is no value whatsoever in recording it, or playing it.

                The value is in not needing to upscale and downscale again. Every change to the original audio has the potential to induce issues, so the why from my perspective is simple enough. Why upscale when you can record at what you need for mixing/mastering and only need to downscale for the final result?

                • I suppose this becomes value that's in the eye of the beholder.
                  The value could indeed be in not needing to upscale and downscale again. Or, the value could be in not having to store a much larger raw copy with the overwhelming majority of its bits wasted on noise, one that can't even be looked at without a whole ton of resources being used (DSP filters processing 44.1/48kHz require vastly less horsepower than 192kHz.)

                  I'll concede your value notion, though.
                • The value is in not needing to upscale and downscale again. Every change to the original audio has the potential to induce issues, so the why from my perspective is simple enough.

                  The change being introduced here is fixed. You cannot convert analogue to digital without a world of signal processing in the way. But that signal processing is very very good. We live in an age where resampling, sample rate conversion, delta sigma conversion, bitdepth conversion etc, etc, are so good that we can't even measure the resulting difference with some of the best gear we have, we instead have to model it in the purely digital domain. Then you get some algorithms which introduce harmonics at -160dB from the primary signal, and those harmonics are not only not audible, they are many orders of magnitude fainter than our best gear can reproduce.

                  • The value is in not needing to upscale and downscale again. Every change to the original audio has the potential to induce issues, so the why from my perspective is simple enough.

                    The change being introduced here is fixed. You cannot convert analogue to digital without a world of signal processing in the way. But that signal processing is very very good. We live in an age where resampling, sample rate conversion, delta sigma conversion, bitdepth conversion etc, etc, are so good that we can't even measure the resulting difference with some of the best gear we have, we instead have to model it in the purely digital domain. Then you get some algorithms which introduce harmonics at -160dB from the primary signal, and those harmonics are not only not audible, they are many orders of magnitude fainter than our best gear can reproduce.

                    Fun fact: every sound you hear through a computer that isn't played using some hardware-exclusive mode (e.g. WASAPI exclusive mode for Windows Core Audio) already goes through multiple cases of conversion. All volume controls, in applications and in Windows itself, convert bit depth before and after. Your source material is rarely the same sample rate as your audio path, so resampling is also done everywhere.

                    But all this is even ignoring the true theoretical WTF moment. If audiophilia is about reproducing as accurately as possible what is produced by the studio, and the studio output is by its nature a creative process, who are you to argue they are doing it wrong?

                    Just a dork that has spent a lot of time in studios. Including his own.

                    The idea that tracked audio should need to jump through more digital conversion processes just hits me wrong, and always has. I've been doing the recording game for almost thirty years now, and been playing instruments for over forty. It's not like I'm some "Oh, I used to use a tape recorder to pull songs off the radio" uninformed idiot when it comes to the subject.

                    In fact, despite your seeming expertise on everything, I'm willing to pl

        • by radoni ( 267396 )

          You record at a higher sample rate because of the math when mixing two waveform representations, i.e.:

          "What is the result of 1.4 * 1.3 ?" compared to "What is the result of 1 * 1 ?"

          Well, in this situation, 1 * 1 equals 1 or 2, and that uncertainty can become an audible artifact when iteratively repeated (as might happen in some recording pipeline), but on a scale of 1-to-22'000+ it matters not at all whether it was 1 or 2, just that it was some kind of value in that range. No percentage of human ears can discern betw

      • by Ed Avis ( 5917 )
        I thought the "high res" thing was to sell you music straight from the 48kHz master -- not jankily downscaled to 44.1kHz, clipped because of loudness wars, and then lossily compressed. Am I out of date?
        • 48kHz has nothing to do with the loudness wars or lossy compression. In fact MP3 inherently works at 48kHz already, and the problem with the loudness wars is precisely that the master itself is the thing which is lacking dynamics. It doesn't matter what format you get it in afterwards, or if in fact you play the original master recording right in your sound system.

          However, 48kHz does match what the movie industry uses universally, including for a format called Dolby Atmos, which Apple has been using for surround sound.

          • by Ed Avis ( 5917 )
            I meant, you get a stream straight from the studio master, not ripped from an audio CD.
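
Since much of the sub-thread above turns on the Nyquist limit, here is a minimal, hedged sketch (plain Python with NumPy and SciPy, chosen here purely for illustration and not tied to any poster's setup) showing that a tone inside the audible band survives a 96kHz-to-48kHz downsample essentially intact, and that only content above the new 24kHz Nyquist limit is removed:

    # Minimal Nyquist/resampling illustration (NumPy + SciPy), not from the thread:
    # a tone inside the audible band survives a 96 kHz -> 48 kHz downsample,
    # while content above the new 24 kHz Nyquist limit is what gets removed.
    import numpy as np
    from scipy.signal import resample_poly

    fs_hi = 96_000                                       # "studio" sample rate
    t = np.arange(fs_hi) / fs_hi                         # one second of signal
    audible = np.sin(2 * np.pi * 10_000 * t)             # 10 kHz tone, well inside hearing
    ultrasonic = 0.5 * np.sin(2 * np.pi * 30_000 * t)    # 30 kHz tone, inaudible

    x_hi = audible + ultrasonic
    x_lo = resample_poly(x_hi, up=1, down=2)             # downsample to 48 kHz

    def peak_freqs(x, fs):
        """Return the frequencies of the dominant spectral lines in x."""
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)
        return freqs[spectrum > 0.1 * spectrum.max()]

    print(peak_freqs(x_hi, fs_hi))     # ~[10000., 30000.]
    print(peak_freqs(x_lo, 48_000))    # ~[10000.]  -- the audible tone is untouched

The related point raised above about keeping extra precision while mixing can be illustrated the same way. The sketch below uses bit depth rather than sample rate, since that is the easier effect to show numerically: re-quantizing to 16 bits after every processing step accumulates rounding error, whereas staying in floating point and quantizing once at the end does not. It is an illustrative toy, not anyone's actual workflow.

    # Why extra precision helps during mixing (illustrative only):
    # re-quantizing to 16 bits after every gain change accumulates rounding error;
    # doing the math in float and quantizing once at the end keeps it small.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-0.5, 0.5, 48_000)            # one second of arbitrary "audio"

    def to_16bit(v):
        return np.round(v * 32767) / 32767        # snap to 16-bit quantization steps

    gains = [0.7, 1.3, 0.9, 1.1] * 25             # 100 small gain changes

    y_int, y_float = to_16bit(x), x.copy()
    for g in gains:
        y_int = to_16bit(y_int * g)               # re-quantize after every step
        y_float = y_float * g                     # stay in float the whole way
    y_float = to_16bit(y_float)                   # quantize once at the end

    reference = x * np.prod(gains)
    print(np.max(np.abs(y_int - reference)))      # error of several quantization steps
    print(np.max(np.abs(y_float - reference)))    # error within one quantization step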
  • by Anonymous Coward

    Why aren't they just called EarPods, to go with your EyePods?

  • by rossdee ( 243626 ) on Monday March 16, 2026 @04:55PM (#66044692)

    I think the African National Congress went downhill after the death of Nelson Mandela, so no doubt the Apple fans in South Africa will appreciate that.

  • That sounds like a tacit admission that Bluetooth bandwidth sucks. I guess putting WiFi in ear buds would draw too much power? (I'm only getting about 300,000 bits/second out of the BLE I'm currently working on, which isn't enough for true lossless hi-fi audio... or any decent video.)
    • Admission? It is well known that it does.
      High-bandwidth codecs (LDAC, aptX Lossless) only work at very short range (a few feet) before they have to downgrade to much lower quality streams.

      I can maintain a 990kbps LDAC stream sitting at my laptop, but it's trashed by the time I get to the door of my office.
    • I'm not sure who is "admitting" anything here. Bluetooth by definition is a low-bandwidth, low-energy personal area network protocol. It was never billed as being anything otherwise. Virtually all Bluetooth audio applications use lossy encoding, or an encoding method with continuously variable quality to cope with changing bandwidth availability.

      Bluetooth 5.2 has a maximum data throughput of 2.1Mbps. That is less than the 24-bit * 48000Hz * 2 channels required to transmit that data uncompressed. Faster speeds
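
To put numbers on the comparison, here is a quick sketch in plain Python; the 2.1Mbps, 990kbps, and 300kbps figures are the ones quoted by posters in this sub-thread, not verified specifications or measurements:

    # Sanity-checking the bandwidth figures quoted in this sub-thread.
    lossless_bps = 24 * 48_000 * 2              # 24-bit / 48 kHz / stereo PCM
    links = {
        "Bluetooth 5.2 (figure cited above)": 2_100_000,
        "LDAC at its top rate": 990_000,
        "the poster's observed BLE throughput": 300_000,
    }

    print(f"Uncompressed lossless needs {lossless_bps / 1e6:.3f} Mbit/s")
    for name, bps in links.items():
        verdict = "enough" if bps >= lossless_bps else "not enough"
        print(f"{name}: {bps / 1e6:.3f} Mbit/s -> {verdict} for uncompressed lossless")

Which is consistent with the parent's point: even before protocol overhead, none of the quoted link rates reach the roughly 2.304 Mbit/s an uncompressed 24-bit/48kHz stereo stream requires, hence the USB-C cable for lossless.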

  • by smoot123 ( 1027084 ) on Tuesday March 17, 2026 @10:25AM (#66045772)

    Technical question for the nerds in the crowd.

    Where exactly does the real-time translation run? I assume part is on the headphones themselves; my guess would be tokenizing the incoming sound. Is there something running on an iPhone (which I assume is a required accessory)? What runs on an AI back-end service?

    Anyone know the details?

  • Why the hell would my headphones have to process anything? I'm an old man unimpressed by unnecessary and costly tech!

    I just want my headphones to be a pair of small speakers. They don't need DACs, they need wires and vibrating membranes.
