
3D Audio Standard Released

CIStud writes: The Audio Engineering Society (AES) has released its new 3D Audio Standard (AES69-2015), covering topics such as binaural listening, which is growing with the increased use of smartphones, tablets, and other personal entertainment systems that deliver audio over headphones. The AES states that an understanding of the way the listener experiences binaural sound, expressed as head-related transfer functions (HRTFs), opens the way to personal 3D audio. The standard also addresses convolution-based reverberation processors in 3D virtual audio environments, whose use has likewise grown with the increase in available computing power.
  • This could be quite promising if incorporated into movies and video games. I watch almost all my movies and play most of my video games at home with headphones, partially so I don't annoy those around me and so I can watch movies while others sleep, but also because I can get much better sound quality out of headphones for a fraction of the price of comparable speakers. If they can get 3D audio working out of simple headphones so I can get full surround sound out of a decent pair of headphones, then I wou
    • > but also because I can get much better sound quality out of headphones for a fraction of the price of comparable speakers.

Doubtful. The distance between your ears and your headphones is far from long enough to experience lower frequencies.

      • by Anonymous Coward
You don't need a chamber large enough for a low frequency to resonate in order to hear that frequency. The biggest issue with headphones and lower frequencies is not how close they are to your ears but the size of the diaphragm. Deep rumbles still sound better with headphones; the only difference is I can't feel it over my entire body.
        • by Anonymous Coward

          Feeling it in your body is part of the fun of bass, to be sure. A simple transducer attached to the furniture you're sitting on (butt kicker, D-Box, etc) that receives the LFE signal from your audio processor would make a huge difference there.
          If the purpose of the headphones is to not disturb someone else in the room, you may need to pay attention to isolating your furniture from the floor and adjusting the intensity of the transducer such that it doesn't audibly vibrate the furniture.

      • by Theaetetus ( 590071 ) <theaetetus@slashdot.gmail@com> on Tuesday March 17, 2015 @01:43PM (#49277021) Homepage Journal

        > but also because I can get much better sound quality out of headphones for a fraction of the price of comparable speakers.

Doubtful. The distance between your ears and your headphones is far from long enough to experience lower frequencies.

        That makes no sense, both from a physics standpoint and a common sense standpoint. For the former, sound is varying air pressure over time. Distance doesn't come into it, and you don't need to run around in a space to hear low frequencies.

For the latter, the distance between ear buds and your eardrum is approximately what, half an inch? Sound travels at about 1,000 ft/sec, which works out to a wavelength of 1 ft at 1 kHz, 1 inch at 12 kHz, and half an inch at 24 kHz. If your ability to hear low frequencies depended on the distance between your ear and the source, then no one wearing earbuds could hear anything below 24 kHz, right? And since most people can't hear above 20 kHz, earbuds would just be silence generators, right?

        Or, if you wanted to hear the rumble of an approaching train in the distance, you wouldn't put your ear on the track because "the distance between it and your ear wouldn't be enough to hear low frequencies" and instead, you'd want to stand several feet away?

In fact, as noted above, the wavelength of a 1 kHz sound wave in air is about 1 ft. At 100 Hz it's 10 ft, and at 20 Hz it's 50 ft. How many people have 50 feet between their ears and their speakers? Or even 10 feet, in most living rooms?

        Your post makes no sense, no matter how you think about it.
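        (As a quick sanity check on the arithmetic above, here's a minimal sketch; it uses roughly 1,125 ft/s for the speed of sound in air, so the numbers come out slightly different from the parent's rounder 1,000 ft/s figure.)

```python
# Wavelength = speed of sound / frequency.  Illustrative only.
SPEED_OF_SOUND_FT_S = 1125.0  # approx. speed of sound in air at room temperature

for freq_hz in (20, 100, 1_000, 12_000, 24_000):
    wavelength_ft = SPEED_OF_SOUND_FT_S / freq_hz
    print(f"{freq_hz:>6} Hz -> wavelength ~ {wavelength_ft:8.3f} ft ({wavelength_ft * 12:7.2f} in)")
```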

    • Re: (Score:3, Interesting)

      by Anonymous Coward
      Back in the late 90s, my $20 A3D soundcard faked 3D sound so well, I could shoot people through walls in Counter-Strike. It was like I could "see" with my ears. I know several other friends who had this same experience. Ever since A3D got sued out of existence by Creative, I have never heard such good stereo 3D sound. Headphones are superior to speakers because the headphones remain fixed, equally distanced, not affected by the surrounding environment, and the sound from one does not bleed into the other un
      • Re: (Score:1, Interesting)

        by Anonymous Coward

        You're not wrong. I worked for a company selling an audio card that featured the Aureal A3D engine, and it was incredible compared to even what people are doing today.

        A clear example of "If you can't compete, sue." Also a clear example of patent law snuffing out innovation, rather than fostering it.

      • by nobuddy ( 952985 )

        I absolutely loved my A3D card. I was sad when SB bought them out then killed the line. It was a very promising technology.

      • I just miss hardware accelerated audio. I still don't understand why MS killed it.
    • by modecx ( 130548 )

      I have a set of Logitech G930 wireless headsets, which I rather like except for the fact that they're advertised as "7.1", which couldn't be a more false statement. Sure, the software interface presents itself as 7.1 discrete channels, but you still have only two drivers. They're reasonably good as stereo headphones, but for "surround" mode they use some Dolby surround psychoacoustics nonsense, which as far as I can tell, basically ups some reverb in the software preamp and makes everything sound like you'r

    • This could be quite promising if incorporated into movies and video games.

      There are already several platforms for object-based 3D audio in games, and they already offer solutions for binaural and HRTF listening.

      The AES has promulgated many standards with regard to file interchange and computer audio; they're always several years behind, chasing proprietary vendor technology that's already established (see AES31, a timeline interchange format supported by no one; even open source projects avoid it like the p

    • by antdude ( 79039 )

      The only thing missing is shaking your body with a subwoofer's bass. :(

  • i don't get it..... (Score:2, Interesting)

    by Anonymous Coward

    binaural = stereo
    3d audio = surround sound (5.1/7.1/8.1/etc)

    both have been around forever.

    • by tlhIngan ( 30335 )

      binaural = stereo
      3d audio = surround sound (5.1/7.1/8.1/etc)

      both have been around forever.

      As has speaker virtualization.

      This is basically another form of speaker virtualization -- the ability to simulate a surround sound system using headphones. They do work (since you only have two ears, they just have to reproduce how your ears hear each speaker), and they do keep you from having the "inside your head" feeling you get with stereo sound played on normal headphones.

      However, it's a bit more flexible in that

      • by mspring ( 126862 )
        Let me see if I fully understand it... What happens when I'm listening via headphones and I turn my head? Will the sound stay in the same absolute direction?
    • Isn't that a trifle reductive? It is true that you'll need multiple speakers surrounding the subject in order to convincingly fake apparent direction and the like (though you can do fairly well with just two headphone drivers, since listeners only have two ears and the stereo separation will be extremely good); but you can have arbitrarily many speakers and it isn't "3d audio" unless the signal going to each one has been crafted so as to present the illusion of sound location, distance, and so on; and you can have mere
      • by Anonymous Coward
        Speakers interfere with each other and sound reflects off of objects. Headphones are 100% unaffected. Much better signal to noise in the sense of the actual data received by the brain.
    • binaural = stereo
      3d audio = surround sound (5.1/7.1/8.1/etc)

      No, binaural is one presentation modality for a 3D soundstage. You could do it with any given set of speakers if you have the right convolution matrix. Stereo imaging was what I worked on for an undergrad project in the early '90s. We had 5.1 available at the time and nobody called it '3D audio' -- "Surround Sound" was the common parlance.

    • by iluvcapra ( 782887 ) on Tuesday March 17, 2015 @01:08PM (#49276761)

      3d audio = surround sound (5.1/7.1/8.1/etc)

      "5.1/7.1/8.1" doesn't have an elevation component. Certain IMAX formats did, as did some experimental 70mm formats in the 70s, but it hasn't really been widely available before Dolby ATMOS and Barco Auro.

      The big difference with the traditional X.Y formats is that these regard individual screen channels as discrete: when films are mixed, sound sources are hard-assigned to certain speaker channels, and the speaker placement has to be matched in every venue. "3D" systems use procedural methods to assign each sound source a vector or coordinate as metadata, and a decoder at the receiving end does the job of assigning speakers, which may differ in placement and number from venue to venue.

      Something mixed in 5.1 or 7.1 can be "downmixed" to stereo by summing channels together and applying pan and gain to position the multichannel sources in a stereo field. But a stereo signal can't really be "upmixed" to 7.1: the positions of individual sound sources are lost and can't be extracted from the mix. There are fancy ways of "spatializing" stereo mixes to 5.1 or 7.1 with Fourier analysis, panning certain phase correlations or frequencies to different speakers, but there's really no way for a spatializer to split the celli from the violas and pan them separately, or the machine guns from the explosions.

      3D audio formats keep the violas and celli on separate streams in the file, and then use position metadata to do the speaker mix in the receiver, so something mixed on stereo or 5.1 speakers could be re-rendered to a 7.1, 11.1, or 64-channel setup and you would actually get more fidelity.
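      (To make the "position metadata in the file, speaker assignment at the receiver" idea concrete, here is a minimal sketch. It is not Atmos, DTS:X, or anything from AES69; the simple pairwise constant-power panning and every name in it are my own illustration.)

```python
import math

def pair_and_gains(azimuth_deg, speaker_azimuths_deg):
    """Pick the two loudspeakers bracketing a direction and return
    constant-power pan gains between them (simplified pairwise panning)."""
    azis = sorted(a % 360.0 for a in speaker_azimuths_deg)
    n = len(azis)
    az = azimuth_deg % 360.0
    for i in range(n):
        lo, hi = azis[i], azis[(i + 1) % n]
        span = (hi - lo) % 360.0 or 360.0
        offset = (az - lo) % 360.0
        if offset <= span:
            frac = offset / span  # 0 at 'lo', 1 at 'hi'
            return (lo, math.cos(frac * math.pi / 2.0)), (hi, math.sin(frac * math.pi / 2.0))
    raise ValueError("no bracketing speaker pair found")

def render_objects(objects, speaker_azimuths_deg, num_samples):
    """objects: list of (mono_samples, azimuth_deg).  Returns one buffer per
    loudspeaker azimuth, so the same object mix can target 2.0, 5.1, 7.1 or a
    64-speaker rig just by changing the layout list handed to the renderer."""
    outputs = {a % 360.0: [0.0] * num_samples for a in speaker_azimuths_deg}
    for samples, azi in objects:
        for spk_az, gain in pair_and_gains(azi, speaker_azimuths_deg):
            buf = outputs[spk_az]
            for i, s in enumerate(samples[:num_samples]):
                buf[i] += gain * s
    return outputs

# Same "mix" rendered to two different layouts:
beep = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
mix = [(beep, 30.0)]                                   # one object, 30 degrees to the right
stereo = render_objects(mix, [-30, 30], 8000)          # plain stereo pair
seven_ch = render_objects(mix, [0, -30, 30, -90, 90, -150, 150], 8000)
```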

    • by steveha ( 103154 ) on Tuesday March 17, 2015 @02:42PM (#49277503) Homepage

      binaural = stereo

      Actually, in the audio world "binaural" specifically means a recording intended to be played directly into the ears.

      I was once present for a binaural recording session. The guy doing the recording had brought a fake human head, and the two microphones for the recording were positioned in the two ears. The idea was to reproduce as fully as possible what you would have heard if you had been sitting in that spot in the room, with your head in that position.

      You can listen to a binaural recording on speakers of course, but for the best experience you should use headphones.

      For the absolute best experience, the recording should use a fake head that is exactly like your head. Not many people are ever going to experience that.

      http://en.wikipedia.org/wiki/Binaural_recording [wikipedia.org]

      Audio can do funny things as it travels around your head. For the absolute best 3D experience with headphones, you want to measure what happens to audio around your head; this is called your "Head-Related Transfer Function" or "HRTF". Instead of recording the audio with a fake head shaped just like yours, companies can just make a good 5.1 or 7.1 recording, and then you can mix that down to a binaural stereo mix that is perfect for your head, if you have your HRTF. According to the article, the AES is standardizing a file format for HRTF data, so the software you get will be more likely to work with your HRTF data if you have it measured.

      The ultimate in VR audio will be headphones with motion tracking, and real-time mixing that uses your HRTF and changes the mix as you turn your head. If something is supposed to be coming from your left, and you turn your head to the left, that sound should get louder; then if you turn your head away from it, it should get quieter. If this is done right it should be incredible. People have been working on this for years and I'm sure someone somewhere has done it right, but I haven't seen any commonly available products to do it yet.

      But with VR goggles you should totally have VR audio like I described above. It would be really immersive.
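      (A rough sketch of what that head-tracked binaural mixing could look like. Purely illustrative: the HRIR bank, the nearest-direction lookup, and every name here are my assumptions, and a real renderer would interpolate between measured directions instead of snapping to the nearest one.)

```python
import numpy as np

def render_binaural(sources, hrir_bank, head_yaw_deg=0.0):
    """sources      : list of (mono_samples, azimuth_deg) in room coordinates.
       hrir_bank    : {azimuth_deg: (left_hrir, right_hrir)} -- a measured HRTF
                      set, i.e. the kind of per-person data AES69 standardizes
                      a file format for.
       head_yaw_deg : head orientation from a tracker; subtracting it keeps
                      sources fixed in the room instead of turning with the head."""
    hrir_len = max(max(len(l), len(r)) for l, r in hrir_bank.values())
    out_len = max(len(s) for s, _ in sources) + hrir_len - 1
    left, right = np.zeros(out_len), np.zeros(out_len)

    for samples, room_az in sources:
        relative_az = (room_az - head_yaw_deg) % 360.0
        # pick the nearest measured direction (real renderers interpolate)
        nearest = min(hrir_bank, key=lambda a: min((a - relative_az) % 360.0,
                                                   (relative_az - a) % 360.0))
        hrir_l, hrir_r = hrir_bank[nearest]
        l, r = np.convolve(samples, hrir_l), np.convolve(samples, hrir_r)
        left[:len(l)] += l
        right[:len(r)] += r

    return left, right
```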

      3d audio = surround sound (5.1/7.1/8.1/etc)

      Pretty much, 3D audio is intended to include speakers above the plane of the 5.1 or 7.1 speaker setup; the industry calls these "height speakers". DTS 11.1 audio, for example, has a standard 7.1 setup, and then 4 height speakers: two in the front and two in the back.

      The current ultimate in 3D audio is a 22.2 setup, where the ceiling has a 3x3 array of speakers, there are speakers at mid height, and there are speakers at ground level. However, IMHO there is zero chance that 22.2 will catch on as an audio standard.

      Before the 5.1 and 7.1 digital standards, there was Dolby Surround that was encoded within a stereo soundtrack. A simple audio mixer could "upmix" from stereo to surround. DTS Neural Upmix can make a very clean 7.1 from a stereo signal, and it works from an analog signal (it's not something tricky inside a digitally encoded format). You can't get 8 kilograms of flour into a 2-kilo bag, and Neural Upmix 7.1 can't completely reproduce the same mix as you can play through 8 discrete channels, but it can provide a good experience.
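      (For reference, that kind of matrix encoding is basically a sum-and-difference trick. A minimal sketch of the idea, with made-up function names; it is simplified in that the real Dolby encoder also band-limits the surround and applies a +/-90 degree phase shift to it, which is omitted here.)

```python
def matrix_encode(left, center, right, surround, k=0.7071):
    """Fold L/C/R/S into a two-channel Lt/Rt, Dolby Surround style (simplified)."""
    lt = [l + k * c + k * s for l, c, s in zip(left, center, surround)]
    rt = [r + k * c - k * s for r, c, s in zip(right, center, surround)]
    return lt, rt

def matrix_decode(lt, rt, k=0.7071):
    """Passive decode back to L/C/R/S.  Adjacent-channel separation is only
    about 3 dB, which is why 'steered' active decoders (Pro Logic et al.) exist."""
    center   = [k * (a + b) for a, b in zip(lt, rt)]
    surround = [k * (a - b) for a, b in zip(lt, rt)]
    return lt, center, rt, surround
```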

      DTS 11.1, as I understand it, uses technology similar to DTS Neural Upmix to encode the 4 "height" channels within the other 7.1 channels. Turning 7.1 into 11.1 should be a lot easier than turning 2.0 into 7.1 so it should provide a good experience.

      I expect the industry to go to "object oriented" audio. This means that audio will have metadata tags saying what direction the audio is coming from, and then a real-time mixer upmixes from the digital format with the metadata tags to whatever mix you need (i.e. if you have 11.1 speakers you get an 11.1 mix, if you actually have 22.2 speakers you get that, if you have 7.1 you get that, etc.) I believe Dolby Atmos works this way, and I believe DTS

      • Before the 5.1 and 7.1 digital standards, there was Dolby Surround that was encoded within a stereo soundtrack. A simple audio mixer could "upmix" from stereo to surround. DTS Neural Upmix can make a very clean 7.1 from a stereo signal, and it works from an analog signal (it's not something tricky inside a digital encoded format).

        There's a fundamental difference between an encoded mix and an upmixer. Dolby Surround is intended to be decoded from 2 tracks into LCRS, the filmmakers mixed the film in Dolby St

        • by steveha ( 103154 )

          There's a fundamental difference between an encoded mix and an upmixer. Dolby Surround is intended to be decoded from 2 tracks into LCRS, the filmmakers mixed the film in Dolby Stereo and were listening to the surrounds so they know what's in them. The phase encoding is part of the channel spec.

          I'm with you so far.

          An upmixer takes a stereo or 5.1 mix and applies effects to it to make it sound like it was mixed in a wider format, but there's nothing really being decoded, it's just synthesizing or guessing wh

          • Despite the name "Neural Upmix", it is designed to work with phase-encoded signals intentionally mixed using Neural Downmix.

            They sell it as doing both, it's marketed as a spatializing upmixer that can also decode Neural Surround (which is a third format not necessarily related to Neo:X). But this feature is sorta incidental, as literally nothing is mixed in Neural Surround.

            I don't know what you mean by "DTS 11.1 is an actual format"... if you mean that it has 12 discrete channels, I believe you are mistake

            • by steveha ( 103154 )

              [DTS Neural Upmix is] marketed as a spatializing upmixer that can also decode Neural Surround (which is a third format not necessarily related to Neo:X).

              No, there is no "Neural Surround" format as such. Neural Downmix uses phase encodings and the output is just an audio stream (can be analog, saved as a wave file, saved as DTS Master Audio, saved as MP3, etc.).

              Look at this PDF. There are two columns: one shows different disk formats and how many bits per second each one needs; the other column has one thi

              • There is no disk format for "DTS Neural Surround" as such.

                Right, it's a phase matrix format, like Dolby Stereo or Pro Logic II. And like those, you can listen to the LtRt as a plain left-right and you'll still hear something that's tolerable stereo, it's just perilous for certain applications (it'll have bad mono compatibility).

                See also this press release. A radio station was broadcasting in 5.1 using Neural Surround... broadcasting in ordinary stereo FM as well as HD radio. Anyone could listen in stereo,

                • by steveha ( 103154 )

                  Now that you explained your points I don't think I disagree with you about any of the technical stuff.

                  I interpreted "nothing is mixed in Neural Surround" as "Neural Surround does not mix anything" which wasn't your intent. I agree that there isn't much content in Neural Surround; that press release was from 2006, and I don't know if that radio station is still doing the 5.1 broadcasts or not.

                  I'm a feature film sound designer and mixer, DTS is completely out of theatrical and television -- the original thea

                  • That's what I want for my living room anyway.

                    Buy a Home Atmos rig. I've been going around to Gearslutz [gearslutz.com] and asking around here at the studio and nobody has even heard of Neo:X.

                    • by steveha ( 103154 )

                      Buy a Home Atmos rig

                      I'm not in a big hurry, and I want to see what DTS comes out with before I invest in an object-oriented sound system. Also I'm not in a hurry to bolt speakers to my ceiling and run the wires.

                    • I'm not in a big hurry, and I want to see what DTS comes out with before I invest in an object-oriented sound system.

                      Seriously, don't bother, all the features and TV shows doing objects mixing now are mixing in Atmos, and the DTS-Barco system is a vaporware "open spec" that's DOA.

                      You don't have to do a ceiling installation for home Atmos; I've heard good things about top-firing speakers [amazon.com].

              • But this feature is sorta incidental, as literally nothing is mixed in Neural Surround.

                Oh you don't understand, by "nothing" I mean no films or television shows are distributed using a Neural Surround downmixer. Because of this, there's no reason to have a Neural Surround decoder for your home.

  • by Anonymous Coward on Tuesday March 17, 2015 @11:51AM (#49276059)

    This is going to sound incredible on gold plated 3d monster cables!!

    • Poseur! If your inputs aren't pristinely represented, you'll hear nothing except your hissy Monster cables. I knew you were a fraud when you didn't say to use this [amazon.com], too!

  • by rodrigoandrade ( 713371 ) on Tuesday March 17, 2015 @11:56AM (#49276105)
    Let's not forget that this means new DRM and additional difficulty getting shit that we paid for to work together.
  • by Anonymous Coward

    So is it patent-encumbered?

  • OpenAL? (Score:2, Redundant)

    by ilsaloving ( 1534307 )

    What's wrong with OpenAL?

    I love standards! There are so many to choose from!

    • Or, you know, I could RTFA and find out that it's actually an effort to create a FILE FORMAT for sharing 3d spatial audio data. Dunno if there's already such a thing, but if there isn't then it definitely makes sense to have one.

      • The Ogg file format has supported multiple streams pretty much since inception. Couple this with a bit of positional tagging information and you're done.

        • The Ogg file format has supported multiple streams pretty much since inception. Couple this with a bit of positional tagging information and you're done.

          Yeah, but this thing isn't just positional tagging; it's 3D soundscape stuff. So you have to have a way of communicating to the receiver the kind of space the audio stream is in -- the size of it, the general shape, how reflective the surfaces are, diffusion, the position of the space relative to the source, etc. -- and then you have to rigorously define the

          • Every electronic space/reverb algorithm I've heard just adds distortion and makes the original sound worse. I prefer to hear just the original instruments in as pristine quality as possible as if we were in an infinite volume room.
            • Every electronic space/reverb algorithm I've heard just adds distortion and makes the original sound worse.

              Well, by definition any signal-dependent component a process adds to an original signal is distortion. :) I just don't think you've heard good ones. Also, part of doing good music mixing is using reverb in a way that people don't notice, or just accept as natural. There are also applications of reverb [youtube.com] that don't sound like reverb [youtube.com].

              I prefer to hear just the original instruments in as pristine quality
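              (For what it's worth, the "convolution-based reverberation" the summary mentions boils down to convolving the dry signal with a measured room impulse response. A minimal sketch, with made-up parameter names and an arbitrary wet/dry mix:)

```python
import numpy as np

def convolution_reverb(dry, room_impulse_response, wet_mix=0.3):
    """Convolve a dry signal with a measured impulse response via the FFT.
    The room character comes entirely from the recorded IR, unlike an
    algorithmic reverb built from delay networks.  Illustrative only."""
    dry = np.asarray(dry, dtype=float)
    ir = np.asarray(room_impulse_response, dtype=float)
    n = len(dry) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()                      # next power of two
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)[:n]
    out = wet_mix * wet
    out[:len(dry)] += (1.0 - wet_mix) * dry
    return out
```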

          • At the time of posting, the parent post is the only informative one in this discussion, and stands out among ignorant posts asking whether OpenAL isn't enough (this isn't about an API, FFS!) or being paranoid about DRM -- things that would have been avoided had those posters RTFA and made sure they had minimum knowledge of the subject area before rushing to publicize their opinions.
  • Not really anything regarding stereo, but how to digitally recreate a 3D space and provide the resultant acoustic signature to stereo headphones? So, you could digitally model Carnegie Hall, or a warehouse, or a coffee shop, and if you know the locations of your point sources of audio you can then create what the room would sound like based on a given listener location and orientation? It sounds (a bit like) raytracing for audio, with the format allowing a standardized way to define the space.

    Yes? No? For once, I think we actually need an *article* to go with this abstract, or at least a Bennet Haselton-style rant* as the summary.

    *except factual, useful, and correct.

    • Not really anything regarding stereo, but how to digitally recreate a 3D space and provide the resultant acoustic signature to stereo headphones?

      We can do this without any fancy computers; traditionally someone would make a binaural recording with a dummy head [wikipedia.org].

      So, you could digitally model Carnegie Hall, or a warehouse, or a coffee shop, and if you know the locations of your point sources of audio you can then create what the room would sound like based on a given listener location and orientation?

      It's not

    • Not really anything regarding stereo, but how to digitally recreate a 3D space and provide the resultant acoustic signature to stereo headphones? So, you could digitally model Carnegie Hall, or a warehouse, or a coffee shop, and if you know the locations of your point sources of audio you can then create what the room would sound like based on a given listener location and orientation? It sounds (a bit like) raytracing for audio, with the format allowing a standardized way to define the space.

      Yes? No? For once, I think we actually need an *article* to go with this abstract, or at least a Bennet Haselton-style rant* as the summary.

      *except factual, useful, and correct.

      Kind of... You know how, even though you only have two ears, you can still determine whether a sound is coming from in front of you, behind you, above you, below you, etc.? You don't need 5 ears or 7 ears or whatever surround-sound standard you think of, and yet you still get a great 3D image. It has to do with some complicated math our brains are instinctively doing: measuring the interaural phase differences of low-frequency signals received at each ear, and the interaural timing and amplitude differences of
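      (Those interaural cues can be put into rough numbers. A small illustration using Woodworth's classic spherical-head approximation; the head radius and the clamp at 90 degrees are simplifying assumptions.)

```python
import math

HEAD_RADIUS_M = 0.0875     # typical adult head radius, ~8.75 cm
SPEED_OF_SOUND = 343.0     # m/s

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a source at the given azimuth (0 = straight ahead,
    90 = directly to one side).  Roughly the cue the brain uses at low
    frequencies; at high frequencies level differences dominate."""
    theta = math.radians(min(abs(azimuth_deg), 90.0))
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"{az:>3} deg -> ITD ~ {interaural_time_difference(az) * 1e6:5.0f} microseconds")
```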

  • Aureal Vortex 2 (Score:3, Insightful)

    by J-1000 ( 869558 ) on Tuesday March 17, 2015 @12:49PM (#49276603)

    Interactive 3D sound was incredible on my sound card with a Vortex 2 chipset back around 1999. After their acquisition by Creative Labs I've heard very little good 3D sound. Is it really that uncommon, or am I just numb? Seems odd that we're still trying to get this figured out.

    • It seems to me that the 3d audio in every single fps (first person shooter) has been just fine for 20 years. I suppose having a standard is good though.
      • You never had an A3D card, did you? 3D audio was incredible back in the 90s with this card. Then Creative got in and sued them out of existence. Since then 3D audio has been nothing but procedural fakery done in software with a few effects applied (think the audio equivalent of Doom sprites always facing the viewer's camera but actually being 2D).

        3D Audio in games took a colossal step back in the 90s and never recovered. Heck even a few months ago on Slashdot there was an article about a company trying to re

        • It was something of a dick move by Creative, especially since first gen EAX didn't compare to A3D. Later iterations were amazing though. Until MS killed hardware audio.
          • As a matter of interest, what did MS have to do with it? I was under the impression that Creative effectively drove the path towards software audio.

            • Not so far as I've found. Creative bet big on hardware audio, but in Vista Microsoft re-wrote the audio stack, eliminating direct access to audio hardware so DirectSound and DirectSound3D weren't direct anymore.

              (http://en.wikipedia.org/wiki/DirectSound#Windows_Vista.2FWindows_7)

              I've never seen any explanation from MS for why they did it, just rumors that it had something to do with DRM. No matter their reasons, it was devastating for Creative. They differentiated their products by focusing on positi

              • I would actually disagree with this history slightly. Hardware audio was dead long before Vista came out. Creative bet big on accelerating something that didn't actually need accelerating. Audio processing used a tiny TINY portion of the CPU time and was simply not worth buying hardware for. The inclusion of on-board 7.1 sound which wasn't appreciably worse than what Creative offered for anyone other than someone who actually did audio recording is what killed Creative. Even then most professionals were goi

                • Well, swappable op-amps are something audiophiles love. And although the theoretical CPU overhead of software audio seems negligible, the actual impact was not, especially considering that the change occurred when single-core CPUs still dominated. Much of that is probably due to bugs in implementation (I still see audiodg rape my resources on occasion), but even if something isn't generally necessary, that doesn't mean it should be eliminated. In fact, MS has restored some degree of direct access, but on
                  • I didn't mean to imply that audio processing was a zero hit on the CPU; I meant that for the most part games aren't limited by CPU. Actually, I've never owned a gaming system that was limited by CPU, though I guess with a very top-of-the-line system you may end up with that scenario and I may be speaking out of ignorance.

                    Also my opamps comment was more tongue in cheek. Audiophiles typically have an irrational hatred of opamps, and those that do understand the phenomenally high performance available in an inte

                    • I don't, and I don't entirely understand it either, but swappable op-amps are common on high-end cards. And I know, if it's so high-end, why would you want to swap components?

                      I've never had a top-of-the-line system myself, but I think that's why the CPU hit was so noticeable. I was doing everything imaginable to free up resources for Oblivion: upping its priority, stopping services, even closing Explorer! In the end I was left with a dilemma: deal with the bugs in the new audio stack that hit my framerat

    • by Prune ( 557140 )
      3D positional audio is only a solved problem in the special cases where you have either (a) made binaural recordings — microphones in the ears of a dummy head with an HRTF known to be sufficiently similar to the listener's (or the listener's actual head!), or (b) have all the original positional information about the sound sources, and all environmental information affecting propagation and reverb, to compute the total wavefront from all directions converging at the listener's virtual head position, a
    • Re:Aureal Vortex 2 (Score:4, Insightful)

      by Dutch Gun ( 899105 ) on Tuesday March 17, 2015 @08:30PM (#49279975)

      We're not still trying to figure this out. It's that we *can't* use it.

      HRTF is a patent minefield, thanks to Creative Labs and a few others. It's extremely difficult to develop decent software HRTF functions without stepping on someone's patent. No one in the videogame industry bothers with hardware acceleration anymore as it's not supported, and even if it were, it would be far too limiting (think fixed-pipeline versus shader-based rendering). As such, we're pretty much stuck with basic pan/volume simulation of 3D sound, perhaps with a bit of low-pass filtering if you're lucky.
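      (For contrast with real HRTF processing, that basic pan/volume fallback is roughly the following. The constants and the crude one-pole low-pass are arbitrary choices of mine, not any particular engine's code.)

```python
import math

def simple_3d_voice(samples, azimuth_deg, distance_m, max_distance_m=50.0):
    """Basic pan/volume '3D' voice: constant-power stereo pan from azimuth,
    gain falloff with distance, and a gentle low-pass for sources behind the
    listener to hint at occlusion by the head.  Illustrative only."""
    # constant-power pan: -90 deg = hard left, +90 deg = hard right
    pan = max(-1.0, min(1.0, math.sin(math.radians(azimuth_deg))))
    gain_l = math.cos((pan + 1.0) * math.pi / 4.0)
    gain_r = math.sin((pan + 1.0) * math.pi / 4.0)

    # simple linear distance attenuation
    gain = max(0.0, 1.0 - distance_m / max_distance_m)

    # crude "behind you" cue: one-pole low-pass when the source is rearward
    if abs(azimuth_deg) > 90.0:
        alpha, prev, filtered = 0.3, 0.0, []
        for s in samples:
            prev = prev + alpha * (s - prev)
            filtered.append(prev)
        samples = filtered

    left  = [gain * gain_l * s for s in samples]
    right = [gain * gain_r * s for s in samples]
    return left, right
```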

      Fortunately, many of those patents should be expiring soon, and CPUs are now plenty powerful enough to perform those calculations in software. So, we may see high-quality HRTF functions make a comeback in the next five years or so. Kind of sad, really.

    • I miss that sound card. Being called a cheater in Counter Strike because I could tell exactly where people were was so satisfying.

    • Creative had great hardware accelerated 3D audio (EAX) which they licensed out to other manufacturers, but Microsoft killed any such thing when they deprecated DirectSound in Vista. I'm still kind of ticked off that I have this great DSP that goes completely unused while audiodg eats up my CPU.

      I've heard it was for DRM reasons, which just upsets me further.

      • I loved some of my creative cards, but I heard MS killed it because they were sick of sound card drivers, especially creative ones, being such a huge source of BSODs.

        • What are you talking about? Creative writes the best drivers ever! I can't think of a single problem they ever causIRQL_NOT_LESS_OR_EQUAL
  • by Anonymous Coward

    I am a "taper" meaning I go to concerts, record them legally with permission of the band, and then give them away, such as uploading them to the Live Music Archive on Archive.org (I'm SmokinJoe). It's a hobby, not a business model. Some recordings sound good with headphones, and some don't, for various reasons.

    HRTF or "binaural" recordings generally involve head mounted omnidirectional mics, and sound great with headphones, but not so great in a living room playback setting. More often than not we use di

  • by Anonymous Coward

    Can someone please post the link to the XKCD comic about standards so they can be modded Funny/Informative?

  • Does this mean that when I'm listening via headphones and I turn my head, an acoustic source will stay in the same absolute direction, or does it stay in the same direction relative to my (turning) head, which is today's headphone experience?
  • Surprised that it's taken this long to get to market. The technology has been around for years.

    Lake Technology in Australia pioneered this technology way, way back. Lake was bought by Dolby back in 2003 [dolby.com] and the technology re-labelled as Dolby Headphone [wikipedia.org]. Their technology uses HRTFs.
