
Using 'binaural' audio as special type of 'direct speaker' #282

WernerBleisteiner opened this issue Jun 8, 2024 · 7 comments

@WernerBleisteiner

I'm wondering if we hadn't already touched on this earlier (and I've simply forgotten the outcome)...
In a workshop on immersive radio drama at HdM Stuttgart (now the domain of Prof. Dr. F.M.) we came across this use case:

There might be a (pre-rendered or dummy-head stereo) binaural audio element (cf. BS.2076-2, typeDefinition 005) that needs to be added as a special case of 'direct speaker'.

When rendering the ADM binaurally, this element should not be rendered again, but simply added to the (BEAR-)rendered output.

I reckon this is also somehow related to the 'audioBlockFormat headLocked' function.

Whether this element should then optionally also rotate with head-tracking is debatable, or depends on the technical implementation.

It is also open how this signal type should be treated in loudspeaker monitoring: +/-90°?
But creators might also consider additionally processing and routing the primary binaural signal through up-mix plug-ins to 'widen' it, and defining alternative 'loudspeaker-dedicated' audioProgrammes.

My naïve imagination:
adding a DirectSpeakers type 'binaural'
with a check-box 'bypass binaural ADM rendering'
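For illustration, here is a minimal sketch of how such an element might look in the axml, using the Binaural type (typeLabel 0005) that BS.2076 already defines. All IDs and names here are illustrative assumptions, not common definitions:

```xml
<!-- Sketch only: illustrative IDs/names, not from any common definitions -->
<audioPackFormat audioPackFormatID="AP_00051001"
                 audioPackFormatName="BinauralBed"
                 typeDefinition="Binaural" typeLabel="0005">
  <audioChannelFormatIDRef>AC_00051001</audioChannelFormatIDRef>
  <audioChannelFormatIDRef>AC_00051002</audioChannelFormatIDRef>
</audioPackFormat>
<audioChannelFormat audioChannelFormatID="AC_00051001"
                    audioChannelFormatName="BinauralLeft"
                    typeDefinition="Binaural" typeLabel="0005">
  <!-- Binaural-type blocks carry no positional sub-elements -->
  <audioBlockFormat audioBlockFormatID="AB_00051001_00000001"/>
</audioChannelFormat>
```

The 'bypass binaural ADM rendering' behaviour would then be a renderer/plug-in option, not metadata.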

Thanks.

@firthm01

@tomjnixon and I had a brief chat about this, probably a year or so ago.
We should probably have a distinct plugin for Binaural type as we do for the other 3 supported types, but that's not a problem.
The problem is, there are no rules for handling "Binaural" types in Tech 3396 (BEAR) and it is specifically unsupported in EAR/BS.2127. If we want to keep the EPS standards-compliant (which we really should as it's designed in part to promote those standards), then this should be tackled first really.
Unfortunately headLocked and headphoneVirtualise are also unsupported at the moment, so there's no easy way to get around it using a DirectSpeakers type either.

@WernerBleisteiner

Thanks for your feedback @firthm01 and @tomjnixon !
I'm very aware of the complexity and present limitations of that.
However, I've worked out a little demo in Reaper/EPS for the students as inspiration, showing how this might be tackled for the moment:

  • During production, with some caution, one might be able to handle this with routings, monitor settings and dedicated audioProgrammes
  • optionally, creators may 'upmix' the binaural track(s) into multi-channel to broaden it
    (in this radio drama, binaural is only used for ambience sounds)
  • the rendered ADM would still contain all elements
    (at the moment it's just a DAW session, and they would render binaural out from a plug-in)
  • some manual corrections in the axml could fix the missing 'binaural' definitions
    (I've attached an edited version from my naïve view - please have a look at lines 56ff)
  • instead of rendering the binaural version of the final production with EAR, one would render (bounce) the
    'BINAURAL-MONITOR-MASTER' track instead.
    EPS two objects, one stereo one binaural bed plus FOA -TEST-1-ADM.zip

What do you think?
w

@firthm01

firthm01 commented Jun 13, 2024

Yes, that could work although I'm not sure how good the LS upmix reproduction of the binaural asset would be for general use.

For your edited AXML, the audioChannelFormatID elements under the binaural audioObject aren't necessary, but you would need to make sure that you update the two audioTrackUID elements (ATU_00000009 and ATU_0000000a) to use the correct audioChannelFormatIDRef and audioPackFormatIDRef. This would need changing in CHNA too.
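If I read that fix correctly, the updated audioTrackUID elements would look roughly like this. The AC_/AP_ IDs below are illustrative assumptions and must match whatever binaural channel/pack definitions the edited AXML actually declares:

```xml
<!-- Sketch only: substitute the real binaural channel/pack IDs -->
<audioTrackUID UID="ATU_00000009" sampleRate="48000" bitDepth="24">
  <audioChannelFormatIDRef>AC_00051001</audioChannelFormatIDRef>
  <audioPackFormatIDRef>AP_00051001</audioPackFormatIDRef>
</audioTrackUID>
<audioTrackUID UID="ATU_0000000a" sampleRate="48000" bitDepth="24">
  <audioChannelFormatIDRef>AC_00051002</audioChannelFormatIDRef>
  <audioPackFormatIDRef>AP_00051001</audioPackFormatIDRef>
</audioTrackUID>
```

The CHNA chunk rows for those two UIDs would need the same references updated so the BW64 file and the axml stay consistent.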

@tomjnixon

Hi, some random thoughts on this:

  • it's not supported in BS.2127, because the ADM doesn't contain enough metadata to represent programmes that contain binaural elements, but can also be rendered to loudspeakers. It's reasonably obvious that Binaural-type content should be ignored when rendering to loudspeakers, but what is needed is a way to represent content which is only rendered on loudspeakers.

    For example, if your programme contains a binaural recording of some speech to get nice near-field effects, when rendered to loudspeakers there should probably be an Object-type track containing equivalent content. Sometimes it may be acceptable to route the binaural track to loudspeakers, but that is not ideal, so should not be the only option.

    I don't want to define a behavior in BS.2127 until BS.2076 is clear about what should happen, because otherwise BS.2076 could end up saying the opposite and making a mess.

  • I don't think there's any reason to not support it in the BEAR, because the behavior for binaural output is obvious. It's not useful for content intended to be reproduced both binaurally and on loudspeakers, but it would still make the process of producing binaural-only content better.

  • Given that situation, it seems like implementing it in the EPS might cause confusion, as using a binaural-type plugin would mean that the whole programme could no longer be rendered on loudspeakers. I can't remember how the support for multiple programmes works in the EPS, but maybe it could work if you had one programme for loudspeaker rendering and one for binaural rendering? Maybe there could be some other work-around, like making the user choose what to do with binaural content.

As for headLocked and headphoneVirtualise, these should both be easy enough to implement (except for the DRR, which needs more DSP work). The only problem I see with using headphoneVirtualise bypass for this is that your binaural content has to be placed between +-30 and +-110 degrees, which might be ok.
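For what it's worth, a sketch of that headphoneVirtualise bypass work-around as I understand it from BS.2076-2. The IDs, names and the ±90° placement (within the ±30..±110 range mentioned above) are illustrative assumptions:

```xml
<!-- Sketch only: one channel of a DirectSpeakers pair at +90°, with
     virtualisation bypassed so the binaural track passes straight through -->
<audioChannelFormat audioChannelFormatID="AC_00011101"
                    audioChannelFormatName="BinauralPassThroughL"
                    typeDefinition="DirectSpeakers" typeLabel="0001">
  <audioBlockFormat audioBlockFormatID="AB_00011101_00000001">
    <speakerLabel>M+090</speakerLabel>
    <position coordinate="azimuth">90.0</position>
    <position coordinate="elevation">0.0</position>
    <headphoneVirtualise bypass="1" DRR="0"/>
  </audioBlockFormat>
</audioChannelFormat>
```

On loudspeakers the pair would simply be routed to (or panned toward) ±90°, which is the open monitoring question from the original post.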

@WernerBleisteiner

Thanks @firthm01 and @tomjnixon for your explanations.
I know that there are a lot of bits and pieces missing to make this specific use case work one day.

However, I think ADM and its whole eco-system is eventually also intended to describe and allow for combining all formats.
I must admit, I was quite stunned that the students at HdM deliberately decided to try and combine 5th-order ambisonics for FX (reverb) along with binaural ambiences, plus stereo, plus single dialogue objects.
And this in a complex workflow, where Pro Tools will deliver the final mix (just binaural, no stereo, no multi-channel, no ambisonics, no Atmos) while Reaper is used outboard to process the 5th-order ambisonics...
This is why I thought: hey, ADM and EPS could handle all these formats at once, delivering an ADM as an archivable master while keeping all these elements/stems in their original format, including proper description, for further exploitation (instead of two DAW sessions for Pro Tools and Reaper, which might never be merged again like that on any other system).

They can still stick to their main deliverable, the binaural mix, by simply bouncing it from a master track (and not rendering it from ADM). And sure, an upmix from binaural to multichannel (or ambisonics) is just a makeshift; it depends on the input material and needs careful judging whether it suits.
This was just trying to nudge them into that. And as some of them are also into coding, they might even join the community and help to solve the related issues... ;)

And one thing on playing binaural on loudspeakers: this has been a discussion for 50 years, ever since ARD started to produce dummy-head stereo at large (until the early 80s).
In practice, the rejection of loudspeaker reproduction often (mostly) seems a bit academic to me.
I remember well some demos of solution approaches in Salford when I was there some years ago. Very impressive, very good - but unfortunately nothing that has gone into any products so far.
To give you a contrary example: BR's first Atmos-produced radio drama was only aired in binaural on DAB+ and FM. The first scene of it is located in a taxi. I was in my car during the broadcast, just a simple stereo system with some 6 speakers built in. It was quite an immersive experience there, too, due to the similar acoustics and the 'binaural cues' being reproduced.

@hockinsk

hockinsk commented Jul 2, 2024

FWIW Dolby do use an AC4-IMS binaural stream rendered from the ADM for all Atmos-enabled stereo devices such as laptops, tablets and phones playing e.g. Tidal, Amazon Music, Apple Music etc. My Asus laptop has this option. When I tested what they were doing to the headphone binaural stream, it seemed to be a variation on RACE, i.e. Ambiophonics, where there's a cross-talk filter. This way they use the same stream for headphone binaural and speaker ambiophonics.
Little demo here, where I rendered the AC4-IMS headphone binaural, played it through ambio.one RACE and matched the AC4-IMS. AFAIK head-tracking isn't possible yet on AC4-IMS; I assume it would need the full AC4 multichannel stream, but that seems broadcast-only, not consumer music streaming yet. Apple of course use the 5.1 eac3-joc and do their own thing with it.
https://www.youtube.com/watch?v=QgeTNHvNSds

@hockinsk

hockinsk commented Jul 2, 2024

In the Dolby Reference Player, when you load an AC4-IMS render, it offers the option to emulate the Speaker Virtualisation and test at what angle your (e.g. laptop) speakers are, to hear what consumers will hear.
[screenshot: Dolby Reference Player speaker virtualisation options]
