
Atmos


arcspin:
I think the Wikipedia article on Atmos explains it in detail.
https://en.wikipedia.org/wiki/Dolby_Atmos

The application of Atmos in home theatres differs from cinemas primarily because of restricted bandwidth and a shortfall in processing power. A spatially-coded sub-stream is added to Dolby TrueHD or Dolby Digital Plus, or is present as metadata in Dolby MAT 2.0, an LPCM-like format. This sub-stream is an efficient representation of the full, original object-based mix. It is not a matrix-encoded channel, but a spatially-encoded digital signal with panning metadata. Atmos in home theatres can support 24.1.10 channels and uses the spatially-encoded object audio sub-stream to mix the audio presentation to match the installed speaker configuration.
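To make "mix the presentation to match the installed speaker configuration" a bit more concrete, here is a minimal Python sketch of gain-based object panning. The speaker layout, the coordinates and the inverse-distance gain law are illustrative assumptions for this example, not Dolby's actual renderer:

```python
# Minimal sketch (not Dolby's actual renderer): pan a single audio "object"
# onto an arbitrary installed speaker layout using its positional metadata.
# The speaker coordinates and the gain law below are illustrative assumptions.
import math

# Hypothetical 5.1.2-style layout: name -> (x, y, z) position in a normalized room.
SPEAKERS = {
    "L":   (-1.0,  1.0, 0.0), "R":   (1.0,  1.0, 0.0), "C": (0.0, 1.0, 0.0),
    "Ls":  (-1.0, -1.0, 0.0), "Rs":  (1.0, -1.0, 0.0),
    "Ltf": (-1.0,  0.5, 1.0), "Rtf": (1.0,  0.5, 1.0),
}

def render_gains(obj_pos, speakers=SPEAKERS):
    """Return per-speaker gains for an object at (x, y, z).

    Uses simple inverse-distance weighting normalized to constant power,
    so the same object metadata can be rendered to any speaker configuration.
    """
    weights = {name: 1.0 / max(math.dist(obj_pos, pos), 1e-3)
               for name, pos in speakers.items()}
    norm = math.sqrt(sum(w * w for w in weights.values()))
    return {name: w / norm for name, w in weights.items()}

# An object placed front-left and overhead mostly feeds the top-front speakers.
print(render_gains((-0.5, 0.5, 0.9)))
```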

In order to reduce the bit rate, nearby objects and speakers are clustered together to form aggregate objects, which are then dynamically panned in the process that Dolby calls spatial coding. The sound of the original objects may be spread over multiple aggregate objects to maintain the power and position of the original objects.
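A rough way to picture spatial coding is sketched below. This is only the idea; the greedy grouping and the clustering radius are assumptions for illustration, not Dolby's algorithm:

```python
# Illustrative sketch of the *idea* behind spatial coding, not Dolby's algorithm:
# greedily merge nearby objects into aggregate objects whose position is a
# power-weighted centroid, so overall power and approximate position survive
# at a lower bit rate. The clustering radius is an arbitrary assumption.
import math

def cluster_objects(objects, radius=0.5):
    """objects: list of dicts like {"pos": (x, y, z), "power": float}."""
    aggregates = []
    for obj in objects:
        for agg in aggregates:
            if math.dist(obj["pos"], agg["pos"]) <= radius:
                total = agg["power"] + obj["power"]
                # Power-weighted centroid keeps the aggregate near the louder sources.
                agg["pos"] = tuple((agg["power"] * a + obj["power"] * b) / total
                                   for a, b in zip(agg["pos"], obj["pos"]))
                agg["power"] = total
                break
        else:
            aggregates.append(dict(obj))  # start a new aggregate
    return aggregates

# Two nearby door slams collapse into one aggregate; the distant overhead bird stays separate.
print(cluster_objects([
    {"pos": (0.0, 1.0, 0.0), "power": 1.0},
    {"pos": (0.2, 1.1, 0.0), "power": 0.5},
    {"pos": (1.0, -1.0, 0.9), "power": 0.8},
]))
```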

arcspin:

--- Quote from: hvac on June 06, 2024, 09:50:42 am ---My question, 7.1 input then something wonderful then even more channels giving a three dimensional sound. Do any of you know how the extra data in the magic box is derived? Online sources including Dolby just say it happens but not how.

--- End quote ---

I might understand your question better after rereading it.
Are you referring to the "Dolby Surround Upmixer (DSU)"?

That is a technology Dolby licenses to manufacturers to implement in their receivers and processors.
If so, only Dolby Laboratories knows the secret sauce.


"Dolby Surround Upmixer (DSU) or just Dolby Surround for short is Dolby’s latest version of surround upmixing technology to expand stereo, 5.1, 7.1 and 9.1 content to support ANY speaker configuration including any number of overhead channels.
It is a complementary technology to Dolby Atmos in a sense as it can expand channel-based content to utilise all the speakers in a Dolby Atmos installation.

DSU is an evolution from Dolby Pro Logic IIz and replaces Dolby Pro Logic, Dolby Pro Logic II, Dolby Pro Logic IIx and Dolby Pro Logic IIz on home theatre receivers and processors. While it is possible to license both the Pro Logic decoders separately along with DSU, only Yamaha did so on their CX-A5100 processor. Other companies decided not to pay for dual licensing and therefore the Pro Logic upmixers are now a relic of history."

https://simplehomecinema.com/2023/05/19/dolby-sound-formats-and-upmixers/

Upmixing in general:
"The Upmix technology uses a unique algorithm which extracts the ambience from the direct sound."

hvac:
“Spatially encoded substream,” “Dolby Surround upmixer,” “efficient representation of the object-based mix,” “metadata in Dolby MAT 2.0,” “expand channel-based into object-based,” “DSU is an evolution from Pro Logic,” on and on. Many press releases, and it’s explained using the same terms in Wikipedia.
Read the above phrases and think about their meaning. String these technical terms together into several paragraphs. Now you get it. The explanation has created a black-box process. Metadata is the data added to the stream which, in Atmos's case, allows the object-based stream to reach each of many speakers. (I have 7.1.2, at about $2,000 in cost above what I had before Atmos.) The “object-based stream” allows three-dimensional representation of the soundstage or soundtrack. And it supports any speaker configuration. This is not a matrix-encoded channel. Rather, it is a spatially encoded digital signal.
These explanations tell me in lofty technical and clever terms how Atmos achieves its 3D goals. These phrases sound good, especially when discussing Atmos to impress my friends. But what do they mean? I want simple words explaining what, in the Atmos ecosystem, is being done to the original signals. Is the Atmos process, “blah blah blah,” using the 7.1 recording alone as its starting point? Then the Atmos process, no matter how eruditely it’s explained, is a manipulation of the 7.1. Very sophisticated, but starting with the 7.1. I’ve read that the recording engineer has some leeway in the process of making Atmos.
Bottom line for me is: how does a 7.1, two-dimensional recording become three-dimensional with metadata? That’s the crux. Everything else is fluff. Either Atmos happens at the point of collecting the data, adding microphones and adding “objects” (don’t call them channels), or it massages the data collected when the original multichannel feed was recorded. To my mind it must be one or the other.
This thread has been allowed to continue by JRiver, and I thank them for allowing me to further my knowledge. Many of the criticisms expressed have concerned my lack of understanding of the technical language. But none has, other than saying I’m wrong, proposed an answer to the question above. Collection of data not previously collected by the original recording, OR massaging (you can insert any polysyllabic word) the original feed. Created, not recorded.

arcspin:
In movies, all sounds are added in post-production by the sound designer and created by the Foley artist in a Foley studio.
Dialogue, however, is recorded live as much as possible, but can be enhanced in post in a process called Automated Dialogue Replacement (ADR).

That means the sound designer has total control over all sound when mixing the audio tracks in Pro Tools.
It is in Pro Tools that the bed and objects are created and placed within the room.


Question: "Bottom line for me is, how does a 7.1 two dimensional recording become three dimensional with metadata"
Answer: Everything is done in post by the sounddesigner.
Answer 2: if upmixed with DSU, the Upmix technology "DSU" uses a unique algorithm which extracts the ambience from the direct sound.
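To illustrate the first answer, here is a toy picture of what gets authored in the studio. The structure below is only an assumption for illustration (the real deliverable is a Dolby Atmos master, not this Python dict); the point is that the 3D positions are created by the mixer as metadata alongside the channel bed, not derived afterwards from a finished 7.1 mix:

```python
# A toy picture of what the mixer authors in the studio (an assumed structure,
# not the actual Dolby Atmos master format): a channel "bed" plus objects whose
# positions over time are stored as metadata next to the audio. The decoder at
# home pans each object onto whatever speakers are installed; the height
# information is authored here, not reverse-engineered from a 7.1 mix.
mix_session = {
    "bed": ["L", "R", "C", "LFE", "Ls", "Rs", "Lrs", "Rrs"],   # 7.1 channel bed
    "objects": [
        {
            "name": "helicopter",
            "audio": "helicopter.wav",            # mono stem from the FX/Foley edit
            "position_keyframes": [               # (time_s, x, y, z), z = height
                (0.0, -1.0,  1.0, 0.2),
                (2.5,  0.0,  0.0, 1.0),           # passes directly overhead
                (5.0,  1.0, -1.0, 0.2),
            ],
        },
    ],
}
print(len(mix_session["objects"]), "object(s) plus a", len(mix_session["bed"]), "channel bed")
```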


"
However, very often the aim is to record as little other sound as possible. Almost all the other sound you hear - wind in the trees, clattering of cutlery, firing of weapons, footsteps on flagstones and anything else you can think of - is very often added afterwards by specialised “foley” artists, especially on big studio productions.

The advantages of this are many.

The director has more control over what is emphasised in the soundtrack
It is easier to edit without worrying about variation in background noise from take-to-take
Sounds can be “beefed-up” for dramatic effect
Foreign language versions can more easily be prepared since the dialogue is entirely separate from the sound effects.
"


"
The amount of a movie's dialogue that is recorded in post-production can vary depending on several factors, such as the filming conditions, the quality of the on-set recordings, and the specific needs of the film. In many cases, the majority of a movie's dialogue is actually recorded during the filming process on set using boom microphones and lavaliere microphones worn by the actors.

However, it is common for filmmakers to record additional dialogue or re-record certain lines in post-production to improve the audio quality, fix technical issues, or make creative changes to the script. This process is known as Automated Dialogue Replacement (ADR) or looping.

ADR is typically used when the original on-set recordings are of poor quality due to background noise, wind, or other technical problems. It can also be used to modify or improve performances, particularly in scenes where the actors' emotions or delivery need to be adjusted.

In some cases, filmmakers may also add additional dialogue in post-production to clarify plot points, enhance the story, or address changes made during the editing process. Overall, while a significant portion of a movie's dialogue is usually recorded during filming, post-production dialogue recording can be an important tool for ensuring the overall quality of the audio in a film.
"

hvac:
“Sounds are added post-production…in a Foley studio. Can be enhanced by ADR.” Now I’m understanding. “The sound designer has control over the process.”
It looks like the Atmos creation process happens immediately in post-production? Does this mean it can be applied to older films recorded before Atmos was created? Also, since decoding is done in the home theater or elsewhere, would it be fair to say the encoding is created in the Foley studio? Now, does the sound designer have leeway? I’ve read that they do. Does that leeway compromise the end Atmos product? Would it be fair to say the Atmos encoding done in the lab is where the Atmos movie is different from the usual Dolby Digital 7.1, and not just an electronic derivative of the original? Still, to this layperson it is a little evasive.
You’ve cleared the air. Bravo.
