
Atmos


hvac:
Charky,
  I asked. When my suggestion was incorrect, I demurred. Rather than pointing out that I don't know, an answer could have been provided in clear English. I'm not facile with the intricacies of IT, so you can talk to me like a fifth grader if you like.
  I've read perhaps more than you regarding the Atmos issue that I have raised: Atmos signal, to Atmos-enabled decoder, to playback of object-based multi-channel data. To say "object-based" is circular reasoning. Atmos is object-based, and the object-based signal comes from Atmos proprietary data, which is what Atmos does to the 7.1-channel feed. Where does the data come from? Is it collected during the recording process? To say it is not electronically derived would lead me to believe you know what the extra data is and where it originates. So please accept my apologies if you are in some way offended. In plain English: I take it you know where this data comes from, and that is what I'm looking for.

DocCharky:
I feel like you still don’t understand.

It’s not ‘data’. It’s just sound. That's it. It can originate from anywhere. Perhaps it’s been fully recorded on the spot with a multi-microphone setup, if we’re talking about a live concert, for example. Or maybe it’s manufactured from scratch by the sound engineer who takes various sounds and positions them as desired, if we’re discussing the latest movie blockbuster. Obviously, nobody recorded the Millennium Falcon in the sky flying above a microphone. It’s a combination of sounds that have been placed there by the sound engineer.
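
To make that concrete, here is a rough sketch in Python (purely illustrative, not Dolby's actual format or tooling) of what an "object" amounts to: a sound plus the position metadata an engineer authors for it, however the sound itself was produced.

# Purely illustrative, not Dolby's real data format: conceptually, an Atmos
# "object" is just a sound plus position metadata authored by the mixing
# engineer, regardless of how the sound itself was produced.
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    name: str                   # made-up label, e.g. "falcon_flyover"
    samples: list[float]        # the mono sound itself, recorded or synthesized
    # (time_in_seconds, x, y, z) keyframes: where the mixer wants the sound to sit
    path: list[tuple[float, float, float, float]] = field(default_factory=list)

# The engineer "flies" a sound overhead simply by writing a position path:
falcon = AudioObject("falcon_flyover", samples=[0.0] * 48000)  # one second of silence as a stand-in
falcon.path = [
    (0.0, 0.0, -1.0, 1.0),   # t = 0.0 s: behind the listener, up at ceiling height
    (0.5, 0.0,  0.0, 1.0),   # t = 0.5 s: directly overhead
    (1.0, 0.0,  1.0, 1.0),   # t = 1.0 s: in front of the listener, still up high
]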

hvac:
That helps. The data is manufactured or manipulated from the original 7.1 DD signal, and we don't know how the manipulation is accomplished? So any movie, even one released 20 years ago, can be Atmos-encoded with the extra object data. Whatever that is, we don't know.

Thank you sincerely for going farther than most in an area which isn’t necessarily in your wheelhouse. I get it.

eve:
It's important to distinguish between the distributed home video Atmos mix, the original "session", and what a theater gets.


The home video Atmos mix is a partial 'render' of the audio: basically, the entire Atmos mix, with its actual objects, gets rendered down to a bed mix, which is standard 7.1 audio. This allows it to work on non-Atmos systems. In addition to this "bed" there's a metadata stream which tells the Atmos decoder how to pan and shift audio OUT of the bed mix to the Atmos speakers. Furthermore, there are actual objects encoded (though not in all Atmos streams, and some will have fewer objects than others) which are mixed into the correct available speakers at the right time.
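
If it helps to picture that, here's a toy sketch in Python (purely illustrative; the speaker positions, panning math, and data layout are my own assumptions, not the real decoder or bitstream) of how a renderer could start from the bed and pan objects into whatever speakers a room actually has, while a non-Atmos system would simply play the bed and ignore the rest.

# Toy illustration only, not the actual Dolby decoder: the home stream carries
# a backward-compatible bed plus objects/metadata; an Atmos renderer pans each
# object into the speakers that actually exist in the room.
import math

# Made-up, simplified speaker layout (name -> (x, y, z) position in the room);
# a real setup would have the full 7.1 bed plus height channels.
SPEAKERS = {
    "L":  (-1.0,  1.0, 0.0), "R":  (1.0,  1.0, 0.0), "C": (0.0, 1.0, 0.0),
    "Ls": (-1.0, -1.0, 0.0), "Rs": (1.0, -1.0, 0.0),
    "TopL": (-1.0, 0.0, 1.0), "TopR": (1.0, 0.0, 1.0),   # height speakers
}

def pan_gains(position, speakers):
    """Toy panner: weight each speaker by inverse distance to the object."""
    weights = {name: 1.0 / (0.001 + math.dist(position, spk_pos))
               for name, spk_pos in speakers.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def render_sample(bed, objects, speakers):
    """Mix one instant of audio: start from the bed, then add each object
    wherever its position metadata says it should sit right now."""
    out = {name: bed.get(name, 0.0) for name in speakers}
    for obj in objects:
        for name, gain in pan_gains(obj["position"], speakers).items():
            out[name] += gain * obj["sample"]
    return out

# One instant in time: a quiet bed plus a single overhead "flyover" object.
bed = {"L": 0.1, "R": 0.1}
objects = [{"sample": 0.8, "position": (0.0, 0.0, 1.0)}]   # directly overhead
print(render_sample(bed, objects, SPEAKERS))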


DD+ Atmos does not have objects IIRC, just metadata. Someone should correct me on this though, if they know better.


I'm not an Atmos savant; there are better people to ask than me.




hvac:
I’m not understanding.
  What happens at each stage of Atmos object-based creation? We don't have that explanation from Dolby Labs.
Is new data being collected at the recording studio, with 30 or more discrete objects recorded? That wouldn't explain how Star Wars and similar old audio tracks can get the Atmos upgrade. Or is Atmos created by processing data that already exists in the 7.1 HDMI signal? Does the new Atmos object-based system use 20 or 30 or more microphones to create the mix, then downmix, etc.? I doubt it. Metadata is a good word, but data about data again seems like circular logic.
Bottom line: is new data added, or is it electronically generated? Tell me, in terms someone without an IT background can follow, how it's done.
My simple brain thinks there's a line between data collection, then encoding, then decoding. At what point is the new Atmos signal created? As they say in 2001: A Space Odyssey, "something wonderful."
