INTERACT FORUM


Author Topic: Real Time Audio Processing, Skips and Stutters, and OSX

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Real Time Audio Processing, Skips and Stutters, and OSX
« on: November 07, 2015, 08:41:24 am »

Earlier this week, I was listening to an episode of the always-fantastic Debug podcast, and there's a section in it that made me go... Hmmmm. It's in the second half of an episode I mentioned once before, a while back; I hadn't gotten around to listening to the second half until now.

First, a bit of background... The guy they're interviewing, Chris Liscio, is something of a CoreAudio expert on the Mac; he wrote both Capo and Fuzzmeasure. (By the way, if you have a Mac and you're a nerd, Fuzzmeasure is absolutely insanely cool.) Most of his applications are audio-focused. He's no dummy, as they say on the show. They spend a bunch of time discussing Swift and CoreAudio and all sorts of interesting nerdy stuff.

Anyway, there's a particular section.  Listen:
https://overcast.fm/+I_KX3nSA/1:19:03

Seriously, listen to that section. They talk about how you can't, or shouldn't, use Objective-C at all in your audio rendering callback, because the Objective-C runtime can do some sneaky stuff behind your back, like allocating memory or taking locks. Chris comments about how you can do it, and you can get away with it, and lots of people do it, but that eventually, in certain situations under load, you'll end up with skips in your audio (unless you're insanely careful).

That sounds... Familiar.

I spent a bit of time googling after listening and found a couple articles that discuss similar concepts:
https://mikeash.com/pyblog/why-coreaudio-is-hard.html
http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

Between the podcast and the stuff I found in a brief search, it sounds like you cannot call any Objective-C runtime function, or access any object's properties (via dot syntax), anywhere in your audio callback function, or in any thread that might block the thread running the audio callback, or else the runtime can take a lock behind the scenes and cause audio glitches. And, from what Chris says in the podcast, it manifests in exactly the kinds of ways we've seen here: powerful machines will hide the issue, bigger buffers will hide the issue, but in a resource-constrained environment you'll eventually see glitching.
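
To make that concrete, here's roughly what a render callback looks like and what the rules forbid inside it. This is just my own sketch pieced together from Apple's AURenderCallback documentation and the articles above (I haven't actually written one of these); the body is only a silence-filling placeholder:

Code:
#include <AudioUnit/AudioUnit.h>  // AURenderCallback prototype, AudioBufferList
#include <cstring>

// Skeleton of a render callback matching Apple's AURenderCallback prototype.
// CoreAudio calls this on a real-time priority thread every few milliseconds.
static OSStatus MyRenderCallback(void* inRefCon,
                                 AudioUnitRenderActionFlags* ioActionFlags,
                                 const AudioTimeStamp* inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList* ioData)
{
    // Safe here: arithmetic, copying out of memory that already exists,
    // lock-free atomics. (This placeholder just outputs silence.)
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        std::memset(ioData->mBuffers[i].mData, 0,
                    ioData->mBuffers[i].mDataByteSize);

    // NOT safe here, per the articles above -- any of these can take a lock
    // or allocate behind your back and blow the deadline:
    //   - malloc/new (including std::vector growth, std::string)
    //   - pthread_mutex_lock, @synchronized, dispatch_sync
    //   - any Objective-C message send or property (dot syntax) access
    //   - file, network, or logging calls
    return noErr;
}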

I know basically all of MC is written in C++, but I wondered about the code you used to bridge to OSX and Core Audio in particular. Are you sure you aren't accidentally accessing the Objective-C runtime, or doing any of the other "restricted" things listed in the second of the two linked articles above, in something that might touch or block the audio callback function? Something that updates the UI, perhaps?
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10718
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #1 on: November 07, 2015, 09:18:06 am »

And here I am wondering how those "audiophiles" ever came to the conclusion that Mac is a good audio OS. :)
Sounds like a terrible OS design to me.
Logged
~ nevcairiel
~ Author of LAV Filters

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71417
  • Where did I put my teeth?
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #2 on: November 07, 2015, 09:31:53 am »

I think I might like selling used cars now.
Logged

gvanbrunt

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 1232
  • MC Nerd
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #3 on: November 07, 2015, 09:42:32 am »

Quote from: Hendrik
And here I am wondering how those "audiophiles" ever came to the conclusion that Mac is a good audio OS. :)
Sounds like a terrible OS design to me.

Yep. Before OSX, it used to be much better than Windows for audio. Windows had the same issues in its OS, and they caused no end of grief. They have since made major advances in media handling that have made it a non-issue. It sounds like OSX is in the same place Windows used to be for audio. At least it appears you can work around it, which you couldn't on Windows.

Why am I even telling you this? What you know about audio software design would make me look like a caveman trying to improve the sound of banging two rocks together. Just thinking out loud I guess...
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #4 on: November 07, 2015, 10:30:30 am »

Quote from: Hendrik
Sounds like a terrible OS design to me.

I agree the design is not ideal for Media Center's needs. Many of the design decisions impacting this seem to be focused on very low-latency playback. From what I'm reading, latencies measured in microseconds are achievable on OSX (and Linux) but are very tough to achieve on Windows (at least through WASAPI). That's essential for things like effects processing for a guitar or a software sampler, but doesn't matter much at all for a media player.

In any case, I don't know if this is something you're already doing (and we're only talking about the audio render callback function that feeds data directly to CoreAudio), but it might be worth checking through that code and verifying that there's nothing that might block in there. If there is, it doesn't matter how large a circular buffer you have "above" it; if the render callback gets blocked for more than 5.8ms (at 44.1kHz), it will glitch.
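
(For reference, that 5.8ms figure is just the device buffer length divided by the sample rate; a quick back-of-the-envelope sketch, assuming a typical 256-frame hardware buffer:)

Code:
#include <cstdio>

int main() {
    const double sampleRate   = 44100.0;  // Hz
    const int    bufferFrames = 256;      // assumed hardware buffer size

    // Time between render calls = frames per buffer / frames per second.
    const double deadlineMs = 1000.0 * bufferFrames / sampleRate;
    std::printf("%.1f ms per render call\n", deadlineMs);  // prints ~5.8 ms
    return 0;
}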
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

JimH

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 71417
  • Where did I put my teeth?
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #5 on: November 08, 2015, 12:58:41 am »

There are at least two processes involved here.

A.  The buffer MC or other software uses to keep its pipeline full.  

and

B.  A similar buffer that the sound device uses to keep its pipeline full.

Any problems with OSX might affect process A, but I don't see how they could affect process B.

And, if what you say is true, this is an Apple problem and would be a very widespread problem for any audio software.

Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: Real Time Audio Processing, Skips and Stutters, and OSX
« Reply #6 on: November 08, 2015, 09:10:15 am »

Quote from: JimH
There are at least two processes involved here.

A.  The buffer MC or other software uses to keep its pipeline full.  

and

B.  A similar buffer that the sound device uses to keep its pipeline full.

Any problems with OSX might affect process A, but I don't see how they could affect process B.

Yes, but you have to transfer data between Buffer A and Buffer B. I should preface this by saying that I've never programmed against Core Audio or anything else like it, so I could be wildly misunderstanding how this works. I've read a bunch of the documentation, though.

I think the way it works is that between Buffer A and Buffer B in your description sits an audio render callback that moves the audio from Buffer A to Buffer B. If this callback is blocked, your audio will glitch: the Buffer B pipeline in the audio device driver is filled in real time, so it will contain the glitch. Which code contains this callback depends on how "low" down you are in the Core Audio stack (so how you use Core Audio determines whether your code even contains this callback).
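
In sketch form, here's how I picture that hand-off. The names (RingBuffer, HandoffCallback) and the mono float layout are just my own illustration based on the docs, not anything from MC: a preallocated ring buffer plays the role of Buffer A, and the callback only copies what's already sitting in it into the buffer list CoreAudio hands over, which is what feeds Buffer B in the driver.

Code:
#include <AudioUnit/AudioUnit.h>
#include <algorithm>
#include <atomic>
#include <cstddef>

// Hypothetical "Buffer A": a preallocated single-producer/single-consumer
// ring buffer. The decode thread (which may block on disk, allocate, etc.)
// writes into it; the render callback only ever reads from it.
struct RingBuffer {
    float*              samples  = nullptr;  // allocated once, up front
    size_t              capacity = 0;        // in frames
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};
};

// The hand-off: copy from Buffer A into the AudioBufferList CoreAudio gives
// us, which is what ultimately feeds "Buffer B" in the device driver.
// Assumes a single mono float output buffer for brevity.
static OSStatus HandoffCallback(void* inRefCon,
                                AudioUnitRenderActionFlags* ioActionFlags,
                                const AudioTimeStamp* inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList* ioData)
{
    auto* ring = static_cast<RingBuffer*>(inRefCon);
    auto* out  = static_cast<float*>(ioData->mBuffers[0].mData);

    size_t read   = ring->readPos.load(std::memory_order_acquire);
    size_t avail  = ring->writePos.load(std::memory_order_acquire) - read;
    size_t frames = std::min<size_t>(inNumberFrames, avail);

    for (size_t i = 0; i < frames; ++i)
        out[i] = ring->samples[(read + i) % ring->capacity];
    for (size_t i = frames; i < inNumberFrames; ++i)
        out[i] = 0.0f;  // Buffer A ran dry: output silence, never wait for data

    ring->readPos.store(read + frames, std::memory_order_release);
    return noErr;
}

The key point of that shape is that the callback never waits: if Buffer A runs dry it writes silence and returns, so the only thing that can glitch the device-side buffer is the callback itself being blocked.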

If you are using an Audio Queue to render playback, then that object performs the audio render callback and you're safe, as long as you never underrun the buffer set inside the Audio Queue. That's probably what most software does (well, the few applications that don't just use the AVPlayer built into AV Foundation), since the Audio Queue automatically converts formats and handles rendering the audio to the device for you.
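
For comparison, the Audio Queue route looks roughly like this (again only a bare-bones sketch of Apple's AudioQueueNewOutput API, with a silence-filling placeholder where a real decoder would go). Your callback refills whole buffers on an ordinary thread, and the queue's own engine does the real-time work against the device:

Code:
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>
#include <cstring>

// Called on a queue-managed thread whenever a buffer has been played and
// needs refilling. This is NOT the real-time render callback, so ordinary
// (blocking) work is tolerable here as long as you stay ahead of playback.
static void FillBuffer(void* userData, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // A real player would decode into the buffer here; this writes silence.
    std::memset(buffer->mAudioData, 0, buffer->mAudioDataBytesCapacity);
    buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity;
    AudioQueueEnqueueBuffer(queue, buffer, 0, nullptr);
}

int main()
{
    // Stereo 32-bit float PCM at 44.1 kHz.
    AudioStreamBasicDescription fmt = {};
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;
    fmt.mBitsPerChannel   = 32;
    fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(float);
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue = nullptr;
    AudioQueueNewOutput(&fmt, FillBuffer, nullptr, nullptr, nullptr, 0, &queue);

    // Prime a few buffers, then start; the queue handles the device-side timing.
    for (int i = 0; i < 3; ++i) {
        AudioQueueBufferRef buf = nullptr;
        AudioQueueAllocateBuffer(queue, 16384, &buf);
        FillBuffer(nullptr, queue, buf);
    }
    AudioQueueStart(queue, nullptr);

    CFRunLoopRun();  // keep the process alive so audio can play
    return 0;
}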

You can, however, connect an Audio Unit directly instead and skip the Audio Queue system. If you do that, you control the output device directly (no automatic conversion happens), your code contains this callback, and it hands audio data directly to the device. From Apple's documentation on Audio Units:

Quote
Render Callback Functions Feed Audio to Audio Units

To provide audio from disk or memory to an audio unit input bus, convey it using a render callback function that conforms to the AURenderCallback prototype. The audio unit input invokes your callback when it needs another slice of sample frames, as described in Audio Flows Through a Graph Using Pull.

The process of writing a render callback function is perhaps the most creative aspect of designing and building an audio unit application. It’s your opportunity to generate or alter sound in any way you can imagine and code.

At the same time, render callbacks have a strict performance requirement that you must adhere to. A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. The work you do in the body of a render callback takes place in this time-constrained environment. If your callback is still producing sample frames in response to the previous render call when the next render call arrives, you get a gap in the sound. For this reason you must not take locks, allocate memory, access the file system or a network connection, or otherwise perform time-consuming tasks in the body of a render callback function.

Those are the render callbacks labeled in this graph (from Audio Flows Through a Graph Using Pull):

[Apple's diagram of an audio processing graph, showing the render callbacks feeding the audio units]

That's what I'm talking about.
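
For what it's worth, wiring your own callback into the default output unit looks roughly like this. It's a sketch with error checking omitted, reusing the hypothetical HandoffCallback from above, but kAudioUnitProperty_SetRenderCallback is the actual hand-off point Apple is describing:

Code:
#include <AudioUnit/AudioUnit.h>

// The non-blocking render callback sketched earlier (hypothetical name).
extern OSStatus HandoffCallback(void*, AudioUnitRenderActionFlags*,
                                const AudioTimeStamp*, UInt32, UInt32,
                                AudioBufferList*);

// Build the default output unit and attach the callback.
AudioUnit MakeOutputUnit(void* ringBuffer)
{
    AudioComponentDescription desc = {};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;  // OS X default device
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(nullptr, &desc);
    AudioUnit unit = nullptr;
    AudioComponentInstanceNew(comp, &unit);

    // The hand-off point: CoreAudio will call HandoffCallback on its
    // real-time thread whenever the device needs more sample frames.
    AURenderCallbackStruct cb = {};
    cb.inputProc       = HandoffCallback;
    cb.inputProcRefCon = ringBuffer;
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    return unit;
}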

Quote from: JimH
And, if what you say is true, this is an Apple problem and would be a very widespread problem for any audio software.

I think it's only a problem for software that uses its own custom audio engine. Almost no one does that; most applications just use the higher-level abstractions that Core Audio or AV Foundation provides. Perhaps you are also using a higher-level abstraction (an Audio Queue, probably) and you aren't performing any "non-buffered" audio callbacks; if so, all of this is probably noise.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/