Earlier this week, I was listening to an episode of the always-fantastic Debug podcast, and there's a section in it that made me go... Hmmmm. It's in the second half of an episode I mentioned once before, a while back; I hadn't gotten around to listening to the second half until now.
First, a bit of background... The guest, Chris Liscio, is something of a Core Audio expert on the Mac; he's the guy who wrote both Capo and FuzzMeasure. (By the way, if you have a Mac and you're a nerd, FuzzMeasure is absolutely insanely cool.) Most of his applications are audio-focused. He's no dummy, as they say on the show. They spend a bunch of time discussing Swift and Core Audio and all sorts of interesting nerdy stuff.
Anyway, there's a particular section.
Listen:
https://overcast.fm/+I_KX3nSA/1:19:03

Seriously, listen to that section. They talk about how you can't, or shouldn't, use Objective-C
at all in your audio rendering callback, because the Objective-C runtime can do some sneaky stuff behind your back, like allocating memory or taking locks. Chris comments about how you can do it, and you can get away with it, and lots of people do it, but that eventually, in certain situations under load, you'll end up with skips in your audio (unless you're insanely careful).
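To make that concrete, the rule they're describing boils down to this: the render callback has to be a plain C/C++ function that only touches memory you set up ahead of time. Here's a rough sketch of that pattern (the RenderState struct and the buffer setup are made up for illustration; this obviously isn't Capo's or MC's actual code):

```cpp
#include <AudioToolbox/AudioToolbox.h>
#include <atomic>
#include <cstddef>

// Hypothetical state, allocated on the main thread *before* the audio
// unit is started. Nothing in here gets created or destroyed inside the
// callback itself.
struct RenderState {
    float              *samples;      // pre-allocated sample data
    size_t              sampleCount;  // length of the samples array
    std::atomic<size_t> readPos{0};   // where the callback reads next
};

// The render callback: a plain C function. No Objective-C message sends
// or property accesses, no malloc/free, no mutexes, no file or network
// I/O -- it only copies out of memory that was prepared ahead of time.
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags * /*ioActionFlags*/,
                               const AudioTimeStamp * /*inTimeStamp*/,
                               UInt32 /*inBusNumber*/,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    auto *state = static_cast<RenderState *>(inRefCon);
    size_t pos  = state->readPos.load(std::memory_order_relaxed);

    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; ++buf) {
        auto *out = static_cast<float *>(ioData->mBuffers[buf].mData);
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            // Wrap around the pre-filled buffer. Nothing in this loop
            // can touch the Objective-C runtime or the allocator, so
            // nothing can block the audio thread.
            out[i] = state->samples[(pos + i) % state->sampleCount];
        }
    }

    state->readPos.store((pos + inNumberFrames) % state->sampleCount,
                         std::memory_order_relaxed);
    return noErr;
}
```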
That sounds... Familiar.
I spent a bit of time googling after listening and found a couple articles that discuss similar concepts:
https://mikeash.com/pyblog/why-coreaudio-is-hard.html

http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing

It sounds like, between the podcast and the stuff I found in my brief searching, you cannot call any Objective-C runtime function, or access any object's properties (via dot syntax), anywhere in your audio callback function, or in any thread that might block the thread that runs the audio callback, or else the runtime can take a lock behind the scenes and cause audio glitches. And, from what Chris says in the podcast, it'll manifest in exactly the kinds of ways we've seen here: powerful machines will hide the issue, bigger buffers will hide the issue, but in a resource-constrained environment, you'll eventually see glitching.
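The "don't block the thread that runs the callback" part is why the usual advice is to hand values across with atomics rather than sharing a lock (or reading an Objective-C property) between the UI and the audio thread. Something like this sketch, with made-up names:

```cpp
#include <atomic>

// Hypothetical shared parameter block: the UI / Objective-C side writes,
// the render callback reads. std::atomic<float> is typically lock-free
// here, so the audio thread never waits on anything the UI thread holds.
struct SharedParams {
    std::atomic<float> gain{1.0f};
};

// Main thread, e.g. called from a volume slider's action method. Any
// Objective-C property access belongs here, outside the audio thread.
void OnVolumeChanged(SharedParams &params, float newGain)
{
    params.gain.store(newGain, std::memory_order_relaxed);
}

// Audio thread, inside the render callback: just an atomic load. No
// lock, no message send, nothing the runtime can stall on.
float CurrentGain(const SharedParams &params)
{
    return params.gain.load(std::memory_order_relaxed);
}
```

The point being: if the callback ever waits on something the UI thread (or the runtime) can hold, a fast machine or a big buffer just hides the problem until it doesn't.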
I know basically all of MC is written in C++, but I wondered about the code you used to bridge to OS X and Core Audio in particular.
Are you sure you aren't accidentally accessing the Objective-C runtime, or doing any of the other "restricted" things listed in the second of the two linked articles above, in something that might touch or block the audio callback function? Something that updates the UI, perhaps?
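If it helps, the pattern I'd expect for the UI direction is the callback publishing plain values that the main thread polls, rather than the callback ever calling toward the UI. A sketch of what I mean (again, made-up names, not your code):

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical level meter: the render callback publishes a peak value
// with an atomic store; the UI reads it from a timer on the main thread.
// The callback never posts notifications, dispatches blocks, or touches
// the Objective-C runtime -- it just writes a number.
struct Meter {
    std::atomic<float> peak{0.0f};
};

// Called from inside the render callback after the buffer is filled.
inline void PublishPeak(Meter &meter, const float *samples, size_t count)
{
    float peak = 0.0f;
    for (size_t i = 0; i < count; ++i) {
        float s = samples[i] < 0.0f ? -samples[i] : samples[i];
        if (s > peak) peak = s;
    }
    meter.peak.store(peak, std::memory_order_relaxed);
}

// Called on the main thread (e.g. from a timer tick); this is the safe
// place to do the actual UI work.
inline float ReadPeakForUI(const Meter &meter)
{
    return meter.peak.load(std::memory_order_relaxed);
}
```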