Matt, I am not sure how much effort a convolution engine integrated with your great 64-bit fp audio engine would take.
btw the last time I was so excited about the prospect of a new feature for Media Center was when I suggested madVR support - and look how that turned out ;)
Things sometimes take time here. ;D They had already been working on madVR support for 11 months because I requested it in January, 2010. (http://yabb.jriver.com/interact/index.php?topic=55827.0) ;)
I think the math behind convolution (FIR filters) is simple, unless there's some complexity I'm not aware of.
Hello Matt,
Here's a nice overview:
http://www.songho.ca/dsp/convolution/convolution.html
The basic math IS pretty simple: just multiplications and additions.
BUT: if you try to compute long filters (e.g. 65536 taps or even more), the CPU load will be heavy.
So the solution is to apply a Fast Fourier Transform. With the FFT, the convolution becomes just a vector multiplication; then you apply an inverse FFT and get the result back in the time domain.
BUT: FFT convolution also has its obstacles, especially if you try to get a low CPU load (to avoid adverse effects from CPU jitter) AND still get low latency. Typically you need to use a buffer (the FFT package size), which means you have to collect samples, and this causes latency. If the package is too small, the CPU load can rise quickly. If the package size is too big, you get a big latency.
Today there are also zero-latency algorithms, but they have to manage a lot of threads with package sizes of different lengths.
Imagine a window of 1024 samples (so 512 FFT frequencies).
Let's say we keep the results of the last 4 FFT runs (so only 4096 samples): FFT[0], FFT[1], FFT[2], FFT[3]
We would accumulate samples until we have 1024 samples, then process a block using pseudo-code like:
// roll previous FFT results back
FFT[3] = FFT[2]
FFT[2] = FFT[1]
FFT[1] = FFT[0]
// perform FFT on current window
FFT[0] = PerformFFTOnCurrentSamples
// convolve
for (int window = 0; window < 4; window++)
    for (int frequency = 0; frequency < 512; frequency++)
        FFTOutput[frequency] += FFT[window][frequency] * [Constant From Impulse File]
// iFFT to return to time domain
iFFT(FFTOutput) to get output samples
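A minimal, self-contained numpy sketch of this idea - uniformly partitioned overlap-add FFT convolution - might look like the following. It is illustrative only, not JRiver's actual code; the block size, the function name and the structure are assumptions, and it includes the zero-padding to twice the block size that is discussed further down the thread.

import numpy as np

def partitioned_convolve(x, h, block=1024):
    # Uniformly partitioned overlap-add FFT convolution (illustrative sketch).
    nfft = 2 * block                        # zero-padded FFT size
    npart = int(np.ceil(len(h) / block))
    # Pre-compute the FFT of every impulse-response partition
    H = [np.fft.rfft(h[i * block:(i + 1) * block], nfft) for i in range(npart)]
    # Frequency-domain delay line holding the FFTs of the most recent input blocks
    X = [np.zeros(nfft // 2 + 1, dtype=complex) for _ in range(npart)]
    out = np.zeros(len(x) + len(h) - 1)
    overlap = np.zeros(block)               # tail of the previous block's result
    for b in range(int(np.ceil(len(x) / block)) + npart):
        buf = np.zeros(nfft)
        frame = x[b * block:(b + 1) * block]
        buf[:len(frame)] = frame
        X = [np.fft.rfft(buf)] + X[:-1]     # roll the delay line, newest block first
        Y = sum(Xk * Hk for Xk, Hk in zip(X, H))   # multiply-accumulate per partition
        y = np.fft.irfft(Y, nfft)
        y[:block] += overlap                # overlap-add the saved tail
        overlap = y[block:].copy()
        n = max(0, min(block, len(out) - b * block))
        out[b * block:b * block + n] = y[:n]
    return out

# Quick sanity check against direct convolution
rng = np.random.default_rng(0)
x, h = rng.standard_normal(5000), rng.standard_normal(4096)
assert np.allclose(partitioned_convolve(x, h), np.convolve(x, h))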
You clearly need a separate filter for each sample rate. A program like VSTConvolver works nicely but lacks an automatic switch between music sources with changing sample rates.
In addition you will for sure find users who like to switch between filter banks to compare filters, e.g. in blind tests.
And of course you will find requirements where people want to apply multiway solutions = multiple outputs from a single input. Or they want to apply combinations of filtering jobs, e.g. filter the left channel and add a filtered right channel - something like crosstalk cancellation or crosstalk introduction (headphone listening).
So e.g. with a 3-way stereo system, 3 filter banks and 6 possible sample rates you already have to deal with 108 filter kernels (2 sides x 3 ways x 3 banks x 6 rates) of 512 kB each (65536 taps, double precision coefficients), and possibly even more.
Just some first comments. Simple solutions are available, but users like TheLion will complain for sure ;)
Dumb Q, what does this do for us?
You know those functions on your AV receiver that use the small mic to "measure" your room and tune the audio to it? That, but done right and in better quality, I imagine.
The hard part is the measuring and calculation, not the actual filtering, FWIW.
Can anyone here point me in the right direction as far as a measuring device goes
As someone who's new to this, I hope I don't sidetrack the thread too much here. Can anyone point me in the right direction as far as a measuring device goes? I'm assuming we're talking about essentially a high quality dynamic microphone for the computer. I have a decent desktop microphone here with a usable frequency response of up to 8 kHz, although I know I'd be joking if I even began to think it would make a usable substitute.
To extend on what Hulkss suggested: of prime importance is using a ->calibrated<- omnidirectional mic and a decent mic preamp with an ASIO driver. Calibrated means that the mic has been measured against a reference and correction tables for its impulse response are delivered with it. This brings the results of any decent mic (a Behringer ECM8000 will suffice if calibrated) into the same ballpark.
I have only seen frequency response correction tables for a mic like the ECM8000. Where can a person get correction files for phase response?
I meant frequency response correction tables ;-) How do you apply your FIR filters to general audio streams (WDM/DirectX streams like the computer games you mentioned)?
We have some other higher priorities than convolution right now, so I can't say when we might put this work into a real build.
DRC = Dynamic Range Compression
Get another acronym, this one is used already. :P
This has been my dream for the last 10 years!
For a start, the entire DIY audio community would emigrate here - that's everyone on partsexpress, diyAudio, diy hifi, loads from Home Theater Shack/Forum, and on and on.
And I thought it was only me ;) Make them post here and this feature might get more priority from Matt, Jim, et al.
I have started with REW myself. Given your background I would strongly recommend you take a closer look at Uli's Acourate! Coming from REW you will be more than pleased ;)
Does the plan include active speaker setup (crossovers/multiple channels outputting the same audio)??
...encourage people to play with active speakers as a future improvement - creating the most technically accurate sound system possible and saving thousands of dollars. This would render all high-end processors obsolete!
This is working great right now and can only get better.
I just pulled a very expensive pre/pro with Audyssey Pro and two DEQX processors out of my rack and replaced them all with a home built HTPC. I develop the XO and correction filters with Audiolense. Their philosophy combines loudspeaker correction, room correction, and digital XO all into a single filter for each driver. Super quick and easy to use compared to anything else.
JRiver MC does all the digital processing including the ConvolverVST plugin for the FIR filters. Then straight to my 16 channel DAC over USB. Then directly into the power amps. Digital internal volume control with JR Media Center works perfectly.
You can go nuts with multiple room measurements, target response curves, different bass management strategies, all in software.
Ready for prime time now! The final missing piece for set-up of 7.1 systems with active XO just came in build 17.0.62 of MC.
I've been waiting a long time for this solution to be available and it was lots of fun to pull it all together just as all the pieces were ready.
Does this mean we would need to set all sources to resample to one rate, or that you (JRiver) can use one filter and extrapolate what should be used for other sample rates?
- Filters can be any sample rate (so one high sample rate, high precision filter can be used for all sources)
I have the internals of a native convolution engine working.
Some points of interest:
- All processing is 64bit
- Any number of paths, targeting any input or output channel, is supported
- All FFT / iFFT evaluation is lazy so only run when necessary
- Filter files can be in any format supported by Media Center (.wav, .ape, .flac, .mp3, 16bit, 32bit, etc.)
- Partitioning is used to avoid latency (equal length partitioning for now, maybe unequal length someday)
- Latency is handled automatically so lip-sync for video works without additional user configuration
- Filters can be any sample rate (so one high sample rate, high precision filter can be used for all sources)
- Handles flushing nicely so the tail of the last song isn't heard when playing a new song
- Handles volume attenuation for clip protection automatically
.
Definitions:
I'm trying to use the same terminology as Convolver to minimize confusion.
- Path: a single convolution effect consisting of an input channel, filter, and output channel
- Filter: the correction / convolution used by a path, normally an impulse file
I still have to work out a user interface and reading of convolver text configuration files, so it might be a little while before we have anything to release.
Question regarding filter files. In Convolver, there are a number of ways you can set up your filter. My preferred way is to have one WAV filter file with sequences for each channel (a total of 15 channels for 7.1 audio because of the XO from 7 channels to two subs), which is read and interpreted via a config file. The structure of this file is documented at http://convolver.sourceforge.net/config.html
Have you been thinking along the same lines, or will we old Convolver users need to restructure our filters, e.g. split into one file per channel? What are your thoughts on using a config file similar to Convolver's?
Anyway, I'd like to say that a filter sample rate conversion has to be done carefully. In the case of a low-to-high sample rate conversion: how do you handle the stop band area, especially above fs/2 of the lower sample rate? You need to remove the stop band.
In the case of a high-to-low sample rate conversion there will in any case be the influence of the brickwall filter.
Another point: IMO it is necessary to allow the partition size to be defined as a parameter. With multiple filters the CPU load can quickly get higher, so a bigger size, e.g. 8192 or 16384, can help a lot. An alternative is to analyze the capability of the CPU and optimize the setup, like fftw wisdom (the FFTW library) tries to do.
Matt, this is amazing and talk about fast!! Wow. Good stuff.
-Jim
One thing that would help me with this is if someone could send me a few simple impulse response / filter files. All I have for testing now are echo examples like "concert hall" that aren't too impressive. I'd like one that is just an equalization with no timing / phasing effects. For example, a frequency correction filter for a speaker or headphone.
I think we'll try to just support Convolver configuration files to start with.
We might end up with our own format (or package of everything in a single ZIP package) later.
Matt, first THANK YOU very much for developing this! It's huge for audiophiles everywhere.
Second, I can't get my Audiolense convolution file to work - at least the Status indicates "Not Valid" after selecting the WAV file via the Browse function. I tried ticking the Convolution DSP on and off via the left-hand menu. I've attached it for you to look at. It's for a 6 channel active system (2 3-way speakers). Convolver seems to work fine with it.
Thanks again,
jdubs
I got this from jdubs in email. I'll reply here because it might help others as well:
Thanks for the kinds words.
I'm able to load and use your WAV file. It looks like it's splitting frequencies for tri-amping nicely. Of course, it sounds terrible on my 2 and 8 channel test setups where I'm not tri-amping!
Make sure you're actually playing audio when you turn it on and off. Try starting and stopping playback if it's not engaging.
There's some little bug with getting it to engage on a fresh install. I saw the same thing. We'll get it fixed next build.
I'm not clear on how you're measuring the output. Since convolution has a delay before output starts when playback begins, could this explain your low frequency anomalies?
The yellow curve also shows some dips like the pictures I've posted before.
Assume a convolution of 1024 samples with 1024 taps. The result has 2048 - 1 = 2047 samples; it must, as the math tells us.
With FFT convolution you have 513 bins for the kernel and for the music package. If you simply multiply and transform back you get a 1024-sample result, and this is wrong.
So you need to apply zero padding. Enlarge the kernel and the music package each to 2048 samples, apply zero padding, apply the FFT. This gives 1025 bins. Do the multiplication and the IFFT and you get a 2048-sample convolution result. Resize to 2047 samples to be exact.
If you run a streamed convolution then you have to send 1024 samples to the output and save the following 1023 samples. With the next input package you repeat the process, but add the saved samples to the 1024 samples prepared for the next output.
You can read much more about how to do this in the free book at www.dspguide.com (IMO an excellent book about DSP, one of the best: easy to read, easy to follow, and even with very good program examples).
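A small numpy sketch of exactly this step, with the sizes used above (illustrative only): both the block and the kernel are zero-padded to 2048 samples before the FFT, the result is trimmed to 2047 samples, and it matches direct convolution.

import numpy as np

rng = np.random.default_rng(0)
block = rng.standard_normal(1024)      # one package of 1024 music samples
kernel = rng.standard_normal(1024)     # a 1024-tap filter kernel

# Zero-pad both to 2048 samples so the FFT product gives the full linear convolution
# (without padding you would get a 1024-sample circular result, which is wrong).
nfft = 2048
Y = np.fft.rfft(block, nfft) * np.fft.rfft(kernel, nfft)
y = np.fft.irfft(Y, nfft)[:2047]       # resize to 2047 samples to be exact

assert np.allclose(y, np.convolve(block, kernel))
# Streamed version: send y[:1024] to the output, save y[1024:] and add those
# 1023 samples to the start of the next block's result (overlap-add).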
resize to 2047 samples to be exact.
I'm not clear on how you're measuring the output. Since convolution has a delay before output starts when playback begins, could this explain your low frequency anomalies?
Matt,
IMO the best way is to define some test signals and check your program against the known result.
So the convolution of two signals with samples 1 0 0 1 gives the result 1 0 0 2 0 0 1. You can expand it to 1 (1022*0) 1 and you will get 1 (1022*0) 2 (1022*0) 1.
Your FFT convolution must behave the same.
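The known results above can be checked directly with a couple of lines (a trivial numpy sketch; note the middle value is 2 in the expanded case as well):

import numpy as np

assert np.array_equal(np.convolve([1, 0, 0, 1], [1, 0, 0, 1]), [1, 0, 0, 2, 0, 0, 1])

# Expanded test signal: 1, then 1022 zeros, then 1 (1024 samples in total)
t = np.zeros(1024); t[0] = t[-1] = 1
expected = np.zeros(2047); expected[0] = expected[-1] = 1; expected[1023] = 2
assert np.array_equal(np.convolve(t, t), expected)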
Sample rate and frequency are two different things. A sine sweep at a very high sample rate does not mean your microphone needs to record all the way up to fs/2. A 96 kHz frequency/impulse response measurement will only be recorded up to somewhere above 20 kHz (the Audiolense recording was up to ~25 kHz).
One downside of making high sample rate sine sweeps is a reduced S/N ratio. Whether that has any practical negative consequences, I don't know.
Since my previous post has been totally ignored, and since there obviously are several members here with much deeper understanding than me (not too difficult), would anyone be willing to take the time to explain where I have misunderstood? Thanks a lot :)
There's just a lot happening in this thread. I think what you said makes sense.
Even if your microphone doesn't work past 20 kHz, I would think you could still make measurements and correction files at a high sample rate like 192 kHz.
Some points of interest:
I cannot get sync right. I use an 8-channel input, 15 paths and an 8-channel output, 132k taps. With ConvolverVST, the audio had to be delayed 1570 ms as a fixed value. Following your statement above, Matt, I first set the audio delay to 0 ms (Tools > Video > Advanced > A/V sync value). That did not work. I finally ended up delaying the video 1510 ms, which worked for a brief period in one movie. After some time, the delay seems to change. In another movie, the delay was close to, but not exactly, 1510 ms even from the start.
- Latency is handled automatically so lip-sync for video works without additional user configuration
You can get an early look at the native convolution system using this build:
http://files.jriver.com/mediacenter/channels/v17/latest/MediaCenter170064.exe
Fantastic!! - thanks so much Matt!
I downloaded and tried it out with a few listening tests, more detail later, but it certainly works.
However, for some tracks, I notice distortion on either very quiet classical tracks or on very "hot" mastered tracks. Some occur on 44.1KHz samples and the classical was at 176.8KHz sample rates.
Since my previous post has been totally ignored, and since there obviously are several members here with much deeper understanding than me (not too difficult), would anyone be willing to take the time to explain where I have misunderstood? Thanks a lot :)
I have not answered directly (even though I caused the question) because the thread topic is a bit different.
+∞ for native convolution :)
Impressive results, you got my attention :) However, Audiolense is really expensive (for me...), so can someone suggest a cheaper or free solution? Or can I use the free trial, do the measurements and build the correction filter, and save it and use it even after the trial has expired? Or must I buy the full version to be able to do all the tricks you mentioned in your tutorial?
For those using (or thinking of using) JRiver and Audiolense, I wrote a series of blog posts, including setup, configuration, and measurements.
Hope you don't mind a few links...
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-1
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-2
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-3
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-4
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-5
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-conclusion
Cheers, Mitch
can someone suggest cheaper or free solution?
I use the open source
http://drc-fir.sourceforge.net/
We might buy a set or two and rent them for a week at a time. Can you recommend good equipment?
BTW 2: Note that even if you use open source software, you will need to have a sufficiently good low latency sine sweep generator/recording solution. And a calibrated mic and mic preamp (note that many mic preamps use tubes to get that colorful voice recording, not exactly frequency linear, are they...?). This means you will need to spend a few bucks on gear anyway.
Just be aware that different DRC solutions may (or rather will) give different results. This is not because one is objectively more correct than the other, but the developers seem to have different philosophies on what is the correct way.
BTW, Audiolense can be downloaded and tested, and you will get a fully functional filter, but the filter will be attenuated 10dB (or was it 20dB?). I am not sure if you get a trial on the XO version. But at least the 2.0 version is available.
We might buy a set or two and rent them for a week at a time. Can you recommend good equipment?
That's a really good idea.
I use the open source
Thanks for that :) Is it only command-line based, or is there some GUI for it? Also, can it support 5.1 sound? I already have a microphone (it came with my Yamaha RX-V3900 - is it good enough?)... I also wonder whether I have already done DRC when I use Yamaha's microphone-based room correction? If so, does that software do a better job in sound quality (with 64-bit processing via JRiver)... Sorry for being a n00b, but English isn't my native language, so I may have misunderstood something (again :()
http://drc-fir.sourceforge.net/
We might buy a set or two and rent them for a week at a time. Can you recommend good equipment?
I'm not sure I understand this - are you saying that, in addition to a good soundcard and a calibrated microphone and amp (for recording), we need a function generator as well? Does the generator have to be a separate piece of hardware, or is this capability provided in software built into some or all packages like Audiolense and REW?
For boobs like me with more interest than expertise, could an expert create a comprehensive and detailed list of every single software and hardware component needed for a complete solution for DRC with JR and its new convolver? thanks very much.
I think this is very exciting - this offers me the biggest chance for a major improvement in sound since I got my first real stereo in the 80's! Looking forward to a rental solution!
Thanks for that:) Is it only cmd-based, or is there some GUI for it?
There is no way yet to set a fixed lower level to avoid clipping, I think. Have you had all your audio files analyzed by JRiver? If so, enable volume leveling and clip protection in MC's DSP; that should do away with most and maybe all of the clipping.
Trumpetguy, I think I get it: a simple and robust solution would include a software package that could generate the sweeps and analyze the results, and the hardware would involve a soundcard or external soundbox whose input(s) would connect to the microphone and whose outputs would connect to one's home theater. So the software package would send out the signal through the outputs of the sound card or box, the microphone would pick up the results, and the software package would analyze what it sent out and received and do its thing.
If that's accurate (please correct me if not), then a rental solution involving a soundcard installed inside a computer would seem problematic, but a solution involving an external soundbox connected to a bidirectional port on one's computer would seem more practical. Which box or boxes on Lynx's website would seem most appropriate for 1) a two channel DRC solution, 2) a multi-channel solution (at least 5 channels, I think, but ideally enough to align with the actual number of channels in one's home theater setup)? If nothing from Lynx is appropriate as an external box, are there others that would be? Thanks again.
If nothing from Lynx is appropriate as an external box, are there others that would be? thanks again.
...
For boobs like me with more interest than expertise, could an expert create a comprehensive and detailed list of every single software and hardware component needed for a complete solution for DRC with JR and its new convolver? thanks very much.
...
However, for some tracks, I notice distortion on either very quiet classical tracks or on very "hot" mastered tracks. Some occur on 44.1KHz samples and the classical was at 176.8KHz sample rates.
When I switch back to the old ConvolverVST, the distortion goes away.
Maybe I am doing it wrong?
I have a post describing the same thing. As Matt describes in an earlier post, the leveling method may be too aggressive for some filters. Maybe that is what you and I are experiencing. It would be useful to have some way to attenuate the output to avoid distortion - preferably automated, or as a fool-proof fixed value; I really do not want to be thinking about this during playback.
The distortion I hear sounds like static or crackling.
I made a before and after recording (30 secs each) of some classical music (low level) coming out of one of my speakers.
Here is the clean version using convolverVST:
http://soundcloud.com/mitchco/convolvervst
And using Convolver:
http://soundcloud.com/mitchco/convolver
Hope that helps.
Mitch
It is also important to minimize latency. Using in/out on the Lynx does that.
Why would the latency matter? If you change your buffer size wouldn't you still get the same measurement and generate the same filters?
I think you should consider a clip monitor in the Convolver. A long-term data catch, resettable whenever the user chooses, and functionality that enables the user to adjust the attenuation manually. Although some informed automatism with a possible manual override would work too.
Some users will have a shortage of gain in their system and would want to set their system up with as little filter attenuation as possible, with the risk of occasionally clipping. Others have more headroom and would aim for a more conservative setting where clipping is practically unlikely to occur.
In any case I will advise against any default amplification of the filters, as these are typically designed to avoid clipping while keeping the dynamic range as high as possible. The level matching should be done by attenuating the unfiltered music instead. There will be a lot of users who run their system at 0 dB digitally and handle the attenuation in the analog domain.
I share the concerns with regard to filter resampling. Ideally that should be taken care of as part of the filter design process. You should also be aware that a filter can be resampled with methods that can't be used with music, but give better results, technically.
PS: This development effort seems to be heading in the right direction. Keep up the good work!
I think the distortion will be fixed next build. There was an issue with how many FFT items were being convolved (see the discussion earlier with AudioVero).
Why would the latency matter? If you change your buffer size wouldn't you still get the same measurement and generate the same filters?
192000 2 6 0
0 0
0 0 0 0 0 0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
0
0.0
4.0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
1
1.0
5.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
0
0.0
2.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
1
1.0
3.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
0
0.0
0.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
1
1.0
1.0
I did some extensive testing today.
Which build were you testing?
I took my reference filter made with Acourate (exported as 64-bit fp) and loaded it with both the new JRiver Convolution and ConvolverVST (using the same Convolver txt config file). Then I generated a Pink PN test signal with the latest REW revision and recorded the convolved output with the RTA (REW 5.0, calibrated mic). I switched between JRiver Convolution and ConvolverVST and tried different configurations (from mono to 7.1, always with an 80 Hz XO to the subwoofers).
Well - to my surprise there were different frequency responses with these two convolution options (see RTA screenshots attached). The difference is isolated to the XO frequency band around 80 Hz. I tried to confirm the results by watching a few demo scenes (BDMVs) and there is a difference to be heard - probably more than the frequency response RTA shot shows.
So the question is why a "bit-perfect" process like convolution results in a different response with two different convolvers. Does the internal 64-bit processing of JRiver Convolution make such a difference? Are there still bugs in the JRiver convolution engine? Which one has the "correct" response? If you put a gun to my head I'd have to say I slightly prefer the sound of ConvolverVST with the movie test sequences. The RTA shot looks better with JRiver Convolution. Matt?
It seems you have the skills to do proper technical testing.
It seems you have the skills to do proper technical testing. Would you care - and would it be possible - to do the same comparison between the foobar convolver and the MC convolver and/or ConvolverVST in MC? I'm not completely sure what you would be proving, but if foobar is also different - and it is used by quite a number of audiophiles - I would not be too concerned if there is a new kid on the block that is a bit different from the older ones. Assuming the math and numerics are correct and correctly implemented, who's to judge (and even prove) which gives the more correct result? And by that I mean the more technically correct result. Let us not get into beliefs and subjectiveness in this thread...
The proper way to test is with loopback of the output like AudioVero did in post 76 of this thread.
2. The logsweep has a silent lead-in of 0.5 seconds. The logsweep also has a fade-in. So when the first small samples start, the output stays silent until a certain threshold level is reached. This level is -36.12 dB. It happens only the first time. I created a double logsweep sequence and the second logsweep fade-in is ok. A logsweep recording by Acourate does not show this behaviour. So of course during this short period MC is not bit-perfect. This needs to be confirmed by JR.
3. I first tried to avoid the double logsweep by activating the repeat function. The recording shows very strange behaviour between the repeated tracks. IMO this has to be investigated by JR. If necessary I can send pictures.
4. The little spectrum display of MC during the logsweep playback is worth nothing. It is not ok, as it shows just garbage. To be confirmed by JR.
5. Back to the basic test. First I created a simple linear phase filter with 65536 taps, all zero except tap 32768 = 1. This is just a delayed perfect Kronecker pulse (a small generation sketch follows after this post). The recording of the logsweep including convolution failed at first, as the recorded level was lower. There is an option in the convolution dialog, 'Normalize filter volume (recommended)'. After unchecking it the recording level is ok, and the comparison of the original signal with the recorded signal again shows bit-perfect behaviour! (Exception: see 2.)
JR may need to clearly document what happens if the normalization option is checked.
6. Then I created a Neville-Thiele 250 Hz highpass filter as another filter kernel and recorded again. The comparison also shows correct behaviour. The comparison is a bit tricky: the test signal is a 24-bit WAV, convolved by MC with a 64-bit filter. The signal then passes the soundcard with 24-bit resolution, so the convolution result has to be converted from float to integer, which requires some truncation or rounding. For the reference I also convolved the original signal with the filter. As I do not know what happens inside MC, I just compared the recording with the convolved signal (double float). The difference is below 1 bit. Thus I conclude a perfect result, with bit transparency also with convolution.
:)
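A small sketch of how such a test filter could be generated (assuming the Python soundfile package; the file name and the 96 kHz rate are just placeholders):

import numpy as np
import soundfile as sf

# 65536 taps, all zero except tap 32768 = 1.0: a perfect Kronecker pulse
# delayed by half the filter length (a linear-phase "do nothing" filter).
taps = np.zeros(65536)
taps[32768] = 1.0

# Store as a 64-bit float mono WAV so the full -1.0..1.0 range is preserved.
sf.write("kronecker_65536.wav", taps, samplerate=96000, subtype="DOUBLE")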
Double-check your Options > Audio > Do not play silence setting.
Double-check your Options > Audio > Switch tracks mode. Cross-fading doesn't sound good with logsweeps ;)
Use DSP Studio > Analyzer for accurate analysis. The little spectrums in the player window are eye candy and designed to look neat, not be accurate with regards to frequency binning, linearity, etc.
That all sounds good. Does this mean we pass?
IMO yes (until something else is recognized :( ). But I still do not know what the normalization is doing (as it is a recommended setting). :)
One little note is that using 16-bit input files will use 32767.0/32768.0 instead of 1.0 for the maximum, because the standard way to convert from 16bit to floating point is to do [value] / 32768. This gives an effective range of -1.0 to 0.99997.
So using a 32-bit or 64-bit floating point filter would be a better choice if you are testing for bit-perfectness. This way, you have a full -1.0 to 1.0 range.
So your conclusion is that JRiver convolution results in a bit-perfect output stream. Therefore its convolution engine cannot be "bettered", and any difference I hear between it and ConvolverVST is a) in my head (I need a doctor!) or b) because ConvolverVST is not bit-perfect.
Even two bit-perfect convolvers can sound different. Do not forget the CPU load and the influence of threads. The influence of other processes like WLAN, indexing, browsing, defragmenting and so on can also affect the sound, from minor effects to stutter, clicks, dropouts and noise. The best way is to avoid all unnecessary CPU load.
Even two bit-perfect convolvers can sound different. ...
Here we go again! ::)
But I still do not know what the normalization is doing (as it is a recommended setting). :)
So the convolution result is ok. Another topic is possibly the behaviour with threads and CPU load and multiple filters, e.g. 7.1 channels with multiway speakers ;D
It makes the RMS of the output, measured with pink noise (rolled off above the range of human hearing so high frequencies don't cause misleading results), -6 dB relative to the input. The -6 dB is headroom to avoid clipping. Clip Protection will take care of anything past that.
It's possible your carefully crafted filters don't really need this. But lots of test impulse response filters have huge total gains, and sound terrible (or even trigger 'Protect mode' and turn off the sound) without normalization.
Level matching between filters should also make it easier for a user to compare filters.
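A rough numpy sketch of that kind of measurement (not MC's actual code - the pink noise, the roll-off above the audible range and the -6 dB target come from the description above; everything else, including the function name, is an assumption):

import numpy as np

def normalization_gain(h, fs, target_db=-6.0):
    # Gain for filter h so band-limited pink noise comes out target_db below the input RMS.
    rng = np.random.default_rng(0)
    n = 4 * fs                                  # a few seconds of noise
    spec = np.fft.rfft(rng.standard_normal(n))  # white spectrum...
    f = np.fft.rfftfreq(n, 1.0 / fs)
    spec[1:] /= np.sqrt(f[1:])                  # ...shaped to pink (1/f power)
    spec[f > 20000] = 0.0                       # rolled off above the range of human hearing
    pink = np.fft.irfft(spec, n)

    m = n + len(h)                              # FFT convolution of the noise with the filter
    out = np.fft.irfft(np.fft.rfft(pink, m) * np.fft.rfft(h, m), m)[:n + len(h) - 1]
    gain_db = 20 * np.log10(np.sqrt(np.mean(out ** 2)) / np.sqrt(np.mean(pink ** 2)))
    return 10 ** ((target_db - gain_db) / 20)   # scale factor to hit the target

# Example: a filter with +12 dB of broadband gain ends up scaled by roughly 0.125 (about -18 dB).
h = np.zeros(4096); h[0] = 10 ** (12 / 20)
print(normalization_gain(h, 48000))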
I was testing with a 12 path filter at high samplerate and it runs many times real-time. In other words, performance is quite good already.
However, it may be possible to increase the parallelism. We'll think about this later when we're confident everything else is working well with convolution.
The calibration files for these microphones are very accurate. Each mic has a linear calibration file consisting of 256 entries (25 per octave) at a resolution of 0.1dB. Each file is tailored to ONE specific microphone, not to a particular day's production run or group of microphones which measure within a certain tolerance.
If you don't buy that, you can spend more and get a professionally calibrated one starting at $70 from Cross-Spectrum Labs (http://www.cross-spectrum.com/measurement/calibrated_dayton.html).
I've used the Art USB Dual Pre and a similar ECM8000 Mic from Behringer. Good entry level set-up.
I also do my channel delays there (Audiolense filters are channel-delayed automatically - but I would still optimize the subwoofer delay manually, as this has always been a small but relevant optimization in the XO band).
BUT the XO band is not the same. Therefore time alignment with ConvolverVST and JRiver Convolution is not the same.
Yes. I found the Audiolense delay settings to be unreliable - they don't provide the smoothest XO band. It may have to do with the fact that I run multiple subwoofers from one output channel.
The test AudioVero performed - does it confirm correct crossover behaviour, or was it a proof that the convolution was correct for a filter without an XO?
So far I have understood that there is only one filter for each channel. This way I have checked the output of such a channel for correctness.
That exact combination was my entry setup as well. Recommended.
Ok, I see that I need to test a Convolver config in addition. I'll try this tomorrow.
96000 2 3 0
0 0
0 0 0
C:\AcourateProjects\Test\Highpass96.wav
0
0.0
0.0
C:\AcourateProjects\Test\Highpass96.wav
1
1.0
1.0
C:\AcourateProjects\Test\Lowpass96.wav
0
0.5 1.5
2.0
So what, if anything, is that setup lacking that caused you to upgrade? I'm trying to minimize my start-up costs, but if I'm just going to want to upgrade, it might make sense to spend a bit more upfront.
Also, any comments about 16-bit 48 kHz (ART) vs 24-bit 96 kHz (Tascam)? Will it make much of a difference in the measurements?
Finally, the system I'll be using this on consists of:
- A Denon AVR-988 (http://usa.denon.com/us/Product/Pages/ProductDetail.aspx?PCatId=AVSolutions(DenonNA)&CatId=AVReceivers(DenonNA)&Pid=AVR988(DenonNA)) receiver
- 3 av123.com ELT525T towers (http://www.hometheatersound.com/equipment/av123_elt25_5_0.htm) across the front
- 2 pairs of Emotiva ERD-1's (http://emotiva.com/erd1.shtm) for the side and rear surrounds
- A Boston Acoustics PV-600 (http://www.bostonacoustics.com/PV600-P293.aspx) 10" subwoofer
This is all in a DIY 12'x22' home theater. Obviously none of this is high-end gear, again my focus is always bang-for-the-buck. Do you think DRC will be worth the cost and effort on a system of this caliber?
Thanks!
-John
You also have to set the channel configuration in the "output format" tab of DSP Studio.
Thanks, the channel config does the job.
In a coming build:
Faster: Convolution uses SSE3 in the convolution kernel when supported (gives about an 8% speed-up to the convolution engine on supported processors).
TheLion,
As a user of Acourate, I have been using Pristine Space for convolution and Plogue Bidule for routing.
I would be interested to know how the LFE and sub channels are routed, and how you deal with multiple subs.
As I understand it, you have 2 different filters for the LFE channel and the low passed sub channels. These are mixed together. Where do the delays occur in the chain?
Do you have the one filter for the pair of subs? Would it be better to have a filter for each sub?
Finally, can MC's Room Correction DSP feature be used for the channel delays and routing, so that the convolver would just be doing single-channel convolutions?
Brad
96000 2 4 0
0 0
0 0 0 0
C:\AcourateProjects\Test\Filter96.wav
0
0.0
0.0
C:\AcourateProjects\Test\Filter96.wav
1
1.0
1.0
C:\AcourateProjects\Test\Filter96.wav
0
0.5
2.0
C:\AcourateProjects\Test\Filter96.wav
1
1.5
2.0
C:\AcourateProjects\Test\Test96.wav
0
0.25 1.25
2.0
C:\AcourateProjects\Test\Test96.wav
1
0.0
3.0
<filter filename>
<filter channel>
<input channel.weight 0> ... <input channel.weight n>
<input delay 0> ... <input delay n>
<output channel.weight 0> ... <output channel.weight m>
<output delay 0> ... <output delay m>
C:\AcourateProjects\Test\Test96.wav
0
0.25 1.25
1.3 0
2.0
2.58
We're not currently supporting delays in the convolution configuration file. It's no problem to add support, but I figured I'd wait to see if any real world users wanted this. If so, someone please send me a sample that uses delays. I'm matt at jriver dot com.
Matt,
Thanks for the tests Uli. It's good to get another passing grade :)
We're not currently supporting delays in the convolution configuration file. It's no problem to add support, but I figured I'd wait to see if any real world users wanted this. If so, someone please send me a sample that uses delays. I'm matt at jriver dot com.
Please note the following: We have to define how "output delay" is used. I use it as "inverse-delay".
Is the feature "Latency is handled automatically so lip-sync for video works without additional user configuration" already implemented? The last time I tried it it wasn't the case.
That's not my reading of the spec. Each number is a delay in milliseconds, not an inverse relative to the largest delay.
See the last example here:
http://convolver.sourceforge.net/configegs.html
It works for me, correcting the latency due to convolution itself. If your filters introduce a delay, you'll have to compensate for that manually.
You might make sure lip-sync is correct with convolution off. Then test with convolution on.
Since we're supporting Convolver configuration files, I think we should do it according to their specification. My reading of the spec is that the third line should be a line delay in milliseconds, not a distance (inverse) in milliseconds.
If we later make our own spec, I agree that doing the inverse might be better. Or even using a distance like we do in our Room Correction.
btw Matt, how many decimal places does your convolution engine support for the delay setting? What happens if I enter something like 3.85 ms for an output delay?
Since we're supporting Convolver configuration files, I think we should do it according to their specification.
Convolver | JRivolver | Feature
yes | yes | Supports the Convolver config file format (http://convolver.sourceforge.net/config.html)
no | yes | All processing is 64bit
no | yes | FFT/iFFT evaluation is lazy (only run when necessary)
no | yes | Pink noise RMS output automatically normalized to -6 dB below input
yes | yes | Any number of paths, targeting any input or output channel
no | yes | Filter files (impulse response) can be in any format supported by Media Center (.wav, .ape, .flac, .mp3, etc.)
yes | yes | Partitioning is used to reduce latency
no | yes | Partition size will automatically adjust with the sample rate
no | yes | Latency is handled automatically for lip-sync (not including filter delay)
no | yes | Filter files can be any sample rate (one filter can be used for all sources)
no | yes | Handles flushing nicely so the tail of the last song isn't heard when playing a new song
yes | yes | Handles volume attenuation for clip protection automatically (JRiver DSP clip protection used)
no | yes | Convolution uses SSE3 in the convolution kernel
yes | yes | Output delays supported
yes | no | Input line delays
yes | no | Multiple output channels in a path
yes | yes | Multiple input channels to a path
yes | yes | Input channel weights
yes | no | Output channel weights
yes? | yes | Bit perfect (limited testing by Uli Brüggemann)
I seem to have some processing issues:
2ch 16/44.1 2 paths in/out: Performance 13.3x real time (using SSE3), smooth playback
7.1 24/96 (also 24/48), 15 paths in, 8 channels out: performance 1.0x (using SSE3), terrible glitches and frequent hiccups. One of the three CPU cores peaks most of the time, average CPU load 30-70%.
This does not happen with the old ConvolverVST running with 8 partitions. Is my AMD Athlon (benchmark ~1350) too weak? Is the new convolver this CPU intensive by design?
15 paths on 8 channels at 96 kHz all in 64-bit with an older CPU is probably just past the limit.
Would you mind zipping up your configuration and filter files and mailing them to matt at jriver dot com? I'm interested to see how it performs on a newer CPU.
With Convolver I use 32-bit float mono PCM correction files; JR does not seem to accept my files.
Shall I change the extension to raw with no other change?
Does JR support "empty" output channels like in Convolver:
"Output channels that have no filter path associated with them will be fed with silence"
This is useful to map channels to the right speaker (active crossover and multi-amplification).
(I am sure there are other ways to do that, but I like having only one config file.)
extension: .PCM
the kind used by Denis Sbragion for room correction
What kind of file and extension should I use for 32-bit mono filters?
(Sure, I could use 16-bit WAV files, but since all my filter computation is done in 32-bit...)
I seem to have some processing issues
I remember you said you were running 132k filter taps. Try half that. I noticed if I specify 65K taps in Audiolense I get 132k as reported by convolverVST, so I ask for 32k taps in Audiolense. Bernt is looking into it. We are using way more taps than necessary.
With high bit resolution in the convolver, 48k rate filters are plenty good too.
A filter with 64k taps or 65536 coefficients in double precision float has a file size of 65536 * 8 bytes = 524288 bytes = 512 kB. In single precision it has a file size of 256 kB. This is for a raw = headerless file. With another file format like e.g. WAV there will be some more bytes. With compression like FLAC the file size will of course be smaller.
I have stored the filters as 32bit float. How would I know double or single? I can probably import the wav file into Matlab to analyse it.
No secrets here. So I wonder about the trouble with filter lengths.
No trouble really, I just never thought of it. And there may be a mismatch between user choices and the actually created filter length in AL in the current version. I don't know that, but there are indications. A GUI issue, I guess.
How would I know double or single? I can probably import the wav file into Matlab to analyse it.
You can e.g. import the file as a raw data file in Audacity. There you can select between PCM, single float, double float, little or big endian and so on. You will see the result directly and know which format makes sense.
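A quick programmatic alternative to the Audacity import: read the headerless file both ways and see which interpretation gives plausible filter coefficients (a heuristic sketch; the file name is just an example and little-endian data is assumed):

import numpy as np

raw = open("filter.pcm", "rb").read()

for dtype, name in (("<f4", "32-bit float"), ("<f8", "64-bit float")):
    itemsize = np.dtype(dtype).itemsize
    data = np.frombuffer(raw, dtype=dtype, count=len(raw) // itemsize)
    # Sensible FIR coefficients are finite and roughly within the -1..1 range.
    plausible = np.all(np.isfinite(data)) and np.max(np.abs(data)) < 10
    print(f"{name}: {len(data)} taps, peak {np.max(np.abs(data)):.3g}, plausible: {plausible}")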
For multichannel movies and music my CPU seems to be too weak, forcing me to stick with ConvolverVST until either the new convolver has some feature reducing CPU intensity or I have set up a new HTPC. Both are equally welcome!
However, I would hope most in this thread consider our 64bit pipeline worth the performance penalty (I do).
I spent some time testing with your filter set.
I got the SSE3 going a little faster (about 5%), but you're probably still out of luck for 15 paths on a JRMark 1300 machine. I think ConvolverVST is at an advantage here since it uses 32bit instead of 64bit precision. That means it can use SSE to do 4 operations a cycle instead of 2 like 64bit allows. It also means it's processing half as much memory. However, I would hope most in this thread consider our 64bit pipeline worth the performance penalty (I do).
As a frame of reference, my i7 work machine runs your 15 path filter @ 96 kHz at about 5.7x realtime. Since that's only really one of the four cores, it's just no sweat on a machine like that.
However, I would hope most in this thread consider our 64bit pipeline worth the performance penalty (I do).
@ Trumpetguy: Until you get your HTPC upgrade, simply reduce the filter resolution. I find that going from 131k to 66k taps gives about 33% faster processing with JRiver Convolution. Try using 33k taps with Audiolense.
Now I'm lost. If I load my .cfg file:
c:\users\bill\documents\juice hifi\audiolense 4.2\correctionfiles\kitch_panels_001 2.0_441.cfg
c:\users\bill\documents\juice hifi\audiolense 4.2\correctionfiles\kitch_panels_001 2.0_48.cfg
c:\users\bill\documents\juice hifi\audiolense 4.2\correctionfiles\kitch_panels_001 2.0_882.cfg
c:\users\bill\documents\juice hifi\audiolense 4.2\correctionfiles\kitch_panels_001 2.0_96.cfg
and you say J River convolver isn't autoselecting the filter, then how exactly does that .cfg file work? If I understand your response, you're saying J River is only using one filter, but internally modifying that filter to conform to the sample rate of the input stream? So in the above file, which of those 4 files is J River actually using?
- I have a huge problem with video delays. E.g. if I play 48 kHz content the inherent filter delay is double that of 96 kHz content (given the same filter resolution). So I always need to manually double the video delay when switching to content with half the sampling rate.
The delay from convolution does not change with sample rate (we partition to avoid that).
And the filter delay should not change either -- when we resample, no timing information changes.
So I'm not understanding what you're describing. It might be worth starting a new thread to discuss it.
Matt,
when using individual filters per sampling rate (not using your filter resampling), the filter delay (and therefore the necessary video delay to compensate) doubles when you cut the sampling rate in half. E.g. a 48 kHz filter has double the filter delay of a 96 kHz filter with the same filter resolution. If you double the filter resolution (e.g. 131k taps instead of 66k), the filter delay doubles. Therefore a 48 kHz 66k tap filter has the same filter delay as a 96 kHz 131k tap filter.
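The arithmetic behind that, assuming linear-phase filters whose delay is half the filter length:

# Delay of a linear-phase FIR filter: (taps / 2) / sample_rate
for taps, fs in ((65536, 48000), (131072, 96000), (65536, 96000)):
    print(f"{taps} taps @ {fs} Hz -> {taps / 2 / fs * 1000:.0f} ms")
# 65536 taps @ 48 kHz  -> 683 ms
# 131072 taps @ 96 kHz -> 683 ms  (same delay: double the taps at double the rate)
# 65536 taps @ 96 kHz  -> 341 ms  (half the delay at the same tap count)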
Today many tracks are mastered close to 0 dBFS (full scale) due to the loudness war. About 75% of all CDs show intersample clipping (= the interpolated curve between the samples is higher than 0 dBFS). In addition, correction filters that also change phase can cause clipping, because the crest factor of the music signal changes with phase shifts. With online filtering the algorithm needs to react to clipping in realtime. The simplest algorithm is to hard-limit all signal amplitudes to a maximum of 0 dBFS. A compression algorithm is possible, but it can also introduce pumping effects. Of course it is also possible to create some headroom by attenuating the music signal with a certain gain factor, e.g. -3 dB.
Intersample clipping: with an artificial signal it is possible to demonstrate clipping between samples of as much as about 12 dB!
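A small numpy/scipy sketch of an intersample over (illustrative only): a sine at fs/4 with a 45° phase offset has sample values of only ±0.707, so normalizing the samples to 0 dBFS makes the reconstructed waveform peak about 3 dB above full scale. The extreme ~12 dB case needs a specially crafted signal; this just shows the effect.

import numpy as np
from scipy.signal import resample

fs, n = 44100, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)   # samples alternate around +/-0.707
x /= np.max(np.abs(x))                             # normalize the *samples* to 0 dBFS

up = resample(x, 8 * n)                            # approximate the reconstructed waveform
print(20 * np.log10(np.max(np.abs(up))), "dB")     # ~ +3 dB above full scale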
IMHO we also have to think about active speaker systems; we should not forget them. (*)
So if e.g. a stereo signal feeds 3 channels for each side (this is of course also supported by ConvolverVST), it may become difficult to just talk about speaker distances. How do you measure the distance to the acoustic center of a midrange driver compared to a tweeter? With a folding ruler? ;D Indeed we find "distances", or better delays, of just one sample. That's why e.g. BruteFIR defines its delays in samples (and subsamples).
The best way is to let the user select the unit: feet, meters, milliseconds or samples. Then he can choose whatever he thinks is best.
We always have to delay the quicker sound (a tweeter or a closer speaker). So IMO it is also best to treat a delay by its definition and avoid inverse delays. :) Simply define the delay for the slowest unit/device/driver as 0 and reference the other units/devices/drivers to it.
(*) PS: IMO this is also a good example of why the strict channel assignment in the Room Correction option is confusing, at least for me. The active speaker channels have nothing to do with the front, LFE, surround or back channels.
WOW, that was a lot of information - I think my brain has indigestion. :D
Does anyone know of a place where a person could send a mic so a calibration file can be created?
Matt - is this possible?
I have been listening with convolver support for a few years now. At the moment I have a 4-way active system running, with the filters coming from Audiolense XO. This setup realizes a dream I have had since the beginning of this millennium. But one thing I very often miss:
2 or better 3 buttons for preloading *.cfg files in the convolver DSP menu. The clicked button activates its loaded file; by clicking another button, that preloaded file is activated instead.
With that functionality an online comparison of different measurements, target curves and so on is possible. This would be the most important thing for judging the quality of different filters, and it is my greatest wish for JR (just behind an input for my record player or CD player - which seems impossible ...)
Thanks in advance for answering
markus
We will use whatever cfg file you select, and resample it if needed to support other sample rates.
I would recommend picking your highest quality cfg file, "kitch_panels_001 2.0_96.cfg" in this case.
Someday we may support switching filters based on sample rate. I'm still trying to understand the technical merits for that approach as opposed to downsampling using our resampler (which has quite a good reputation).
Matt,
The new JRivolver is working OK with my configuration and filters. It is about 4 times slower though (realtime x 8 vs x 30). I have multiple inputs on the last three paths and JRivolver counts each one as a path (maybe processing that way too?). ConvolverVST reports 19 paths and JRivolver says 37 paths. The 7 inputs per path should be combined and then convolved I would think?
If MC would also behave like a virtual audio driver, it could get the sound stream from other applications.
I was really hoping nobody would be using something like Dynamic Range Compression by now. So I consider the acronym DRC to be free from bad meanings ;-)
btw in your country it has a great meaning: http://www.drc.de/
But being a 64 bit fp engine, how demanding (in terms of CPU) does one expect this engine to be (when it is fully developed) compared to the ConvolverVST ?
Could anyone report on performance with AMD processors? SSE3 doesn't do us much good, unfortunately (a good reason to switch to Intel... as I always wanted but never found the money to do :P).
I am using an Athlon X3, not an X2, which is fully capable of doing 2-path convolution with a 132k tap filter at 15x real time. I would be surprised if the X2 couldn't as well. With many paths in a multichannel setup (in my case 15), the X3 is barely able to process a 32k filter at 2.5x real time. This should give you some ballpark figure for where your X2 will perform. I am buying an i7 at my earliest convenience :-)
EDIT: A really spoiled feature request: the possibility to use 32-bit processing instead (please don't shoot me for asking :)). For me, using an AMD Athlon X2 240e (2.8 GHz), performance can sometimes be a problem... but then, it really is low on both heat and power consumption = quiet. One can't have it all, I guess :).
Best regards,
Mikkel
... capable of doing 2 path convolution with a 132k tap filter at 15x real time. ... able to process a 32k filter at 2.5x real time.
How do you calculate or measure the real time factor?
The convolution engine doesn't remove the need for manual A/V sync - as I thought it would. I still need to delay the video manually when watching movies to make it lip-synced. I'm using a 2-channel hi-fi with Digital Room Correction from Audiolense, no active crossovers or anything fancy. Am I missing something, or do I need to start hunting for the exact video delay once again?
I am buying an i7 at my earliest convenience :-)
Whether there is a delay or not, I honestly don't know. I only use Audiolense to make filters and I don't know much about the tech behind the magic.
... I still need to delay video manually when watching movies to make it lip-synced. ...
hulkss: Upgrading to the latest version made the playback skipping disappear. Clearly not a CPU issue. I run 6 channels / 6 paths. My Intel Atom N270 does the job very well with its 1.6 GHz single core.
Is my current CPU, an Intel dual-core E7200 at 3.06 GHz, sufficient for simultaneous audio and video?
I'm not familiar with .PCM files. Do they have a header?
A 64-bit or 32-bit WAV (which is PCM data with a header) would work if you can convert.
Nevertheless, there will always be unknowns with regards to latency. Distance to the speakers, video latency in the computer as well as in the screen or projector, audio buffers in the computer and sometimes in hardware downstream.
So, do pcm files need to be converted into wav or are they accepted now?
I couldn't understand from the rest of the conversation if you added support for that.
As for convolution latency, I was wondering if it would work to put impulses through the filters, and calculate how long it takes from input to output.
This proposal does not include the buffer time of ASIO buffers (or soundcard buffers) and the sound travel time to the listener's seat. With an agreed filter format it would be possible to include the filter delay via appropriate tag information.
This proposal does not include the buffer time of ASIO buffers (or soundcard buffers)
and the sound travel time to the listener's seat
As for convolution latency, I was wondering if it would work to put impulses through the filters, and calculate how long it takes from input to output. Then, the convolution engine could just add this time to the latency. Of course it would use the shortest latency of any channel so that timing between speakers would be preserved.
Well.......... ::)
I guess I found the solution.
And somebody mentioned that somewhere in this forum.
Download ConvolverVST and install it.
Problem gone ;D
Koldby
In a coming build:
NEW: Convolution automatically adjusts for filter latency for cases where a delay is baked into the filter files (still preserves intentional timing differences between channels).
I don't really understand the point of baking leading silence into all the filters (so it's not for relative speaker distances). The change above will correct the latency from this.
We might also be able to skip the convolution step for this leading silence (or even trim the filters of the leading silence to remove the latency) for a performance gain, but really, whatever is building the filters shouldn't be adding this delay in the first place.
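For anyone curious what that compensation might look like, here is a rough illustration (my own sketch, not MC's actual code): find the first non-negligible sample in each channel's filter and treat only the delay common to all channels as latency, so intentional per-channel offsets are preserved.

import numpy as np

def common_leading_silence(filters, threshold_db=-120.0):
    # filters: one 1-D impulse response (numpy array) per channel
    onsets = []
    for h in filters:
        threshold = 10.0 ** (threshold_db / 20.0) * np.max(np.abs(h))
        idx = np.flatnonzero(np.abs(h) > threshold)
        onsets.append(int(idx[0]) if idx.size else 0)
    return min(onsets)            # delay (in samples) shared by every channel

def trim_common_silence(filters):
    n = common_leading_silence(filters)
    return [h[n:] for h in filters], n

# 1000 samples of delay common to both channels, plus a 40-sample
# intentional offset on the right channel that must survive the trim
left = np.r_[np.zeros(1000), 1.0, np.zeros(99)]
right = np.r_[np.zeros(1040), 1.0, np.zeros(59)]
trimmed, removed = trim_common_silence([left, right])
print(removed)                    # 1000 -- the 40-sample difference is kept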
Matt, for a future build, are you contemplating automatic filter switching based on the playback material's sample rate?
Yes, but I can't say when it might be added.
In a coming build:
NEW: Convolution automatically adjusts for filter latency for cases where a delay is baked into the filter files (still preserves intentional timing differences between channels).
Hello, I'm not an expert, but I think leading silences might have to do with linear-phase filters and filters with phase correction, minimum-phase filters having no leading silence... If I am right I would prefer no change, as I am doing phase correction and a linear-phase Xover.
Maybe there's some really low level stuff happening in that 0.5 seconds (pre-echo or something) that's relevant?
+1
please Matt do not touch the filters
linear phase filters are symmetrical, i.e. they have as many leading as trailing zeroes
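To see where that baked-in delay comes from: a symmetric (linear-phase) FIR delays everything by half its length. A small sketch, using scipy's firwin purely as an arbitrary linear-phase example (it is a stand-in, not what any of the DRC packages actually use):

import numpy as np
from scipy.signal import firwin

N = 65537                          # odd tap count keeps the delay an integer
h = firwin(N, 1000, fs=44100)      # arbitrary linear-phase low-pass example
print(np.allclose(h, h[::-1]))     # True: the coefficients are symmetric
print(np.argmax(np.abs(h)))        # 32768 = (N - 1) / 2 taps of delay
print((N - 1) / 2 / 44100)         # ~0.74 s of baked-in latency at 44.1 kHz

So a 65536-tap linear-phase correction filter carries roughly 0.74 s of delay at 44.1 kHz, which matches the "leading silence" discussed above.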
DVD playback:
Is there a solution to get DVDs to play without large stutter when the convolved filters have a large delay?
I know that converting all DVDs to mkv is one solution, but not my preferred one.
Ideally there would be a custom video playback setting that would fix the problem.
when is it possible to use active filters in MC17?
when is it possible to use active filters in MC17?
Erich,
I have a 2-way with an extra sub, with filters created in acourate.
I still have to use a VST host for this and cannot use MC17 directly.
44100 2 4 0
0 0
0 0 0 0
C:\AcourateProjects\Erich\ErichCor.wav
0
0.0
0.0
C:\AcourateProjects\Erich\ErichCor.wav
1
1.0
1.0
C:\AcourateProjects\Erich\ErichCor.wav
2
0.0
2.0
C:\AcourateProjects\Erich\ErichCor.wav
1
1.0
3.0
192000 2 6 0
0 0
0 0 0 0 0 0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
0
0.0
4.0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
1
1.0
5.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
0
0.0
2.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
1
1.0
3.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
0
0.0
0.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
1
1.0
1.0
Hello Uli,
and where is the channel mapping to my Fireface?
I have a 2.1 system, fully active (2x TMT+HT, 1x sub).
My current config is:
192000 2 6 0
0 0
0 0 0 0 0 0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
0
0.0
4.0
h:\acourate_Daten\aktiv\gut\Cor1S192.wav
1
1.0
5.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
0
0.0
2.0
h:\acourate_Daten\aktiv\gut\Cor2S192.wav
1
1.0
3.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
0
0.0
0.0
h:\acourate_Daten\aktiv\gut\Cor3S192.wav
1
1.0
1.0
@ Matt: BTW, I wonder what happens if the sum of the left and right input channels exceeds 0 dB. Matt, can you tell us?
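For illustration (my own sketch, not a statement about MC's behaviour): two in-phase full-scale channels sum to roughly +6 dB. In a 64-bit float pipeline this cannot clip internally, but it does have to be attenuated or clip-protected before the samples reach the DAC, which is presumably why headroom and normalization come up later in the thread.

import numpy as np

left = np.ones(8)                             # a full-scale (0 dBFS) channel
right = np.ones(8)                            # another, perfectly in phase
summed = left + right                         # peaks at 2.0 in 64-bit float
print(20 * np.log10(np.max(np.abs(summed))))  # ~ +6.02 dB over full scale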
The issue is that the Microsoft DVD Navigator (the thing that reads DVDs on Windows) will not provide the audio more than a little ahead of the video. This doesn't work well if there's a large audio latency. People that set the primary buffer size in Options > Audio to a large size run into this same problem.
One solution would be to do DVD title play, which plays the raw MPEG of the main title. This is what we do when streaming a DVD to a DLNA box. This solution could work locally as well, but would disable trailers, menus, etc.
Another solution would be to find or write another DVD navigator. However, it doesn't seem like anyone has made much progress on this, possibly because of the DRM that can be baked into DVD.
Is there a setting in MC to enable DVD title play only?
DVD playback: Is there a solution to get DVDs to play without large stutter when the convolved filters have a large delay?
The normalize volume option in the convolution engine targets -6 dB.
Hi, when is it possible to use active filters in MC17?
I still experience the occasional stutters and/or slowing down of the audio.
The only thing that's missing is support for different filters for different sample rates.
Is there a chance for this in the future?
Try increasing buffering in Options > Audio > Output mode settings...
Is an Atom-based setup suitable for 6- or 8-channel convolution at 44 kHz? (No need for video, as the machine is dedicated to music.) I'm currently running an old motherboard with an Intel Q6600 (4 cores, 2.4 GHz) which works well, but the motherboard is on its last legs and support for a new solid-state drive is shaky. The thought was to move towards a fully silent computer - as it sits near the listening position - hence the interest in Atom (something like the Intel Atom dual-core D525). Has anyone got experience with this type of processor running MC convolution?
Zydeco
If using REW and MC, is it possible to do corrections on a per-speaker basis? JRiver seems to take only a single convolution file, but I would like to do corrections at the individual speaker level.
You may take what you want here, just suggestions. It depends on what you are trying to do.
In other words: I would like to measure and correct each speaker in my 2.2 setup (4 speakers). After I have each speaker adjusted, using the 4 correction filters, I would like to measure the combined (corrected) response in REW and make an additional filter for further (combined speaker) adjustments.
Can this be done using REW and JRiver MC? Or, can this only be done with other programs such as Audiolense?
Thanks
First, why don't you do all the filters and X/O right in MC and use REW only for measurements?...
MC's EQs are working fine, and you can even monitor the frequency response in real time in REW while you're constructing MC's filters....
jacques
Jacques, are you saying to use MC EQ, not convolution? I suppose I could use EQ to get the frequency response of each speaker in basic balance and then try to run REW through MC in loopback mode (with the EQ filters applied) to do convolution on the combined all-channel sweep.
Yes, I do not use convolution anymore. I did not like REW's automatic target resolutions; it is more accurate to do it manually through MC's EQ. I used a pink noise wave file and REW's RTA graph and MC's parametric EQ side by side. MC's EQing responds directly on REW's RTA graph for monitoring results. You should take all precautions on where to apply corrections and where not, using the "excess group delay" graph. MC's low-pass and high-pass filters are great for XO construction.
With a 2.2 setup ( 2 mains + 2 subs) I want to BOTH work the x/o's between subs and mains AND do room correction. Certainly room correction works best with convolution rather than simple EQ.
What software are others using to make the filters for x/o and or DRC?
Thus, if using the PEQ and/or Room Correction components of MC's DSP Studio and NOT the Convolution component, one is, at best, matching passive X/O and never achieving active X/O potential. On the other hand, two important consequences of adding delay and phase (linear-phase adjustments) are enhanced: 1) correction artifacts, i.e. "pre-ringing" (hearing parts of the correction sooner in time than the direct signal), and 2) even greater single-listening-position sensitivity (all other locations will likely be even worse). I'm sure I've at least partially misstated this, but hopefully not fundamentally so.
WOW!! Caleb, thanks for this synthesis, it is very impressive.
I must ask you this trivial question. Why couldn't MC's PEQ achieve active X/O potential? It is digitally processed, X/O slopes (order) and frequency positions are adjustable, and time delays can be managed too. Phase is also manageable, isn't it? One can move speakers physically too (DIY).
10)...I prefer to do XO and DRC incrementally. By this I mean I would like to proceed as follows:
--A) Set speaker locations while running the RTA with pink noise, triangle formulas, laser measurements and other conventional methods;
--B) Take near-field frequency response measurements of each driver with band-limited frequency sweeps to determine the sweet range of each driver and the successive driver, to set X/O points and slopes;
--C) Measuring mid-field, in the exact path between speaker and listening position and at the exact listening height, run the DRC SW to create a linear-phase filter to set delay and phase adjustments for the drivers on an individual speaker basis, based on that speaker's relative acoustic center for its individual drivers;
--D) Load these per-speaker correction filters into the DRC software or JRiver MC via the loopback feature http://yabb.jriver.com/interact/index.php?topic=70242.0 and then use the DRC SW for combined-response (single sweep to all channels/speakers) room correction based on the desired "house curve";
--E) Create the required convolver config text file pointing to the above generated filters and load it into JRiver MC convolution;
--F) Load these combined correction filters into the DRC software or JRiver MC via the loopback feature with convolution applied, re-measure from the listening position and adjust the existing filters, as desired/needed.
Does anyone know if in DRC SW one can load incremental and/or final adjustment filters and perform additional measurements/corrections "on top of" existing filters?
7) @TheLion describes that Audiolense does not correctly calculate the delay for his subs. How do you know what the correct delay SHOULD BE? Is it only based on relative distance, or other factors? Why do you think the SW calculation is wrong?
TheLoin should be TheLion. I've changed it.
Now that's funny :)
1) Do we manually create the config text file, or will programs generate this for us (likely in need of editing). E.G. I understand that DRCDesigner may create the config file for DRC generated correction filters.
acourate doesn't but I believe that audiolense will create the file
2) Recognizing that REW may not be as robust as other products like Acourate and Audiolense, if desired can our config file point to REW correction files, or are they somehow not compatible with the config file usage?
Unless REW has updates that I'm not aware of, REW does not calculate files for convolution
3) If one desired, can a config file point to filters made by more than one program (e.g. if one chose to do per speaker corrections in REW and then combined post-adjusted room correction in another program, assuming all filters are .wav files)
yes
4) Recognizing the limitations described above, if one wanted to use MC PEQ and/or Room Correction in addition to convolution, what should be the order of each DSP component, i.e. should PEQ come before Convolution? Why?
ordering can affect the result see below
5) @TheLion describes that he uses Acourate for filters, but then uses Audiolense to calculate delays, which he then manually adds to the config file, because Acourate does not calculate delays (as well, or at all?). Why not do it all in Audiolense?
6) @TheLion describes that he uses hardware DSP on his subs and SW DSP on everything else. Why not do it all in SW? To do otherwise implies a HW solution is superior to 64-bit convolution in its present state of the art. Is that your feeling? Then why not do it all in HW?
7) @TheLion describes that Audiolense does not correctly calculate the delay for his subs. How do you know what the correct delay SHOULD BE? Is it only based on relative distance, or other factors? Why do you think the SW calculation is wrong?
Delays for subs are different, as there are so few (often <1) wavelengths in a room. I have used the process where I started with the delay calculated from the distance, then used the RTA in REW and adjusted the delays until I got the smoothest result (a quick sketch of the distance-based starting value follows these answers).
Calculation of the delays for the sub in acourate and audiolense can be erroneous as the impulse response does not have a sharp peak.
8 ) Being sensitive that Uli has contributed greatly to this thread and thus MC's execution of convolution, what are the pluses/minuses of Acourate vs Audiolense vs DRC, REW, etc
REW does not do convolution filters (ie time domain)
I have used DRC and acourate, I find acourate much better. Apparently there are different philosophies for the psychoacoustic analysis.
I have not tried the audiolense demo, and it is more user friendly. TheLion reported that he preferred the results from acourate.
9) As Uli pointed out, because @TheLion is summing both the frequencies below 80 Hz from all 7 separate speaker channels and the LFE channel below 160 Hz, all to his subs, and because each of those 7 speakers/channels is at a different distance and thus has a different delay (and they differ from the LFE), the delay (and likely the phase) of the summed signals to the subs must be all messed up (7 different delays/phases x 2 signal components coming to 2 subs - it must be completely smeared), because at present MC does not support additional parameters for summed delays. Given your precision, why isn't this a problem for you?
This can be fixed by performing the delays for speaker distance after the bass management. I am not sure where the delays are in the chain in the convolver in MC, but you could instead use the delays in the room correction DSP module, and place it after the convolver
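As a starting point for the delay-from-distance step mentioned in the answer to question 7 above, the geometric value is simply distance divided by the speed of sound (assuming roughly 343 m/s at room temperature); the RTA tuning then refines it. A minimal sketch:

def distance_delay(distance_m, fs=48000, c=343.0):
    # geometric starting value only; fine-tune with the RTA as described above,
    # since the sub delay after correction rarely matches the purely geometric one
    seconds = distance_m / c
    return seconds * 1000.0, round(seconds * fs)   # (milliseconds, samples)

ms, samples = distance_delay(3.4, fs=48000)
print(f"{ms:.2f} ms ~ {samples} samples at 48 kHz")  # about 9.91 ms / 476 samples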
Does anyone know if in DRC SW one can load incremental and/or final adjustment filters and perform additional measurements/corrections "on top of" existing filters?
- You could run multiple convolutions in series, but I don't think MC allows multiple instances of the DSP filters (could be wrong), so you would need a different convolver such as ConvolverVST or Pristine Space.
This would be tricky however, as you would need to do the measurement of the additional filter, with the first one in place already (acourate's logsweep recorder allows this). For changing house curves, this doesn't really make sense.
For separating room correction and active XOs it might make sense, but adds significant computational load (if filters are 64 bit)
The reason that the calculated sub delay doesn't work for TheLion is most probably that the delay after correction is not comparable to the delay before correction. At lower frequencies the difference due to a low pass filter for instance, can be substantial. The delay management in Audiolense works well enough to keep the speakers timed within a sample under normal circumstances.
I'll have a go at some answers, see below
Can/will acourate and audiolense please verify the above?
I believe that audiolense will, but acourate won't generate the convolver config file. It's not that hard though
REW does create correction filters as .wav files. These may or may not be as robust as Audiolense / Acourate / DRC; however, I do not understand why the REW .wav filters cannot be referenced in a config file. Please explain... thanks.
It seems that you could use REW filters in a convolver. They would, however, only be minimum-phase corrections. You would probably not get much (if any) improvement in the impulse response of your system, unlike with linear-phase filters.
If you mix correction files from different programs in one convolver config file, make sure they are all for the same sampling rate and are of the same length
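A quick way to sanity-check a mixed set of correction files before referencing them in one config (a sketch; it assumes the filters are plain WAV files that scipy can read, and the commented paths are just examples taken from earlier in the thread):

from scipy.io import wavfile

def check_filter_set(paths):
    info = [(p, *wavfile.read(p)) for p in paths]     # (path, rate, samples)
    for p, fs, data in info:
        print(f"{p}: {fs} Hz, {data.shape[0]} taps")
    if len({fs for _, fs, _ in info}) > 1 or len({d.shape[0] for _, _, d in info}) > 1:
        print("WARNING: sample rates or lengths differ -- do not mix these")

# example usage:
# check_filter_set([r"c:\filter\Cor1S44.wav", r"c:\filter\Cor2S44.wav"])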
I do not see/follow your reference "ordering can affect the result see below". Please be more specific... Thanks
when using delays, the order of the delay can affect bass management
Would Audiolense and Acourate please add their perspective to this...Thanks
Is this the recommended workflow?
BradC, just checking...do you have any identity of interest (affiliation) with either Audiolense or Acourate? No offence implied, just trying to keep all information clear.
I have purchased acourate and used DRC in the past.
Thanks very much for your thoughtful and knowledgeable reply!!
If your soundcard can do loopback in hardware, you can use a VST host, such as plogue bidule to allow REW to run with the corrections applied.
Even if your hardware can't loopback internally, if you can assign a software channel to any hardware output in the mixer, then you could use a physical loopback cable
MC does this via software. The only question is whether it (or for that matter a HW loopback) causes time-domain issues that render the resulting measurements of little benefit.
Hi
If your soundcard can do loopback in hardware, you can use a VST host, such as plogue bidule, to allow REW to run with the corrections applied.
So you say that a VST host such as plogue bidule could allow me to run REW sweeps through MC without using the "live://loopback" URL option?
Even if your hardware can't loopback internally, if you can assign a software channel to any hardware output in the mixer, then you could use a physical loopback cable
...would it be possible to use the REW card calibration option and calibrate with MC in the path...? That could eliminate delay issues and make it possible to compare before and after corrections?
So you say that a VST host "as plogue bidule " could allow me to run REW sweeps through MC without using the "live://loopback" url option?
In case you do not recall, "live://loopback" is not working in my Win XP setup. Could the VST option work and solve my problem?
Also, does anyone know why most have said we can only use the REW RTA, not MC loopback, to inspect before vs. after results? It seems to me the way to do it is via MC loopback, unless there are issues that aren't apparent.
I posted how to use the REW RTA (http://yabb.jriver.com/interact/index.php?topic=69725.0) and was using it last October, but that was before MC had the loopback feature. I use the loopback for both before and after REW measurements so my signal chain is identical. There is no need to use REW in standalone mode anymore.
Thanks
I would guess that you could see changes with the phase graph in REW, where you can save the first sweep measurement from REW alone, save the other one passing into MC, and overlay the two phase curves. The overlays of the impulse responses will be nice too.
I'll be running tests today to determine if sweeps show any time-domain changes in loopback vs. non-loopback. I'll include REW calibration and hardware second-channel mic loopback...
I'll post results so others can evaluate any concerns over the effectiveness of this procedure...
Yes, if your hardware can do loopback.
OK, I think I could do the same once my RME HDSP card is installed.
I use an RME fireface 800 that can send the hardware outputs to an input channel internally.
Plogue bidule uses the input channels, applies delay, convolution and bass management via VST plugins and sends the results to software output channels. The software outputs are routed to different hardware outputs
I use this process for applying correction to TMT5 and Windows Media Center.
I have compared the before and after correction using REW and plogue bidule. The 'after' looks as expected (matches target curve)
The only catch is that if you move the microphone between before and after measurements, the results no longer look as good.
OK, I think I could do the same once my RME HDSP card is installed.
If plogue bidule uses the RME input channels (which consist of a sweep coming from the REW output setting, sent to one RME output channel) and MC playback is directed to another RME output with corrections made in the VST plugin?
Is it that simple? ;)
Could I use the MC EQ on top of plogue bidule, or does it have to be the plogue bidule corrections only?
A bit confusing! I will have to try it for real.
Mitchco, you can use the Volume Levelling DSP plugin and add a fixed number of dBs. Normalize filter volume in the Convolution plugin also works really well for me. Well, sometimes it creates short-term clipping. Then you can see MC turning the normalization level down slowly.
Normally it's best to turn channels down instead of up, so you won't force clip protection to engage.
Right. But MC's normalization does turn the channels up to meet some criterion for max 0 dB? Which in turn sometimes leads to clip protection engaging. Or have I misunderstood something fundamental here?
Using Internal Volume is also a good idea, since that gives you free headroom for convolution or volume adjustments.
I have MC volume disabled since I use the Lynx mixer for volume adjustment (and then straight to the power amps). Would setting the Lynx mixer to 0 dB and using Internal Volume in MC give any improvement regarding headroom? I cannot see how, but if it is so, an explanation would be good.
Such a solution seems a bit risky, though, since other applications may pass through the Lynx driver at full 0 dB signal strength.
I set my motherboard soundcard as the default audio device in Windows. This way no other applications can access my Steinberg UR824. I then use the loopback in JRiver if I want sound from another application.
2. Changed: Volume settings are available in Options > Audio > Volume (and also still available in Menu > Player > Volume or by clicking the icon next to the volume slider).
3. NEW: Added 'Options > Audio > Volume > Maximum volume' for enforcing a maximum level that Media Center will be capable of setting.
5. NEW: Added 'Options > Audio > Volume > Internal volume reference level' for specifying what volume level is shown as +0.0dB (all other volumes will be reported relative to that reference level).
6. Changed: The OSD and volume slider in Theater View use the same volume display text as other parts of the program (improves support for 'bitstreaming' display, decibels for internal volume, etc.).
I have MC volume disabled since I use the Lynx mixer for volume adjustment (and the straight to power amps). Would setting the Lynx mixer to 0dB and using Internal Volume in MC give any improvement regarding headroom? I cannot see how, but if it is so, an explanation would be good.
Now I have set up my system with Internal volume and Lynx mixer at 0dB. And - set default device to some audio device not connected to anything. I'll give it a try.
Is it correct that that the signal chain should be
Adjust x dB for Replay Gain
Adjust x dB for internal volume
Room Correction (reduces all channels by 10dB except LFE)
Convolution
Can I send the bill for new speakers to JRiver when they blow....;-) ?
That seems good.
Thanks, I was wondering. I knew, but had forgotten; thanks for the link.
You probably know this, but for anyone else following along, the order of volume effects is not relevant:
http://wiki.jriver.com/index.php/Audio_Bitdepth#Bit-Perfect
I've used the program this way for a long time, and it works great.
The way I have handled volume before has also worked fine, which explains why I have never bothered to make changes. Now I watched a movie last night using Internal Volume and it worked perfectly (as I would expect from MC). The nice thing is also that the volume bar appears on screen. Using the Lynx mixer I always had to bring it up on the screen to see the volume level.
Just be sure to select 'Volume Protection'.
The only negative thing is that volume lags behind the adjustment by a second or so. Is there a technical reason why, or is this a design choice?
Matt,
I guess I just answered my own question -> when using filters with different resolutions for the channels (like mixed 66k and 131k taps), the convolution engine doesn't compensate for this - the result is that the channels with 131k taps play delayed relative to the channels with 66k taps. So I will not use mixed filter resolutions.
Much of the community that I mention are not aware of what JRiver really is compared to another player like WMP. I think a bit of marketing over that way wouldn't be a bad idea. The community is slowly coming round to the idea that computers can play quality audio, but there's a whole lot of marketing that's battling the change (cables, for example, whose makers pay big money for a good magazine review). I'll admit that at one point I was one of them.
I've been on a mission to build the most accurate/best audio system for 15 years (that's how I got here); most of my efforts are in building speakers and amplifiers though. I have £50k in drivers ready to build my ultimate active speaker. (It is a private cinema though.)
Proper EQ combined with JMLC-profile 90-degree corner horns on beryllium compression drivers, and active crossovers = no early reflections and EQ'ed late ones, plus sub-0.2% distortion at reference level across the whole spectrum. It's audio nirvana if I can do it.
I have been using JRiver for some years and Audiolense for generating convolution filters for about half a year. My audio system and Audiolense setup is 2-channel stereo with no active crossovers etc. Recently I started to notice no more sound difference between convolution activated or deactivated. I did a room measurement with Carma 3.0, playing the frequency sweep through JRiver with convolution on and off - the graphs look the same. The DSP window for convolution shows: Processing 0 paths, and speed factors from 110x to 220x depending on the source resolution and sampling frequency. I recall the effects of convolution were very distinct to hear some weeks ago.
What may be wrong, or what might I be doing wrong?
I'd like to ask for recommendations on how to best support convolution where you have a different file for each sample rate.
Should we just support some naming convention, like adding " (44.1 KHz)", " (48 KHz)", etc. to the end of the filter files? Or should the configuration file contain a list of filters? Or something completely different?
Thanks for any help.
Hello Matt,
44, 48, 88, 96, 176, 192, 352, 384 = samplerate
Especially d) can be used for an automatic search of filters depending on samplerate.
Why not just have 4-6 slots to load config files and allow the user to assign the sample rate to each file?
You can simply create your own internal configs by stripping 44 from the filenames and adding 48, 88, 96 ...
Then check the specified folder for the availability of the filter files.
In case you do not find a complete file set for a samplerate you may issue a warning and apply a filter samplerate conversion as already carried out.
That's easy to code, but a little cryptic.
Anyway there must be a syntax for a config file. So it does not really matter what definition you like to use.
Is '44' at the end of the filename the magic marker? What if it's 48, or 44.1, or 44kHz at the end? Would it be better to use a marker that's more explicit like [SampleRate]?
Still no support for switching of filters. Please guys, JRiver is the best of the audiophile players, and the internal convolver is such a great feature...
Thank you!
I tried the new automatic sample rate switching last night and it worked perfectly.
Great! I'd like to have filters for 44.1 and 48 kHz sampling rates.
Bernt has said (https://groups.google.com/forum/?fromgroups=#!searchin/audiolense/sample$20rate/audiolense/ZH7PDr8lAdo/et5N7K_YpucJ) that you can measure with just one sample rate and the conversion when you save the filters will be basically lossless.
For best results I assume I should measure with Audiolense at both sample rates, then generate the filters?
Any tips to get this right on the first try, and how do I confirm it is working in JRiver?
When you save the filter, just make sure that 44.1 and 48 are selected in the pop-up window. By default, Audiolense ends the filter name with something like 7.1_441.cfg or 7.1_48.cfg. Just pick one of these files and JRiver will then use the correct filter based on sample rate. Remember to set your sample rate in Output Mode to 'no change', or set 88.2 to 44.1 and 96 to 48 if you also have content with those sample rates. If you check the status window of the convolution engine DSP during playback, you will be able to verify that JRiver has selected the correct filter based on sample rate.
I have Audiolense with stereo (2.0) in combination with JRiver 17, but the config files created with Audiolense give the message "Not valid" on the Convolution screen in the DSP settings. I have filters created for all the frequencies up to 96 kHz. The config files look good to me. I also tried the same in JRiver 18 (build 068), but after selecting the filter config file the program stops functioning. In the output setup I use only downsampling for files higher than 96 kHz and no upsampling (because of the limitation of my DAC). I also reinstalled my JRiver software but did not get it working until now. Maybe I do not understand it completely, but is there a description of the config file (or an example) available? Other tips are also welcome!
DVD playback:
Is there a solution to get DVDs to play without large stutter when the convolved filters have a large delay?
I know that converting all dvds to mkv is one solution, but not my preferred one.
Ideally there would be a custom video playback setting that would fix the problem
The issue is that the Microsoft DVD Navigator (the thing that reads DVDs on Windows) will not provide the audio more than a little ahead of the video. This doesn't work well if there's a large audio latency. People that set the primary buffer size in Options > Audio to a large size run into this same problem.
One solution would be to do DVD title play, which plays the raw MPEG of the main title. This is what we do when streaming a DVD to a DLNA box. This solution could work locally as well, but would disable trailers, menus, etc.
Another solution would be to find or write another DVD navigator. However, it doesn't seem like anyone has made much progress on this, possibly because of the DRM that can be baked into DVD.
With 18.0.70 you can now use LAV decoding and madVR rendering for DVD's. I wonder if this fixes the issue?
About DVD playback (which is of huge interest for me since my movie collection still consists mainly of DVDs):
Matt suggests to do DVD title playback (I don't mind losing the menus etc.). Have people tried and/or how is it done?
Best regards,
Mikkel
The only way I have succeeded is to rip the main track to mkv, e.g. using MakeMKV. No need to decode the audio. This method allows problem free use of convolution.
+∞ for native convolution :)
For those using (or thinking of using) JRiver and Audiolense, I wrote a series of blogs posts, including setup, configuration, and measurements.
Hope you don't mind a few links...
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-1
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-2
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-3
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-4
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-part-5
http://www.computeraudiophile.com/blogs/Hear-music-way-it-was-intended-be-reproduced-conclusion
Cheers, Mitch
I do the same but I really would prefer not to. It is a cumbersome way to watch a DVD :). I hope a solution will appear eventually.
Mitch,
thanks for the great articles, but the links have changed. I was able to find them again by going here.
http://www.computeraudiophile.com/blogs/mitchco/
All of your articles are here, the ones above are currently split on page 1 and 2.
Thanks again.
Hi,
I want to use the switching feature (I use acourate in a fully active 2.1 configuration).
My configuration file is this at the moment:
44100 2 6 0
0 0
0 0 0 0 0 0
h:\acourate_Daten\aktiv_2k5_v1\Cor3S44.wav
0
0.0
4.0
h:\acourate_Daten\aktiv_2k5_v1\Cor3S44.wav
1
1.0
5.0
h:\acourate_Daten\aktiv_2k5_v1\Cor2S44.wav
0
0.0
2.0
h:\acourate_Daten\aktiv_2k5_v1\Cor2S44.wav
1
1.0
3.0
h:\acourate_Daten\aktiv_2k5_v1\Cor1S44.wav
0
0.0
0.0
h:\acourate_Daten\aktiv_2k5_v1\Cor1S44.wav
1
1.0
1.0
I have a lot of wav files with the different samplerates.
But I don't know how to change my configuration or is it done automatically?
Thanks,
Erich
44100 2 4 0 --> play and convolve 44.1 kHz 2-channel musicfiles to 4 channels
0 0
0 0 0 0
c:\filter\Cor1S44.wav --> this is lowpass (acourate Cor1) Stereo (left/right) filter
0 --> take the (first) left channel of lowpass Stereo filter
0.0 --> input channel left from musicfile
0.0 --> output channel lowpass left (soundcard analog 1)
c:\filter\Cor1S44.wav --> this is lowpass (acourate Cor1) Stereo (left/right) filter
1 --> take the (second) right channel of lowpass Stereo filter
1.0 --> input channel right from musicfile
1.0 --> output channel lowpass right (soundcard analog 2)
c:\filter\Cor2S44.wav --> this is highpass (acourate Cor2) Stereo (left/right) filter
0 --> take the (first) left channel of highpass Stereo filter
0.0 --> input channel left from musicfile
2.0 --> output channel highpass left (soundcard analog 3)
c:\filter\Cor2S44.wav --> this is highpass (acourate Cor2) Stereo (left/right) filter
1 --> take the (second) right channel of highpass Stereo filter
1.0 --> input channel right from musicfile
3.0 --> output channel highpass right (soundcard analog 4)
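Going by that annotation alone (so treat the field meanings as the poster's description, not an official spec), a config like the examples in this thread could be parsed roughly like this:

from dataclasses import dataclass
from typing import List

@dataclass
class ConvolutionPath:
    filter_file: str       # WAV holding the correction filter
    filter_channel: int    # which channel of that (stereo) filter to use
    input_channel: float   # input channel from the music file
    output_channel: float  # soundcard output channel

def parse_config(text: str):
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    sample_rate, in_ch, out_ch, flag = (int(x) for x in lines[0].split())
    body = lines[3:]                    # skip header plus the two mixer lines
    paths: List[ConvolutionPath] = []
    for i in range(0, len(body), 4):    # 4 lines per path, as annotated above
        f, ch, inp, out = body[i:i + 4]
        paths.append(ConvolutionPath(f, int(ch), float(inp), float(out)))
    return sample_rate, in_ch, out_ch, paths

cfg = """44100 2 4 0
0 0
0 0 0 0
c:\\filter\\Cor1S44.wav
0
0.0
0.0
c:\\filter\\Cor1S44.wav
1
1.0
1.0"""
print(parse_config(cfg)[3])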
Are there any other ideas to solve the problem?
Also, make sure you're not up/down mixing in the Output Format.
Hello,
The open source DRC application does not attempt to match output levels between channels in its filters. For this reason, channel balance can be off. Both Brutefir and JConvolver configuration files allow one to change the volume of each channel. How can I do this with the convolver configuration file that JRiver uses?
Thank you,
Alan
Bob, I answered you at the Audiolense forum, but I'll post something here, too.
I think your main issue is that you need to set the Output Format DSP for 5.1. This "opens" six channels and lets them be used as you want. Make sure you check the box next to Output Format to make sure the DSP is used, too. If you don't do this, JRiver will only open as many channels as are in the source. In your case it is only opening two channels.
1) I assume the JRiver playback volume control precedes the entire path and plugins in the output DSP section. Although there is an option in some of the plugins to send them full level. Not sure how JRiver would implement that exception anyway without moving their order in the chain... Matt, if you're listening, if I put the parametric equalizer last in the plugin chain and set its dither to 24 bit, there is no further dsp (recalculation) in the system on the way to the DAC? In other words, it's 64-bit floating point through and including the last plugin. Then the last plugin dithers the signal at a 24-bit level and then with no further modification JRiver delivers it to the DAC. The DAC of course truncates the incoming bits below #24. Or the parametric truncates after dithering, which is not a problem either, provided there is no other processing after it.
2) I can't get the live input to work in any way. Not sure why you recommend using a different ASIO device for the live input. That would be expensive if you need six channels! But anyway, I tried many different combinations of ASIO devices, as I have several hooked up to this computer, and in no case would the live input work. I get an error report from JRiver "something wrong with playback".
You need a different device, because Windows talks to one device (the default) and that loops back to us, where we route it to your real device. A cheapie motherboard soundcard works well.
Sorry, I was addressing loopback not line-in playback.
You're correct that line-in with ASIO can often use the same device as the output.
However, we open the input and output sides separately. So it will only work if the ASIO driver likes this, and most don't.
It'd be neat to change this in JRiver, but our architecture keeps inputs and outputs separate so it's a little complicated.
4) I can't get the automatic filter sample rate detection to work in convolution, using Audiolense's standard format, where each config file ends in a certain sample rate number. The status screen in the convolution constantly shows the same filter is working. So for a current workaround I engaged the online sample rate conversion in JRiver, always upsampling or downsampling incoming rates that are not 96k to 96k. No offense, but I think the SRC in JRiver sounds a little bit harsh, a subtle "edge" to it to my ears compared to no SRC. It's not that bothersome to my ears, but I'd like to patch it out if we can get this automatic cfg filter switching worked out. I understand it did work in MC before #18 when you guys worked it out... see, I'm standing on the shoulders of giants! Many thanks and best to you all.
Are you using the naming convention posted earlier in this thread?
xxxx2.0_441
xxxx5.1_48
etc.
The regular expression is:
^(.+)(\d{1}.\d{1})_(\d{2,3}).cfg$
Which outputs:
Name
Channels
Sample rate
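For reference, here is how that naming convention could be applied to pick a filter at playback time. This is only a sketch: the pattern is the one quoted above (with the string escaping removed and the literal dots escaped), and the suffix-to-rate mapping is inferred from the rates reported to switch correctly later in the thread.

import re

PATTERN = re.compile(r"^(.+)(\d{1}\.\d{1})_(\d{2,3})\.cfg$")

# suffix -> sample rate in Hz (inferred; only rates confirmed in this thread)
SUFFIX_TO_RATE = {"441": 44100, "48": 48000, "882": 88200,
                  "96": 96000, "192": 192000}

def pick_filter(filenames, playback_rate):
    # return the first config whose sample-rate suffix matches the playback rate
    for name in filenames:
        m = PATTERN.match(name)
        if m and SUFFIX_TO_RATE.get(m.group(3)) == playback_rate:
            return name
    return None

files = ["MyRoom2.0_441.cfg", "MyRoom2.0_48.cfg", "MyRoom2.0_96.cfg"]
print(pick_filter(files, 96000))   # MyRoom2.0_96.cfg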
Oh, sorry. I may have caused the confusion because I don't even know what MC calls "loopback". This is most unfortunate. Well, first I have to get line-in working. Is there a debugging tool or log I can send you? And if it's complicated, for the foreseeable future I'll live with having another (stereo) ASIO device for the line input. Yeah, it makes it more complicated for me, too. :-(.
BK
2. Automatic switching works for _441, _48, _882, _96 and _192, but doesn't work for _1764.
In this or another thread someone posted how to do loopback, but that is a separate thing from line in. Can someone please outline that procedure?
Bob,
when you are looking for the best possible way to use convolution together with line-in "live" playback, I strongly recommend using AcourateConvolver. Its sole purpose is to give you this capability - and it does it very well (including less delay).
Via AcourateASIO (a module of it) it can also be tightly integrated with JRiver - it serves as a "virtual ASIO soundcard". Therefore you simply select AcourateASIO as the "soundcard" for output in JRiver. AcourateConvolver takes this stream and does convolution in 64-bit fp, digital volume (perceived loudness, ISO 226) and flexible channel routing.
I have been using Audiolense for 2 years now, and Acourate for 18 months. You should definitely give both of them a try - depending on your room and setup there are significant differences in achievable sound quality.
Best
Walter
Thanks to you and Brad. Yes, I am already in the process of checking out Acourate. The convolver and the ASIO link seem like my perfect solution for getting live line-in with automatic sample rate switching, or playback from JRiver, and 24-bit dither (according to Uli he can implement that at the tail of his chain in AcourateConvolver), all in the same application and integrated environment. The only weakness I have found so far is that Acourate is not so strong in the surround domain, but I can wait till he gets that act together.
For that matter I have to figure out how to get multichannel PCM audio out of DVDs and Blu-rays encoded in DTS and Dolby Digital in JRiver... sounds like it's supposed to work (and what a marvelous thing that would be!). If it does work, I'll sell my Marantz A/V preamp with HDMI input and multichannel balanced XLR outputs because I won't need it anymore! But the Marantz can read and decode cleanly SACDs, DVD-A, Blu-ray and DVD through its HDMI input from my Sony Blu-ray player, and that's an amazing achievement in itself - but it's limited to analog output due to the copy-protection issues, and so I eagerly await seeing JRiver talk to Acourate at 64-bit float over ASIO in a fully integrated environment! We are living in interesting times.
By the way, I got JRiver's sample rate switching to switch the convolver filters as well. It was simply a matter of getting the file names to EXACTLY conform with the specifications. And I thank (I think) Brad for turning me on to that format. The help here on the forum has been terrific. I'm shivering with excitement. In the meantime I have an automatic sample-rate switched playback from JRiver for file playback through the Convolver, dithered to 24 bits going. And it sounds sweet, beautiful and terrific, best my system has ever sounded. And that's saying a lot, believe me.
And for line input, I kludged together VST Host (a great program) and Convolver (source forge) VST. It doesn't automatically switch, but I can live with it for a few weeks until I get Acourate together.
Take care everyone, I'll fill you in as we progress. I couldn't have done it without you! BTW, we need to move this whole thread over to MC 18, eh? Maybe the forum administrators can move it.
Bob
I read your thread over at the audiolense forum with interest.
You describe a problem with different attenuation at different sample rates. This may not be the solution, but I had a problem (using acourate) where I couldn't get correct amplitudes when measuring subs and mains separately. Uli said that the problem was that he has been unable to find a correct scaling algorithm for logsweep recordings for different freq ranges.
Hence the solution was to record the subs and mains with full range sweeps. So your problem may be that you record the different sampling rates over different freq ranges (20-24k and 20-44k).
Brad
Bob,
would you mind posting here, or over at the acourate yahoo forum, your experiences and results with acourate.
I would be interested in what parameters gave you your preferred results
Thanks
Brad
Hi. New to the forum, hello everyone.
I have a DSD question.
Will the convolver handle DSD bitrates? Or is 192k the highest it can do?
Will DSD over DoP be convolved with filters only up to 192k via some kind of upsampling?
And if so, will my 2.6 GHz Core Duo Mac mini handle it? If not, is a file with fewer taps the way to go?