INTERACT FORUM

Pages: [1] 2 3 4 5 ... 10
 1 
 on: Today at 02:49:19 pm 
Started by S. Pupp - Last post by S. Pupp
JRiver opens in Theater View, as intended.  When I exit Theater View and go to Standard View, there is no menu bar.  If I click on the desktop, the Finder menu bar appears, as expected.  If I then click on JRiver, the Media Center 31 menu bar appears, but then immediately slides upward, exiting the screen.  I cannot change any settings as a result.

I tried deleting and reinstalling JRiver MC31, with no luck.

I am using a Mac mini M1 running macOS Sonoma.  I am using MediaCenter310083.

Any help that could be given would be greatly appreciated.

 2 
 on: Today at 12:29:22 pm 
Started by JimH - Last post by eve
Thanks for the link.  This may actually work: a server treating video and audio as separate AES67 streams, with the final client comparing timestamps and syncing, or doing whatever is required.

You wouldn't even need to actually 'send' the video. Again, the way things work in what I've set up is essentially the playback application syncs itself to the PTP clock on your network. Your audio gets decoded from the video, and instead of sending that audio to a 'device' you push packets containing that audio directly into the network.

Now if you wanted to get really complex, you could do something with your DSP node where it sends back a message to the initial playback application saying 'hey, my DSP pipeline takes this long to complete', and offset the video by that amount in the playback application. I'm just spitballing here, but essentially: sync the playback pipeline to PTP, playback starts, but it doesn't, really; instead you send empty audio samples (same channel count + format) to your DSP for, say, half a second. Your DSP node tells the playback application 'this took me 3 ms from receiving the packet to completing my DSP pipeline', then the playback application accounts for that delay, and the actual video + audio stream starts. You could hypothetically cascade this too: if your DSP node sends OUT an AES67 stream, whatever picks that up could also tell the playback application 'look, it takes me 5 ms from getting a packet to that sample hitting the D/A' and add that into the offset, I guess?
Really though, +/- 1 ms or 2 ms of video sync error is imperceptible in most respects.
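To make the cascade idea concrete, here's a toy sketch of the arithmetic only (the function names and the "nodes report their delay" message format are hypothetical, not a real AES67 or MC API): the playback application sums the latencies reported back by each downstream node and delays video presentation by that total.

```python
# Toy sketch of cascaded latency compensation (hypothetical names,
# not a real AES67/PTP API). Each downstream node reports how long
# its pipeline takes; the player delays video by the sum.

def total_offset_ms(reported_delays_ms):
    """Sum the per-node pipeline delays reported back to the player."""
    return sum(reported_delays_ms)

def video_presentation_time(audio_pts_ms, reported_delays_ms):
    """Shift the video presentation timestamp by the audio chain's
    total latency so picture and sound line up at the final D/A."""
    return audio_pts_ms + total_offset_ms(reported_delays_ms)

# DSP node reports 3 ms, downstream AES67 receiver reports 5 ms:
delays = [3.0, 5.0]
print(video_presentation_time(1000.0, delays))  # 1008.0
```

The real work is in measuring those per-node delays against the shared PTP clock; once you have them, the offset itself is just addition.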

 3 
 on: Today at 12:14:56 pm 
Started by Tango-DJ - Last post by marko
To me, it makes sense that BPM is not available.

The field is auto-populated when you use the "analyse audio" tool, and, if that option is selected for auto-import (I think it is set by default?), it would already be correctly populated.

-marko

 4 
 on: Today at 12:05:20 pm 
Started by Dennis in FL - Last post by Dennis in FL
I spoke too soon... I'm getting hangups every now and then.   My Raspi5 is headless and it no longer has RealVNC as an option.   I've been fiddling with Microsoft Remote Desktop and TigerVNC as well as RealVNC as a viewer.   I think I have the NAS mounting correct, but something is hanging...

 5 
 on: Today at 11:18:33 am 
Started by OpenEnd - Last post by Awesome Donkey
Yes, I use NFS myself for my NAS. But you can do it with SMB, CIFS, etc.
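For reference, a typical way to mount an NFS share from a NAS on a Linux client is an /etc/fstab entry like the one below (the host name and paths are placeholders, and the options are just one reasonable choice, not a recommendation specific to MC):

```
# /etc/fstab -- NFS mount of a NAS share (placeholder host/paths).
# noauto + x-systemd.automount mounts the share on first access,
# which avoids boot-time hangs if the NAS is unreachable.
nas.local:/export/music  /mnt/music  nfs  defaults,noauto,x-systemd.automount  0  0
```

SMB/CIFS shares work the same way with type `cifs` and a credentials file in place of the NFS options.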

 6 
 on: Today at 11:08:50 am 
Started by Alex_W - Last post by Alex_W
Profuse apologies for resurrecting this old topic.

*setup:
-- JRiver 31 (was on 28 until a week ago; waiting for the 32 ;) ) in a virtual Linux box on a NAS; library on a NAS; files are a mix of stereo and multi-channel .ape, .flac, .wav (most 16 bit/44.1 kHz, some 24 bit/96 kHz), .dsf -- there're no issues here;

-- renderers are two separate network-connected devices with their own amps/speakers (TEAC NT-505 and Sony STR-DN1080); each has an associated JRiver DLNA server with separate settings -- also no issues after some minor tweaking, they do what I'm asking of them;

-- controller: I login remotely to the virtual Linux box (RemoteViewer) and use the main JRiver interface to serve music to the renderers -- yet again, no big issues here.

*some details and the question:
-- TEAC-associated DLNA server has:
Audio Mode --> specified format if necessary,
Format --> 24 bit PCM,
Audio/Advanced --> files to convert .flac and .ape,
Audio/Advanced/DSP studio has a couple of things set (EQ, some up-sampling just to play with).

I know the DSP is being applied, since TEAC shows the "up'ed" sampling frequency of the file served, and I can hear the EQ working... now, to the question (...drumroll...)

the AudioPath stays empty, "not using JRiver audio engine". How do I get it to populate?

I recall reading that setting AudioMode to "specified format" will populate the AudioPath, but I don't want to convert .dsf files... I also remember reading that "specified format if necessary" _should also work_...

What am I doing wrong?

Cheers,

Alex


 7 
 on: Today at 10:59:13 am 
Started by JimH - Last post by JimH
I bought a JRiver Media Center licence last year, but my Media Center is still working as MC30 (I don't know why I wasn't upgraded automatically to MC31...).
Can I upgrade to MC32? And how?
Please read the first post in this thread.

You need to download and install MC31.  Then restore your license.  Details are in that post.

 8 
 on: Today at 10:59:09 am 
Started by fitbrit - Last post by eve
I have a bit of a niche question.

A customer of mine wants a VERY high-end multichannel setup, but is unsatisfied with multi-channel DACs. Instead he wants to use multiple stereo DACs for native DSD multichannel surround music processing.
My initial thought was that three SOtM USB cards in the same system going to three physically distinct - but identical model - DACs would work. However, I am not so sure now whether MC would see them as three separate sound devices if they all use the same ASIO driver. Can anyone give any insight into how to achieve this?

I know that the DSD music would (probably?) have to be decoded to PCM to separate the channels to distribute to the appropriate cards first, so native DSD playback would not be possible.

USB D/A won't work here.

If he's dead set on multiple stereo D/A units, he'll need to split out AES signals to go to each D/A unit and sync the clocks.
This would be accomplished with an audio interface with the required number of digital outs + word clock out (or slaved to another clock). The audio interface then appears as a single device on the source PC. The audio interface itself could be USB but this way you're not trying to sync up multiple asynchronous USB D/A units.
Additionally, if his desired number of channels is 8 or under, the Okto would be something to look at. Performance- and measurement-wise it's really top tier.
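On the channel-separation point, the step of carving a multichannel stream into stereo pairs is simple de-interleaving. A minimal sketch (pure Python for illustration; this is not MC's actual pipeline, and the function name is made up):

```python
def split_stereo_pairs(interleaved, channels=6):
    """De-interleave flat multichannel PCM into stereo pairs.

    `interleaved` is a flat sample list ordered frame by frame,
    [ch0, ch1, ..., ch5, ch0, ch1, ...]. Returns one flat stereo
    stream [L, R, L, R, ...] per pair, ready to hand to a
    two-channel device.
    """
    assert channels % 2 == 0, "need an even channel count"
    # Group the flat stream into per-frame chunks of `channels` samples.
    frames = [interleaved[i:i + channels]
              for i in range(0, len(interleaved), channels)]
    pairs = []
    for p in range(channels // 2):
        stream = []
        for frame in frames:
            # Channels 2p and 2p+1 form stereo pair p.
            stream.extend(frame[2 * p:2 * p + 2])
        pairs.append(stream)
    return pairs

# Two 5.1 frames, samples numbered 0..11:
pcm = list(range(12))
print(split_stereo_pairs(pcm))
# [[0, 1, 6, 7], [2, 3, 8, 9], [4, 5, 10, 11]]
```

The hard part, as noted above, isn't the splitting; it's keeping three asynchronous D/A clocks aligned once the pairs leave the PC, which is why a single clocked interface with multiple digital outs is the safer route.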

 9 
 on: Today at 10:55:38 am 
Started by JimH - Last post by eve
Another possible approach: https://www.audiosciencereview.com/forum/index.php?threads/introducing-hang-loose-convolver-from-accurate-sound.23699/page-7#post-1786373


Quote
I hear you and been meaning to get back to you. I am working with a company that develops custom digital audio I/O boards compatible with Raspberry Pi. Can't go into too much detail yet but likely be a 1U chassis and the number of digital I/O channels will be configurable. Trying for the "Swiss Army" knife of DSP I/O supporting HDMI, TOSLINK, AES, DANTE, USB, and

Oh man, consider me intrigued.

 10 
 on: Today at 09:16:11 am 
Started by JimH - Last post by Dennis in FL
If you could get one new improvement, what would it be?

I'm always puzzled by what source is "Playing Now"  ---- I click on a song and it goes to the wrong place.    I'd love to see a positive feedback indicator, so that what I choose is better displayed and remains the default.

Or maybe a video on YouTube to instruct.
