INTERACT FORUM

Author Topic: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB  (Read 63663 times)

6233638

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 5353

As discussed here, I feel that DXVA2 Copy-Back in LAV Video is a better choice than CUVID for hardware acceleration with Nvidia graphics cards.

1. CUVID forces the graphics card into its highest performance state. One of the reasons to use hardware-accelerated decoding is to reduce power consumption; with a high-powered graphics card such as my GTX 570, most of that benefit is lost.
DXVA2 Copy-Back allows the graphics card to drop down to the medium power state if the GPU load is low enough.

2. Some videos show decoding errors when using CUVID. This can be a problem with hardware acceleration in general (it seems to be less tolerant than software decoding), but it seemed to be more of an issue with CUVID. I haven't done extensive testing of this, though.

3. DXVA2 Copy-Back seems to be more stable inside Media Center. On my system, Media Center will sometimes hang with the "Hardware accelerate video decoding when possible" option enabled. The video keeps playing but the program is unresponsive. This has never happened since I set up LAV Video manually to use DXVA2 Copy-Back.

4. With CUVID decoding, deinterlacing is often activated when it should not be.

5. DXVA2 Copy-Back generally performs better than CUVID does in my experience.


And DXVA2 Native is not a good solution for Nvidia, as it does not allow madVR's deinterlacing to work correctly (you cannot force film mode/IVTC).
I tested performance both on my main system with a GTX 570 and, a while back, with a low-end GT 610 (where performance really mattered), and DXVA2 Copy-Back performed best with both cards.

I can't speak for what's best with AMD/Intel, and I don't know what you are currently using for them when hardware acceleration is enabled (if it's even an option).
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #1 on: April 29, 2013, 12:13:01 pm »

On AMD, DXVA2-CB only works properly on 7xxx series and above cards, so it's not a good general choice; it'll be dead slow on older cards.

What I would suggest (pseudo-code incoming):

Code: [Select]
if Intel and not Live TV: /* there were/are some issues with live-tv behaviour and QuickSync */
  use QuickSync
elseif NVIDIA and ROHQ:
  use DXVA2 Copy Back
else
  use DXVA2 Native
end

If you want to detect AMD 7xxx series (and above) cards somehow, they could also use Copy-Back in ROHQ mode and benefit from the IVTC option in madVR, but that's up to you, if you want to bother managing a device ID list or something similar.
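
To make that concrete, here is a minimal C++ sketch of the suggested selection logic; the HwMode enum, the ChooseHwAccel name, and the isLiveTV/isROHQ flags are illustrative assumptions, not MC's or LAV's actual API.

Code: [Select]
// Illustrative sketch of the suggested ladder; all names here are hypothetical.
enum class HwMode { QuickSync, Dxva2CopyBack, Dxva2Native };

HwMode ChooseHwAccel(unsigned vendorId, bool isLiveTV, bool isROHQ) {
    const unsigned VEND_ID_NVIDIA = 0x10DE;
    const unsigned VEND_ID_INTEL  = 0x8086;
    if (vendorId == VEND_ID_INTEL && !isLiveTV)   // QuickSync had some live-TV issues
        return HwMode::QuickSync;
    if (vendorId == VEND_ID_NVIDIA && isROHQ)     // madVR benefits from copy-back (IVTC)
        return HwMode::Dxva2CopyBack;
    return HwMode::Dxva2Native;                   // safe default (incl. pre-7xxx AMD)
}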
Logged
~ nevcairiel
~ Author of LAV Filters

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 42372
  • Shoes gone again!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #2 on: April 29, 2013, 12:14:45 pm »

Thanks for starting a thread.

Currently we use this ladder of LAV defines for most files and CPUs (there are some special JTV, Atom CPU, etc. tweaks):
1) HWAccel_QuickSync
2) HWAccel_CUDA
3) HWAccel_DXVA2Native (if not tried as #2)

Note that HWAccel_DXVA2CopyBack is not used.

What ladder would be better?  Hopefully nevcairiel will weigh in.

[ edit -- as usual, Hendrik is faster than me :P ]
Logged
Matt Ashland, JRiver Media Center

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 42372
  • Shoes gone again!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #3 on: April 29, 2013, 12:19:44 pm »

A few questions:

Is DXVA2 Copy-Back only a good choice for madVR, not EVR?

Currently we don't probe the target GPU during graph building.  Is GetAdapterIdentifier(...) to get a D3DADAPTER_IDENTIFIER9 our best bet to check for nVidia?  Or would it be more efficient to ask "will CUVID work" and assume that means it's nVidia?
Logged
Matt Ashland, JRiver Media Center

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #4 on: April 29, 2013, 12:24:15 pm »

Copy Back doesn't really offer any advantages for EVR (unless you want to post-process the video in software beforehand, but standard RO doesn't do this), while it uses slightly more resources (which can cause issues on some systems, like the ION/Atom situation that you already special-cased).

Probing for CUVID seems like an expensive operation. GetAdapterIdentifier() is what the DXVA2 code in LAV uses to detect the vendor and enable some vendor-specific quirks, and you probably already have a D3D device.
I use my existing device, and the D3DDEVICE_CREATION_PARAMETERS from GetCreationParameters() supplies the AdapterOrdinal to pass to GetAdapterIdentifier().

VendorId list:
Code: [Select]
#define VEND_ID_ATI     0x1002
#define VEND_ID_NVIDIA  0x10DE
#define VEND_ID_INTEL   0x8086
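
For reference, a minimal sketch of that vendor check against an existing Direct3D9 device; pDevice here is an assumed, already-created IDirect3DDevice9, not something LAV or MC hands you.

Code: [Select]
// Sketch: read the adapter vendor from an existing IDirect3DDevice9 (pDevice is assumed).
D3DDEVICE_CREATION_PARAMETERS cp;
pDevice->GetCreationParameters(&cp);

IDirect3D9 *pD3D = nullptr;
pDevice->GetDirect3D(&pD3D);

D3DADAPTER_IDENTIFIER9 id = {};
pD3D->GetAdapterIdentifier(cp.AdapterOrdinal, 0, &id);
bool isNvidia = (id.VendorId == VEND_ID_NVIDIA);
pD3D->Release();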

PS:
There is one advantage to copy-back: it's more flexible. For example, if you have some crazy TV station which switches from MPEG-2 4:2:0 content to MPEG-2 4:2:2 for some reason (4:2:2 is not hardware compatible), the DXVA2 Native decoder would probably just fail and give up (because it can't fall back to software decoding), whereas the DXVA2 Copy-Back decoder can fall back to software decoding and continue with that. This is a rather unlikely scenario, but it can happen. It's probably more likely in some screwed-up file someone created rather than a TV broadcast, though. Personally, I've never gotten complaints about such problems ;)
Logged
~ nevcairiel
~ Author of LAV Filters

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 42372
  • Shoes gone again!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #5 on: April 29, 2013, 01:29:35 pm »

Next build:
Changed: Red October will use DXVA Copy Back with nVidia hardware when possible instead of CUVID when using madVR ( http://yabb.jriver.com/interact/index.php?topic=80258.0 ).

The graph building has never used GPU information before, so this change does require creating a Direct3D device, asking it a few questions, and releasing it.  We do know the target window / location at graph building time, so we should be talking to the correct adapter.  It made me wonder if the decoder actually knows the correct adapter before playback starts, since we don't add and position the renderer until later.

Since we're now going to the trouble of getting GPU information, we'll use the same information as a gate, so we won't probe LAV about using CUVID unless we know we'll be playing to an nVidia device.
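
For illustration, here is roughly what such a probe could look like in Direct3D9 terms; the function name, the hwnd parameter (the target playback window), and the surrounding plumbing are assumptions for the sketch, not MC's actual code.

Code: [Select]
#include <windows.h>
#include <d3d9.h>

// Sketch: map a target window to its display adapter and check the vendor ID.
bool TargetWindowIsNvidia(HWND hwnd) {
    IDirect3D9 *pD3D = Direct3DCreate9(D3D_SDK_VERSION);
    if (!pD3D) return false;

    // Find the adapter that drives the monitor currently showing the window.
    HMONITOR hMon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    UINT adapter = D3DADAPTER_DEFAULT;
    for (UINT i = 0; i < pD3D->GetAdapterCount(); ++i) {
        if (pD3D->GetAdapterMonitor(i) == hMon) { adapter = i; break; }
    }

    D3DADAPTER_IDENTIFIER9 id = {};
    bool isNvidia = SUCCEEDED(pD3D->GetAdapterIdentifier(adapter, 0, &id))
                    && id.VendorId == 0x10DE;  // NVIDIA vendor ID
    pD3D->Release();
    return isNvidia;
}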
Logged
Matt Ashland, JRiver Media Center

6233638

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 5353
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #6 on: April 29, 2013, 01:51:31 pm »

Excellent, thank you!
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #7 on: April 29, 2013, 02:14:55 pm »

It made me wonder if the decoder actually knows the correct adapter before playback starts, since we don't add and position the renderer until later

DXVA2 Copy-Back always uses the primary display adapter, because it really doesn't matter which adapter the video is decoded on. This is just blindly assuming people wouldn't set up a dual-GPU system with different GPU vendors (which would appear to be a rare thing, and combining NVIDIA and AMD also has its fair share of issues).
DXVA2 Native gets the device from the renderer, so it'll use the one the renderer is positioned on.
Logged
~ nevcairiel
~ Author of LAV Filters

mrfx

  • Junior Woodchuck
  • **
  • Posts: 63
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #8 on: May 01, 2013, 01:52:12 am »

[...] This is just blindly assuming people wouldn't set up a dual-GPU system with different GPU vendors (which would appear to be a rare thing, and combining NVIDIA and AMD also has its fair share of issues) [...]

AFAIK it's not a rare thing, because a lot of notebooks have setups with Intel GMA + Nvidia GPU.
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #9 on: May 01, 2013, 02:15:10 am »

AFAIK it's not a rare thing, because a lot of notebooks have setups with Intel GMA + Nvidia GPU.

That's not the same thing, because that's switchable graphics, not really a "dual" setup; it behaves completely differently anyway.
Logged
~ nevcairiel
~ Author of LAV Filters

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 42372
  • Shoes gone again!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #10 on: May 03, 2013, 10:14:06 am »

This change is causing trouble on one machine:
http://yabb.jriver.com/interact/index.php?topic=80357.0

Any advice?
Logged
Matt Ashland, JRiver Media Center

InflatableMouse

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3978
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #11 on: May 03, 2013, 10:35:59 am »

I'm not seeing the same behavior on my GT430. In fact, it's behaving better than before, and my GPU core temperature has decreased by about 10 degrees C during playback.
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #12 on: May 03, 2013, 11:13:55 am »

It seems unlikely that DXVA2-CB would get you to 100% CPU (unless it's an Atom CPU or something like that); especially on a multi-core CPU it could potentially only ever max out one core, so 50% or 25%, but not 100%.
Just to make sure: you don't have YADIF deinterlacing activated, do you?

Anyway, systems with extremely slow CPUs but acceptably fast GPUs can get some boost from using DXVA2 Native and, in madVR, enabling both "don't use copyback for DXVA" options under "rendering -> trade quality for performance".
This comes at a small cost in quality (chroma is slightly blurred, IIRC), but should improve performance.

Note that the quality loss only happens with madVR; EVR handles native DXVA2 differently, so it doesn't occur there.

Sadly, MC doesn't offer any such fine-grained controls. Of course, you can change the madVR settings yourself; however, LAV's hardware acceleration mode cannot be adjusted without going custom.
Logged
~ nevcairiel
~ Author of LAV Filters

bv1pacb

  • Junior Woodchuck
  • **
  • Posts: 56
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #13 on: May 03, 2013, 11:37:12 am »

I'm running Windows 7 Ultimate 64-bit with 8 GB RAM and an AMD dual-core Athlon II X2 270 at 3.4 GHz on a GIGABYTE GA-990XA-UD3 motherboard.  Not an awesome processor, but I've had no issues or limitations either.  Once home, I'll check my MC settings.
Logged

InflatableMouse

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3978
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #14 on: May 03, 2013, 01:01:25 pm »

Can you check Task Manager and see which process is causing the 100% CPU?
Logged

BryanC

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 2661
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #15 on: May 03, 2013, 01:05:52 pm »

Quote
GIGABYTE GA-990XA-UD3

Have you installed the latest BIOS (F14b)?
Logged

bv1pacb

  • Junior Woodchuck
  • **
  • Posts: 56
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #16 on: May 03, 2013, 01:35:00 pm »

Have you installed the latest BIOS? F14b.

F14b, yes.

Also most recent nVidia driver.
Logged

bv1pacb

  • Junior Woodchuck
  • **
  • Posts: 56
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #17 on: May 03, 2013, 07:14:34 pm »

Once home today, booting up revealed that my symptom of 100% CPU with no throttling was incorrect.  My apologies for any head-scratching or confusion caused.  I did not reboot when upgrading from 180175 to 180177 yesterday, and I think that this must be the reason for the anomaly.

However, further trials still showed no change in the GPU load parameters, both for Blu-ray and DVD playback, so I decided to check the madVR settings as suggested by nevcairiel.

First, I could see no setting for "YADIF deinterlacing".  I did find the options for "don't use copyback for DXVA" under "rendering -> trade quality for performance".  These were unchecked, and I decided to leave them so.  More snooping led me to the "scaling algorithms" sections, all three of which contain radio buttons to enable DXVA2 processing, although this was grayed out in the "chroma upscaling" section.  I thought little of the fact that none of these were checked, thinking, "Hey, if this revision is supposed to 'just work', they'd already be enabled."  But I decided to try them anyway.

Mirabile dictu!

GPU temperature immediately dropped from 86°C to 72°C, load went from 80% to 48%, and fan speed headed south from 82% to 54%, which is as low as it can go on my card.  All this for DVD playback.  Blu-ray playback showed an even greater improvement: the previous 92°C, 87% load, and 89% fan came down to 53°C, 56% load, and 54% fan!

So, again, apologies; however, your suggestions led me to this hoped-for result.   :D
Logged

6233638

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 5353
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #18 on: May 03, 2013, 07:33:39 pm »

I'm glad to hear that this wasn't the cause of your problems, and that things are working smoothly now.
It's encouraging to hear that people are actually noticing reduced GPU load and temperatures from this change.

While DXVA2-CB was the most efficient with Nvidia cards in my testing, DXVA scaling inside madVR actually performed quite poorly compared to most of the other scaling algorithms (both visually and in terms of GPU load).

The YADIF option nevcairiel mentioned is inside LAV Video rather than the madVR options, and I think it should be auto-configured by Media Center anyway.
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #19 on: May 04, 2013, 01:22:02 am »

The YADIF option nevcairiel mentioned is inside LAV Video rather than the madVR options, and I think it should be auto-configured by Media Center anyway.

Yeah, this question was more directed towards Matt, but apparently all is well now.
Logged
~ nevcairiel
~ Author of LAV Filters

Samson

  • Galactic Citizen
  • ****
  • Posts: 391
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #20 on: May 04, 2013, 08:07:12 pm »

I did not reboot when upgrading from 180175 to 180177 yesterday, and I think that this must be the reason for the anomaly.



Hi all,
are we supposed to reboot, and if so, should this not be an incorporated step in the install program?
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14463
  • I won! I won!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #21 on: May 04, 2013, 08:51:55 pm »

No... but you know what Windows is like.  If something is odd: reboot, double-check, then report!
Logged
JRiver CEO Elect

InflatableMouse

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3978
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #22 on: May 05, 2013, 02:11:21 am »

With all the upgrades in version 18, I only remember having to reboot once, and as a beta member I don't think I even skipped one version. IIRC, it had to do with shell integration.
Logged

gtgray

  • Galactic Citizen
  • ****
  • Posts: 261
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #23 on: May 05, 2013, 02:18:29 am »

On AMD, DXVA2-CB only works properly on 7xxx series and above cards, so it's not a good general choice; it'll be dead slow on older cards.

What I would suggest (pseudo-code incoming):

Code: [Select]
if Intel and not Live TV: /* there were/are some issues with live-tv behaviour and QuickSync */
  use QuickSync
elseif NVIDIA and ROHQ:
  use DXVA2 Copy Back
else
  use DXVA2 Native
end

If you want to detect AMD 7xxx series (and above) cards somehow, they could also use Copy-Back in ROHQ mode and benefit from the IVTC option in madVR, but that's up to you, if you want to bother managing a device ID list or something similar.

So Nev, is the LAV Video decoder at a distinct disadvantage if you have a 7790? I have tried every combination of settings in your decoder and I still get a ton of dropped and repeated frames. What I notice is that the dropped and repeated frames seem to be roughly, sometimes precisely, the same number. Other times there will be neither until a live TV channel change occurs, or there is an exit to the guide and then back to another channel. Oftentimes the deinterlacing is not working on SD, and there will be bad combing with deinterlacing artifacts.

I have seen the madVR OSD say deinterlacing is on when it is turned off. I have seen deinterlacing reported as on or off by settings, and also as on by upstream. None of it is consistent. I have literally spent 8-10 hours going through all the suggestions here and every combination of DXVA settings, none of which seem to make a bit of difference.
Logged

6233638

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 5353
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #24 on: May 05, 2013, 03:55:45 am »

If you are seeing dropped and repeated frames, it sounds like your madVR settings are too demanding for the card.

Try changing all scaling options to Bilinear.
Logged

gtgray

  • Galactic Citizen
  • ****
  • Posts: 261
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #25 on: May 05, 2013, 03:06:28 pm »

If you are seeing dropped and repeated frames, it sounds like your madVR settings are too demanding for the card.

Try changing all scaling options to Bilinear.

I have tried the lowest settings. It doesn't matter; the behavior is the same. Now, the AMD display driver is new, as the card is new, so it may have some bug. It is not the general-release AMD driver that works with all the other cards, but one specific to the HD 7790. Still, the fact that I can get it to work consistently without dropping or repeating frames by disabling deinterlacing in madVR, at the cost of poor SD deinterlacing, with madVR at the highest settings, suggests to me it's not a performance issue. This card is slightly higher in performance than a GT 650 Ti.

So this is a pretty high-bandwidth HTPC card. Just for grins, and because I will need something with Blu-ray menus for certain content anyway (Spears and Munsil 2nd Edition), I upgraded a copy of PowerDVD 9 that was a legacy thing on the box to PowerDVD 13. The PowerDVD 13 settings menu identifies the GPU as AMD and lets you specifically select the AMD deinterlacing mode you want, with the exact same nomenclature AMD uses in the CCC. It worked perfectly, even switching output frame rates, and my Radiance saw the Blu-ray content input to it as a perfect 23.976. PowerDVD doesn't have an OSD comparable to madVR's for reporting stats, but it must be using something beyond EVR as its renderer, because it produces a very nice image.

I guess I don't understand how the decoder and renderer stack work together to control deinterlacing. Maybe somebody can explain it to me. I can get a perfectly deinterlaced HD image with madVR deinterlacing turned off and 0 frames dropped or repeated, as long as I only watch HD, be that for hours, with no issues. The only issue I have is that if I change to an SD channel, it is clearly not deinterlacing properly, as you can see artifacts; progressive (720p) and 1080i content plays perfectly.

WMC with the MS decoder and EVR also works perfectly, as one would expect. Red October Standard is fine. madVR works fine with PowerDVD 9's decoder with madVR's deinterlacing enabled and drops 0 frames, but it too does not properly deinterlace SD content. That decoder has a properties configuration that allows you to choose DXVA. It's absolutely perfect for 1080i, with full settings on madVR chroma upscaling producing a beautiful image on the 1080i and 720p broadcasts... but simply awful SD deinterlacing, much worse than LAV.

Logged

InflatableMouse

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3978
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #26 on: May 05, 2013, 03:20:28 pm »

Was your previous card in this system an Nvidia card?

I've seen problems with games not performing when the previous driver was not cleaned up properly (not the same as uninstalling it from add/remove software). It might be worth a try: download Driver Sweeper from guru3d.com and tick the option to clean up Nvidia video drivers. It might even be worth ticking both Nvidia and AMD, rebooting, and reinstalling the latest AMD driver from their website (supplied DVDs often contain older versions, even with newer models, because the drivers work for a particular family, like 7xxx).
Logged

justsomeguy

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 525
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #27 on: May 07, 2013, 05:57:18 pm »

I have two machines on which DXVA2-CB causes major video corruption: one with an Nvidia GTX 670 and another with a GTX 660. I set up LAV in custom mode and forced CUVID, and playback is fine.
Logged

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 42372
  • Shoes gone again!
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #28 on: May 07, 2013, 07:23:46 pm »

I have two machines on which DXVA2-CB causes major video corruption: one with an Nvidia GTX 670 and another with a GTX 660. I set up LAV in custom mode and forced CUVID, and playback is fine.

You might try updating your drivers.

I'm on a GTX 680 and haven't seen anything strange since this change.
Logged
Matt Ashland, JRiver Media Center

InflatableMouse

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 3978
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #29 on: May 08, 2013, 01:36:04 am »

I have two machines on which DXVA2-CB causes major video corruption: one with an Nvidia GTX 670 and another with a GTX 660. I set up LAV in custom mode and forced CUVID, and playback is fine.

Samson mentioned video corruption when a client connected to a library server streams a video.
Logged

justsomeguy

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 525
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #30 on: May 08, 2013, 02:08:00 am »

You might try updating your drivers.

I'm on a GTX 680 and haven't seen anything strange since this change.

The 660 is using the latest 320.00 beta drivers and the 670 is on the official 314.22 WHQL.

Quote
Samson mentioned video corruption when a client connected to a library server streams a video.

It so happens that the 660 machine is a library server and the 670 is a client, so it could be that the 670's problem is the same one Samson is seeing. However, that doesn't explain why the 660 server is doing the same thing. Maybe it's a driver issue with the latest 320 betas from Nvidia. I'll have to do some more testing, I think.
Logged

6233638

  • Regular Member
  • Citizen of the Universe
  • *****
  • Posts: 5353
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #31 on: May 08, 2013, 03:44:54 am »

Things are never easy, are they?

I'm surprised at this, because usually when there is blocking with hardware acceleration it happens with both CUVID and DXVA2, not one or the other.
Here's a good example of this that someone posted to Doom9 a while back: http://www.sendspace.com/file/awhcz8 - it only plays well with software decoding.

Unfortunately, I have never cut out a sample or marked the times where it happens, but I do have a couple of Blu-rays that I seem to remember only showing blocking with CUVID and not with DXVA2 or software decoding.
If it's only happening when streaming video, I wonder whether something else is causing it.
Logged

Hendrik

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 10931
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #32 on: May 08, 2013, 04:24:51 am »

I have a 660 myself in my HTPC (which is also a library server client) and have been using DXVA2-CB in a custom mode since long before MC made the change.
The driver is slightly old (even Windows Update has been nagging me to update it), so I'm not sure exactly what version it is.

PS:
Samson seems to say that disabling HW accel doesn't help him, so the problem seems unrelated.
Logged
~ nevcairiel
~ Author of LAV Filters

Samson

  • Galactic Citizen
  • ****
  • Posts: 391
Re: A proposal: change Hardware Acceleration from CUVID to DXVA2-CB
« Reply #33 on: May 15, 2013, 01:48:12 am »

Samson seems to say that disabling HW accel doesn't help him, so the problem seems unrelated.

Samson mentioned video corruption when a client connected to a library server streams a video.

Hi all, just to be clear and for what it's worth, it happened only when the file was played by the instance of MC on my HTPC, with the HTPC as the renderer, either a) direct to the plasma TV screen or b) viewed remotely (remote VNC desktop viewing). Everywhere else the file played just fine, including the same file with MC on my desktop PC, the same file streamed over the network to MC on my desktop, and the same file played with VLC on the HTPC.

I don't know how I fixed it, since after several hours of frustration my strictly 'scientific methodology' for troubleshooting went out the window.

For what it's worth, the 'fixed' configuration has hardware acceleration OFF. When I re-enable hardware acceleration, the problem returns.

This was NOT the scenario when I first tested, as toggling hardware acceleration on/off made absolutely no difference.
Logged