Some power consumption numbers for DXVA.
The system used was my laptop, since it's closer in configuration to an HTPC than a desktop build (actually, the laptop IS my HTPC right now).
Intel Core i5-450M
ATI Mobility Radeon HD 5650, 1GB DDR3, 400 unified shaders.
Screen resolution is 1600x900 (so there is always some scaling).
MPEG-2 is always decoded in software since I don't have an MPEG-2 HW decoder installed and was too lazy to install one.
Software used to read GPU load: GPU-Z
The system idles at 26W.
Notations:
- /24p, /30i means 24 or 30 frames per second, progressive or interlaced respectively. The other kind of notation, like 1080i60, which counts fields, is not used.
- WBM = Windows Based Merit (with EVR)
Test file 1 - Kylie Minogue - Better Than Today (Children in Need - UK TV) - 1440x1080/25 MBAFF H.264
Test file 2 - birds scene, Planet Earth, cut from Blu-ray - 1920x1080/24p VC-1
Test file 3 - file 024, AnandTech media test suite - 1920x816/24p
Test file 4 - file 036, AnandTech media test suite - MPEG-2 1280x720/60p
Test file 5 - Kylie Minogue - Get Outta My Way (on Leno - NBC) - 1920x1080/30i
Test file 6 - file 005, AnandTech media test suite - 1440x1080/30i
         WBM    RO Standard   RO HQ (with GPU load)
Test 1   33W    40W           57W / 61%
Test 2   30W    43W           51W / 38%
Test 3   33W    38W           47W / 31%
Test 4   36W    36W           43W / 54%
Test 5   34W    35W           62W / 79%
Test 6   33W    39W           60W / 68%
Average power consumption, WBM / RO Standard / RO HQ:
33.17W (100%) / 38.50W (~116%) / 53.33W (~161%)
(in $$$ that's pocket change: at 53.3W and $0.12/kWh you can run a device like that 24/7, all year, for about $56)
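As a sanity check on that figure, here's the arithmetic as a short Python sketch (the wattages are the averages above; the $0.12/kWh rate is just the assumption from the parenthetical, not a universal price):

    HOURS_PER_YEAR = 24 * 365  # 8760, i.e. running 24/7

    def annual_cost(avg_watts, usd_per_kwh=0.12):
        # kWh consumed in a year of continuous operation, times the rate
        return avg_watts * HOURS_PER_YEAR / 1000.0 * usd_per_kwh

    for profile, watts in [("WBM", 33.17), ("RO Standard", 38.5), ("RO HQ", 53.33)]:
        print(f"{profile:12s} {watts:5.2f}W -> ${annual_cost(watts):5.2f}/year")
    # RO HQ comes out at ~$56/year, matching the figure above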
Note: as with any statistic, one has to be careful with the details and the numbers, otherwise the results can be used to prove anything. My average power consumption above can be heavily distorted by what you actually play (mostly Blu-rays, mostly 1080/30i files, etc.; hell, I don't even have any SD files in my test). Second thing: the tests were run in fullscreen windowed mode, not exclusive. I have a feeling exclusive mode is less taxing.
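To make that distortion concrete, here's a toy weighted average over the RO HQ wattages above (Tests 2, 5 and 4); the viewing shares are made up purely for illustration:

    ro_hq_watts = {"1080/24p Blu-ray": 51, "1080/30i TV": 62, "720/60p": 43}

    mixes = {
        "mostly movies": {"1080/24p Blu-ray": 0.8, "1080/30i TV": 0.1, "720/60p": 0.1},
        "mostly TV":     {"1080/24p Blu-ray": 0.1, "1080/30i TV": 0.8, "720/60p": 0.1},
    }

    for name, mix in mixes.items():
        avg = sum(ro_hq_watts[clip] * share for clip, share in mix.items())
        print(f"{name}: {avg:.1f}W")
    # mostly movies: 51.3W, mostly TV: 59.0W -- same hardware, different "average"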
But those are not the problems. Here's what's troubling me:
- there were dropped frames with RO HQ (madVR). Fewer in exclusive mode, but still there, even though I never hit 100% on either the CPU or the GPU. Which highlights that shader count isn't everything and the rest of the card's internal architecture matters (memory speed in particular). Which doesn't help one bit in figuring out where to draw the line: from this tier of cards up it's smooth as butter, below it it's garbage. And I want that demarcation line.
- the other thing that's killing us is deinterlacing. In software it has a high impact; in hardware it's the CUVID way or the highway. I don't like this one bit, above all because it's limiting, and I don't like any limitations on my toys, even if only out of principle.
Other conclusions:
- the $$$ impact is not that big, as Matt proved elsewhere; the numbers here just extend that to other architectures.
- the real impact is from a different angle: can you take a GTX 480 (which, with its 1.5GB of 384-bit GDDR5 memory, I'm sure doesn't drop frames) and put it in an M350 Universal Mini-ITX case?
- as said, a device that draws 50-something watts is nothing, but how many like that can you build when you have to put top-dollar video cards in them? What if you have more than one such system in the house? Would a user still run them all 24/7, like desktop builds? The difference between 35W and 60W is the same 25W as between 160W and 185W, but which one would you rather keep running all the time? (a quick sketch of what that delta costs follows this list)
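For scale, here's what that 25W delta costs over a year of 24/7 operation, at the same assumed $0.12/kWh, for however many such boxes you keep running:

    def delta_cost_per_year(delta_watts, systems=1, usd_per_kwh=0.12):
        # yearly kWh for the extra draw, times the rate, times the fleet size
        return delta_watts * 24 * 365 / 1000.0 * usd_per_kwh * systems

    print(delta_cost_per_year(25))             # ~$26.28 for one system
    print(delta_cost_per_year(25, systems=3))  # ~$78.84 for three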
So, because of all this, please imagine a blinking purple request with bells on (to stand out): a DXVA-with-EVR automatic profile.