INTERACT FORUM


Author Topic: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000  (Read 26162 times)

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236

Just a subjective shootout, based on my setup, between these three GPU options on my Shuttle SH67H7 with an i7-2600K CPU under Red October.

Winner: GTS450 / GTX550Ti.  Without a doubt it is the best of the three options, as it is the only one that can properly deinterlace HD VC(i) content thanks to the ability to use LAVCUVID for HW-assisted decoding of a range of formats.  It keeps the CPU load low as well.  It is not perfect, however, and I dislike:
 - The "silent stream" bug, where the HDMI audio is dropped and re-established on each play (so you tend to miss the very first bit of the audio)
 - The nvidia control panel; it is awkward to set up custom resolutions and I'm not sure what it is doing with the colour space
 - The Palit half-height card I bought makes a terrible racket when the fan spins up.  Don't buy this card, but other versions of the GTS450 are apparently very quiet
 - Versions with GDDR5 are nice and stable with madVR (ROHQ), with plenty of speed and shaders to do the work required.
EDIT:  Due to the noise of the Palit card under load I just swapped it out for a Gigabyte GTX550Ti.  Same comments as above at this stage, but it is much quieter!

Honorable Mention: HD5670.  I really like the HIS HD 5670 card as it is quiet and has plenty of fast GDDR5 memory and shaders, so it works well with madVR.  The only downside is that in RO, VC(i) content is decoded using the MS filter and the deinterlacing is poor.  On other interlaced formats, including AVC/x264, the deinterlacing is done in SW thanks to FFDSHOW/YADIF and it looks good.  If you don't have any VC(i) content this card is a winner.

Surprisingly Good: i7-2600K HD 3000.  I did a few runs with the IGP on the i7 chip (HD 3000) and it worked pretty well.  It does not quite have the memory speed to keep madVR happy in Windowed Mode under RO HQ, but for most people it will do a great job, especially in RO Std.  If you do use RO HQ, I'd suggest you set madVR to Exclusive Mode, as it helps prevent dropped frames from the slower memory access of the HD 3000.  That said, for trouble-free playback on this (and other IGPs), go to RO Std and be happy!

Without doubt, the i7-2600K has enough punch to use SW decoding for all formats, so the choice of GPU for me is really about the finer points of each of the cards.  If you have a lower-spec CPU but a recent nvidia GPU, then thanks to the use of LAVCUVID in RO HQ you really are well set up!

Thanks
Nathan

PS - It looks like I need a quieter GTS450 card (or I'll just have to turn up the volume!)
Logged
I'm not a python at JRiver - just another Aussie

Matt

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 41865
  • Shoes gone again!
Re: RO: nvidia GTS450 vs ATI HD5670 vs Intel X3000
« Reply #1 on: August 05, 2011, 09:49:25 am »

Thanks for the write-up.

I've been playing with a GTX 560 here at work, which has been great.  It can't do HDMI audio while driving two monitors, but the video side is really solid.

At home I've been playing with an i5 2500k and the integrated graphics.  The hardware might be able to hack it, but the drivers are letting it down.  FSAA causes a black screen, DXVA video decoding causes distortion, etc.

Anyway, it's nice to see that there are a lot of good, affordable HTPC video card choices right now.


( p.s. you forgot the Crysis 2 benchmarks ;) )
Logged
Matt Ashland, JRiver Media Center

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs ATI HD5670 vs Intel X3000
« Reply #2 on: August 05, 2011, 02:42:42 pm »

FWIW... I'm a bit of a hardware nerd, so the focus on GDDR5 in some of these discussions annoys me a bit.  GDDR5 is not the panacea many in the HTPC discussion world make it out to be.

For memory-sensitive operations, it is not the memory type that matters, but the total memory bandwidth.  This is a function of the memory type, the memory speed, and the width of the memory interface bus on the GPU.  Take the AMD Radeon HD 4870, for example.  This was the first card that shipped commercially with GDDR5 RAM.  It carried 1GB of 3600 MT/s (900MHz) GDDR5 memory on a 256-bit bus, for a total memory bandwidth of 115.2 GB/s.  Another, much newer card, the Nvidia GTS 450, also has 1GB of 3600 MT/s (900MHz) GDDR5 memory, so to the casual observer the two would seem equivalent from a memory-performance perspective, right?  But the GDDR5 chips on the GTS 450 are strapped to a GPU with three separate 64-bit memory interfaces, one of them disabled and unused, which results in a 128-bit aggregate bus width.  That yields only 57.73 GB/s of memory bandwidth, or roughly half the speed of the year-older 4870.
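To make the arithmetic concrete, here's a quick back-of-the-envelope helper (a rough sketch in Python; the exact figures shift slightly with each card's actual memory clock):

    # Peak memory bandwidth: transfers per second x bytes moved per transfer across the bus.
    def mem_bandwidth_gbps(transfers_mts, bus_width_bits):
        # MT/s x (bus width in bytes) / 1000 = GB/s
        return transfers_mts * (bus_width_bits / 8) / 1000

    print(mem_bandwidth_gbps(3600, 256))  # Radeon HD 4870: 115.2 GB/s
    print(mem_bandwidth_gbps(3600, 128))  # GTS 450: 57.6 GB/s (57.73 at the card's exact clock)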

Plus, and perhaps more importantly, memory bandwidth is only one of MANY factors that impact overall GPU performance, and it only applies at all in certain use-cases.  Many other things about how the graphics chip is structured have a much bigger impact, even if you limit the discussion to memory-sensitive issues.  Take an even more recent example, the red-headed stepchild of a card, the Nvidia GTX 550 Ti.  It has 1GB of 4104 MT/s (1026MHz) GDDR5 RAM on a 192-bit bus.  That gives you a not-too-shabby 98.5 GB/s of total memory bandwidth.  Except that to strap 1GB of GDDR5 to a 192-bit bus, Nvidia had to pull some tricks with the memory interface.

A 192-bit bus is made up of three separate 64-bit controllers.  Normally, you'd want the same amount of memory attached to each controller, but 1GB doesn't divide evenly by three into memory chip sizes that are actually manufactured.  So, normally you'd put either 768MB or 1.5GB of RAM on such a card (256MB or 512MB per channel).  Nvidia wanted to put 1GB of RAM on the card to compete with what AMD was listing on its boxes, but strapping 1.5GB of RAM on the card wasn't cost-effective.  So, they attached 256MB of RAM to the first two controllers, and 512MB to the third.  In almost all rational workloads, that last 256MB of RAM goes totally or mostly unused, so it performs like a 768MB card.

And, of course, there are massive differences in how effectively a particular GPU architecture uses the memory and the theoretical memory bandwidth available to it (and even between driver revisions).  Sometimes this literally comes down to where the physical structures are located on the cards, how the memory interface is arranged on the surface of the GPU chip, how efficiently the ROPs can use the memory, and what other bottlenecks exist in the graphics pipeline.
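Here's a rough model of that lopsided memory layout (my own illustration; Nvidia doesn't publish it this way):

    # Assumed split, per the description above: 256MB + 256MB + 512MB, one 64-bit controller each.
    controllers_mb = [256, 256, 512]

    # Data can only be striped across all three controllers up to the size of the smallest,
    # so 3 x 256MB = 768MB is reachable at the full 192-bit width.
    interleaved_mb = len(controllers_mb) * min(controllers_mb)

    # The leftover sits on a single 64-bit controller at roughly a third of the bandwidth.
    slow_tail_mb = sum(controllers_mb) - interleaved_mb

    print(interleaved_mb, slow_tail_mb)  # 768 256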

And then, to top it all off, those memory chip specs are only "suggested" by the GPU manufacturer.  The actual OEM (ASUS, Gigabyte, XFX, Palit, etc) can use a wide variety of different compatible memory chips and speeds.  So, one "GDDR5 GTX 550 Ti card" might use 1026MHz GDDR5 and perform like the review samples from Nvidia, while another one over here uses 1200MHz GDDR5, and a cheap one back there uses 900MHz GDDR5.  This problem tends to be "worse" with Nvidia hardware than AMD hardware because AMD keeps the OEMs "closer" to the official spec for all but their lowest-end GPUs.

The reason this sort of thing got "picked up" is actually that Nvidia was very "late" to the game in releasing GDDR5 cards (AMD had been shipping a full line with GDDR5 for almost an entire year before Nvidia switched most of their cards over).  Once they did start shipping cards that could use GDDR5 (which required a memory-controller redesign on the GPU, and so a new chip to power the card), Nvidia simultaneously rebranded a bunch of their older generations of chips with new model names.  This made them "look like" they were part of the same generation as the actual new GPU designs.  But because the new GPUs were the only ones that could use GDDR5, the Nvidia cards with GDDR5 also happened to be the ones with more recent GPU designs (which also had other improvements that enabled you to do fancier things).  Because Nvidia's product lines were, and are, so confusing, with old GPUs rebranded under new model numbers, GDDR5 became a handy shorthand: if you have an Nvidia card with GDDR5, you can be sure it is actually a more modern GPU design under the hood, which often translates to improved performance.  That improved performance wasn't necessarily due to the GDDR5 itself, or to any particular memory sensitivity of the task at hand, but it was a quick way to distinguish the new/good cards from the old/crappy ones.

As the tech has matured and Nvidia has shipped more and more low-end GPUs compatible with GDDR5 memory (and particularly as OEMs have seen that printing GDDR5 on the box helps sell cards), this shorthand has become less and less valuable.  A low-end, old, crappy GPU with a huge pile of ultra-fast GDDR5 RAM strapped to it is still going to perform like crap.  Palit just gets to sell it for $10 more, and the GDDR5 is cheap, so who cares?
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs ATI HD5670 vs Intel X3000
« Reply #3 on: August 05, 2011, 05:09:54 pm »

Thanks for the background (there is always more than meets the eye with marketing features); I now understand more about the impact of the manufacturers' implementations.  For pure HTPC use, can we continue to rely on madshi's (et al. at doom9) simple recommendation to get a GDDR5 over a DDR3-based card regardless of the implementation... are they all fast enough for madVR?  E.g. I blindly purchased both my ATI and nvidia cards based on the GDDR5 sticker on the box, and I may have just been lucky, but both seem to have fast enough memory throughput for madVR.
Logged
I'm not a python at JRiver - just another Aussie

Daydream

  • Citizen of the Universe
  • *****
  • Posts: 770
Re: RO: nvidia GTS450 vs ATI HD5670 vs Intel X3000
« Reply #4 on: August 06, 2011, 03:45:46 am »

GTS450 has 192 shaders, 57.73 GB/s bandwidth (HD 5670 has 400 shaders, 64 GB/s bandwidth - at best)

So can we generalize that that's enough?

Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #5 on: August 06, 2011, 05:15:21 pm »

And the Intel i7-2600K HD Graphics 3000 has 21.3GB/s memory bandwidth and 12 shaders.  http://en.wikipedia.org/wiki/Intel_GMA#HD_Graphics_.28GMA_HD.29
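For what it's worth, that figure is just the shared system memory: assuming the stock dual-channel DDR3-1333 configuration, 2 channels x 8 bytes x 1333 MT/s = 21.3 GB/s, and the IGP has to share that with the CPU.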

The other Intel IGP specs are lower again.
Logged
I'm not a python at JRiver - just another Aussie

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #6 on: August 07, 2011, 12:01:57 am »

GTX550Ti has 192 shaders, 98.5 GB/s bandwidth
Logged
I'm not a python at JRiver - just another Aussie

Daydream

  • Citizen of the Universe
  • *****
  • Posts: 770
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #7 on: August 07, 2011, 06:23:12 am »

Pulled from the wiki for quick documentation (is there any forum engine that does tables?)

For whoever wants to read the whole history in great detail, the Nvidia page is here and the AMD page is here

Model                  | Core config   | Bandwidth (GB/s) | DRAM type | Bus width (bit) | TDP (W)
GeForce GTS 450 (OEM)  | 144:24:24     | 96               | GDDR5     | 192             | 106
GeForce GTS 450        | 192:32:16     | 57.73            | GDDR5     | 128             | 106
GeForce GTX 460 SE     | 288:48:32     | 108.8            | GDDR5     | 256             | 150
GeForce GTX 460 (OEM)  | 336:56:32     | 108.8            | GDDR5     | 256             | 150
GeForce GTX 460        | 336:56:24     | 86.4             | GDDR5     | 192             | 150
GeForce GTX 460        | 336:56:32     | 115.2            | GDDR5     | 256             | 160
GeForce GTX 465        | 352:44:32     | 102.6            | GDDR5     | 256             | 200
GeForce GTX 470        | 448:56:40     | 133.9            | GDDR5     | 320             | 215
GeForce GTX 480        | 480:60:48     | 177.4            | GDDR5     | 384             | 250

GeForce GT 545 GDDR5   | 144:24:16     | 64               | GDDR5     | 128             | 105
GeForce GTX 550 Ti     | 192:32:24     | 98.5             | GDDR5     | 192             | 116
GeForce GTX 560        | 336:56:32     | 128.1            | GDDR5     | 256             | 150
GeForce GTX 560 Ti     | 384:64:32     | 128.27           | GDDR5     | 256             | 170
GeForce GTX 560 Ti OEM | 352:44:40     | 152              | GDDR5     | 320             | 210
GeForce GTX 570        | 480:60:40     | 152              | GDDR5     | 320             | 219
GeForce GTX 580        | 512:64:48     | 192.4            | GDDR5     | 384             | 244
GeForce GTX 590        | 512:64:48 ×2  | 2× 163.87        | GDDR5     | 2× 384          | 365



Model            | Core config    | Bandwidth (GB/s)     | Bus type             | Bus width (bit) | TDP idle (W) | TDP max (W)
Radeon HD 5550   | 320:16:8       | 12.8 / 25.6 / 51.2   | DDR2 / GDDR3 / GDDR5 | 128             | 10           | 39
Radeon HD 5570   | 400:20:8       | 12.8 / 28.8 / 57.6   | DDR2 / GDDR3 / GDDR5 | 128             | 10           | 39
Radeon HD 5670   | 400:20:8       | 25.6 / 64            | GDDR3 / GDDR5        | 128             | 15           | 64
Radeon HD 5750   | 720:36:16      | 73.6                 | GDDR5                | 128             | 16           | 86
Radeon HD 5770   | 800:40:16      | 76.8                 | GDDR5                | 128             | 18           | 108
Radeon HD 5830   | 1120:56:16     | 128                  | GDDR5                | 256             | 25           | 175
Radeon HD 5850   | 1440:72:32     | 128                  | GDDR5                | 256             | 27           | 151
Radeon HD 5870   | 1600:80:32     | 153.6                | GDDR5                | 256             | 27           | 188
Radeon HD 5970   | 1600:80:32 ×2  | 2× 128               | GDDR5                | 2× 256          | 51           | 294

Radeon HD 6450   | 160:8:4        | 8.5-12.8 / 25.6-28.8 | DDR3 / GDDR5         | 64              | 9            | 18 / 27
Radeon HD 6570   | 480:24:8       | 28.8 / 64            | DDR3 / GDDR5         | 128             | 10 / 11      | 44 / 60
Radeon HD 6670   | 480:24:8       | 64                   | GDDR5                | 128             | 12           | 66
Radeon HD 6750   | 720:36:16      | 73.6                 | GDDR5                | 128             | 16           | 86
Radeon HD 6770   | 800:40:16      | 76.8                 | GDDR5                | 128             | 18           | 108
Radeon HD 6790   | 800:40:16      | 134.4                | GDDR5                | 256             | 19           | 150
Radeon HD 6850   | 960:48:32      | 128                  | GDDR5                | 256             | 19           | 127
Radeon HD 6870   | 1120:56:32     | 134.4                | GDDR5                | 256             | 19           | 151
Radeon HD 6950   | 1408:88:32     | 160                  | GDDR5                | 256             | 20           | 200
Radeon HD 6970   | 1536:96:32     | 176                  | GDDR5                | 256             | 20           | 250
Radeon HD 6990   | 1536:96:32 ×2  | 2× 160               | GDDR5                | 2× 256          | 37           | 375

(Where a row lists several values separated by slashes, they correspond to the memory-type variants in order.)
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #8 on: August 07, 2011, 10:36:46 pm »

FYI: Shader counts aren't equivalent.  What Nvidia defines as a "shader core" doesn't match functionally what AMD or Intel does.  Not by a long shot.  They're all different, with different capabilities, and operate at different speeds.

GPUs are complex beasts.  The best (really the only) way to "rank" them is by benchmarking them with the applications that matter to you.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

Daydream

  • Citizen of the Universe
  • *****
  • Posts: 770
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #9 on: August 07, 2011, 11:47:41 pm »

Quote from: glynor
FYI: Shader counts aren't equivalent.  What Nvidia defines as a "shader core" doesn't match functionally what AMD or Intel does.  Not by a long shot.  They're all different, with different capabilities, and operate at different speeds.

GPUs are complex beasts.  The best (really the only) way to "rank" them is by benchmarking them with the applications that matter to you.

We have to start counting somewhere. If anything, generic and vaguely informative is better than wrong :).

Note to self: need to ask nev/madshi. Is LavCUVID/madVR using complex shaders or simple shaders?
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #10 on: August 08, 2011, 10:59:04 am »

Those "cores" aren't the right thing to count.  What that term means to the different chip design houses has wildly different meanings.

Take a look at the block diagrams in this architecture review of the GF100 (the original Fermi chip from Nvidia, which powered the GTX 480).



Quote
As GPUs become more complex, these diagrams become ever more difficult to read from this distance. However, much of what you see above is already familiar, including the organization of the GPU's execution resources into 16 SMs, or shader multiprocessors. Those SMs house an array of execution units capable of executing, at peak, 512 arithmetic operations in a single clock cycle. Nvidia would tell you the GF100 has 512 "CUDA cores," and in a sense, they might be right. But the more we know about the way this architecture works, the less we're able to accept that definition, any more than we can say that AMD's Cypress has 1600 "shader cores." The "cores" proper are really the SMs, in the case of the GF100, and the SIMD engines, in the case of Cypress. Terminology aside, though, the GF100 does have a tremendous amount of processing power on tap. Also familiar are the six 64-bit GDDR5 memory interfaces, which hold the potential to deliver as much as 50% more bandwidth than Cypress or Nvidia's prior-generation part, the GT200.

If you look at the block diagrams, the unit that would actually make the most sense as a "core" is the GPC.  That is the unit that is functionally "independent" of the others (other than memory access).  But that's not what they call a "core".  Maybe even one step down would still make sense, if you called an SM unit a "core".  These self-contained units include everything other than the rasterization engine, and have their own L1 cache.  Again, though, that's not what Nvidia calls a "CUDA Core".  No, instead they call the tiny little FP/INT processing blocks "CUDA Cores", ignoring that these units are only one piece of the chip's overall performance picture: they have no impact on texturing performance, vertex processing, tessellation, setup, or output, nor can these CUDA Cores even fully execute all of the different shader instructions the chip may have to perform (for that, there are specialized "cores" that handle the more complex instructions).

In the end, Nvidia couldn't actually ship the first-gen Fermi GPU with all of the cores activated, so if we just go by Nvidia's oddball "count" we get 480 CUDA Cores on the GTX 480.  (How much more proof do you need that this is a purely marketing-driven number than the fact that the core count matches the model name, by the way?)

Now, take a look at the block diagram for the Cayman chip (the GPU that powers the Radeon HD 6950 and 6970).  The top-level diagram looks broadly similar, though the chip isn't actually structured the same way at all.  Cayman has two "top-level" structures, vaguely reminiscent of the Nvidia GPC "cores".  These each have their own vertex setup engines, rasterizers, and tessellators.  However, they are made up of wildly different numbers of functional units, with a very different mix of capabilities.  If you drill down into the diagram further, you see this:



AMD calls each Stream Processing Unit (their equivalent of the CUDA Core) either an FP Ops Unit, an Integer Ops Unit, or a Branch Unit.  Their FP and Integer units are much more capable than the corresponding CUDA Cores (they don't require the "special purpose" auxiliary cores the Nvidia design does), but different operations take different amounts of time!  One FP Ops SPU on the AMD side can process 4 32-bit ADDs per clock, but only 2 64-bit ADDs, or 1 "special function", and can't handle Integer operations at all.  But they are functionally "cores", and AMD counts them up for you for a nice marketing number.  Cayman has 1536 "cores" in AMD's parlance.  Interestingly, this is a bit LESS than the previous-generation Cypress GPU that powered the Radeon HD 5870 brought to the table (which listed 1600 Stream Processing Units); but the Cayman SPUs are much more powerful and capable than the Cypress SPUs.  And then, while Nvidia's simpler "cores" are less capable, it runs many of them (the "simple" ones) at double the speed of the rest of the chip, so they can handle much more than it would seem at first glance.

Even your question about "complex shaders" vs "simple shaders" really only applies to the Nvidia-style architecture, which separates them that way (CUDA Cores can only handle simple shader arithmetic, while separate "cores" handle complex shader ops).  All of AMD's "cores" can handle both simple and complex ops, but at different speeds, depending on the specific instruction and precision.

So, we have:

Nvidia GTX 480 (first gen Fermi card): 480 "cores"
AMD Radeon HD 5870 (Cypress): 1600 "cores"
AMD Radeon HD 6970 (Cayman): 1536 "cores"

But if you look at the benchmarks, the Cayman card (6970) beats the pants off the Cypress card (5870), despite "lacking" 64 "cores", and basically ties the Fermi card (GTX 480) despite having over three times the "cores" of the Nvidia offering.  And even those numbers can fluctuate wildly depending on the exact mix of operations a particular game uses, or even the settings used.  For example, take a look at the scores for HAWX 2 (a reliably Nvidia-friendly game).  In this test, the Cayman card can't even match the crippled GTX 470, and even loses to a modded version of the much smaller GTX 460, much less come within striking distance of the green team's $650 flagship like it did in the Starcraft test (and many of the others).
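To put rough numbers on how misleading the raw counts are, here's a peak-FLOPS sketch (my own arithmetic, assuming the reference shader clocks; real throughput depends on everything above):

    # Peak single-precision GFLOPS = "cores" x 2 ops per clock (multiply-add) x clock in GHz.
    # Assumed reference clocks: GTX 480 shader domain 1.401 GHz; HD 5870 0.85 GHz; HD 6970 0.88 GHz.
    def peak_gflops(cores, clock_ghz):
        return cores * 2 * clock_ghz

    print(peak_gflops(480, 1.401))   # GTX 480: ~1345 GFLOPS
    print(peak_gflops(1600, 0.85))   # HD 5870: ~2720 GFLOPS
    print(peak_gflops(1536, 0.88))   # HD 6970: ~2703 GFLOPS

On paper, Cayman has twice the arithmetic throughput of the GTX 480, yet in real benchmarks they roughly tie, which tells you peak shader math is rarely the bottleneck.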
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

gtgray

  • Galactic Citizen
  • ****
  • Posts: 261
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #11 on: August 08, 2011, 04:59:01 pm »

Just as a note... the retail GT 545, which is a DDR3 implementation and one of the red-headed stepchildren you talk about, has 1.5GB of memory and 43GB/sec of memory bandwidth, and it is a quiet single-slot solution.

I think this discussion fails to recognize that cards have to fit needs, and slots, literally.  I have a Ceton in one of these boxes; I can't have a big heat engine sitting next to it without extreme cooling countermeasures.  It is also an always-on device, so power consumption is a real issue.

The GT 545 plays Red October HD with aplomb... If you are not a gamer, many of these GPUs are massive overkill.

I have two PCs with this GPU, one a Sandy Bridge 2100.  With the GT 545 and an appropriate PSU it runs 35 watts at idle and 75 watts playing HD interlaced content.  Where would more CPU and GPU performance actually be better?  I am not sure, exactly.  I don't game.  I bitstream HD audio and the Denon does the EQ... what am I missing?  I don't drop frames and my room isn't filled with an Easy-Bake Oven.

I have this last PC in a bedroom office with a Panny 37" LED (35 watts).  So at idle, say 110 watts for the TV, PC and Denon (yes, the Denon idles around 35 watts too).  At reference volume, playing high-bitrate concert videos, all together maybe 180 watts... lots of folks think 180 watts is a low-power GPU at idle.

Clue me in on what, besides gaming, is being missed in a practical sense?

This focus at JRiver on big, power-hungry machines seems kind of way out there when some of the hardware media extenders with SoCs running at 1GHz or less do fabulous playback.  The Sage extenders are a prime example.
Logged

Daydream

  • Citizen of the Universe
  • *****
  • Posts: 770
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #12 on: August 08, 2011, 06:31:40 pm »

But gtgray, that's exactly what we were trying to do (well, at least from my perspective).  It may look like we got a bit sidetracked, but I hope the numbers I posted and Glynor's explanations help everyone find the lowest common denominator of cards that do everything.
Logged

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #13 on: August 08, 2011, 10:26:11 pm »

Quote from: gtgray
the retail GT 545, which is a DDR3 implementation and one of the red-headed stepchildren you talk about, has 1.5GB of memory...

I called it a red-headed stepchild only because of the bizarre memory configuration.

But I'd say the GT 545 is even more of a red-headed stepchild.  OEM-only configurations are sketchy because Nvidia doesn't release much solid information on them.  I can say for sure, though, that there is almost NO reason to strap 1.5GB of RAM to that GPU with that kind of memory throughput.  If the GPU had the performance to use all of that RAM for anything real (it doesn't, except in extreme circumstances), it would still be too bottlenecked by the 43GB/sec transfer speed to use it effectively.

That's a perfect example of a marketing number.  DDR3 is cheap as hell, a 192-bit bus is cheaper to build with DDR3 than with GDDR5, and the extra bus width helps cover some of the bandwidth gap.

The GT 545 is an odd bird.  The GDDR5 version lists a 128-bit bus (low-end), but the DDR3 version lists 192-bit.  This would suggest different functionally enabled units on the cards, or more likely, completely different GPUs.

Either way, they look very much like rebranded GTS 450 cards from last year, with new heatsink designs and craploads of RAM stapled on because it is cheap right now.
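Back-solving from the quoted numbers (my arithmetic, not an official spec sheet): 43 GB/s on a 192-bit bus works out to 1800 MT/s DDR3 (1800 x 24 bytes = 43.2 GB/s), while the GDDR5 version's 64 GB/s on 128 bits implies 4000 MT/s (4000 x 16 = 64 GB/s), so even with the wider bus the DDR3 card only recovers about two-thirds of the GDDR5 version's bandwidth.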
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #14 on: August 08, 2011, 10:34:24 pm »

Quote from: gtgray
The GT 545 plays Red October HD with aplomb... If you are not a gamer, many of these GPUs are massive overkill.

I agree.  However, I also don't like to pay for more than I get.  If the GTX 550 Ti was a bad deal at $150 (way back in March), then the GT 545 is certainly a bad deal at $150, especially many months later when you can get a GTX 550 Ti for $130 and a Radeon HD 6850 for $155.

If you really want a silent low-end card, there are all sorts of passively cooled options at the low end out now or coming later this year.
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #15 on: August 09, 2011, 04:59:12 pm »

FYI, I was a big proponent of low-end passive "HTPC" cards like the 5450, and while they are fine for RO, they just don't have the throughput for RO HQ as they are not quick enough for madVR.

Here are my suggestions for HTPC duties in MC:

RO Std:  A modern IGP (e.g. Intel HD 3000) or a low-end discrete card like the HD5450 is fine for RO.  Ticking "HW Accelerated video decoding" will help with HD playback if the CPU is also low-powered, and on nvidia-based GPUs it also gives very good deinterlacing support for these formats thanks to LAVCUVID.

RO HQ:  If you want the "best" quality output from all the common formats, then you need an nvidia-based GPU fast enough for madVR.  The entry point for this seems to be the GTS450/GTX550Ti.  The only "downside" to the ATI offering is the lack of a high-quality deinterlacing option for VC-1 content.  Then again, if you don't have much of this it really does not matter, and an HD5670 seems to be a good entry point.
Logged
I'm not a python at JRiver - just another Aussie

glynor

  • MC Beta Team
  • Citizen of the Universe
  • *****
  • Posts: 19608
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #16 on: August 09, 2011, 05:25:34 pm »

Things change, my friend.  GPUs have been far exceeding Moore's Curves for quite a few years now.

On that page, they demoed a passive GTS 450 (basically the same thing as the GT 545 discussed above) and a passive Radeon HD 6850, for example.  More reasonable cards included a 6670 and a 6570 (which Sapphire has launched), and both of these would have at least 2x the GPU power of the old low-end 5450.  In fact, the 6670 looks like it roughly triples the performance of the old 5450 in many 3D games.  It includes 1GB of GDDR5 vRAM on a not-too-shabby 128-bit bus, and easily slides past the 5670 you listed in specs and benchmarks.

With product cycles lasting around 6-8 months, if your "knowledge base" of what is possible with a certain category of GPU is more than 3-4 months old, it is painfully out of date.  If it is more than 6-8 months old (the 5450 launched in Feb 2010), then you missed a whole generation.  ;)
Logged
"Some cultures are defined by their relationship to cheese."

Visit me on the Interweb Thingie: http://glynor.com/

Daydream

  • Citizen of the Universe
  • *****
  • Posts: 770
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #17 on: August 09, 2011, 08:00:57 pm »

I propose not to build anything anymore. Let's wait for the next cycle. :)
Seriously, if AMD comes out with the 7000-series, Bulldozer, and all the APUs based on that tech, won't we go at it again?  And we're talking anytime between now and something like three months from now.

Maybe we should look at it from the various other angles of what we "need".  This discussion has been very refreshing tech-wise (thanks Glynor!), but it all gravitates around one thing: power to drive madVR.  On one hand, there are a few more things to consider when building a machine (form factor, CPU power, function -> HTPC + some other purpose) that may trump the requirements for this single purpose.  On the other hand, if we remove the madVR variable, there are a zillion more options for building systems.  Guess what I'm gonna do? :)
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #18 on: August 09, 2011, 08:25:19 pm »

Glynor, stop sitting on the fence and make some recommendations for RO and ROHQ.  ;D  You know more about this than the rest of us, and I can only report on what I have actually tried.
Logged
I'm not a python at JRiver - just another Aussie

Ekpen

  • Citizen of the Universe
  • *****
  • Posts: 674
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #19 on: August 27, 2011, 07:18:24 pm »

Greetings:

I have been following this thread about Nvidia graphics cards and RO integration.
I am building a game rig/HTPC, but I suddenly had to stop because my son needed a new laptop, which I have purchased for him.
I will try to finish this build very soon.
I want Glynor or anyone to comment on this HTPC setup and whether it will work with RO HQ.

The CPU is an Intel 990X,
24GB of memory (Viper Xtreme DDR3 PC3-16000, 2000MHz),
an Nvidia 9800 GX2 dual GPU.
I have two of these GPUs; I may use one or both.  Do you think this 9800 GX2 will perform like the GTX550Ti or Nvidia GTS450?

All I need now is Windows 7 64-bit and some 3TB Hitachi hard drives, possibly in a RAID 10 setup.

This HTPC will be my main server; though designed for games, it will run JRMC16/17, of course with Gizmo <grin>.
Will I be able to watch video on this HTPC without stuttering?

Thanks.

Ekpen.
Logged

jmone

  • Administrator
  • Citizen of the Universe
  • *****
  • Posts: 14236
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #20 on: August 27, 2011, 11:47:39 pm »

Ekpen - I think you will be more than fine:
1) CPU: The Intel Core i7 990X @ 3.47GHz has a PassMark score of 10,821.  You will be able to software-decode all your media and will not need any hardware assistance (e.g. you do not need CUVID, and hence have your choice of GPU).  As a comparison, I initially moved to nvidia from ATI as my Q6600 (PassMark of 3,000-ish) did not quite have enough power to SW-decode all of my library (I've since also upgraded my CPU to an i7-2600K, so I have plenty of options and overhead).
2) GPU: With the above CPU, the only thing you need from the GPU is the ability to run madVR without dropping frames, so it wants plenty of shaders (which you have) and nice fast memory throughput.  Again, your card has plenty of headroom as, unlike games, video is not that demanding.
Logged
I'm not a python at JRiver - just another Aussie

Ekpen

  • Citizen of the Universe
  • *****
  • Posts: 674
Re: RO: nvidia GTS450 vs GTX550Ti vs ATI HD5670 vs Intel HD Graphics 3000
« Reply #21 on: August 28, 2011, 11:45:46 am »

Thanks

Ekpen.
Logged