nVidia 400 series

-Tangent/rambling thought- AMD is clearly in the lead now, no question (if I needed to upgrade I'd be all over a 5870). But it's worth mentioning that Nvidia's G80 (their first unified shader arch) was incredibly potent. IMO, if GT200 had achieved the clocks Nvidia set out for, they wouldn't be nearly as badly off as they are now. Yes, AMD would still have wrestled away the performance and price/performance crowns with the dawn of their Evergreen lineup (though some say Nvidia has taken the performance crown back with Fermi, at least in DX11/GPGPU/tessellation).

I don't see AMD having the lead in performance, or price/performance, right now. Just curious, what makes you believe that AMD is "clearly in the lead now"?
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
I don't see AMD having the lead in performance, or price/performance, right now. Just curious, what makes you believe that AMD is "clearly in the lead now"?

Well, it depends on what you're after. From a pure-performance, single-GPU perspective (not single-card dual-GPU, aka the 5970), I will concede the GTX 480 is king of the hill… When I said ATI had the lead I meant more in terms of product success, overall price/performance, thermals/acoustics/power consumption and image quality.

That said, since this is a Fermi thread, I will point out where it falls short compared to AMD's Evergreen lineup:

-First and foremost, AMD has a full DX11 lineup, unlike NV - everything from the lowly 5670 up to the 5970. The real issue for NV is that every Fermi card so far is built from the exact same core, GF100. The difference between the GTX 480, 470 and 465 is simply a few disabled blocks (aka binning), either by design or by necessity (i.e. disabling non-functional or malfunctioning blocks). That translates into parts that sometimes consume similar power while providing lower performance (http://www.anandtech.com/show/3745/nvidias-geforce-gtx-465/14). ATI, by contrast, has the Cypress (5800), Juniper (5700), Redwood (5600/5500) and Cedar (5400) series, each with its own core. Now, GF104 might finally shake things up a bit - frankly that part is long overdue, as is a refresh/shrink of GF100. It's worth noting that even a die shrink might not be enough to rein in power and allow Nvidia to produce a dual-GPU card.

-If you read through the previously linked AnandTech review you will see that Fermi's thermals, acoustics and power consumption are terrible. Heck, AMD's dual-GPU 5970 consumes less power under load than a single GTX 480. Beyond the question of performance per watt, you also have to consider that an AMD GPU might be a pure drop-in upgrade, whereas the GTX 480 could require a new PSU. That's literally where I would find myself if I were considering a GPU upgrade, since I have a Corsair 550VX (amazing PSU btw). Sure, you could argue that Corsair's CWT- and Seasonic-built units are great and can be pushed up to, and in some cases beyond, their rated maximum (for my 550 that's 41A on the 12V rail), but why go there? I know performance per watt might sound like a crazy metric to even bother considering, yet in an area with exceptionally high energy costs, and in a house shared with my brother plus one more housemate (both of whom use my gaming PC more than I do), that performance-per-watt figure becomes very important when weighing any potential upgrade.
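
To put rough numbers on the performance-per-watt point, here's a back-of-the-envelope sketch. The wattage gap, gaming hours and electricity rate below are purely assumed for illustration, not measured figures:

```python
# Illustrative only: assumed load-power gap, usage and electricity rate.
extra_watts = 100        # assumed extra draw of the GTX 480 vs a 5870 under load
hours_per_week = 20      # assumed gaming time shared across the household
price_per_kwh = 0.20     # assumed local electricity rate, $/kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_week * 52
extra_cost_per_year = extra_kwh_per_year * price_per_kwh
print(f"~{extra_kwh_per_year:.0f} kWh/year extra, roughly ${extra_cost_per_year:.0f}/year")
# -> ~104 kWh/year extra, roughly $21/year
```

Small in absolute terms, but it scales directly with how hard and how often the card gets pushed.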

-If you look at AMD's design, their smaller, more efficient approach has paid off in spades. Fermi is a monster at 3bn transistors and a die size of 529mm^2, especially compared to Cypress, which comes in at 2.15bn transistors and 334mm^2. Not only does AMD get more dies per wafer thanks to the physically smaller die, the cores are also less complex and as such should give (significantly?) better yields.

[Image: die_comparison.png - GF100 vs Cypress die size comparison]
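
For a rough sense of that dies-per-wafer gap, here's a quick sketch using the common first-order estimate for a 300mm wafer (usable wafer area minus an edge-loss term). It ignores scribe lines and defect density, so treat the outputs as ballpark candidate-die counts only:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area over die area, minus an edge-loss term.
    Ignores scribe lines and defect yield entirely."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(dies_per_wafer(529)))   # GF100   (~529 mm^2) -> roughly 105 candidates
print(round(dies_per_wafer(334)))   # Cypress (~334 mm^2) -> roughly 175 candidates
```

Even before yield enters the picture, the smaller die gives AMD on the order of two-thirds more candidate dies per wafer.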

-AMD has also finally improved in an area where they were sorely lacking and getting beaten by Nvidia. Before, I found Nvidia's angle-dependent anisotropic filtering algorithm to be superior; now that AMD has implemented angle-independent anisotropic filtering, they rule the roost.

http://www.bit-tech.net/hardware/graphics/2009/09/30/ati-radeon-hd-5870-architecture-analysis/12

-And lastly, Nvidia's drivers are no longer perceptibly better than ATI's; they're roughly equal. That isn't necessarily bad, it's just no longer an advantage for the green team.
 
Joined
Feb 28, 2010
Messages
380
When I said ATI had the lead I meant more in terms of product success, overall price/performance, thermals/acoustics/power consumption and image quality.

I'd say overall price/performance is *real* close right now, with ATI perhaps having a small advantage. ATI is better if you're really worried about power consumption (few enthusiast gamers are), but their cards are not significantly quieter than nVidia's.

Not sure where you get ATI having better image quality, because that's definitely not true.


-First and foremost, AMD has a full DX11 lineup, unlike NV - everything from the lowly 5670 up to the 5970. The real issue for NV is that every Fermi card so far is built from the exact same core, GF100. The difference between the GTX 480, 470 and 465 is simply a few disabled blocks (aka binning), either by design or by necessity (i.e. disabling non-functional or malfunctioning blocks). That translates into parts that sometimes consume similar power while providing lower performance (http://www.anandtech.com/show/3745/nvidias-geforce-gtx-465/14). ATI, by contrast, has the Cypress (5800), Juniper (5700), Redwood (5600/5500) and Cedar (5400) series, each with its own core.

I can see how that might be an advantage in terms of overall sales (uninformed people buying a cheap video card because it says "DirectX 11 compatibility" on the box), but in truth, those lower/mid-range cards can't actually run many DX11 games at acceptable framerates.


-AMD has also finally improved in an area where they were sorely lacking and getting beaten by Nvidia. Before, I found Nvidia's angle-dependent anisotropic filtering algorithm to be superior; now that AMD has implemented angle-independent anisotropic filtering, they rule the roost.

http://www.bit-tech.net/hardware/graphics/2009/09/30/ati-radeon-hd-5870-architecture-analysis/12

Sure, if you want to compare the 5xxx series to nVidia's previous generation of cards, which is exactly what that article does. Personally, I'd rather be able to use Supersampling Anti-Aliasing in DX 10/11 games, which ATI cards are incapable of doing.
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
I'd say overall price/performance is *real* close right now, with ATI perhaps having a small advantage. ATI is better if you're really worried about power consumption (few enthusiast gamers are), but their cards are not significantly quieter than nVidia's.

Not sure where you get ATI having better image quality, because that's definitely not true.

I can see how that might be an advantage in terms of overall sales (uninformed people buying a cheap video card because it says "DirectX 11 compatibility" on the box), but in truth, those lower/mid-range cards can't actually run many DX11 games at acceptable framerates.

Sure, if you want to compare the 5xxx series to nVidia's previous generation of cards, which is exactly what that article does. Personally, I'd rather be able to use Supersampling Anti-Aliasing in DX 10/11 games, which ATI cards are incapable of doing.

The angle-independent anisotropic filtering was the IQ boost I was talking about… As for SSAA, how many good DX10/11 games are out right now, and of those, how many are RPGs?

DX11 is the future, but (and this is entirely my opinion) why jump on the bandwagon as an early adopter? It goes without saying that there will always be better tech down the line, and while some might argue that if you wait too long you fall into the trap of always waiting for the next technological breakthrough, I'm of the opinion that if you have a GPU capable of cranking out acceptable frames for the games you enjoy, at your monitor's native resolution (plus whatever eye candy you like), why bother?

Also, in the games that support SSAA, how does the card perform? I doubt it's good, since at 4x SSAA the GPU has to shade four samples per pixel… That kind of increased workload typically translates into a huge drop in performance. Does it take SLI to run 1920x1200 with 4x SSAA at playable frames? Also, gaming on a monitor with a smaller pixel pitch (i.e. higher pixel density) reduces the need for AA, so there are scenarios where the IQ difference could be a complete wash.
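
To make the 4x figure concrete: 4x SSAA effectively renders the frame at twice the width and twice the height, then averages each 2x2 block down to one screen pixel, so at 1920x1200 the GPU shades roughly 9.2M samples instead of 2.3M. A toy sketch of the resolve step (plain NumPy, nothing vendor-specific, just to show the idea):

```python
import numpy as np

def ssaa_4x_resolve(hi_res):
    """Average each 2x2 block of the supersampled frame into one output pixel.
    hi_res has shape (2*H, 2*W, 3); the result has shape (H, W, 3)."""
    h, w, c = hi_res.shape
    return hi_res.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

# 4x SSAA at 1920x1200 means shading a 3840x2400 frame first.
frame = np.random.rand(2400, 3840, 3)   # stand-in for the supersampled render
final = ssaa_4x_resolve(frame)          # -> shape (1200, 1920, 3)
print(frame.size / final.size)          # -> 4.0 (four shaded samples per output pixel)
```

The shading cost scales with that sample count, which is why the performance hit is typically so large.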

I'm curious now, do you have any comparison screens of your own you'd be willing to share?
 
Joined
Feb 28, 2010
Messages
380
For me, I'd rather run at a higher resolution than run with AA on. Of course, being able to have both on is best, but that's not realistic when playing bleeding-edge games. It seems SSAA would only make this problem worse. Is it a moot point for newer games, and only useful for older titles that aren't too demanding?
 
Joined
Aug 18, 2008
Messages
15,679
Location
Studio City, CA
For me, I'd rather run at a higher resolution than run with AA on. Of course, being able to have both on is best, but that's not realistic when playing bleeding-edge games.

Sure it is, if you have the hardware. I'm currently playing Metro 2033 at 1920x1200 with 4x AA.

Of course it depends on the graphics engine of the game. Some games look fine without any AA at all.
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
The angle-independent anisotropic filtering was the IQ boost I was talking about… As for SSAA, how many good DX10/11 games are out right now, and of those, how many are RPGs?

Not a huge amount, but enough for me, and I don't limit myself to crpgs.


DX11 is the future, but (and this is entirely my opinion) why jump on the bandwagon as an early adopter? It goes without saying that there will always be better tech down the line, and while some might argue that if you wait too long you fall into the trap of always waiting for the next technological breakthrough, I'm of the opinion that if you have a GPU capable of cranking out acceptable frames for the games you enjoy, at your monitor's native resolution (plus whatever eye candy you like), why bother?

I agree in part, but I wasn't happy with my previous card (Radeon 4890), and I got a very good price on the GTX 470. I didn't upgrade just for DirectX 11, although I'm already enjoying its benefits. (Tessellation is quite nice) :)

Also, in the games that support SSAA, how does the card perform? I doubt it's good, since at 4x SSAA the GPU has to shade four samples per pixel… That kind of increased workload typically translates into a huge drop in performance. Does it take SLI to run 1920x1200 with 4x SSAA at playable frames?

Afaik it doesn't matter if a game supports SSAA because you can force it almost every time. Of course performance is going to depend on the game itself. I wouldn't dare to try it with something like Crysis or Metro 2033, but plenty of older games are quite playable. At least the option is there for the people who do have dual-GPU setups.

I'm curious now, do you have any comparison screens of your own you'd be willing to share?

Not quite sure what you mean? Are you talking about SSAA specifically? Here's an article that talks about it, and shows a few comparison screens as well.
http://www.pcgameshardware.com/aid,...tched-image-quality-in-modern-games/Practice/
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
And some games were impossible to play at full resolution with AA turned on (like Crysis).

Yep, but the key word there is "were", as in, when it was first released. Crysis can be played at high resolution + AA on newer high-end systems, and it's definitely not a problem for those with dual GPUs.
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
Not a huge amount, but enough for me, and I don't limit myself to crpgs.

Unfortunately I don't have as much spare time anymore so I have to limit my game selection quite a bit.

I agree in part, but I wasn't happy with my previous card (Radeon 4890), and I got a very good price on the GTX 470. I didn't upgrade just for DirectX 11, although I'm already enjoying its benefits. (Tessellation is quite nice) :)

I don't doubt that… I was just throwing in my own two cents. I had always been partial to Nvidia, but for me their last two major GPU releases (GT200 and now GF100) were big letdowns. Not that either was bad, I just had high expectations (their constant and seemingly limitless delays didn't help either).

Afaik it doesn't matter if a game supports SSAA because you can force it almost every time. Of course performance is going to depend on the game itself. I wouldn't dare to try it with something like Crysis or Metro 2033, but plenty of older games are quite playable. At least the option is there for the people who do have dual-GPU setups.

I meant DX10/11 titles (or am I mistaken - can the DX9 API also support full-screen SGSSAA?)… Random info: DX11 is a superset of DX10.1, which is itself a superset of DX10. In other words, you have to really dig into a "DX11" game's functionality to determine whether it actually uses any of the new features introduced by the superset. The two biggest are tessellation and multi-threaded rendering (yes, on the CPU side - finally something for our quad- and hexa-core chips to do).

Not quite sure what you mean? Are you talking about SSAA specifically? Here's an article that talks about it, and shows a few comparison screens as well.
http://www.pcgameshardware.com/aid,...tched-image-quality-in-modern-games/Practice/

Yeah, I remember seeing that when it was first published… I was hoping for real-world examples, as their textures seemed somewhat blurred; then again, they applied/increased TSSAA, not SSAA.

Oh, and I've seen ToMMTi-Systems' SSAA tool posted a couple of times when searching for info on SSAA. Have you tried it out? (If so, any thoughts?)

Link: http://www.tommti-systems.de/start.html

Below 30 fps for me is "impossible".

While I didn't really like Crysis (it came free with my G92 GTS), it surprisingly played somewhat "smooth" even under 30 FPS. In general, though, I tend to agree: below 30 FPS = extreme pain.
 
Joined
Feb 28, 2010
Messages
380
Impossible to me equals extreme frustration. But possibly playable by someone with a strong ability to dissociate themselves from time. ;)
 
Joined
Aug 18, 2008
Messages
15,679
Location
Studio City, CA
To test that theory perhaps it is time to play an "impossible" game while getting acquainted with Mary Jane?

:biggrin:
 
Joined
Feb 28, 2010
Messages
380
I had always been partial to Nvidia, but for me their last two major GPU releases (GT200 and now GF100) were big letdowns. Not that either was bad, I just had high expectations (their constant and seemingly limitless delays didn't help either).

I wouldn't be so quick to judge Fermi from second-hand sources. I'm pleasantly surprised with it, and it has performed even better than I expected. A good friend and gaming partner of mine has a 5850, and he says that now he wishes he had held out for a GTX 470. The only downside atm is cost.


I meant DX10/11 titles (or am I mistaken - can the DX9 API also support full-screen SGSSAA?)… Random info: DX11 is a superset of DX10.1, which is itself a superset of DX10. In other words, you have to really dig into a "DX11" game's functionality to determine whether it actually uses any of the new features introduced by the superset. The two biggest are tessellation and multi-threaded rendering (yes, on the CPU side - finally something for our quad- and hexa-core chips to do).

I don't think any version of DirectX would be incompatible with SGSSAA, but don't quote me on that; I'm too lazy to actually look it up right now. I do remember using it years ago (before MSAA) with earlier 3D games. IIRC, didn't the old 3dfx cards, and early GeForce cards, use the same method?


Oh, and I've seen ToMMTi-Systems' SSAA tool posted a couple of times when searching for info on SSAA. Have you tried it out? (If so, any thoughts?)

Not familiar with it. Thanks for the link though, maybe I'll give it a try when I have some spare time.

Oh... and go easy on Thrasher. He's extremely sensitive to anything about a game that's not perfect to him. ;)
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
On my point again, I really don't think they can call it Fermi until they implement the software.

The GPGPU features are supposed to be fully available as an importable library, rather than by running separate CUDA files and accessing them as external objects.

And again, how much faster would your computer be with 120-512 cores running at 512 MHz-1 GHz, compared to, say, 2 or 4 CPU cores at 3.2 GHz with so much of the architecture taken up by caching?



Also, my other research machine is running a 5870 but I haven't played with Stream yet.

My purpose is to do comparative work between the machines using OpenCL. When I get it working, it will open up a new benchmark you guys can argue about.
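
For anyone who wants to poke at the same kind of comparison, a minimal device survey is a reasonable starting point. This sketch assumes the pyopencl package and both vendors' OpenCL drivers are installed (any OpenCL binding would do equally well):

```python
import pyopencl as cl

# Enumerate every OpenCL platform/device the drivers expose and print
# the headline numbers you'd want before running any real benchmark.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{platform.name}: {dev.name} -- "
              f"{dev.max_compute_units} compute units @ {dev.max_clock_frequency} MHz, "
              f"{dev.global_mem_size // (1024 ** 2)} MB global memory")
```

Compute-unit counts aren't directly comparable across vendors (an AMD SIMD and an NVIDIA SM are very different beasts), so the real argument only starts once actual kernels are timed.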

--

Speaking of the argument, I haven't heard any mention of Intel's cards yet.
 
Joined
Oct 19, 2006
Messages
5,212
Location
The Uncanny Valley
Speaking of the argument, I haven't heard any mention of Intel's cards yet.

Meh…. Who needs Intel? :)



Beware of criticizing anything JDR may like; you'll never hear the end of it. :p

I don't think he has anything to worry about, he doesn't seem opposed to trading factual information. ;)
 
Joined
Oct 21, 2006
Messages
39,138
Location
Florida, US
Or discussing the real power of video cards: general-purpose GPU programming!
 
Joined
Oct 19, 2006
Messages
5,212
Location
The Uncanny Valley