Hunk'a Memory Video Card?

Zloth

I smell a... wumpus!?
Joined
August 3, 2008
Messages
8,238
Location
Kansas City
With DirectX 12, does it now make sense to buy a video card purely for the memory? For instance, you could drop in a GT 740 with 4GB of GDDR5 for just $100. That's a huge memory boost for ultra high-res textures! (At least for a DirectX 12 game that wants them or allows mods.)
 
Joined
Aug 3, 2008
Messages
8,238
Location
Kansas City
There are only a dozen games, released or in development, that will use DirectX 12 in the near future: http://www.pcgameshardware.de/Direc...ste-Uebersicht-Ark-DayZ-Star-Citizen-1164994/

So I guess unless you are playing one of these games, the question is irrelevant anyway.

And on top of that, even the games which do support DX12 have to be specifically coded so that the memory/GPUs stack.

I am pretty sure it won't make much sense for a while.
 
Joined
Jun 2, 2012
Messages
4,691
DX12 allows for the use of multiple mismatched GPUs, but, as far as I know, each GPU will handle a percentage of each frame, depending on its power. So, a weak card would only handle a small portion of the frame when paired with a more powerful card. If each card had 4GB of VRAM, the VRAM on the weaker card would only be used for its small slice of the rendering, and the more powerful card would have to stretch its 4GB for the larger portion of the scene. So, the memory is not combined efficiently when the cards are mismatched in power.
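Just to make that concrete, here's a rough sketch (not real DX12 API code, just the sort of bookkeeping an engine might do for split-frame rendering) of how a frame could be divided between two mismatched cards. The 6:1 power ratio is made up purely for illustration:

[CODE]
// Illustration only: proportional split-frame rendering between two GPUs.
// Each GPU only needs the resources for its own slice in its own VRAM.
#include <cstdio>

struct Gpu {
    const char* name;
    float relativePower; // e.g. benchmarked throughput, arbitrary units (assumed 6:1 here)
};

int main()
{
    const int frameHeight = 2160; // a 4K frame
    Gpu fast = { "980 Ti", 6.0f };
    Gpu slow = { "GT 740", 1.0f };

    float total = fast.relativePower + slow.relativePower;
    int fastRows = static_cast<int>(frameHeight * fast.relativePower / total);
    int slowRows = frameHeight - fastRows;

    std::printf("%s renders rows 0-%d (%d rows)\n", fast.name, fastRows - 1, fastRows);
    std::printf("%s renders rows %d-%d (%d rows)\n", slow.name, fastRows, frameHeight - 1, slowRows);

    // The slow card only touches ~1/7 of the frame, so most of its 4GB sits
    // idle, while the fast card still has to fit the rest of the scene in
    // its own memory.
    return 0;
}
[/CODE]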
 
Joined
Nov 8, 2014
Messages
12,085
In the case of VRAM, investing in it won't make any difference at all. Shadow of Mordor, which wants 6GB of VRAM for ultra textures, will remain a pathetic grinder. Wolfenstein, which eats up 50 gigs of your HDD space, will still look low-res since its huge unoptimized textures are mostly garbage. Etc.

Sure, there will be a game or two soon that will look drastically different on 2GB versus 4GB of VRAM.
But IMO buying something extremely expensive for the sake of just a few games doesn't make sense.

I've said it before: if you have so much money that you won't feel an itch from buying a pair of MSI 980 Ti Lightning cards to overclock and pair in one machine - do it. If, however, your budget has a limit, have some patience; prices will drop.
Instead of investing in abnormally priced VRAM you won't use that much today, how about buying yourself a new SSD instead? An SSD is something you *will* use. Frequently.
 
Joined
Apr 12, 2009
Messages
23,459
@Joxer: I don't think you got his point. His idea was to throw in a cheap and weak card, just for the additional VRAM.
 
Joined
Jun 2, 2012
Messages
4,691
Jesus Christ, yes, I totally missed the point. Sorry.
After rereading his post - basically it'd come down to one GPU using VRAM from another.

I'm not sure what to say; theoretically, with proper drivers, it could work - if DX12 does allow such a thing (and from what I've read it doesn't, which still means nothing, since I never researched the material thoroughly).

We need someone who has read the whole material about DX12 to answer this one.
Or perhaps ask on Tom's Hardware?
 
Joined
Apr 12, 2009
Messages
23,459
It doesn't work that way, and it would defeat the point of VRAM anyway. The whole point is that the video card has its own dedicated RAM directly connected to the GPU over a high-speed bus.
 
Joined
Nov 8, 2014
Messages
12,085
Hmmm, good point, Ripper. Games have been able to use main memory as a 'backup' to video memory since DirectX 10 or something. Would video card A be better off tossing the extra textures into video card B's memory, or (assuming there is some to spare) just putting them in main memory? Either way, the data has to go through card A's PCIe slot - but is it better to go through the second card's PCIe slot or to use the slower system memory?

Tom's Hardware might be a good idea... or maybe the 3D Gurus. Jeez, I haven't been there in centuries.
 
Joined
Aug 3, 2008
Messages
8,238
Location
Kansas City
The key thing is that once a graphics card is forced to use memory off the card, stuttering will be the result. The bandwidth of a PCIe slot is about 30GB/s at the absolute max. The bandwidth between the GPU and its VRAM is more like 300GB/s on a decent card. Once the card is forced to go through PCIe to find more memory, you already have a problem.
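Quick back-of-the-envelope with those figures (treat the bandwidth numbers as ballpark, not measured):

[CODE]
// Rough transfer-time comparison for a 1GB chunk of texture data,
// using the approximate bandwidth figures above.
#include <cstdio>

int main()
{
    const double gigabytes = 1.0;
    const double vramBandwidth = 300.0; // GB/s, decent card's local VRAM
    const double pcieBandwidth = 30.0;  // GB/s, PCIe at the absolute max

    std::printf("Local VRAM: %.1f ms\n", gigabytes / vramBandwidth * 1000.0);
    std::printf("Over PCIe:  %.1f ms\n", gigabytes / pcieBandwidth * 1000.0);

    // Roughly 3ms versus 33ms - and a whole frame at 60fps is only ~16.7ms,
    // so pulling that data across PCIe mid-frame means visible stutter.
    return 0;
}
[/CODE]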

Also bear in mind that there is nothing special about the VRAM itself (though it's usually a bit faster than system RAM) - the main thing is the speed with which it communicates with the GPU, using the dedicated controllers and bus on the card.

I would virtually guarantee that DX12 will not try to facilitate cards using each other's VRAM. What it does allow for is split-frame rendering, so that if you had two equal cards, they would each use their own VRAM to render half the scene - effectively combining their VRAM total, but never actually accessing each other's memory.
 
Joined
Nov 8, 2014
Messages
12,085
Stuttering shouldn't be a thing anymore. Under DX12, the game (or the game's engine) is responsible for getting all the resources into the right place at the right time. That only helps if the game/engine developers can do a better job with their specific situation than the driver developers have done with the general case, though.

Reading up more on this "explicit multi-adapter" technology, it does sound like what you're saying, Ripper. Some textures and such will need to be on both cards, but not all of them, depending on what's getting rendered. (If the secondary GPU is just rendering the GUI for the game, then its memory might hold completely different things... but it sure isn't going to need a complete gigabyte.)
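For anyone curious what the "explicit" part looks like from the engine's side, here's a minimal sketch using the standard DXGI/D3D12 adapter enumeration (an illustration, not code from any actual game). Each physical GPU gets its own device and manages its own dedicated VRAM - nothing here pools memory across cards:

[CODE]
// Minimal sketch of the starting point for explicit multi-adapter in D3D12:
// enumerate every hardware adapter and create an independent device on each.
// The engine then decides which resources live in which device's own VRAM.
// Link with d3d12.lib and dxgi.lib.
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip software/WARP adapters

        // One independent device per physical GPU.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"Adapter %u: %s, %zu MB dedicated VRAM\n",
                    i, desc.Description,
                    desc.DedicatedVideoMemory / (1024 * 1024));
        }
    }
    return 0;
}
[/CODE]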
 