Yeah. I'm just thinking about those Fortnite benchmarks. Nvidia always does a "game ready" driver for the bigger titles. It looked like the Intel card should have done better. I don't really know if an apples-to-apples comparison is even possible, but the Intel had about 600 "cores" at 1600 MHz and the Nvidia had 300 cores at 1300 MHz. Also, Intel had 4 more ROPs. Both were hitting similar max FPS, but the Intel card had these terrible low FPS, and I'm just thinking Nvidia has better Fortnite drivers. I mean, the card (Intel Iris Xe DG1) isn't aimed at gaming, and so far it's just an OEM part for Intel prebuilt rigs, so there's likely more that can be done on their software side of things.

I don't think driver development will be too much of an issue for Intel. With DX12 and Vulkan, the drivers are much smaller and simpler. When people talk about those APIs being "close to the metal", what that essentially means is that a lot of the work that used to be done in the driver is now handled explicitly by the game engine (which has its pros and cons).
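To make that concrete, here's a minimal sketch (assuming a Vulkan setup where `cmd` and `image` already exist) of the kind of hazard tracking that moved out of the driver. Under OpenGL the driver would infer this image transition behind your back; in Vulkan the engine records it explicitly:

```c
#include <vulkan/vulkan.h>

/* Hedged sketch: the bookkeeping an OpenGL driver used to do implicitly.
   The engine spells it out: "I wrote this image as a render target; make
   those writes visible before sampling it in a fragment shader." */
void transition_for_sampling(VkCommandBuffer cmd, VkImage image)
{
    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
        .oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
        .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
        .image = image,
        .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
    };
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
        VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
        0,            /* no dependency flags       */
        0, NULL,      /* no global memory barriers */
        0, NULL,      /* no buffer barriers        */
        1, &barrier); /* one image barrier         */
}
```

Multiply that by every resource in a frame and you can see why the driver itself gets thinner, and why the engine's quality matters more.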
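Going back to the apples-to-apples question: about the only like-for-like number those specs give you is paper FP32 throughput (ALUs x clock x 2 ops per FMA, the usual marketing math). Here's a rough sketch with the figures quoted above; bear in mind vendors count "cores" differently, and this says nothing about drivers, scheduling, or memory bandwidth:

```c
#include <stdio.h>

/* Paper FP32 throughput: shader ALUs x clock x 2 ops per FMA.
   Figures are the rough ones from the post above; treat this as
   marketing math, not a benchmark. */
int main(void) {
    double intel_tflops  = 600 * 1.6e9 * 2.0 / 1e12;  /* ~1.92 TFLOPS */
    double nvidia_tflops = 300 * 1.3e9 * 2.0 / 1e12;  /* ~0.78 TFLOPS */
    printf("Intel (paper):  %.2f TFLOPS\n", intel_tflops);
    printf("Nvidia (paper): %.2f TFLOPS\n", nvidia_tflops);
    return 0;
}
```

On paper the Intel part is well ahead, which is exactly why the terrible lows point at software.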
Global illumination, right? I mean, any card can do that. There have been lots of games with shadows that move based on the time of day. I tend to think the most efficient, older solutions are probably the best. When it comes to competitive games, playing on low settings is always an advantage. When you're focused on the gameplay, who gives a shit if the lighting is realtime?

With RTX, DLSS and so forth, I think they will become increasingly significant. It's not so much that users are crying out for fancy reflections and so on, but that if developers can rely on realtime GI and the like, it gives them way more freedom and lets them work more efficiently. The trouble is that stuff is performance hungry, but I think they'll be keen to push for it now that the new consoles can handle it.
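As a rough illustration of how the upscalers offset that cost: per-pixel shading and ray work scales with the internal render resolution, and DLSS "quality mode" at 4K renders internally at 1440p. The 2.25x below is just arithmetic, not a measured speedup; real gains vary per game and the upscaler has its own overhead:

```c
#include <stdio.h>

/* Hedged sketch: shaded-pixel savings from rendering at 1440p and
   upscaling to 4K. Assumes cost scales linearly with pixel count. */
int main(void) {
    double native_4k  = 3840.0 * 2160.0;  /* output resolution     */
    double internal_q = 2560.0 * 1440.0;  /* "quality mode" input  */
    printf("Shaded pixels drop %.2fx upscaling 1440p -> 4K\n",
           native_4k / internal_q);       /* prints 2.25x          */
    return 0;
}
```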
I think I heard that the UE "Lumen" tech can make use of Nvidia's tensor cores, but the new AMD cards have no trouble with it. I mean, that PS5 demo ran on the PS5 GPU, which has 36 CUs, iirc, while the desktop 6800 has 60. Also, I saw a video of someone playing around with that demo on PC, and it looked like the whole thing only used 6 GB of RAM. It would only display about 22M triangles at any one time, too. I wonder why Epic was going on about how important fast streaming from the SSD is when the whole thing fits comfortably in memory? It's always been that the consoles needed fast streaming because they have relatively little memory, while the PC port just loads the whole thing at once.
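On that triangle cap, one hedged back-of-envelope reading: Nanite (the geometry side of that demo) aims at roughly pixel-sized triangles, so the drawn count tracks output resolution rather than scene size, and the small RAM footprint would then be what streaming buys you (only the visible detail is ever resident) rather than evidence against it. The density figure here is an assumption picked to match the ~22M observation, not anything Epic has published:

```c
#include <stdio.h>

/* Back-of-envelope: if drawn triangles are roughly pixel-sized, the
   per-frame count is bounded by resolution, not scene complexity.
   tris_per_pixel is an assumed density, not a Nanite parameter. */
int main(void) {
    double pixels_4k      = 3840.0 * 2160.0;  /* ~8.3M pixels */
    double tris_per_pixel = 2.5;              /* assumption   */
    printf("~%.0fM drawn triangles per frame\n",
           pixels_4k * tris_per_pixel / 1e6); /* ~21M         */
    return 0;
}
```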