PS4 High Tech

#1 - Zloth - April 26th, 2013, 06:17
An article on Gamasutra goes into really high-tech detail on the PS4: bus speeds, what memory is on what chip, and so on.

I couldn't follow most of it, but it does sound like the 8GB is all shared between main memory and graphics memory. That should let them do well when higher-resolution TVs come out. I wonder what ratio of the memory games will likely be using in the first years of the console?

#2 - jhwisner - April 26th, 2013, 06:49
Originally Posted by Zloth:
An article on Gamasutra goes into really high-tech detail on the PS4: bus speeds, what memory is on what chip, and so on.

I couldn't follow most of it, but it does sound like the 8GB is all shared between main memory and graphics memory. That should let them do well when higher-resolution TVs come out. I wonder what ratio of the memory games will likely be using in the first years of the console?
The direct pipe from system RAM to the GPU is a pretty neat way of taking advantage of the unified RAM, and a significant difference from how things are handled in PC architecture, as far as I'm aware.

The other thing that's pretty interesting is that many of the changes they've made to remove or reduce bottlenecks can apparently be taken advantage of simply by changing compiler settings. That means there shouldn't be as much need to focus development on additional PS4-specific optimization versus a standard PC.
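To make the PC-side contrast concrete, here's a minimal sketch, using CUDA purely as an illustration (the PS4 does not use CUDA, and the kernel and buffer names are invented for the example): on a typical discrete-GPU PC, data the GPU needs has to be staged from system RAM into VRAM with an explicit copy over PCIe and copied back again afterwards, which is exactly the round trip a unified pool with a direct pipe can skip.

Code:
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Illustrative kernel: double every element of a buffer on the GPU.
__global__ void doubleAll(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);   // filled by the CPU, lives in system RAM

    // The GPU can't simply read that buffer: it has to be copied across PCIe
    // into dedicated VRAM first, and copied back when the CPU wants the result.
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);  // system RAM -> VRAM
    doubleAll<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // VRAM -> system RAM
    cudaFree(dev);

    printf("first element after the round trip: %f\n", host[0]);  // prints 2.000000
    return 0;
}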

#3 - Dr. A - April 26th, 2013, 07:08
It's interesting how well thought out the PS4 architecture is. It was funny when the PS4 was revealed to have 8GB of RAM, because all the developers working with the development kit had been told it would be 4GB. It turned out to be a pleasant surprise. Hopefully the final price point is not too intimidating.

Can't wait for E3 for more info! Interestingly enough, Nintendo opted out of the big E3 presentation, which gets massive media coverage, and is holding smaller presentations instead. Apparently they've already played all their cards during their recent Nintendo Direct presentations and probably didn't want to rehash existing news.

And the next Xbox (Next-box?) will be revealed on May 21. It will be interesting to see. There's been some negativity regarding Microsoft's new console (rumors of online DRM, blocking of used games, etc.), so here's their chance to win some people over.

#4 - Zloth - May 1st, 2013, 05:01
I'm sure conversion will be pretty easy after the first year or two, but what about the launch games? I'm guessing the latest rigs will be able to handle them, but that leaves out a pretty big hunk of the PC market. If a company has a game with a lot of BIG textures running around, are they going to go to the trouble of making smaller textures for the not-so-high-end PCs, or will they just jack the minimum specs up?

#5 - Damian - May 1st, 2013, 07:09
How is it any different from other Xbox architectures? It sounds very similar to the unified bus architecture of the original Xbox.

#6 - jhwisner - May 1st, 2013, 15:28
Originally Posted by Damian:
How is it any different from other Xbox architectures? It sounds very similar to the unified bus architecture of the original Xbox.
It's the opposite of the unified bus approach - that was something done to save on cost, not for the sake of performance. Rather than the GPU's access to dynamically allocated video and system RAM competing for bandwidth on the same bus the CPU uses to reach system RAM, each of these has its own high-bandwidth bus. That means fewer bottlenecks, since they don't have to compete for memory access.

#7 - Dr. A - May 2nd, 2013, 05:06
An article on AMD's tech here, which is probably used in the PS4 and possibly in the next Xbox.

#8 - jhwisner - May 2nd, 2013, 05:49
Originally Posted by Dr. A:
An article on AMD's tech here, which is probably used in the PS4 and possibly in the next Xbox.
Yeah, it's the memory that's more truly unified, and this is accomplished while reducing bottlenecking by using two buses dedicated to memory and one bus (entirely on-die) between the GPU and CPU. The CPU-RAM bus and the GPU-RAM bus are conceptually familiar - similar to how a CPU accesses system RAM and a GPU accesses VRAM on a normal gaming PC. The real innovation is the on-die passing of pointers between the CPU and GPU.

In a system with discrete VRAM and system RAM, the memory addressed by each component is obviously physically separate. In those systems, information that needs to go from system RAM to the GPU has to be written by the CPU, then read across the PCIe bus by the GPU and written into VRAM. Systems with shared VRAM/system RAM normally don't entirely get around this bottleneck, because they are just stripped-down approximations of the above setup and don't have low-latency direct buses between the CPU and GPU. Even though the CPU and GPU both write to shared memory, past systems did not let them effectively address the same areas AT THE SAME TIME. So even with the dynamically shared RAM in the Xbox and the statically split RAM of the PS3, data written to memory by the CPU to be processed by the GPU still had to be "sent" to the GPU and written back into memory allocated for its use.

The Xbox 360 did speed this up compared to bargain PC implementations of shared VRAM because it has an FSB-replacement bus on-die. That still differs from what the PS4 is doing, because it was performing the same process described above - passing all the data from CPU-addressed memory to GPU-addressed memory. It was just doing it on-die, which is faster than what a cheap laptop would do, but it wasn't a new bus - just some functions of the same familiar architecture shrunk down and put on-die. So the innovation here is not just that there's a specialized bus for communication between the GPU and CPU - as well as dedicated pipes between each of them and the shared RAM - but more importantly that GPU-CPU communication is made much more efficient by being able to point to data in memory rather than reading it all back across to the other.
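There's no public PS4 code to point at, but CUDA's managed memory on a PC gives a rough analogue of that pointer-passing idea, again using CUDA purely for illustration: one allocation is addressable by both the CPU and the GPU, so the CPU just hands the GPU a pointer instead of writing the data, shipping it across a bus, and writing it again into GPU-owned memory. (On a discrete-GPU PC the driver may still migrate pages behind the scenes, so this is an approximation of hUMA rather than the real thing.)

Code:
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative kernel: the GPU works directly on memory the CPU just wrote.
__global__ void doubleAll(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;

    // One allocation, one pointer, usable from both sides.
    float* shared = nullptr;
    cudaMallocManaged(&shared, n * sizeof(float));

    for (int i = 0; i < n; ++i) shared[i] = 1.0f;    // CPU writes it...
    doubleAll<<<(n + 255) / 256, 256>>>(shared, n);  // ...and the GPU is just handed the pointer
    cudaDeviceSynchronize();                         // wait before the CPU reads the result

    printf("first element: %f\n", shared[0]);        // prints 2.000000, no explicit copies anywhere
    cudaFree(shared);
    return 0;
}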



[image]

compared to

[image]

Not the best images to compare, but they might give a better idea of how the paths/chokepoints have changed. Considering my shitty and inaccurate description of the process, the pictures might be better than more words.

http://gamrconnect.vgchartz.com/thread.php?id=159410

#9 - GothicGothicness - May 2nd, 2013, 10:56
I like how I have an old post predicting that this is exactly what AMD would do when they purchased ATI, and that it'd be perfect for consoles…

#10 - Dr. A - May 2nd, 2013, 11:08
I'm a big fan of AMD and ATI. Sadly, they took a long time to recover from their merger way back when, so Intel & NVIDIA surged ahead.

Competition is always a good thing. Consumers will ultimately win in the end.

#11 - jhwisner - May 2nd, 2013, 21:56
Originally Posted by GothicGothicness:
I like how I have an old post predicting that this is exactly what AMD would do when they purchased ATI, and that it'd be perfect for consoles…
It would also be perfect for gaming PCs, which is why they're utilizing it in an upcoming product line aimed at PC gaming. Depending on the strength of the processor that releases alongside the system boards, they might finally knock Intel off the high spots on gaming PC benchmarks.

We're probably a good while away from an APU that puts a high-end PC CPU and GPU on one die, but it does at least have enough potential to be worth watching. For PC users, the greater ability to use GPU compute alongside graphics-intensive work could be interesting as well, particularly because of how it might be used to accelerate AI.

http://www.bit-tech.net/gaming/2009/…-games-works/7

The idea has been around for a while, but the hUMA setup from AMD could greatly reduce the performance hits that have, in part, limited how useful direct-compute applications were in gaming. In general this could allow a greater degree of direct-compute utilization alongside moderate-to-high GPU utilization for graphics without additional hits to performance. That would be most relevant to potential AI uses of OpenCL or CUDA, more so than to physics calculations - AI being the sort of work it's frustratingly difficult to budget shared GPU computing time/power for, since you don't necessarily have a good way of predicting how long or how complex finding a solution is going to be without starting to calculate it.
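As a rough sketch of that budgeting problem (CUDA streams used purely for illustration; the workload and names are made up), a game can launch an AI-style compute job on its own stream and just poll for completion each frame, so a slow job simply lands a frame or two later instead of stalling rendering while it waits:

Code:
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Made-up "AI" workload: score a batch of candidate positions on the GPU.
__global__ void scorePositions(const float* positions, float* scores, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) scores[i] = -(positions[i] * positions[i]);  // toy heuristic
}

int main() {
    const int n = 4096;
    std::vector<float> hostPositions(n);
    for (int i = 0; i < n; ++i) hostPositions[i] = i * 0.01f;

    float *devPositions = nullptr, *devScores = nullptr, *hostScores = nullptr;
    cudaMalloc(&devPositions, n * sizeof(float));
    cudaMalloc(&devScores, n * sizeof(float));
    cudaMallocHost(&hostScores, n * sizeof(float));  // pinned, so the async copy really is async

    // A dedicated stream keeps this job independent of whatever else the GPU is doing.
    cudaStream_t aiStream;
    cudaStreamCreate(&aiStream);
    cudaMemcpyAsync(devPositions, hostPositions.data(), n * sizeof(float),
                    cudaMemcpyHostToDevice, aiStream);
    scorePositions<<<(n + 255) / 256, 256, 0, aiStream>>>(devPositions, devScores, n);
    cudaMemcpyAsync(hostScores, devScores, n * sizeof(float),
                    cudaMemcpyDeviceToHost, aiStream);

    // Game-loop sketch: poll instead of blocking, so frames never wait on the AI job.
    int frame = 0;
    while (cudaStreamQuery(aiStream) != cudaSuccess) {
        // ...render the frame, run the rest of the game...
        ++frame;
    }
    printf("AI scores ready after %d frames, score[0] = %f\n", frame, hostScores[0]);

    cudaStreamDestroy(aiStream);
    cudaFree(devPositions);
    cudaFree(devScores);
    cudaFreeHost(hostScores);
    return 0;
}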

So in five years or so, when this sort of setup might be common in mid-range gaming systems and up (with some competing alternatives from nVidia and/or Intel), a system built around something like hUMA might not only see better realized performance per GFLOP, but also more sophisticated AI and more impressive physics without those taking a bite out of performance.

#12 - Dr. A - May 3rd, 2013, 10:28
In case anyone is interested, Sony's E3 event will be on June 10th @ 6PM (PDT).

Time for your region is calculated here

#13 - Zloth - May 22nd, 2013, 04:05
Well, this generation is all about the 8s: 8GB in both, and both have 8 cores. I've heard musings, though, that the 8 cores might really be 4 cores with some sort of hyperthreading tech.
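For what it's worth, the core count software sees doesn't settle that question either. A minimal C++ sketch of the standard query on a PC: it reports logical hardware threads, so a 4-core chip with SMT shows up as 8, just like a genuine 8-core part would.

Code:
#include <cstdio>
#include <thread>

int main() {
    // Reports logical hardware threads, not physical cores: a "4 cores + SMT"
    // CPU and a true 8-core CPU can both come back as 8 here.
    unsigned logical = std::thread::hardware_concurrency();
    printf("logical hardware threads: %u\n", logical);
    return 0;
}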