Really? Sorry to be skeptical about that, but if he were indeed working on that, wouldn't he be under an NDA not to reveal anything to anyone?
There is no NDA needed; it's called CUDA and FireStream and OpenCL and DirectCompute and Alphaworks and what have you, as I've mentioned in the Fermi thread on this.
You want the name of a machine? How about Tesla? There are several HPC (high-performance computing) GPGPU machines out at the National Labs and in China.
My research is in OpenCL, and I will be doing a cross-comparison of my software's performance on both a 5870 and an nVidia G285.
Your argument that GPUs are better at floating-point operations is less than half right. GPUs still do not implement proper IEEE floating point, due to legacy issues, and this is part of the problem.
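The cost of leaning on GPU-style single precision can be sketched without a GPU at all. The era's GPUs were strongest in 32-bit arithmetic, and even fully IEEE-compliant single precision loses accuracy quickly in long accumulations; the GPUs' rounding shortcuts only made that worse. A minimal Python illustration, using the stdlib `struct` module to round intermediate results to IEEE single precision as a stand-in for 32-bit GPU arithmetic (the loop count of 10,000 is just an illustrative choice):

```python
import struct

def to_float32(x):
    """Round a Python float (IEEE double) to the nearest IEEE single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 ten thousand times, once in double precision and once
# with every intermediate result rounded to single precision.
step = 0.1
acc64 = 0.0
acc32 = 0.0
for _ in range(10000):
    acc64 += step
    acc32 = to_float32(acc32 + to_float32(step))

# Exact answer is 1000; compare how far each accumulation drifted.
err64 = abs(acc64 - 1000.0)
err32 = abs(acc32 - 1000.0)
print(err32, err64)
```

The single-precision error comes out orders of magnitude larger than the double-precision one, which is exactly why "the GPU is faster at floating point" was only part of the story for HPC workloads.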
Full integration of the CPU and GPU is not likely to ever happen, as these are very different chip designs. The calls from 2002 onward for GPUs to replace the CPU came far too soon and were extremely naive.
GPGPU programming is essentially a hack. What we are seeing with (the defunct) Larrabee and other new CPU designs, such as the i5 and i7, is more vectorization on the chip itself - probably stripped-down versions of the cores, e.g. trading cache for more processors. A specialized device for graphics will always be around regardless, but what we really need to do is remove the bottleneck of the PCIe slot and the memory swapping the CPU must perform as a control device, and then use a chip design optimized for general-purpose work and not specific to graphics (i.e. an int3 for x, y, z coordinates is a waste in general-purpose programming), yet without the cache and other overhead that modern CPU cores carry.
That's what the real aim is, and it's not that simple.
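One concrete instance of the graphics-specific waste mentioned above: OpenCL's 3-component vector types are, per the spec's alignment rules, stored like 4-component ones, so every int3 carries 4 bytes of pure padding. A quick stdlib-only Python sketch of what that costs for a large general-purpose array of (x, y, z) records (the array size of one million is just an illustrative assumption):

```python
import struct

# OpenCL aligns 3-component vectors like 4-component ones, so an int3
# occupies 16 bytes for 12 bytes of actual payload.
packed = struct.calcsize('3i')   # three tightly packed 32-bit ints
padded = struct.calcsize('4i')   # int3 stored with int4 alignment
print(packed, padded)            # -> 12 16

# For, say, a million (x, y, z) records, the padding alone is:
n = 1_000_000
print((padded - packed) * n)     # -> 4000000 bytes wasted
```

That is the kind of graphics-oriented layout decision that makes sense for pixel and vertex data but buys nothing for general-purpose code.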