Since the early days of accelerated graphics, programmers have had to go through hardware-backed libraries like DirectX or OpenGL.
If we wanted to explain to someone what introducing a graphics chip changed, the simplest thing to say is "we just copy the 3D data into a separate memory, where another processor takes care of it, because that processor is simply faster at it".
During this period, processors never stopped getting faster and more powerful. Today, with the arrival of multi-core processors, x86 is still around, graphics cards keep gaining power, and they are now taking on CPU-like roles through tools like OpenCL and CUDA.
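To make the current split concrete, here is a rough sketch of my own (not from any real engine, just CUDA as an example of those tools): the data has to be copied into the GPU's own memory before the GPU can touch it, and copied back afterwards.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Toy kernel standing in for "the other processor takes care of it":
// each GPU thread scales one coordinate of the "3D data".
__global__ void scaleVertices(float *verts, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i] *= factor;
}

int main() {
    const int n = 1 << 20;                      // 1M floats of vertex data
    float *hostVerts = new float[n];
    for (int i = 0; i < n; ++i) hostVerts[i] = 1.0f;

    // The explicit copy into the GPU's separate memory pool
    // (conceptually, this is the traffic that goes over AGP / PCI Express).
    float *devVerts;
    cudaMalloc((void **)&devVerts, n * sizeof(float));
    cudaMemcpy(devVerts, hostVerts, n * sizeof(float), cudaMemcpyHostToDevice);

    scaleVertices<<<(n + 255) / 256, 256>>>(devVerts, n, 2.0f);

    // And the copy back into CPU-visible memory when we need the result.
    cudaMemcpy(hostVerts, devVerts, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first value after GPU pass: %f\n", hostVerts[0]);

    cudaFree(devVerts);
    delete[] hostVerts;
    return 0;
}
```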
Back in the day, the speed of the AGP port mattered a great deal, and PCI Express gave another big boost.
I haven't heard a lot about AMD Fusion, but the idea could occur to any computer engineer: since the heavy calculations in a computer are either graphical or not, we split them across two dedicated chips.
But nowadays computing power is needed for both, especially in 3D games. To put it simply, early computers only dealt with text, but today they all handle at least 2D graphics, so why not design one chip that takes on the role of both a CPU and a GPU? Instead of having two memory blocks and having to copy all the textures and models into graphics memory when a game loads, the "CGPU" would have a single pool of GDDR3 rather than two separate memories, again cutting costs. A sketch of what that could look like for the programmer follows below.
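Here is a rough sketch of what I imagine the single-memory programming model could feel like, using CUDA's mapped (zero-copy) host memory purely as a stand-in: one allocation that both processors see, with no load-time copy. On today's discrete cards the GPU still reaches that buffer over the bus, so this only really pays off if CPU and GPU share the same physical memory, which is exactly the "CGPU" case; this is an analogy, not how Fusion is actually specified.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scaleVertices(float *verts, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i] *= factor;
}

int main() {
    // Ask the runtime to let kernels address host memory directly.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const int n = 1 << 20;
    float *verts;                     // one allocation, visible to both sides
    cudaHostAlloc((void **)&verts, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) verts[i] = 1.0f;

    // No cudaMemcpy anywhere: the GPU works on the same buffer the CPU filled.
    float *devView;
    cudaHostGetDevicePointer((void **)&devView, verts, 0);
    scaleVertices<<<(n + 255) / 256, 256>>>(devView, n, 2.0f);
    cudaDeviceSynchronize();

    printf("first value after GPU pass: %f\n", verts[0]);
    cudaFreeHost(verts);
    return 0;
}
```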
I still wonder how AMD Fusion will have to be used from a programmer's viewpoint, but I suspect the graphics library would be somewhat easier to deal with than DirectX or OpenGL, since we would be programming just one processor.
Is it right to think that this kind of dedicated hardware could also simply be faster for 3D games?
Pros: a single memory, no more AGP/PCIe transfers, simpler for the 3D programmer, cheaper to build. Cons: such hardware is complicated to design, and the research would be long and expensive because it requires both new hardware and new software standards.
What do you think? Is it the next step for computers, or just sci-fi?
EDIT: Since I got an appropriate answer, I'll leave the question as I wrote it and just add something to it:
Since all computers, and especially consoles, need graphics acceleration, since it takes a lot of transistors, and with tools like CUDA and OpenCL on the way, why has no manufacturer yet come up with a "gaming chip": some sort of processor with an integrated graphics card, but nothing like a low-end part; more like an all-in-one processing unit, with the GPU and the CPU on the same die but still clearly separated?
Since the PlayStation 3 has dedicated chips, and since games need extra power that desktops or servers don't, could this reduce costs and give a small performance boost?