Will CPUs ever get fast enough that we won't need graphics acceleration anymore?

By Steve Baker

Introduction.

Every now and again, someone will suggest that there is a trend towards a future generation of computers where the CPU will be so fast that we won't need 3D graphics hardware anymore.

I personally doubt this will happen within the lifetime of silicon chip technology. Maybe with nanotech, biological or quantum computing - but probably not even then.

This document was written early in 2002, when a 2GHz Pentium IV pretty much defined the state of the art in CPUs and the nVidia GeForce 4 Ti 4600 was the fastest graphics card in the consumer realm.

Some facts:

History

If you doubt all of this, look at the progress over the last 5 or 6 years. In late 1996, the Voodoo-1 had a fill rate of 50 Mpixels/sec. In 2002, the GeForce-4 has a fill rate of 4.8 billion (antialiased) pixels/sec - nearly 100 times faster. Over the same period, your 1996-vintage 233MHz CPU has scaled up to a 2GHz machine - a mere 10x speedup.
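The arithmetic is easy to check. Here is a trivial sketch in C, using the figures quoted above (the exact ratios obviously depend on which parts you choose to compare):

  #include <stdio.h>

  int main(void)
  {
      double voodoo1_fill  = 50.0e6;   /* pixels/sec, late 1996         */
      double geforce4_fill = 4.8e9;    /* antialiased pixels/sec, 2002  */
      double cpu_1996      = 233.0e6;  /* Hz, a 1996-vintage CPU        */
      double cpu_2002      = 2.0e9;    /* Hz, a 2GHz Pentium IV         */

      printf("Fill rate speedup: %.0fx\n", geforce4_fill / voodoo1_fill); /* ~96x  */
      printf("CPU clock speedup: %.1fx\n", cpu_2002 / cpu_1996);          /* ~8.6x */
      return 0;
  }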

The graphics cards also gained features. Over that same period, they added windowing, hardware T&L, antialiasing, multitexture, programmability and much else besides.

Meanwhile the CPUs have added just a modest amount of MMX/3DNow-type functionality... almost none of which is actually *used*, because our compilers don't know how to generate those new instructions when compiling general-purpose C/C++ code. There have been internal feature additions - things like branch prediction and speculative execution - however, those exist to compensate for the lack of RAM speed; they don't offer any actual features that the end user can see.
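To see why those SIMD instructions go unused, consider what a programmer has to do to reach them. The sketch below (assuming an x86 compiler that exposes Intel's SSE intrinsics via xmmintrin.h - the function names are just illustrative) puts a hand-written SSE loop next to the equivalent plain C loop. A compiler of this era turns the plain loop into ordinary scalar instructions; the SIMD units only get exercised when the programmer spells out the intrinsics (or assembly) by hand:

  #include <stdio.h>
  #include <xmmintrin.h>  /* Intel's SSE intrinsics */

  /* Plain C: a compiler of this era emits ordinary scalar FPU code here. */
  void add_scalar(const float *a, const float *b, float *out, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          out[i] = a[i] + b[i];
  }

  /* Hand-written SSE: four floats per instruction - but only because we
     wrote the intrinsics ourselves. */
  void add_sse(const float *a, const float *b, float *out, int n)
  {
      int i;
      for (i = 0; i + 4 <= n; i += 4)
      {
          __m128 va = _mm_loadu_ps(a + i);
          __m128 vb = _mm_loadu_ps(b + i);
          _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
      }
      for (; i < n; i++)       /* any leftover elements */
          out[i] = a[i] + b[i];
  }

  int main(void)
  {
      float a[8] = {1,2,3,4,5,6,7,8};
      float b[8] = {8,7,6,5,4,3,2,1};
      float out_scalar[8], out_sse[8];

      add_scalar(a, b, out_scalar, 8);
      add_sse(a, b, out_sse, 8);
      printf("%g %g\n", out_scalar[0], out_sse[7]);   /* prints: 9 9 */
      return 0;
  }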

CONCLUSION.

Whilst CPUs are getting faster at an amazing rate, there is no sign whatsoever that CPUs are "catching up" with graphics cards - and no logical reason why they ever will.