NVIDIA's G80 to make use of 'Multi-function Interpolators'?
Beyond 3D forum member Uttar has made a most excellent find: documentation which seems quite likely to describe a hardware feature of NVIDIA's next-generation architecture, codenamed G80. You can see the thread where this is discussed over at Beyond 3D here.
In short, the documentation cited in the thread in question refers to the creation of 'Multi-function interpolators' - To understand what this may mean for NVIDIA's G80 architecture, we need to cover a few points regarding particular capabilities of current graphics hardware.
Firstly, it should be noted that current hardware already contains interpolation units (attribute interpolation units, to be precise) in each pixel shader unit - This functional unit sits before the ALUs (Arithmetic Logic Units) in the rendering pipeline, and is used to calculate attribute values on a per-pixel basis. These values can include color, depth, texture co-ordinates and the like. The values in question are calculated via interpolation from the attributes stored at a triangle's vertices (its corners, as output by the vertex shader) - in other words, the known per-vertex data is used to derive attribute values for each of the pixels the triangle covers (For those of you not of a mathematical bent, interpolation is basically a method of creating new data points from a set of already known data).
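To make the idea concrete, here is a minimal sketch of per-pixel attribute interpolation using barycentric weights. This is purely illustrative - the function names are our own, and real hardware works with plane equations and fixed-point arithmetic rather than floating-point Python - but the principle is the same: known values at the three vertices are blended into a value for each covered pixel.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p within triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    w2 = 1.0 - w0 - w1
    return w0, w1, w2

def interpolate_attribute(p, tri, values):
    """Interpolate one scalar attribute (e.g. a colour channel),
    defined at the three vertices of tri, to the pixel at p."""
    w0, w1, w2 = barycentric_weights(p, *tri)
    v0, v1, v2 = values
    # Each pixel's value is a weighted blend of the vertex values.
    return w0 * v0 + w1 * v1 + w2 * v2
```

For example, a pixel at the centre of a triangle whose vertices carry the values 0, 1 and 2 receives the average of the three, and a pixel sitting exactly on a vertex receives that vertex's value unchanged.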
Also available in current graphics hardware's pixel shader units is another functional unit which deals with supporting 'higher order' functions - That is, more advanced mathematical functions such as square root, reciprocal square root, sine, cosine, and so on.
Of course, both of these functional units take up die space and transistors (and thus consume power as well), which is where the philosophy of the 'multi-function interpolator' comes in. In the 3D rendering pipeline, there is rarely any need to fully utilise both the attribute interpolation units and the special function units at the same time, meaning that at any one moment during rendering, one or the other is left either underutilised or idle - basically a waste of both power and die space. Thus, NVIDIA's plan circa G80 seems to be to combine these two functional units into a single, shared unit which can perform the necessary attribute interpolation as well as handle the evaluation of higher order functions. This has the obvious benefit of reducing both die size and transistor count, without sacrificing much in the way of performance.

Indeed, it has been pointed out that in NVIDIA's current architectures the interpolation units are a major bottleneck when more attributes require interpolation than the units can handle at once. It seems sensible to assume that the move to 'multi-function interpolators' will also see NVIDIA working to either ease this bottleneck or remove it entirely.
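The reason the two workloads can share hardware at all is that both boil down to evaluating a low-degree polynomial: attribute interpolation amounts to evaluating a plane equation (linear in the pixel's x and y), while special functions such as sine or reciprocal square root are typically approximated per-segment by a quadratic polynomial built from table-stored coefficients. The sketch below illustrates this shared shape - the function names and coefficients are our own illustrative choices, not NVIDIA's design, which uses precomputed lookup tables and fixed-point datapaths.

```python
def eval_plane(a, b, c, x, y):
    """Attribute interpolation: the plane equation a*x + b*y + c
    yields the interpolated attribute value at pixel (x, y)."""
    return a * x + b * y + c

def eval_quadratic(c2, c1, c0, x):
    """Special function approximation: a piecewise quadratic
    c2*x^2 + c1*x + c0 approximates e.g. sin or rsqrt on one
    segment of the input range (evaluated in Horner form, i.e.
    the same multiply-add structure as the plane equation)."""
    return (c2 * x + c1) * x + c0
```

Both routines are just a couple of multiply-adds over different operands, which is why one shared unit can service either workload depending on what the pipeline needs at that moment - for instance, with coefficients (-0.5, 0, 1), the quadratic gives a serviceable approximation of cosine near zero.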
The big question from here is whether NVIDIA will simply combine these functional units to save on die size and power consumption and leave it at that, or whether they will then use the space and power they have 'won back' to add in additional pixel shader units to boost performance yet further. Of course, we won't be able to comment on this further until we see how G80 has been designed in its entirety. As always, in the world of GPUs, watch this space...
- A High-Performance Area-Efficient Multifunction Interpolator (Stuart F. Oberman and Michael Y. Siu, NVIDIA Corporation) - PowerPoint presentation, PDF format
- A High-Performance Area-Efficient Multifunction Interpolator (Stuart F. Oberman and Michael Y. Siu, NVIDIA Corporation) - Detailed PDF