MOBILE GPU FLOATING POINT ACCURACY VARIANCES

April 4, 2013

When coding GPU shaders for multiple target platforms it is important to consider the differences between the hardware implementations and their impact. This is especially true when creating Natural User Interfaces (NUIs), where shaders are crucial for visual enhancement and experience augmentation, and their output cannot vary between OS platforms or devices.

One of the key differentiators between mobile GPU families is the capability of the computational units. These differences normally show up in how complex shader code is handled, or as visual artifacts created by the rendering schemes, especially on tile-based systems. They can sometimes be overcome using simpler shader algorithms or creative approaches to the geometry constructs being used.

However, the more significant contributor to the quality of the shader output is the accuracy of the floating point calculations within the GPU. This contrasts greatly with CPU computational accuracy, and variances are common between the different mobile GPU implementations from ARM Mali, Imagination Technologies, Vivante and others.

Being able to compare the accuracy of various GPU models allows us to prepare for the lowest-accuracy units, ensuring the shader output is still acceptable, while optimizing for incredible visual effects on the better-performing hardware. Our uSwish NUI platform makes direct use of this information to ensure a consistent look and feel, a key differentiator in the user interface market.

In a perfect world only one reference implementation would be needed, but this is simply not viable with today’s hardware. At worst, we may need several implementations, each targeting a different accuracy level, to ensure a common visual effect and consistent user experience. If calculations fall outside the usable range of the floating point units, we must account for the resulting errors to prevent undesirable effects.
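As a rough illustration of how such accuracy-level targeting might be driven at run time, the sketch below queries the precision the driver reports for highp floats in the fragment shader using glGetShaderPrecisionFormat from OpenGL ES 2.0. The 23-bit threshold and the variant file names are assumptions made for illustration only, not part of uSwish.

#include <GLES2/gl2.h>

/* Hypothetical sketch: pick a fragment shader variant based on the precision
 * the driver reports for highp floats.  The 23-bit threshold and the file
 * names are illustrative assumptions only. */
const char *choose_fragment_shader( void )
{
    GLint range[2];   /* log2 of the smallest and largest representable magnitudes */
    GLint precision;  /* log2 of the relative precision (roughly the mantissa bits) */

    glGetShaderPrecisionFormat( GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision );

    /* A full IEEE 754 single-precision float reports 23 bits; anything less
     * means the high-precision effect may show visible banding. */
    if( precision >= 23 )
        return "fade_highp.frag";   /* assumed high-accuracy variant */

    return "fade_reduced.frag";     /* assumed fallback variant */
}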

Let’s compare some current mobile devices using a simple fragment shader:

precision highp float;
uniform vec2 resolution;

void main( void )
{
    // Horizontal gradient in [0,1]: the value whose precision we are testing.
    float x = ( 1.0 - ( gl_FragCoord.x / resolution.x ));

    // Split the screen into 26 bands, one per power of two.
    float y  = ( gl_FragCoord.y / resolution.y ) * 26.0;
    float yp = pow( 2.0, floor(y) );

    // Adding a power of two before taking fract() should leave the gradient
    // untouched; it breaks down once the mantissa runs out of bits.
    float fade = fract( yp + fract(x) );

    if( fract(y) < 0.9 )
        gl_FragColor = vec4( vec3( fade ), 1.0 );
    else
        gl_FragColor = vec4( 0.0 );   // thin black separator between bands
}

This example calculates a varying fade level, from bright white down to black, across 26 bands on the screen. Each band adds a larger power of two to the gradient before taking the fractional part, so each successive band needs one more bit of mantissa to keep the gradient intact. The further down the screen the smooth blended line continues, the more precision we have in the floating point unit.
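To see why the gradient eventually collapses, the same arithmetic can be run on the CPU with 32-bit floats. This is only an illustrative sketch of the precision ladder, not part of the original test; the sample gradient value is arbitrary.

#include <math.h>
#include <stdio.h>

/* Illustrative sketch: emulate fract( 2^band + fract(x) ) with 32-bit floats.
 * As the power of two grows, fewer mantissa bits remain for the fractional
 * part, so the gradient value is progressively rounded away. */
static float fractf( float v ) { return v - floorf( v ); }

int main( void )
{
    float x = 0.6180339887f;   /* arbitrary gradient sample in [0,1) */

    for( int band = 0; band < 26; ++band )
    {
        float yp   = powf( 2.0f, (float)band );
        float fade = fractf( yp + fractf( x ) );
        printf( "band %2d: fade = %f\n", band, fade );
    }
    return 0;
}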

For reference we will use a desktop rendering of the shader; for our purposes, this sample from a laptop nVidia GeForce GT 630M is more than enough:

[Image: YOUi Labs Shader Comparison]

After comparing the output across different GPU chipsets we immediately see the differences in performance and in the usable range of the floating point units. It is important to note that this is not related to overall device performance, or even to GPU implementation differences between manufacturers; it is simply the computational range of the GPU itself. Most comparisons are done through tests of pure performance: triangles per second or texel fill rate. Although these numbers are valuable, they do not tell the full story of the GPU’s true capability. When applied to Natural User Interfaces these computational differences are even more important since, unlike games, there is no tolerance for visual artifacts.

To see the effect of these computational differences, or to try the shaders on your own device, check out the following:

1) YouTube video: The Importance of Shaders, showing the result of these calculation errors

2) YOUi Labs Shader Effect Test, an Android application for viewing the shaders for comparison on a device
