(Repost) High Dynamic Range Rendering
Introduction

When Paul Debevec came up with HDR rendering in 1998, his revolutionary idea could only be implemented in non-real-time. With the advent of powerful graphics hardware over the past few years, thanks to Moore's law of course, the process can now be implemented on graphics hardware in real time. Apart from a few demos (including ATI's cool "RNL") and a couple of articles, there is hardly any information on HDR in real time. This tutorial is just an introduction that will help you get your first HDR app running. I won't show you how to make scenes with shiny little spheres on marble tables. What I will teach you is how to load a .hdr file and use it to texture a simple quad, which will then be rendered with "natural light". This tutorial works only on cards that support floating point textures in DX and pixel shader 2.0, which rules out older hardware.

So What is "High Dynamic Range"?

High Dynamic Range is just a neat term for "storing color values much greater than the usual 0.0f to 1.0f used in graphics". Unlike conventional colors used in graphics, where color values cannot exceed 1.0f, the color values here can be anything you like, as high as you like. The advantage of HDR is that your scenes will look more realistic. The process of rendering a scene in HDR is shown below:

1. Render the scene into a floating point render target.
2. Downsample the result to 1/4 of its size, suppressing the LDR (low dynamic range) values.
3. Blur the downsampled image with a bloom filter so that bright areas glow.
4. Composite the blurred image with the original and tone map the result down to a displayable range.

A minimal code skeleton of one such pass is shown below.
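Every step in this pipeline follows the same basic D3D9 pattern: bind a render target, feed it the result of the previous pass as a texture, and draw a screen-aligned quad with the appropriate effect. The sketch below is only a rough illustration of that pattern; the helper drawQuad() and the parameter name "SourceTexture" are hypothetical and not taken from the article's source.

#include <d3d9.h>
#include <d3dx9.h>

// Hypothetical helper that draws a screen-aligned textured quad using the
// app's own vertex buffers.
extern void drawQuad( IDirect3DDevice9* device );

// One generic post-processing pass: downsample, blur and tone mapping all
// reuse this structure with different effects and targets. Assumes the
// caller has already called BeginScene().
void runPass( IDirect3DDevice9*  device,
              ID3DXEffect*       effect,   // shader for this pass
              IDirect3DSurface9* target,   // render target the pass writes to
              IDirect3DTexture9* source )  // output of the previous pass
{
    UINT numPasses = 0;

    device->SetRenderTarget( 0, target );
    device->Clear( 0, NULL, D3DCLEAR_TARGET, 0, 1.0f, 0 );

    // "SourceTexture" is an illustrative effect parameter name.
    effect->SetTexture( "SourceTexture", source );
    effect->Begin( &numPasses, 0 );
    effect->BeginPass( 0 );
    drawQuad( device );
    effect->EndPass();
    effect->End();
}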
We need to suppress LDR values so that we don't blur those parts of the image. A bloom filter simply bleeds color from one pixel to its neighboring pixels. We use a Gaussian filter in this case, but you can use any bloom filter.

What are We Waiting For?

Without further ado, let's begin programming our first HDR app. Here I assume that you're familiar with DirectX and HLSL and know how to open a window and start a rendering loop.

App Variables

If you open up the source code (you can find the link at the end of the article), you'll see a lot of global variables. These are used to hold the textures and surfaces we render and post-process the scene with. You'll also find a couple of effect pointers, which are used to create and handle the HLSL shaders. Apart from these variables, there are a couple of app-specific variables like the window handle, vertex structs for the quads, and so on. The texture and surface pointers serve as our render targets and textures (explained below). The vertex format used for the quads is:

// The two vertex structs.
#define D3DFVF_QUADVERTEX ( D3DFVF_XYZ | D3DFVF_TEX1 )

Radiance (.hdr) Format

The Radiance format is one of the formats that allow you to store floating point images. Paul Debevec has a couple of light probes (just a fancy name for high dynamic range images) on his website. As I said earlier, in this tutorial I show you how to load .hdr files and use them as textures. Since they're in a binary format, to make reading them simpler, I've provided a simple "utility" which I found on the net. It contains some code (rgbe.h and rgbe.cpp) to simplify reading .hdr files.

The Actual Process

We start off by initializing D3D, creating the device, and so on. All this is done in the "init()" function. Now, we need to create 3 vertex arrays. The first, called "g_QuadVertices", is used to render the 2D textured quad. The second, called "g_HalfVertices", is used to render the original image into the "downsample render target quad", the downsampled render target into the "blur target quad", and so on. I chose to call it "g_HalfVertices" because this quad covers 1/4 of the screen (1/2 width and 1/2 height). The final one, called "g_FullVertices", is used for the "tonemapped render target quad". Then we create the corresponding vertex buffers and copy the vertices into them.

Next, we have to load our light probe. To do this, we simply read in the array of floats which are actually the RGB components of every pixel in the light probe. Now we need to copy these values to a floating point render target. But to do that, we also need the alpha component, since the floating point render target needs 4 components (RGBA). Also, since we're using the 64-bit surface format (D3DFMT_A16B16G16R16F), we need to convert the 32-bit floats into 16-bit floats. To do this, we simply use "D3DXFLOAT16" instead of "float". A short sketch of this loading step is shown at the end of this section.

// Read in the HDR light probe.

Then, we initialize the render targets and create the effects and shaders.

void initRTS( void )

The "setupEffectParams()" function contains code to set up the shader parameters and is called before rendering each "effect".

void setEffectParams( void )

The rendering process is very simple.
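To make the loading step concrete, here is a rough sketch of reading a light probe with the rgbe utility and filling a 64-bit floating point texture. The RGBE_ReadHeader and RGBE_ReadPixels_RLE calls are assumed to match the rgbe.h shipped with the source (check the exact signatures there), and for brevity the sketch fills a lockable managed-pool texture rather than the floating point render target used by the article's code.

#include <cstdio>
#include <vector>
#include <d3d9.h>
#include <d3dx9.h>
#include "rgbe.h"   // reader utility mentioned in the article

IDirect3DTexture9* loadHDRTexture( IDirect3DDevice9* device, const char* path )
{
    FILE* fp = fopen( path, "rb" );
    if( !fp )
        return NULL;

    // Assumed rgbe signatures: the header gives the image size, then the
    // RLE reader fills 3 floats (RGB) per pixel.
    int width = 0, height = 0;
    RGBE_ReadHeader( fp, &width, &height, NULL );

    std::vector<float> rgb( 3 * width * height );
    RGBE_ReadPixels_RLE( fp, &rgb[0], width, height );
    fclose( fp );

    // 64-bit floating point texture: four 16-bit float channels (RGBA).
    IDirect3DTexture9* tex = NULL;
    if( FAILED( device->CreateTexture( width, height, 1, 0,
                                       D3DFMT_A16B16G16R16F, D3DPOOL_MANAGED,
                                       &tex, NULL ) ) )
        return NULL;

    D3DLOCKED_RECT lr;
    tex->LockRect( 0, &lr, NULL, 0 );
    for( int y = 0; y < height; ++y )
    {
        D3DXFLOAT16* dst = (D3DXFLOAT16*)( (BYTE*)lr.pBits + y * lr.Pitch );
        const float* src = &rgb[ 3 * y * width ];
        for( int x = 0; x < width; ++x )
        {
            // Convert the 32-bit RGB floats to 16-bit floats; the unused
            // alpha channel is set to 1.
            D3DXFloat32To16Array( &dst[ 4 * x ], &src[ 3 * x ], 3 );
            dst[ 4 * x + 3 ] = D3DXFLOAT16( 1.0f );
        }
    }
    tex->UnlockRect( 0 );
    return tex;
}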
So, now we get to the interesting part: how things actually work. We begin by rendering the textured "quad vertices" into a floating point render target. Now, we need to downsample this to 1/4th its size, suppressing the LDR values so that we don't blur the entire scene. To do this, we simply use the original render target as a texture and render it onto another render target 1/4th the "original" size. To suppress the LDR values, the function "SuppressLDR" below is used. Kd is the material's diffuse color.

Downsample.fx

Next, we need to blur the downsampled image in order to "bleed" colors from the bright pixels into neighboring pixels. To do this, as mentioned earlier, we use a separable Gaussian filter. In the first pass, we blur pixels along the x-axis; we then take this image and blur it along the y-axis. Pretty simple, huh? The code fragment below is from BlurX.fx.

BlurX.fx

We just create a loop that adds weighted neighboring pixels. We sample 8 pixels from either side. BlurOffset is actually the per-pixel (or texel) width, and we multiply it by the iteration number "i" to get the coordinates of the "i"th pixel. Blurring along the y-axis is the same, only we provide the per-pixel height this time.

Now comes the most important part: compositing the original and blurred images and tone mapping them to get a displayable image. The code sample below is from ToneMapping.fx, which shows how to combine the two images.

ToneMapping.fx

First, we simply lerp between the original and blurred colors. Then, we calculate the vignette (which is computed from the square of the distance of the current pixel from the center of the screen) and multiply the lerped color by the vignette raised to the fourth power. Finally, we multiply by the exposure level, which determines how "exposed" your final image should be, just like you can set the exposure on a camera, and add a gamma correction. A sketch of all three shaders is given at the end of the article.

Shown above are images of the light probe at different exposures:
Left-Top - Under exposed (0.5)
Right-Top - Properly exposed (2.5)
Left-Bottom - Over exposed (10.0)
Right-Bottom - Extremely over exposed (20.0)

To properly observe "glow" effects, you'll need either the grace-cube texture or the grace light probe from www.debevec.org.

Voila! Your first HDR app is done. This may not be the most efficient way to do things, since I only wanted to show the basics, so I leave the optimizing to you. Masaki Kawase has implemented a different method for post-processing, and after implementing it I found that it actually gives better performance, albeit not the same quality. There are other types of glows, like star, afterimage, and so on, which I didn't discuss. You could try to implement them as well. Now let's see if your game comes out with HDR rendering before Half-Life 2 does!

If you have any doubts, questions or suggestions, you can mail me at anidex@yahoo.com. Here is the source code: RNL.zip.

Note from the reposter: this code has a small problem. The depth buffer is created incorrectly; make sure the window client area you create is 1024x768.
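To make the three passes concrete, here is a minimal ps_2_0 sketch of the shaders described above: the downsample with SuppressLDR, the separable Gaussian blur along x, and the final composite and tone map. The sampler and parameter names, the Gaussian weights, the blend factor, the vignette formula and the gamma value are illustrative assumptions rather than the exact contents of Downsample.fx, BlurX.fx and ToneMapping.fx, and the technique blocks are omitted.

texture SceneTexture;      // full-size HDR scene
texture DownsampleTexture; // 1/4-size, LDR-suppressed image
texture BlurredTexture;    // result of the x and y blur passes

sampler SceneSampler      = sampler_state { Texture = <SceneTexture>;      };
sampler DownsampleSampler = sampler_state { Texture = <DownsampleTexture>; };
sampler BlurredSampler    = sampler_state { Texture = <BlurredTexture>;    };

float4 Kd;          // material diffuse color
float  BlurOffset;  // width (or height) of one texel, set by the app
float  Weights[9];  // Gaussian weights, set by the app
float  Exposure;    // e.g. 0.5 = under exposed, 2.5 = properly exposed

// Downsample.fx: keep only pixels brighter than 1.0 so the LDR parts of
// the scene do not get bloomed.
float4 SuppressLDR( float4 c )
{
    return ( c.r > 1.0f || c.g > 1.0f || c.b > 1.0f ) ? c : float4( 0, 0, 0, 0 );
}

float4 DownsamplePS( float2 tc : TEXCOORD0 ) : COLOR
{
    return SuppressLDR( Kd * tex2D( SceneSampler, tc ) );
}

// BlurX.fx: 17-tap Gaussian blur along x (8 texels on either side of the
// center). BlurY is identical with the offset applied to y instead.
float4 BlurXPS( float2 tc : TEXCOORD0 ) : COLOR
{
    float4 sum = tex2D( DownsampleSampler, tc ) * Weights[0];
    for( int i = 1; i <= 8; i++ )
    {
        float2 offset = float2( BlurOffset * i, 0.0f );
        sum += tex2D( DownsampleSampler, tc + offset ) * Weights[i];
        sum += tex2D( DownsampleSampler, tc - offset ) * Weights[i];
    }
    return sum;
}

// ToneMapping.fx: blend the sharp scene with the bloom, apply a vignette,
// exposure and a rough gamma correction.
float4 ToneMapPS( float2 tc : TEXCOORD0 ) : COLOR
{
    float4 original = tex2D( SceneSampler,   tc );
    float4 blurred  = tex2D( BlurredSampler, tc );

    float4 color = lerp( original, blurred, 0.4f );  // blend factor is illustrative

    // Vignette: darken toward the edges using the squared distance of the
    // pixel from the screen center, raised to the fourth power.
    float2 d = tc - 0.5f;
    color *= pow( 1.0f - dot( d, d ), 4.0f );

    color *= Exposure;           // camera-style exposure
    return pow( color, 0.55f );  // gamma correction
}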