Rendering pipeline
- Application Stage
  - collision detection
- Geometry Stage
  - ModelView transform,
  - vertex shading,
  - projection,
  - clipping,
  - screen mapping
- Rasterizer Stage
  - triangle setup
    In this stage the differentials and other data for the triangle's surface are
    computed. This data is used for scan conversion, as well as for interpolation
    of the various shading data produced by the geometry stage. This process
    is performed by fixed-operation hardware dedicated to this task.

  - triangle traversal
    Here is where each pixel that has its center (or a sample) covered by the
    triangle is checked and a fragment generated for the part of the pixel that
    overlaps the triangle. Finding which samples or pixels are inside a triangle
    is often called triangle traversal or scan conversion.

  - pixel shading
  - merging
    - combine the fragment color produced by the shading stage with the
      color currently stored in the color buffer (a per-fragment sketch of this
      merge logic follows this list).
    - This stage is also responsible for resolving visibility, typically via the Z-buffer.
    - An optional alpha test can be performed on an incoming fragment
      before the depth test is performed.

    - The alpha value of the fragment is compared by some specified
      test (equals, greater than, etc.) to a reference value.
      If the fragment fails to pass the test, it is removed from further processing.
      This test is typically used to ensure that fully transparent fragments
      do not affect the Z-buffer.

    - stencil buffer
      The buffer's contents can be used to control rendering into the color buffer and Z-buffer.
      For example, a shape such as a circle can first be rendered into the stencil buffer; the
      stencil test then allows subsequent primitives to be rendered into the color buffer only
      where that shape is present.

    - frame buffer
      Generally, the frame buffer consists of all the buffers on a system: the color buffer and
      Z-buffer, sometimes together with an accumulation buffer.
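
Per-fragment merge sketch referenced above: a minimal C++ illustration of one possible ordering of the tests. The Pixel/Fragment structs, the reference values, and the "replace, no blending" write are assumptions for illustration, not a specific API.

    #include <cstdint>

    // Hypothetical per-pixel storage and incoming-fragment types (illustrative only).
    struct Pixel    { float color[3]; float depth; uint8_t stencil; };
    struct Fragment { float color[3]; float alpha; float depth; };

    // One possible ordering of the merge-stage tests for a single fragment:
    // alpha test -> stencil test -> depth test -> write color and depth.
    void MergeFragment(Pixel& dst, const Fragment& f,
                       float alphaRef, uint8_t stencilRef)
    {
        if (f.alpha <= alphaRef)   return;  // alpha test: drop fully transparent fragments
        if (dst.stencil != stencilRef) return;  // stencil test: draw only where the mask matches
        if (f.depth >= dst.depth)  return;  // Z-buffer test (smaller depth = closer)

        for (int i = 0; i < 3; ++i) dst.color[i] = f.color[i];  // replace (no blending)
        dst.depth = f.depth;                 // update Z-buffer
    }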


2.5 Through the Pipeline
Application
For each frame to be rendered, the application stage
feeds the camera position, lighting, and primitives of the model to the next
major stage in the pipeline—the geometry stage.

Geometry
The view transform was computed in the application stage, along with a
model matrix for each object that specifies its location and orientation. For
each object passed to the geometry stage, these two matrices are usually
multiplied together into a single matrix. In the geometry stage the vertices
and normals of the object are transformed with this concatenated matrix,
putting the object into eye space. Then shading at the vertices is computed,
using material and light source properties. Projection is then performed,
transforming the object into a unit cube's space that represents what the
eye sees. All primitives outside the cube are discarded. All primitives
intersecting this unit cube are clipped against the cube in order to obtain
a set of primitives that lies entirely inside the unit cube. The vertices then
are mapped into the window on the screen. After all these per-polygon
operations have been performed, the resulting data is passed on to the
rasterizer—the final major stage in the pipeline.
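
As a concrete illustration of the concatenated transform described above, a minimal sketch using GLM (the library choice is mine, not the book's):

    #include <glm/glm.hpp>

    // Concatenate the per-object model matrix with the view matrix once,
    // then use the result to transform every vertex of the object into eye space.
    glm::vec3 ToEyeSpace(const glm::vec3& vertex,
                         const glm::mat4& model, const glm::mat4& view)
    {
        glm::mat4 modelView = view * model;                     // computed once per object
        return glm::vec3(modelView * glm::vec4(vertex, 1.0f));  // applied to each vertex
    }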

Rasterizer
In this stage, all primitives are rasterized, i.e., converted into pixels in the
window. Each visible line and triangle in each object enters the rasterizer in
screen space, ready to convert. Those triangles that have been associated
with a texture are rendered with that texture (image) applied to them.
Visibility is resolved via the Z-buffer algorithm, along with optional alpha
and stencil tests. Each object is processed in turn, and the final image is
then displayed on the screen.


CH3
GPU rendering pipeline
VS->GS->Clipping->ScreenMapping->TriangleSetup->TriangleTraversal->PixelShader->Merger

The geometry shader is an optional, fully programmable stage that operates on the vertices of a primitive (point, line or triangle).

3.2 The Programmable Shader Stage
Shaders are programmed using C-like shading languages such as HLSL, Cg,
and GLSL. These are compiled to a machine-independent assembly  
language, also called the intermediate language (IL). Previous shader models
allowed programming directly in the assembly language, but as of DirectX
10, programs in this language are visible as debug output only [123]. This
assembly language is converted to the actual machine language in a  
separate step, usually in the drivers. This arrangement allows compatibility
across different hardware implementations. This assembly language can
be seen as defining a virtual machine, which is targeted by the shading
language compiler.

This virtual machine is a processor with various types of registers and
data sources, programmed with a set of instructions.


3.6 Pixel Shader
The depth value generated in the rasterization
stage can also be modified by the pixel shader. The stencil buffer value is
not modifiable, but rather is passed through to the merge stage.


A pass typically consists of a vertex and pixel (and geometry) shader,
along with any state settings needed for the pass. A technique is a set of
one or more passes to produce the desired effect.
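
To make the pass/technique relationship concrete, a minimal C++ data-structure sketch; the struct names and fields are illustrative assumptions, not an actual effect-framework API:

    #include <string>
    #include <vector>

    // Illustrative only: one render pass bundles shaders with the state it needs.
    struct Pass {
        std::string vertexShader;    // compiled shader handles in a real engine
        std::string geometryShader;  // optional
        std::string pixelShader;
        bool        depthTest  = true;
        bool        alphaBlend = false;
    };

    // A technique is simply an ordered set of passes that together produce an effect.
    struct Technique {
        std::string       name;
        std::vector<Pass> passes;
    };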

Rotation from One Vector to Another ---P79

CH5
The emission of a directional light source can be quantified by measuring
power through a unit area surface perpendicular to l.
This quantity, called irradiance, is equivalent to the sum of energies of the  
photons passing through the surface in one second.
E is used in equations for irradiance, most commonly for irradiance perpendicular to n.
We will use El for irradiance perpendicular to l.

The ratio between exitance and irradiance can differ for light of
different colors, so it is represented as an RGB vector or color, commonly
called the surface color c.

Irradiance (E): the density of light flow per surface area, from all directions.
Radiance (L): the density of light flow per surface area and per incoming direction.
Radiance can be thought of as the measure of the brightness and color of a single ray of light.
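
A small sketch of the relationship E = El * cos(theta) between irradiance measured perpendicular to the light (El) and irradiance arriving at a surface with normal n; the clamp to zero for surfaces facing away from the light is the usual convention:

    #include <algorithm>
    #include <glm/glm.hpp>

    // Irradiance at a surface from a directional light: E = El * cos(theta),
    // where theta is the angle between the unit surface normal n and the unit
    // light vector l. Surfaces facing away receive nothing, hence the clamp.
    float Irradiance(float El, const glm::vec3& n, const glm::vec3& l)
    {
        return El * std::max(glm::dot(n, l), 0.0f);
    }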

5.5 Shading
Shading is the process of using an equation to compute the outgoing radiance Lo along the view ray, v,
based on material properties and light sources.

h = (l+v)/|l+v|
It is so called because it is halfway between the view vector v and the light vector l, and h is a unit-length vector.
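
Computed directly, for example (GLM for the vector math; the specular exponent m is just an assumed material parameter):

    #include <algorithm>
    #include <cmath>
    #include <glm/glm.hpp>

    // Half vector between the (unit) light and view vectors, and the Blinn
    // specular highlight term cos^m(theta_h) built from it.
    float BlinnSpecular(const glm::vec3& n, const glm::vec3& l,
                        const glm::vec3& v, float m)
    {
        glm::vec3 h = glm::normalize(l + v);                  // h = (l+v)/|l+v|
        return std::pow(std::max(glm::dot(n, h), 0.0f), m);   // highlight term
    }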

Gouraud shading
Per-vertex evaluation followed by linear interpolation of the result is commonly called Gouraud shading

Phong shading
At the other extreme from Gouraud shading, we have full per-pixel  
evaluation of the shading equation. This is often called Phong shading.
In this implementation, the vertex shader writes the world-space normals
and positions to interpolated values, which the pixel shader passes to
Shade(). The return value is written to the output. Note that even if
the surface normal is scaled to length 1 in the vertex shader, interpolation
can change its length, so it may be necessary to do so again in the pixel
shader.
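
In code form, the per-pixel step described above might look like the following C++ sketch standing in for the pixel shader; Shade() is the book's placeholder for the full shading equation, and the Lambert-only body given here is my assumption for illustration:

    #include <algorithm>
    #include <glm/glm.hpp>

    // Placeholder shading equation: one white directional light, diffuse only.
    glm::vec3 Shade(const glm::vec3& n, const glm::vec3& l, const glm::vec3& diffuse)
    {
        return diffuse * std::max(glm::dot(n, l), 0.0f);
    }

    // "Pixel shader": the interpolated world-space normal may no longer be unit
    // length, so it is renormalized before evaluating the shading equation.
    glm::vec3 ShadePixel(const glm::vec3& interpolatedNormal,
                         const glm::vec3& l, const glm::vec3& diffuse)
    {
        glm::vec3 n = glm::normalize(interpolatedNormal);
        return Shade(n, l, diffuse);
    }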


One approach to avoiding aliasing is to distribute
the samples randomly over the pixel, with a different sampling pattern
at each pixel. This is called stochastic sampling, and the reason it works
better is that the randomization tends to replace repetitive aliasing effects
with noise, to which the human visual system is much more forgiving.
The most common kind of stochastic sampling is jittering, a form of stratified sampling.
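
A minimal sketch of jittering for an n x n sample grid within one pixel: stratify the pixel into cells, then place one uniformly random sample in each cell (function and type names are my own):

    #include <random>
    #include <vector>

    struct Sample { float x, y; };  // position within the pixel, in [0,1)^2

    // Jittering: divide the pixel into n*n equal cells and pick one random
    // position inside each cell, so samples stay well distributed while the
    // pattern differs from pixel to pixel.
    std::vector<Sample> JitteredSamples(int n, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        std::vector<Sample> samples;
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i)
                samples.push_back({ (i + uni(rng)) / n, (j + uni(rng)) / n });
        return samples;
    }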

A-buffer (anti-aliased, area-averaged, accumulation buffer)
Motivation: A limitation of the Z-buffer is that only one object is stored per pixel.
If a number of transparent objects overlap the same pixel, the Z-buffer
alone cannot hold and later resolve the effect of all the visible objects.
Principle:
The A-buffer has "deep pixels" that
store a number of fragments that are resolved to a single pixel color after
all objects are rendered.
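
A minimal sketch of the "deep pixel" idea: each pixel keeps a list of fragments, which are sorted and composited once all objects have been rendered. The back-to-front "over" resolve used here is an assumption for illustration; real A-buffer implementations also store coverage masks.

    #include <algorithm>
    #include <vector>
    #include <glm/glm.hpp>

    struct DeepFragment { float depth; float alpha; glm::vec3 color; };

    // A "deep pixel": all fragments that landed on this pixel are kept.
    using DeepPixel = std::vector<DeepFragment>;

    // Resolve once after all objects are rendered: sort back to front,
    // then composite with the standard "over" operator.
    glm::vec3 Resolve(DeepPixel pixel, const glm::vec3& background)
    {
        std::sort(pixel.begin(), pixel.end(),
                  [](const DeepFragment& a, const DeepFragment& b) { return a.depth > b.depth; });
        glm::vec3 result = background;
        for (const DeepFragment& f : pixel)
            result = f.alpha * f.color + (1.0f - f.alpha) * result;
        return result;
    }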

F-buffer
K-buffer
That said, DirectX 10 and its successors do not allow
concurrent read/write from the same buffer, so these approaches cannot be
used with newer APIs.

6 Texturing
6.1 The Texturing Pipeline
 
          |object space location
         \|/
[projector function]
          |
          |parameter space coordinates
         \|/
[corresponder function(s)] (e.g. translate/rotate the texture; wrapping mode)
          |
          |texture space location
         \|/
[obtain value]
 - image texture: retrieve texel info from the image;
   image sampling and filtering methods are applied to the values read from each texture.

 - procedural texture: compute a function
          |
          |texture value
         \|/
[value transform function] (e.g. make the color brighter)
          |
          |transformed texture value
         \|/
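
The same pipeline expressed as a small C++ sketch. Each function is a placeholder for the corresponding stage in the diagram; the planar projector, repeat wrap mode, procedural checker, and brightening transform are arbitrary example choices, not from the book.

    #include <glm/glm.hpp>

    // Example projector function: planar projection onto the xy plane.
    glm::vec2 Projector(const glm::vec3& objectSpacePos)
    {
        return glm::vec2(objectSpacePos.x, objectSpacePos.y);
    }

    // Example corresponder function: "repeat" wrapping mode.
    glm::vec2 Corresponder(const glm::vec2& uv)
    {
        return glm::fract(uv);   // keep only the fractional part, tiling the texture
    }

    // Obtain value: here a procedural checker instead of an image texture lookup.
    glm::vec3 ObtainValue(const glm::vec2& uv)
    {
        int check = (int(uv.x * 8.0f) + int(uv.y * 8.0f)) & 1;
        return check ? glm::vec3(1.0f) : glm::vec3(0.1f);
    }

    // Example value transform function: brighten the color.
    glm::vec3 ValueTransform(const glm::vec3& c) { return c * 1.5f; }

    glm::vec3 SampleTexture(const glm::vec3& objectSpacePos)
    {
        return ValueTransform(ObtainValue(Corresponder(Projector(objectSpacePos))));
    }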


The image sampling and filtering methods discussed in this chapter are
applied to the values read from each texture. However, the desired result
is to prevent aliasing in the final rendered image, which in theory requires
sampling and filtering the final pixel colors. The distinction here is between
filtering the inputs to the shading equation, or its output. As long as the
inputs and output are linearly related (which is true for inputs such as
specular and diffuse colors), then filtering the individual texture values is
equivalent to filtering the final colors. However, many shader input values
with a nonlinear relationship to the output, such as surface normals and
gloss values, are stored in textures. Standard texture filtering methods may
not work well for these textures, resulting in aliasing. Improved methods
for filtering such textures are discussed in Section 7.8.1.
(  7.8.1 Mipmapping BRDF and Normal Maps
Artifacts can result from using linear mipmapping methods on normal maps, or on textures containing nonlinear
BRDF parameters such as cosine powers. These artifacts can manifest as
flickering highlights, or as unexpected changes in surface gloss or brightness
with a change in the surface's distance from the camera.
Why these problems occur:
To understand why these problems occur and how to solve them, it is important to remember that the BRDF is a statistical description of
the effects of subpixel surface structure.
When the distance between the camera and surface increases, surface structure that previously covered
several pixels may be reduced to subpixel size, moving from the realm of
bump maps into the realm of the BRDF. This transition is intimately tied
to the mipmap chain, which encapsulates the reduction of texture details
to subpixel size.
If mipmapping is applied to a normal map:
At the bottom of the figure in the book, we see two such sets of mipmap levels. On the bottom
left (framed in green) we see the result of averaging and renormalizing
the normals in the normal map, shown as NDFs oriented to the averaged
normals. These NDFs do not resemble the ideal ones—they are pointing
in the same direction, *but they do not have the same shape* (the lower-resolution
normal map no longer represents the surface shape correctly). This will lead
to the object not having the correct appearance. Worse, since these NDFs
are so narrow, they will tend to cause aliasing, in the form of flickering
highlights (the aliasing shows up as flicker in the highlights).
Solution:
if we use a gloss map, the cosine power can be varied from
texel to texel. Let us imagine that, for each ideal NDF, we find the rotated
cosine lobe that matches it most closely (both in orientation and overall
width). We store the center direction of this cosine lobe in the normal
map, and its cosine power in the gloss map.
)
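
One widely used approximation along these lines is Toksvig's method, shown here as an illustration rather than the book's exact lobe-fitting procedure: both the stored normal and an adjusted cosine power are derived from the length of the averaged, unnormalized normal.

    #include <algorithm>
    #include <glm/glm.hpp>

    // Given the average of the (unit) normals covered by a mip texel and the
    // original cosine (specular) power s, produce the normal and reduced power
    // to store in that mip level. A shorter averaged normal means more normal
    // variation, hence a wider lobe and a lower effective power.
    struct FilteredNormal { glm::vec3 normal; float power; };

    FilteredNormal FilterNormalMipTexel(const glm::vec3& averagedNormal, float s)
    {
        float len     = std::max(glm::length(averagedNormal), 1e-4f);
        float toksvig = len / (len + s * (1.0f - len));   // Toksvig factor in (0,1]
        return { averagedNormal / len, toksvig * s };
    }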

6.2.1 Magnification
the nearest neighbor filtering,
bilinear interpolation,
bicubic filter (more expensive than bilinear filters),
remapping (for cases where edges are important, e.g. text)

6.2.2 Minification
the nearest neighbor,
bilinear interpolation (but when a pixel is influenced by more than four texels, the filter soon fails and produces aliasing)
mipmap: Two important elements in forming high-quality mipmaps are good filtering and gamma correction.
       For textures encoded in a nonlinear space (such as most color textures),
       ignoring gamma correction when filtering will modify the perceived brightness of the mipmap levels.
       However, mipmapping has a number of flaws. A major one is overblurring.
One extension to mipmapping is the ripmap.
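
The choice of which mipmap level to sample is driven by the texture-coordinate derivatives across the pixel; a minimal sketch of the standard level-of-detail computation (isotropic, the same quantity trilinear filtering blends between):

    #include <algorithm>
    #include <cmath>
    #include <glm/glm.hpp>

    // Select the mipmap level (lambda) from the screen-space derivatives of the
    // texel coordinates: the more texels a pixel covers, the higher (blurrier)
    // the level.
    float MipLevel(const glm::vec2& dTexelDx, const glm::vec2& dTexelDy)
    {
        float rho = std::max(glm::length(dTexelDx), glm::length(dTexelDy));
        return std::log2(std::max(rho, 1.0f));   // clamp: never go below level 0
    }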

Another method to avoid overblurring is the summed-area table (SAT).
Cause of overblurring:
The problem is that when a texture is viewed along its diagonal, a large rectangle is generated, with many of the texels
situated nowhere near the pixel being computed.

Ripmaps and summed-area tables are examples of what are called anisotropic filtering algorithms.

Unconstrained Anisotropic Filtering

6.7 Bump mapping

n, t, b do not have to truly be perpendicular to each other, since the normal map itself may be distorted to
fit the surface. In fact, the tangent

The idea here is that the light's direction slowly changes, so it can be
interpolated across a triangle. For a single light, this is less expensive than
transforming the surface's perturbed normal to world space every pixel.
This is an example of frequency of computation: The light's transform is
computed per vertex, instead of needing a per-pixel normal transform for
the surface.
However, if the application uses more than just a few lights, it is more
efficient to transform the resulting normal to world space.
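
A sketch of the per-vertex transform this describes: with the tangent frame (t, b, n) at a vertex, the light direction is expressed in tangent space and then interpolated across the triangle (GLM used for the vector math):

    #include <glm/glm.hpp>

    // Per-vertex: express the world-space light direction l in the vertex's
    // tangent space, spanned by tangent t, bitangent b, and normal n.
    // The result is interpolated across the triangle and used directly against
    // the tangent-space normal map in the pixel shader.
    glm::vec3 LightToTangentSpace(const glm::vec3& l,
                                  const glm::vec3& t,
                                  const glm::vec3& b,
                                  const glm::vec3& n)
    {
        return glm::vec3(glm::dot(l, t), glm::dot(l, b), glm::dot(l, n));
    }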

A problem with normal mapping is that the bumps never block each other.

6.7.3 Parallax Mapping

from http://cg2010studio.wordpress.com/2011/10/30/bump-normal-displacement-parallax-relief-mapping/

The table below is what I put together this week comparing Bump, Normal, Displacement, Parallax, and Relief Mapping; for more detail, consult the original papers.
Mapping / Concept / Map / Shader:

- Bump mapping (1978): When computing the lighting at a vertex, the original normal is not
  used directly; instead a perturbation is added to it, and lighting with this modified normal
  produces the appearance of a bumpy surface. No self-occlusion, no self-shadowing, no
  silhouettes. Map: bump map (normal map). Shader: fragment shader.
- Normal mapping (1996): A normal map stores normal information (which can be obtained by
  differentiating a height map); the texture's RGB encodes the normal's XYZ, and this is used
  in the lighting computation to produce bumpy shading. No self-occlusion, no self-shadowing,
  no silhouettes. Map: normal map. Shader: fragment shader.
- Displacement mapping (1984): Acts directly on the vertices; each vertex is moved along its
  normal by the corresponding value in the displacement map, producing a genuinely displaced
  surface. Map: displacement map (height map). Shader: vertex shader.
- Parallax mapping (2001) (virtual displacement mapping): Vertex positions are not modified;
  the view direction and the height map are used to offset the texture lookup per pixel, with
  the amount of offset depending on the viewing angle, so parallax gives a stronger sense of
  depth (see the sketch after this table). No self-occlusion, no self-shadowing.
  Map: height map. Shader: fragment shader.
- Relief mapping (2005) (steep parallax mapping): Finds the intersection of the view ray with
  the height field more precisely and uses the corresponding texture coordinate, so the 3D
  effect is simulated more correctly; it can produce self-occlusion, self-shadowing,
  view-motion parallax, and silhouettes. Map: height map. Shader: fragment shader.
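
A minimal sketch of the basic parallax offset (the offset-limited variant; the height scale and bias are assumed material parameters, and viewTS is the unit view vector in tangent space):

    #include <glm/glm.hpp>

    // Basic (offset-limited) parallax mapping: shift the texture coordinate in
    // the direction the viewer looks across the surface, by an amount
    // proportional to the height sampled at the original coordinate.
    glm::vec2 ParallaxOffsetUV(const glm::vec2& uv, float height,
                               const glm::vec3& viewTS,
                               float scale, float bias)
    {
        float h = height * scale + bias;                   // remap stored height
        return uv + h * glm::vec2(viewTS.x, viewTS.y);     // offset-limited: no divide by viewTS.z
    }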


7.8
Anisotropic BRDFs introduce the possibility of per-pixel modification of
the tangent direction—use of textures to perturb both the normal and
tangent vectors is called frame mapping.



CH18
18.3 Architecture
The interface between the application and the graphics hardware is called a driver.
sort-first
sort-middle
sort-last

 

 
