Unity Shader: Outlining Objects When They Are Occluded
A while ago I came across an occlusion-outline effect online and decided to implement it myself; consider this the opening of my first blog post.
First, a few screenshots of the results, which also trace how my approach improved from one version to the next.
////////////////////////  Implementation  ////////////////////////
When I think of outlining, the first thing that comes to mind is the stencil buffer. The core of the effect is deciding which pixels belong to the edge, and then setting those pixels to the outline color in the fragment shader.
Approach 1:
A simple vertex/fragment shader:
```shaderlab
Shader "Unlit/Shape"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _ShapeLineWidth("ShapeWidth",float) = 0.1
        _ShapeColor("ShapeColor",COLOR) = (1,1,1,1)
    }
    SubShader
    {
        Tags { "Queue"="Geometry" }
        LOD 100

        //output origin color
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = tex2D(_MainTex, i.uv);
                return col;
            }
            ENDCG
        }

        //output stencil to define occluded area
        Pass
        {
            ColorMask 0
            ZTest Off
            Stencil
            {
                Ref 1
                Comp Always
                Pass Replace
            }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return fixed4(1,1,1,1);
            }
            ENDCG
        }

        //output outline color
        Pass
        {
            Stencil
            {
                Ref 0
                Comp Equal
                Pass Keep
            }
            ZWrite Off
            ZTest Off
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            float _ShapeLineWidth;
            fixed4 _ShapeColor;

            v2f vert (appdata_base v)
            {
                v2f o;
                v.vertex.xyz += v.normal * _ShapeLineWidth;
                o.vertex = UnityObjectToClipPos(v.vertex);
                return o;
            }

            [earlyDepthStencil]
            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 col = _ShapeColor;
                return col;
            }
            ENDCG
        }
    }
}
```
The main work happens in the third pass: every vertex is pushed outward along its normal (which effectively inflates the object slightly), and the stencil written by the second pass then masks out the interior, leaving only the rim pixels. This does produce an outline, but the unoccluded parts of the object get outlined as well.
Approach 2:
This approach uses post-processing: the silhouettes of the objects to be outlined are first rendered into a separate texture, which is then composited with the original image at the end. The advantage is flexibility: in Unity we can set up extra cameras to render whatever intermediate images we need, each carrying different information. In this example the post-processing step produces a depth map, an occlusion texture and other intermediate images before the final composite.
Since this approach is more involved, here is the idea first:
1. Render a depth map that contains only the objects to be outlined, using a dedicated camera (the scene depth is available separately through the main camera's _CameraDepthTexture).
2. Compare the two depth maps to find the region where the object is occluded, then dilate that region.
3. Subtracting the occluded region from the dilated one leaves exactly the outline pixels, which are then composited onto the original image.
The project code is as follows:
```csharp
 1 using UnityEngine;
 2 using System.Collections;
 3 
 4 public class ShapeOutline : MonoBehaviour {
 5 
 6     public Camera objectCamera = null;
 7     public Color outlineColor = Color.green;
 8     Camera mainCamera;
 9     RenderTexture depthTexture;
10     RenderTexture occlusionTexture;
11     RenderTexture strechTexture;
12 
13     // Use this for initialization
14     void Start()
15     {
16         mainCamera = Camera.main;
17         mainCamera.depthTextureMode = DepthTextureMode.Depth;
18         objectCamera.depthTextureMode = DepthTextureMode.None;
19         objectCamera.cullingMask = 1 << LayerMask.NameToLayer("Outline");
20         objectCamera.fieldOfView = mainCamera.fieldOfView;
21         objectCamera.clearFlags = CameraClearFlags.Color;
22         objectCamera.projectionMatrix = mainCamera.projectionMatrix;
23         objectCamera.nearClipPlane = mainCamera.nearClipPlane;
24         objectCamera.farClipPlane = mainCamera.farClipPlane;
25         objectCamera.aspect = mainCamera.aspect;
26         objectCamera.orthographic = false;
27         objectCamera.enabled = false;
28     }
29 
30     void OnRenderImage(RenderTexture srcTex, RenderTexture dstTex)
31     {
32         depthTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
33         occlusionTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
34         strechTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
35 
36         objectCamera.targetTexture = depthTexture;
37         objectCamera.RenderWithShader(Shader.Find("ShapeOutline/Depth"), string.Empty);
38 
39         Material mat = new Material(Shader.Find("ShapeOutline/Occlusion"));
40         mat.SetColor("_OutlineColor", outlineColor);
41         Graphics.Blit(depthTexture, occlusionTexture, mat);
42 
43         mat = new Material(Shader.Find("ShapeOutline/StrechOcclusion"));
44         mat.SetColor("_OutlineColor", outlineColor);
45         Graphics.Blit(occlusionTexture, strechTexture, mat);
46 
47         mat = new Material(Shader.Find("ShapeOutline/Mix"));
48         mat.SetColor("_OutlineColor", outlineColor);
49         mat.SetTexture("_occlusionTex", occlusionTexture);
50         mat.SetTexture("_strechTex", strechTexture);
51         Graphics.Blit(srcTex, dstTex, mat);
52 
53         RenderTexture.ReleaseTemporary(depthTexture);
54         RenderTexture.ReleaseTemporary(occlusionTexture);
55         RenderTexture.ReleaseTemporary(strechTexture);
56 
57     }
58 }
```
Lines 16-27: set up a camera dedicated to the outline rendering. Its cullingMask keeps only the "Outline" layer, so it renders a depth map containing just the objects to be outlined; this is later compared with the scene depth to find the occluded region. Line 17 sets the main camera's depthTextureMode to Depth, so that the shaders that follow can read the scene depth through Unity's built-in _CameraDepthTexture variable.
Line 30: OnRenderImage() is a built-in Unity message that is invoked when the camera finishes rendering and is about to output its image; most post-processing work happens here. Searching for a "Unity execution order" flowchart gives a good picture of how the engine calls its built-in functions over the course of a frame. Note that this .cs script only takes effect when it is attached to the corresponding camera object.
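As a minimal illustration (the class name here is made up), a pass-through post effect attached to a camera looks like this:

```csharp
using UnityEngine;

// Hypothetical minimal post effect. It must live on the same GameObject as the Camera,
// otherwise OnRenderImage is never called for that camera's output.
public class PassThroughEffect : MonoBehaviour
{
    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Called after the camera has finished rendering the frame;
        // without a material, Blit just copies the image unchanged.
        Graphics.Blit(src, dst);
    }
}
```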
Lines 32-34: RenderTexture.GetTemporary() is called to allocate the textures. Why not new? The Unity documentation says it is much faster than allocating a fresh RenderTexture, and that matches my experience; just remember to call ReleaseTemporary as soon as you are done with the texture. One more thing to be careful about when creating a RenderTexture (whether with new or GetTemporary) is the depthBuffer parameter: if the depthBuffer on lines 33 and 34 is set to 16/24/32, the Blit output always ends up being just the camera's rendered image. I don't yet fully understand the depthBuffer side of RenderTexture, so I'm leaving a note here to look into it later; if anyone knows the details, feel free to discuss in the comments.
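For reference, here is a minimal hedged sketch of the allocate/use/release cycle (the helper name, sizes and formats are only illustrative, not the settings used above):

```csharp
using UnityEngine;

public static class TempRTScratch
{
    // Allocate a scratch texture from Unity's pool, use it, and hand it back right away.
    public static void CopyThroughTemporary(RenderTexture src, RenderTexture dst)
    {
        // The third argument is the depth buffer bit count: 0 means color only,
        // while 16/24/32 attach a depth(/stencil) surface to the temporary texture.
        RenderTexture scratch = RenderTexture.GetTemporary(src.width, src.height, 0);

        Graphics.Blit(src, scratch);   // write into the temporary
        Graphics.Blit(scratch, dst);   // read it back out

        // GetTemporary draws from an internal pool, so release the texture
        // as soon as it is no longer needed.
        RenderTexture.ReleaseTemporary(scratch);
    }
}
```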
Next, the source for the shaders:
```shaderlab
Shader "ShapeOutline/Depth"
{
    Properties
    {
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };
            struct v2f
            {
                float2 depth : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.depth = o.vertex.zw;
                return o;
            }
            fixed4 frag (v2f i) : SV_Target
            {
                float depth = i.vertex.z/i.vertex.w;

                return fixed4(depth,depth,depth,0);
            }
            ENDCG
        }
    }
}
```
There isn't much to say about this one: it simply outputs the depth map of the objects on the Outline layer. Pay attention to the output format. (What exactly does the depth map require? Another note to myself here.)
```shaderlab
 1 Shader "ShapeOutline/Occlusion"
 2 {
 3     Properties
 4     {
 5         _MainTex ("Texture", 2D) = "white" {}
 6     }
 7     SubShader
 8     {
 9         Tags { "RenderType"="Opaque" }
10         LOD 100
11         Pass
12         {
13             CGPROGRAM
14             #pragma vertex vert
15             #pragma fragment frag
16             #include "UnityCG.cginc"
17 
18             struct appdata
19             {
20                 float4 vertex : POSITION;
21                 float2 uv : TEXCOORD0;
22 
23             };
24             struct v2f
25             {
26                 float4 ScreenPos : TEXCOORD0;
27                 float4 vertex : SV_POSITION;
28             };
29             sampler2D _MainTex;
30             float4 _MainTex_ST;
31             uniform sampler2D _CameraDepthTexture;
32             fixed4 _OutlineColor;
33 
34             v2f vert (appdata v)
35             {
36                 v2f o;
37                 o.vertex = UnityObjectToClipPos(v.vertex);
38                 o.ScreenPos = ComputeScreenPos(o.vertex);
39                 return o;
40             }
41             fixed4 frag (v2f i) : SV_Target
42             {
43                 i.ScreenPos.xy = i.ScreenPos.xy/i.ScreenPos.w;
44                 float2 uv = float2(i.ScreenPos.x,i.ScreenPos.y);
45                 float depth = UNITY_SAMPLE_DEPTH(tex2D(_CameraDepthTexture, uv));
46                 float depthTex = tex2D(_MainTex,i.ScreenPos.xy);
47                 if((depthTex > depth) && depthTex != 1)
48                     return fixed4(_OutlineColor.rgb,i.vertex.z);
49                 else
50                     return fixed4(0,0,0,1);
51             }
52             ENDCG
53         }
54     }
55 }
```
This pass outputs the parts of the Outline-layer objects that are hidden behind other geometry. Note where the _CameraDepthTexture variable comes from (mentioned above). Also, the input texture is no longer sampled with the model's UV coordinates but with screen coordinates; lines 37, 38 and 43 show how a vertex position is converted into screen coordinates. A picture of the occlusion texture is attached here to make it easier to follow.
```shaderlab
Shader "ShapeOutline/StrechOcclusion"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 screenPos : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            uniform fixed4 _OutlineColor;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.vertex);
                return o;
            }
            fixed4 frag (v2f i) : SV_Target
            {
                i.screenPos.xy = i.screenPos.xy/i.screenPos.w;
                fixed4 col1 = tex2D(_MainTex,i.screenPos.xy);
                fixed4 col2 = tex2D(_MainTex,float2(i.screenPos.x + 1/_ScreenParams.x,i.screenPos.y));
                fixed4 col3 = tex2D(_MainTex,float2(i.screenPos.x - 1/_ScreenParams.x,i.screenPos.y));
                fixed4 col4 = tex2D(_MainTex,i.screenPos.xy);
                fixed4 col5 = tex2D(_MainTex,float2(i.screenPos.x ,i.screenPos.y + 1/_ScreenParams.y));
                fixed4 col6 = tex2D(_MainTex,float2(i.screenPos.x ,i.screenPos.y - 1/_ScreenParams.y));
                if((col1.x + col1.y + col1.z
                    + col2.x + col2.y + col2.z
                    + col3.x + col3.y + col3.z
                    + col4.x + col4.y + col4.z
                    + col5.x + col5.y + col5.z
                    + col6.x + col6.y + col6.z
                    ) > 0.01)
                    return fixed4(_OutlineColor.rgb,i.vertex.z);
                else
                    return fixed4(0,0,0,1);
            }
            ENDCG
        }
    }
}
```
This shader dilates the occlusion image by one pixel up, down, left and right; subtracting the original occluded region from this dilated image afterwards leaves the outline strokes.
```shaderlab
Shader "ShapeOutline/Mix"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 screenPos : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            uniform sampler2D _occlusionTex;
            uniform sampler2D _strechTex;
            uniform fixed4 _OutlineColor;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                i.screenPos.xy /= i.screenPos.w;
                fixed4 srcCol = tex2D(_MainTex,float2(i.screenPos.x,1-i.screenPos.y));
                fixed4 occlusionCol = tex2D(_occlusionTex,fixed2(i.screenPos.x,i.screenPos.y));
                fixed4 strechCol = tex2D(_strechTex,fixed2(i.screenPos.x,i.screenPos.y));
                float isOcclusion = occlusionCol.x + occlusionCol.y + occlusionCol.z;
                float isStrech = strechCol.x + strechCol.y + strechCol.z;
                if(isStrech > 0.5 && isOcclusion < 0.1)
                    return _OutlineColor;
                else
                    return srcCol;
            }
            ENDCG
        }
    }
}
```
The final compositing shader: it removes the occluded region from the dilated image and blends the result over the image from the original camera.
This approach does achieve occlusion outlining, but another problem appears. In Figure 2, if the blue cube sits behind the red cube its hidden part gets no outline, and once it is entirely behind the red cube the outline effect disappears completely.
This part of the code draws on a post by EsFog; the original article is at http://www.cnblogs.com/Esfog/p/CoverOutline_Shader.html
Approach 3:
An optimization built on Approach 2: the depth map is rendered for one object at a time, so that everything else in the scene, including the other outlined objects, can act as an occluder.
The idea of the improvement: keep a list of all objects to be outlined; while processing a given object, move only that object onto the "Outline" layer; once every outlined object has had its outline drawn, composite the result with the image rendered by the original camera.
The project code is as follows:
```csharp
 1 using UnityEngine;
 2 using System.Collections;
 3 using System.Collections.Generic;
 4 
 5 public class MultiShapeOutline : MonoBehaviour {
 6 
 7     public Camera objectCamera = null;
 8     public Color outlineColor = Color.green;
 9     Camera mainCamera;
10     RenderTexture depthTexture;
11     RenderTexture occlusionTexture;
12     RenderTexture strechTexture;
13     RenderTexture outputTexture;
14     RenderTexture inputTexture;
15     Material m;
16     [SerializeField]
17     List<GameObject> renderObjects = new List<GameObject>();
18     // Use this for initialization
19     void Start () {
20         mainCamera = Camera.main;
21         mainCamera.depthTextureMode = DepthTextureMode.Depth;
22         objectCamera.depthTextureMode = DepthTextureMode.None;
23         objectCamera.cullingMask = 1 << LayerMask.NameToLayer("Outline");
24         objectCamera.fieldOfView = mainCamera.fieldOfView;
25         objectCamera.clearFlags = CameraClearFlags.Color;
26         objectCamera.projectionMatrix = mainCamera.projectionMatrix;
27         objectCamera.nearClipPlane = mainCamera.nearClipPlane;
28         objectCamera.farClipPlane = mainCamera.farClipPlane;
29         objectCamera.targetTexture = depthTexture;
30         objectCamera.aspect = mainCamera.aspect;
31         objectCamera.orthographic = false;
32         objectCamera.enabled = false;
33 
34         m = new Material(Shader.Find("ShapeOutline/DoNothing"));
35     }
36 
37     // Update is called once per frame
38     void Update () {
39 
40     }
41 
42     void OnRenderImage(RenderTexture srcTex, RenderTexture dstTex)
43     {
44         outputTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
45         inputTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
46         Graphics.Blit(srcTex, inputTexture, m);
47         for (int i = 0; i < renderObjects.Count; i++)
48         {
49             depthTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 24, RenderTextureFormat.Depth);
50             occlusionTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
51             strechTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0);
52 
53             int orgLayer = renderObjects[i].layer;
54             renderObjects[i].layer = LayerMask.NameToLayer("Outline");
55 
56             objectCamera.targetTexture = depthTexture;
57             objectCamera.RenderWithShader(Shader.Find("ShapeOutline/Depth"), string.Empty);
58 
59             Material mat = new Material(Shader.Find("ShapeOutline/Occlusion"));
60             mat.SetColor("_OutlineColor", outlineColor);
61             Graphics.Blit(depthTexture, occlusionTexture, mat);
62 
63             mat = new Material(Shader.Find("ShapeOutline/StrechOcclusion"));
64             mat.SetColor("_OutlineColor", outlineColor);
65             Graphics.Blit(occlusionTexture, strechTexture, mat);
66 
67 
68             mat = new Material(Shader.Find("ShapeOutline/MultiMix"));
69             mat.SetColor("_OutlineColor", outlineColor);
70             mat.SetTexture("_occlusionTex", occlusionTexture);
71             mat.SetTexture("_strechTex", strechTexture);
72             Graphics.Blit(inputTexture, outputTexture, mat);
73 
74             RenderTexture.ReleaseTemporary(depthTexture);
75             RenderTexture.ReleaseTemporary(occlusionTexture);
76             RenderTexture.ReleaseTemporary(strechTexture);
77             renderObjects[i].layer = orgLayer;
78 
79             Graphics.Blit(outputTexture, inputTexture, m);
80         }
81         Graphics.Blit(outputTexture, dstTex, m);
82 
83         RenderTexture.ReleaseTemporary(inputTexture);
84         RenderTexture.ReleaseTemporary(outputTexture);
85     }
86 }
```
Line 17: the list of all objects to be outlined.
Lines 44, 45: two temporary RenderTextures are created whose only purpose is to feed the output of one iteration back in as the input of the next. The first idea that comes to mind for this is Graphics.Blit(renderTexture, renderTexture, material), but Blit does not actually allow the same texture to be both source and destination; you can easily verify this yourself.
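If you want to avoid the extra copy-back Blit on every iteration, one common alternative is to ping-pong between the two temporaries by swapping references. A rough sketch of that idea (not the code used above, and the helper name is made up):

```csharp
using UnityEngine;

// Hypothetical sketch of a ping-pong Blit: instead of copying the result back into
// the input texture every iteration, the two temporaries simply swap roles.
public static class PingPongBlit
{
    public static void Run(RenderTexture source, RenderTexture destination,
                           Material effect, int iterations)
    {
        RenderTexture a = RenderTexture.GetTemporary(source.width, source.height, 0);
        RenderTexture b = RenderTexture.GetTemporary(source.width, source.height, 0);

        Graphics.Blit(source, a);                  // seed the chain with the source image
        for (int i = 0; i < iterations; i++)
        {
            Graphics.Blit(a, b, effect);           // read from a, write to b (never the same RT)
            RenderTexture tmp = a; a = b; b = tmp; // swap so the latest result is always in a
        }
        Graphics.Blit(a, destination);             // hand the final result to the destination

        RenderTexture.ReleaseTemporary(a);
        RenderTexture.ReleaseTemporary(b);
    }
}
```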
Lines 53, 54, 77: the current object is temporarily moved onto the outline rendering layer by itself, and restored to its original layer afterwards.
Line 68: the shader has merely been renamed; its code is identical to the Mix shader listed above.
That wraps up the feature; it can of course be adapted to different requirements. This is my first blog post, so please point out anything that could be improved, and feel free to discuss in the comments.
posted on 2017-11-28 15:32 by AndrewChan