An Analysis of ARCore Depth Rendering
ARCore's depth feature consists of two parts: depth-map visualization, and depth occlusion (having real objects occlude virtual ones). This article analyzes both.
When depth-map visualization is enabled, the depth of everything in the field of view is drawn on screen: the farther away a region is, the larger its depth value and the redder it is drawn; closer regions are drawn blue. The steps are fairly simple:
1) ARCore acquires the environment depth map and passes it to the shader as "_CurrentDepthTexture". Since only part of the texture is valid for sampling, the _UvTopLeftRight and _UvBottomLeftRight values are passed in as well, and the actual texture uv is obtained by interpolating between them (the C# side of this per-frame update is sketched after the function below):
inline float2 ArCoreDepth_GetUv(float2 uv)
{
    float2 uvTop = lerp(_UvTopLeftRight.xy, _UvTopLeftRight.zw, uv.x);
    float2 uvBottom = lerp(_UvBottomLeftRight.xy, _UvBottomLeftRight.zw, uv.x);
    return lerp(uvTop, uvBottom, uv.y);
}
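On the C# side, these shader inputs have to be refreshed every frame. The sketch below illustrates one way to do it: Frame.CameraImage.UpdateDepthTexture is the call referenced later in this article, while TextureDisplayUvs (providing the four display UV corners) and the component/field names are assumptions made here for illustration only.
// Minimal sketch, not the article's exact code. TextureDisplayUvs and the
// names DepthRampUpdater/RampMaterial are assumptions for illustration.
using GoogleARCore;
using UnityEngine;

public class DepthRampUpdater : MonoBehaviour
{
    public Material RampMaterial;      // material using the color ramp shader above
    private Texture2D m_DepthTexture;

    private void Awake()
    {
        // Placeholder texture; ARCore resizes and fills it every frame.
        m_DepthTexture = new Texture2D(2, 2);
        RampMaterial.SetTexture("_CurrentDepthTexture", m_DepthTexture);
    }

    private void Update()
    {
        // Refresh the real-world depth texture provided by ARCore.
        Frame.CameraImage.UpdateDepthTexture(ref m_DepthTexture);

        // Pass the display UV corners so the shader samples only the valid region.
        var uvQuad = Frame.CameraImage.TextureDisplayUvs;
        RampMaterial.SetVector("_UvTopLeftRight", new Vector4(
            uvQuad.TopLeft.x, uvQuad.TopLeft.y, uvQuad.TopRight.x, uvQuad.TopRight.y));
        RampMaterial.SetVector("_UvBottomLeftRight", new Vector4(
            uvQuad.BottomLeft.x, uvQuad.BottomLeft.y, uvQuad.BottomRight.x, uvQuad.BottomRight.y));
    }
}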
2) In the fragment shader, the depth of the current pixel is computed from the uv and the depth texture as shown below. Since the decoding follows ARCore's internal depth format, it is not analyzed in detail here:
inline float ArCoreDepth_GetMeters(float2 uv)
{
    // The depth texture uses TextureFormat.RGB565.
    float4 rawDepth = tex2Dlod(_CurrentDepthTexture, float4(uv, 0, 0));
    float depth = (rawDepth.r * ARCORE_FLOAT_TO_5BITS * ARCORE_RGB565_RED_SHIFT)
                + (rawDepth.g * ARCORE_FLOAT_TO_6BITS * ARCORE_RGB565_GREEN_SHIFT)
                + (rawDepth.b * ARCORE_FLOAT_TO_5BITS);
    depth = min(depth, ARCORE_MAX_DEPTH_MM);
    depth *= ARCORE_DEPTH_SCALE;
    return depth;
}
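Judging from the constant names, the three channels reassemble a 16-bit RGB565 value (5 red bits shifted up by 11, 6 green bits shifted up by 5, 5 blue bits in the low positions), yielding a depth in millimeters that is clamped to ARCORE_MAX_DEPTH_MM and then converted to meters by ARCORE_DEPTH_SCALE.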
3) After obtaining the depth value, sample a color-ramp texture (256x1); the larger the depth, the redder the sampled color.
Shader "ARCore/EAP/Camera Color Ramp Shader"
{
Properties
{
_ColorRamp("Color Ramp", 2D) = "white" {}
}
SubShader
{
// No culling or depth
Cull Off
ZWrite On
ZTest LEqual
Tags { "Queue" = "Background+1" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
#include "../../../../SDK/Materials/ARCoreDepth.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
sampler2D _ColorRamp;
// Vertex shader that scales the quad to full screen.
v2f vert(appdata v)
{
v2f o;
o.vertex = float4(v.vertex.x * 2.0f, v.vertex.y * 2.0f, 1.0f, 1.0f);
o.uv = ArCoreDepth_GetUv(v.uv);
return o;
}
// This shader displays the depth buffer data as a color ramp overlay
// for use in debugging.
float4 frag(v2f i) : SV_Target
{
// Unpack depth texture value.
float d = ArCoreDepth_GetMeters(i.uv);
// Zero means no raw data available, render black.
if (d == 0.0f)
{
return float4(0, 0, 0, 1);
}
// Use depth as an index into the color ramp texture.
return tex2D(_ColorRamp, float2(d / 3.0f, 0.0f));
}
ENDCG
}
}
}
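Note that the fragment shader uses d / 3.0 as the horizontal ramp coordinate, so depths from 0 to 3 meters sweep the full width of the 256x1 ramp texture; how depths beyond 3 m appear depends on the ramp texture's wrap mode.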
Drawing it in front of everything: how is the depth visualization rendered in front of the entire scene?
ARCore uses Unity's built-in quad mesh (unit size, bottom-left vertex at (-0.5, -0.5), top-right vertex at (0.5, 0.5)). The vertex shader maps it straight into clip space: with the depth fixed so that z/w = 1, the x and y values span (-1, 1), so the quad fills the whole screen and is drawn in front of everything else. The relevant code is:
o.vertex = float4(v.vertex.x * 2.0f, v.vertex.y * 2.0f, 1.0f, 1.0f);
You can experiment in the shader by changing z to 0.5, or by leaving z alone and setting w to 2, and observing the result.
Intuitively, something drawn in front of everything should have a depth value of 0, yet here z/w is 1. The exact reason is not verified here, but it is most likely Unity's reversed-Z depth convention (UNITY_REVERSED_Z, used on most modern graphics APIs for depth-precision reasons), under which a depth value of 1 corresponds to the near plane.
Depth occlusion is more involved. The basic idea is to obtain both the depth map of the Unity-rendered (virtual) scene and the depth map of the real environment, compute the difference between them, and then, in the post-processing stage (OnRenderImage), decide per pixel whether to show the virtual rendering or the camera video background.
The concrete steps are as follows:
1) Before opaque objects are rendered (CameraEvent.BeforeForwardOpaque), capture the camera video background using the background-rendering material:
m_BackgroundRenderer = FindObjectOfType<ARCoreBackgroundRenderer>();
if (m_BackgroundRenderer == null)
{
    Debug.LogError("BackgroundTextureProvider requires ARCoreBackgroundRenderer " +
                   "anywhere in the scene.");
    return;
}

m_BackgroundBuffer = new CommandBuffer();
m_BackgroundBuffer.name = "Camera texture";
m_BackgroundTextureID = Shader.PropertyToID(BackgroundTexturePropertyName);
m_BackgroundBuffer.GetTemporaryRT(m_BackgroundTextureID,
    /*width=*/ -1, /*height=*/ -1, /*depthBuffer=*/ 0, FilterMode.Bilinear);

var material = m_BackgroundRenderer.BackgroundMaterial;
if (material != null)
{
    m_BackgroundBuffer.Blit(material.mainTexture, m_BackgroundTextureID, material);
}

m_BackgroundBuffer.SetGlobalTexture(
    BackgroundTexturePropertyName, m_BackgroundTextureID);
m_Camera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, m_BackgroundBuffer);
m_Camera.AddCommandBuffer(CameraEvent.BeforeGBuffer, m_BackgroundBuffer);
For details on how ARCore renders the camera background, refer to the separate article on ARCore background rendering.
2) In Update, refresh the real-world depth texture (Frame.CameraImage.UpdateDepthTexture(ref m_DepthTexture)). Then, with a CommandBuffer that runs after opaque objects have finished rendering, read the virtual scene's depth from _CameraDepthTexture, process it, store the result in the texture's alpha channel, and pass it on to the next step. The code below also suggests a blur, but given how it is wired up the blur has little or no actual effect (a sketch of what a real blur stage might look like follows the code):
m_Camera = Camera.main;
m_Camera.depthTextureMode |= DepthTextureMode.Depth;

m_DepthBuffer = new CommandBuffer();
m_DepthBuffer.name = "Auxilary occlusion textures";

// Creates the occlusion map.
int occlusionMapTextureID = Shader.PropertyToID("_OcclusionMap");
m_DepthBuffer.GetTemporaryRT(occlusionMapTextureID, -1, -1, 0, FilterMode.Bilinear);

// Pass #0 renders an auxilary buffer - occlusion map that indicates the
// regions of virtual objects that are behind real geometry.
m_DepthBuffer.Blit(
    BuiltinRenderTextureType.CameraTarget,
    occlusionMapTextureID, m_DepthMaterial, /*pass=*/ 0);

// Blurs the occlusion map.
m_DepthBuffer.SetGlobalTexture("_OcclusionMapBlurred", occlusionMapTextureID);

m_Camera.AddCommandBuffer(CameraEvent.AfterForwardOpaque, m_DepthBuffer);
m_Camera.AddCommandBuffer(CameraEvent.AfterGBuffer, m_DepthBuffer);
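As noted above, the command buffer simply exposes the unblurred occlusion map under the name _OcclusionMapBlurred, so no blur actually takes place here. A real blur would need one more temporary target plus an extra blit through a blur pass; the following is only a rough sketch under the assumption that the occlusion shader had such a pass (the render-target name and pass index 1 are hypothetical):
// Hypothetical blur stage; "_OcclusionMapBlurredRT" and pass 1 are assumptions,
// not part of the sample code quoted above.
int occlusionMapBlurredID = Shader.PropertyToID("_OcclusionMapBlurredRT");
m_DepthBuffer.GetTemporaryRT(occlusionMapBlurredID, -1, -1, 0, FilterMode.Bilinear);

// Blur the raw occlusion map into the second target with the assumed blur pass.
m_DepthBuffer.Blit(occlusionMapTextureID, occlusionMapBlurredID, m_DepthMaterial, /*pass=*/ 1);

// Expose the blurred result instead of aliasing the raw occlusion map.
m_DepthBuffer.SetGlobalTexture("_OcclusionMapBlurred", occlusionMapBlurredID);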
Pass 0 of the OcclusionImageEffect shader processes the depth values. It first samples the real depth and the virtual depth and then computes an occlusionAlpha value: when the real depth is clearly smaller than the virtual depth (the real surface is in front), occlusionAlpha is 1; in the opposite case it is 0; and when the two are close, it falls between 0 and 1 (a worked example follows the formula).
float occlusionAlpha =
    1.0 - saturate(0.5 * (depthMeters - virtualDepth) /
                   (_TransitionSizeMeters * virtualDepth) + 0.5);
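As a quick worked example, take _TransitionSizeMeters = 0.1 for illustration. If the real depth is depthMeters = 1.0 m and the virtual object sits at virtualDepth = 1.5 m, the value inside saturate() is 0.5 * (1.0 - 1.5) / (0.1 * 1.5) + 0.5 ≈ -1.17, which saturates to 0, so occlusionAlpha = 1 and the virtual pixel is hidden behind the real surface. If instead depthMeters = 2.0 m with the same virtualDepth, the value is ≈ 2.17, which saturates to 1, so occlusionAlpha = 0 and the virtual pixel stays visible. Only when |depthMeters - virtualDepth| is within roughly _TransitionSizeMeters * virtualDepth does occlusionAlpha fall between 0 and 1, producing a soft transition.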
3) In the post-processing stage (OnRenderImage), the occlusionAlpha computed in step 2 determines whether the virtual object is shown.
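Conceptually this last step reduces to a Graphics.Blit with the occlusion material. The snippet below is only a minimal sketch, assuming the compositing logic lives in a dedicated pass of the occlusion shader (pass index 1 is an assumption); the component's actual code follows afterwards.
// Minimal sketch of the post-processing hook; the pass index is assumed.
// The pass is expected to blend the rendered virtual image with _BackgroundTexture
// according to the occlusion map computed in step 2.
private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, m_DepthMaterial, /*pass=*/ 1);
}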
The full C# implementation (the DepthEffect component) is shown below:
[RequireComponent(typeof(Camera))]
public class DepthEffect : MonoBehaviour
{
    /// <summary>
    /// The global shader property name for the camera texture.
    /// </summary>
    public const string BackgroundTexturePropertyName = "_BackgroundTexture";

    /// <summary>
    /// The image effect shader to blit every frame with.
    /// </summary>
    public Shader OcclusionShader;

    /// <summary>
    /// The blur kernel size applied to the camera feed. In pixels.
    /// </summary>
    [Space]
    public float BlurSize = 20f;

    /// <summary>
    /// The number of times occlusion map is downsampled before blurring. Useful for
    /// performance optimization. The value of 1 means no downsampling, each next one
    /// downsamples by 2.
    /// </summary>
    public int BlurDownsample = 2;

    /// <summary>
    /// Maximum occlusion transparency. The value of 1.0 means completely invisible when
    /// occluded.
    /// </summary>
    [Range(0, 1)]
    public float OcclusionTransparency = 1.0f;

    /// <summary>
    /// The bias added to the estimated depth. Useful to avoid occlusion of objects anchored
    /// to planes. In meters.
    /// </summary>
    [Space]
    public float OcclusionOffset = 0.08f;

    /// <summary>
    /// Velocity occlusions effect fades in/out when being enabled/disabled.
    /// </summary>
    public float OcclusionFadeVelocity = 4.0f;

    /// <summary>
    /// Instead of a hard z-buffer test, allows the asset to fade into the background
    /// gradually. The parameter is unitless, it is a fraction of the distance between the
    /// camera and the virtual object where blending is applied.
    /// </summary>
    public float TransitionSize = 0.1f;

    private static readonly string k_CurrentDepthTexturePropertyName = "_CurrentDepthTexture";
    private static readonly string k_TopLeftRightPropertyName = "_UvTopLeftRight";
    private static readonly string k_BottomLeftRightPropertyName = "_UvBottomLeftRight";

    private Camera m_Camera;
    private Material m_DepthMaterial;
    private Texture2D m_DepthTexture;
    private float m_CurrentOcclusionTransparency = 1.0f;
    private ARCoreBackgroundRenderer m_BackgroundRenderer;
    private CommandBuffer m_DepthBuffer;
    private CommandBuffer m_BackgroundBuffer;
    private int m_BackgroundTextureID = -1;

    /// <summary>
    /// Unity's Awake() method.
    /// </summary>
    public void Awake()
    {
        m_CurrentOcclusionTransparency = OcclusionTransparency;

        Debug.Assert(OcclusionShader != null, "Occlusion Shader parameter must be set.");
        m_DepthMaterial = new Material(OcclusionShader);
        m_DepthMaterial.SetFloat("_OcclusionTransparency", m_CurrentOcclusionTransparency);
        m_DepthMaterial.SetFloat("_OcclusionOffsetMeters", OcclusionOffset);
        m_DepthMaterial.SetFloat("_TransitionSize", TransitionSize);

        // Default texture, will be updated each frame.
        m_DepthTexture = new Texture2D(2, 2);
        m_DepthTexture.filterMode = FilterMode.Bilinear;
        m_DepthMaterial.SetTexture(k_CurrentDepthTexturePropertyName, m_DepthTexture);

        m_Camera = Camera.main;
        m_Camera.depthTextureMode |= DepthTextureMode.Depth;

        m_DepthBuffer = new CommandBuffer();
        m_DepthBuffer.name