Visual-Based Autonomous Driving Deployment from a Stochastic and Uncertainty-Aware Perspective
Lei Tai, Peng Yun, Yuying Chen, Congcong Liu, Haoyang Ye, Ming Liu
Abstract—End-to-end visual-based imitation learning has been widely applied to autonomous driving. When a trained visual driving policy is deployed, a deterministic command is usually applied directly, without considering the uncertainty of the input data. Such policies may cause severe damage when applied in the real world. In this paper, we follow the recent real-to-sim pipeline by translating test-time images back to the training domain before applying the trained policy. In the translation process, a stochastic generator produces, either randomly or directionally, various images stylized under the training domain. From these translated images, the trained uncertainty-aware imitation learning policy outputs both the predicted action and the data uncertainty, motivated by the aleatoric loss function. With this uncertainty-aware policy, we can simply choose the safest action, i.e., the one with the lowest uncertainty among those predicted from the generated images. Experiments on the CARLA navigation benchmark show that our strategy outperforms previous methods, especially in dynamic environments.
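The two ingredients named in the abstract, an aleatoric (heteroscedastic) loss that makes the policy predict its own data uncertainty, and a selection rule that keeps the action with the lowest uncertainty among the stylized translations, can be sketched as follows. This is a minimal NumPy illustration under common formulations of the aleatoric loss (Kendall and Gal style); the function names and the log-variance parameterization are assumptions, not the authors' implementation:

```python
import numpy as np

def aleatoric_loss(y_true, y_pred, log_var):
    # Heteroscedastic aleatoric loss: the network predicts both the
    # action y_pred and a log-variance term log_var, so samples with
    # high predicted uncertainty are automatically down-weighted.
    return np.mean(0.5 * np.exp(-log_var) * (y_true - y_pred) ** 2
                   + 0.5 * log_var)

def select_safest_action(actions, log_vars):
    # Among the actions predicted for the K stylized translations of
    # the same test image, keep the one with the lowest data
    # uncertainty (lowest predicted log-variance).
    idx = int(np.argmin(log_vars))
    return actions[idx], idx
```

At deployment, the stochastic generator would produce K translated images, the policy would run on each to give `actions` and `log_vars`, and `select_safest_action` would pick the command actually sent to the vehicle.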