Learning Physics-based Motion Style with Nonlinear Inverse Optimization

This paper proposes a new physics-based representation of realistic motion: biomechanical findings are used to model motion style, and nonlinear inverse optimization is applied to solve for the style parameters.

1. Introduction

Physics-based methods work well for high-energy (high-gain) motions, but are hard to apply to motions such as walking and jogging: these are underconstrained, so physics alone does not determine them. Data-driven methods are highly realistic, but they can only edit existing motions, not generate entirely new ones.

2. Related Work

Controllers: Faloutsos et al. 2001 [Composable controllers for physics-based character animation, SIGGRAPH 2001], Hodgins et al. 1995 [Animating human athletics, SIGGRAPH 1995], Hodgins and Pollard 1997 [Adapting simulated behaviors for new characters, SIGGRAPH 1997], Raibert and Hodgins 1991 [Animation of dynamic legged locomotion], Laszlo et al. 2000 [Interactive control for physically based animation, SIGGRAPH 2000], Sun and Metaxas 2001 [Automating gait animation], Torkos and van de Panne 1998 [Footprint-based quadruped motion synthesis], van de Panne et al. 1994 [Virtual wind-up toys for animation], van de Panne and Fiume 1993 [Sensor-actuator networks]. The difficulty is that controllers are hard to build, and each one is tied to a single motion.

Optimization methods:

Example-based: HOT

3. Method

A. Lagrangian dynamics: \sum_{i \in N(j)} \left( \frac{d}{dt}\frac{\partial T_i}{\partial \dot q_j} - \frac{\partial T_i}{\partial q_j} \right) = Q_j

T_i = \frac{1}{2} \mathrm{tr}(\dot W_i M_i \dot W_i^T)

LHS = \sum_{i \in N(j)} \mathrm{tr}\left( \frac{\partial W_i}{\partial q_j} M_i \ddot W_i^T \right)

RHS = Q_{m_j} + Q_{g_j} + Q_{p_j} + Q_{c_j} + Q_{s_j}

These terms come, respectively, from muscles, gravity, passive elasticity and damping, ground contact forces, and shoe elasticity.

The paper then solves for the muscle forces: every other term can be computed directly, so Q_{m_j} = LHS − (all the other terms).
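The residual computation above can be illustrated on a toy 1-DOF system. This is a hypothetical sketch, not the paper's full rigid-body model: a single pendulum with inertia I, a gravity torque, and a passive spring-damper Q_p = -k_s(q - q̂) - k_d q̇; the muscle torque is recovered as the residual of the Lagrangian equation. All parameter names and values here are illustrative assumptions.

```python
import numpy as np

def muscle_torque(q, dt, I=1.0, m=1.0, l=1.0, g=9.81,
                  k_s=0.5, k_d=0.1, q_hat=0.0):
    """Recover muscle torque along a sampled trajectory q(t) as the
    residual Q_m = I*qddot - Q_g - Q_p (toy 1-DOF pendulum)."""
    qdot = np.gradient(q, dt)               # finite-difference velocity
    qddot = np.gradient(qdot, dt)           # finite-difference acceleration
    Q_g = -m * g * l * np.sin(q)            # restoring gravity torque
    Q_p = -k_s * (q - q_hat) - k_d * qdot   # passive spring-damper
    return I * qddot - Q_g - Q_p

t = np.linspace(0.0, 2.0, 2001)
q = 0.2 * np.sin(2.0 * np.pi * t)           # a prescribed test motion
Q_m = muscle_torque(q, t[1] - t[0])
```

With a known analytic trajectory, the numerically recovered Q_m matches the closed-form torque up to finite-difference error, which is the same inverse-dynamics idea the paper applies per joint of the full skeleton.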

B. Optimization: the objective is to minimize muscle usage

E^*(X; \theta) = \sum_j \sum_t \alpha_j \left( Q_{m_j}(t, X, \theta) \right)^2, where \theta = (\alpha, k_s, k_d, \hat q, k_{\mathrm{shoe}}, h)

plus a soft constraint: + w_r \sum_k \sum_t \left( Q_{0_k}(t, X, \theta) \right)^2

and the user's constraints C(X) = 0.
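The objective above is just a weighted sum of squared generalized forces. A minimal sketch of its evaluation, assuming Q_m and Q_0 have already been computed as (joints × frames) arrays; the function name and array shapes are assumptions for illustration:

```python
import numpy as np

def objective(Q_m, Q_0, alpha, w_r=1.0):
    """E*(X; theta): per-joint-weighted muscle usage plus the
    soft penalty on residual (unactuated) forces Q_0."""
    muscle_term = np.sum(alpha[:, None] * Q_m ** 2)  # sum over joints j, frames t
    residual_term = w_r * np.sum(Q_0 ** 2)           # soft constraint on residuals
    return muscle_term + residual_term
```

The hard user constraints C(X) = 0 are not part of this scalar; in a real solver they would be handled by the constrained optimizer (e.g. as equality constraints on specified frames).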

C. Inverse Optimization

Nonlinear Inverse Optimization (NIO) is used: assuming the captured motion data is an optimal solution of the energy, solve for \theta.

G(\theta) = E(X_T; \theta) - \min_{X \in C} E(X, \theta)
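The idea behind G(θ) can be shown on a toy problem (not the paper's solver; the quadratic energy and grid search are assumptions for illustration). Take E(x; θ) = θ(x−a)² + (1−θ)(x−b)², whose minimizer is x*(θ) = θa + (1−θ)b. Given observed "optimal" data x_T, minimizing G(θ) = E(x_T; θ) − min_x E(x; θ) recovers the θ under which x_T is optimal:

```python
import numpy as np

a, b, x_T = 0.0, 1.0, 0.3   # toy energy landmarks and observed optimum

def E(x, theta):
    return theta * (x - a) ** 2 + (1.0 - theta) * (x - b) ** 2

def G(theta):
    x_star = theta * a + (1.0 - theta) * b   # analytic argmin of E(.; theta)
    return E(x_T, theta) - E(x_star, theta)  # >= 0, zero iff x_T is optimal

thetas = np.linspace(0.01, 0.99, 99)
theta_hat = thetas[np.argmin([G(th) for th in thetas])]
x_recovered = theta_hat * a + (1.0 - theta_hat) * b
```

Here G is always nonnegative and vanishes exactly when the observed data minimizes the energy, so minimizing G over θ (grid search in this toy; a nonlinear solver in the paper) recovers parameters under which the data is optimal.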

D. With the learned parameters, the style can be edited to produce new motions.

posted @ 2011-06-08 16:28 justin_s