NTU RGB+D Dataset: Visualizing the Skeleton Data

NTU RGB+D dataset link, GitHub link

Camera placement diagram:

 

Frame-count statistics for the NTU60 videos:

  • There are 56,880 videos in total; the longest is 300 frames, the shortest 32 frames, with an average length of 82.9 frames;

  • About 37,000 videos are longer than 50 frames, about 30,000 longer than 60 frames, about 23,000 longer than 70 frames, and about 17,000 longer than 80 frames;

  • 8,800 videos are longer than 100 frames, about 4,600 longer than 120 frames, about 1,738 longer than 150 frames, about 550 longer than 180 frames, and 284 longer than 200 frames. (A short sketch for reproducing these counts from the raw .skeleton files follows this list.)
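
These counts can be reproduced directly from the raw files, because the first line of every .skeleton file stores its frame count (see read_skeleton below). A minimal sketch, assuming the files are unpacked into a single directory (the path is a placeholder):

import os
import numpy as np

skeleton_dir = '/path/to/nturgb+d_skeletons'   # placeholder path

lengths = []
for name in sorted(os.listdir(skeleton_dir)):
    if name.endswith('.skeleton'):
        with open(os.path.join(skeleton_dir, name), 'r') as f:
            lengths.append(int(f.readline()))   # first line = number of frames

lengths = np.array(lengths)
print('videos:', len(lengths))
print('max / min / mean frames:', lengths.max(), lengths.min(), lengths.mean())
for thr in (50, 60, 70, 80, 100, 120, 150, 180, 200):
    print('videos longer than {} frames: {}'.format(thr, int((lengths > thr).sum())))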

Note that these skeleton coordinates are estimated from the recorded frames by a pose-tracking algorithm. (In the authors' words: "3D human joint positions are extracted from a single depth image by a real-time human skeleton tracking framework.")

 

The visualization program is given below; the read_skeleton and read_xyz functions are taken from yysijie/st-gcn/tools/ntu_gendata.py.

import os
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D


def read_skeleton(file):
    # Parse one .skeleton file. The first line stores the number of frames;
    # each frame then lists the number of bodies, and each body has a header
    # line, a joint count, and one line of values per joint.
    with open(file, 'r') as f:
        skeleton_sequence = {}
        skeleton_sequence['numFrame'] = int(f.readline())
        skeleton_sequence['frameInfo'] = []
        for t in range(skeleton_sequence['numFrame']):
            frame_info = {}
            frame_info['numBody'] = int(f.readline())
            frame_info['bodyInfo'] = []
            for m in range(frame_info['numBody']):
                body_info = {}
                body_info_key = [
                    'bodyID', 'clipedEdges', 'handLeftConfidence',
                    'handLeftState', 'handRightConfidence', 'handRightState',
                    'isResticted', 'leanX', 'leanY', 'trackingState'
                ]
                body_info = {
                    k: float(v)
                    for k, v in zip(body_info_key, f.readline().split())
                }
                body_info['numJoint'] = int(f.readline())
                body_info['jointInfo'] = []
                for v in range(body_info['numJoint']):
                    joint_info_key = [
                        'x', 'y', 'z', 'depthX', 'depthY', 'colorX', 'colorY',
                        'orientationW', 'orientationX', 'orientationY',
                        'orientationZ', 'trackingState'
                    ]
                    joint_info = {
                        k: float(v)
                        for k, v in zip(joint_info_key, f.readline().split())
                    }
                    body_info['jointInfo'].append(joint_info)
                frame_info['bodyInfo'].append(body_info)
            skeleton_sequence['frameInfo'].append(frame_info)
    return skeleton_sequence


def read_xyz(file, max_body=2, num_joint=25):
    # Collect the x, y, z coordinates into an array of shape
    # (3, num_frames, num_joint, max_body).
    seq_info = read_skeleton(file)
    data = np.zeros((3, seq_info['numFrame'], num_joint, max_body))
    for n, f in enumerate(seq_info['frameInfo']):
        for m, b in enumerate(f['bodyInfo']):
            for j, v in enumerate(b['jointInfo']):
                if m < max_body and j < num_joint:
                    data[:, n, j, m] = [v['x'], v['y'], v['z']]
                else:
                    pass
    return data


data_path = '/Users/wangpeng/Desktop/nturgb+d_skeletons/S001C001P001R001A055.skeleton'
point = read_xyz(data_path)   # shape (3, num_frames, joints, 2)
print('read data done!')

xmax = np.max(point[0, :, :, :]) + 0.5
xmin = np.min(point[0, :, :, :]) - 0.5
ymax = np.max(point[1, :, :, :]) + 0.3
ymin = np.min(point[1, :, :, :]) - 0.3
zmax = np.max(point[2, :, :, :])
zmin = np.min(point[2, :, :, :])

row = point.shape[1]
print(point.shape)


# Chains of adjacent joints (0-indexed), used to draw the bones between joints
arms = [23, 11, 10, 9, 8, 20, 4, 5, 6, 7, 21]
rightHand = [11, 24]
leftHand = [7, 22]
legs = [19, 18, 17, 16, 0, 12, 13, 14, 15]
body = [3, 2, 20, 1, 0]

# 2D display ---------------------------------------------------------------------
n = 0     # first frame to show
m = row   # show up to (but not including) frame m, with n < m <= row
plt.figure()
plt.ion()
for i in range(n, m):
    plt.cla()
    plt.scatter(point[0, i, :, :], point[1, i, :, :], c='red', s=40.0)
    plt.plot(point[0, i, arms, 0], point[1, i, arms, 0], c='green', lw=2.0)
    plt.plot(point[0, i, rightHand, 0], point[1, i, rightHand, 0], c='green', lw=2.0)
    plt.plot(point[0, i, leftHand, 0], point[1, i, leftHand, 0], c='green', lw=2.0)
    plt.plot(point[0, i, legs, 0], point[1, i, legs, 0], c='green', lw=2.0)
    plt.plot(point[0, i, body, 0], point[1, i, body, 0], c='green', lw=2.0)
    
    plt.plot(point[0, i, arms, 1], point[1, i, arms, 1], c='green', lw=2.0)
    plt.plot(point[0, i, rightHand, 1], point[1, i, rightHand, 1], c='green', lw=2.0)
    plt.plot(point[0, i, leftHand, 1], point[1, i, leftHand, 1], c='green', lw=2.0)
    plt.plot(point[0, i, legs, 1], point[1, i, legs, 1], c='green', lw=2.0)
    plt.plot(point[0, i, body, 1], point[1, i, body, 1], c='green', lw=2.0)
    
    plt.text(xmax-0.5, ymax-0.1, 'frame: {}/{}'.format(i, row-1))
    # plt.text(xmax-0.8, ymax-0.4, 'label: ' + str(label[i]))
    plt.xlim(xmin, xmax)
    plt.ylim(ymin, ymax)
    plt.pause(0.001)

plt.ioff()
plt.show()
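
The Axes3D import at the top is only needed for a 3D view, which the 2D loop above does not use. A minimal 3D sketch of the same animation (not part of the original st-gcn code; it reuses point, the joint chains, and the axis limits computed above):

# 3D display (sketch; reuses the variables defined above) -------------------------
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
plt.ion()
for i in range(n, m):
    ax.cla()
    ax.scatter(point[0, i, :, 0], point[1, i, :, 0], point[2, i, :, 0], c='red', s=40.0)
    for chain in (arms, rightHand, leftHand, legs, body):
        ax.plot(point[0, i, chain, 0], point[1, i, chain, 0], point[2, i, chain, 0], c='green', lw=2.0)
    ax.set_xlim(xmin, xmax)
    ax.set_ylim(ymin, ymax)
    ax.set_zlim(zmin, zmax)
    ax.text(xmax, ymax, zmax, 'frame: {}/{}'.format(i, row - 1))
    plt.pause(0.001)
plt.ioff()
plt.show()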

Example result:

 

Appendix: the joint layout diagram of the NTU skeleton dataset:
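
The diagram itself is not reproduced here. For reference, the 25 joints as numbered in the NTU RGB+D paper are roughly as follows (1-indexed, so subtract 1 for the 0-indexed chains in the code above; e.g. body = [3, 2, 20, 1, 0] is head, neck, spine, middle of spine, base of spine). The name ntu_joint_names is just illustrative:

# NTU RGB+D joint names, 1-indexed as in the paper (subtract 1 for the code above).
ntu_joint_names = [
    'base of spine', 'middle of spine', 'neck', 'head',
    'left shoulder', 'left elbow', 'left wrist', 'left hand',
    'right shoulder', 'right elbow', 'right wrist', 'right hand',
    'left hip', 'left knee', 'left ankle', 'left foot',
    'right hip', 'right knee', 'right ankle', 'right foot',
    'spine', 'tip of left hand', 'left thumb',
    'tip of right hand', 'right thumb'
]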

The NTU 60 action names:

action_names = [
    'drink water', 'eat meal', 'brushing teeth', 'brushing hair', 'drop',                 # A001-A005
    'pickup', 'throw', 'sitting down', 'standing up', 'clapping',                         # A006-A010
    'reading', 'writing', 'tear up paper', 'wear jacket', 'take off jacket',              # A011-A015
    'wear a shoe', 'take off a shoe', 'wear on glasses', 'take off glasses',              # A016-A019
    'put on a hat', 'take off a hat', 'cheer up', 'hand waving', 'kicking something',     # A020-A024
    'reach into pocket', 'hopping', 'jump up', 'make a phone call', 'playing with phone', # A025-A029
    'typing on a keyboard', 'pointing to something with finger', 'taking a selfie',       # A030-A032
    'check time from watch', 'rub two hands together', 'nod head', 'shake head',          # A033-A036
    'wipe face', 'salute', 'put the palms together', 'cross hands in front',              # A037-A040
    'sneeze', 'staggering', 'falling', 'touch head (headache)',                           # A041-A044
    'touch chest (heart pain)', 'touch back (backache)', 'touch neck (neckache)',         # A045-A047
    'nausea', 'use a fan', 'punching other person', 'kicking other person',               # A048-A051
    'pushing other person', 'pat on back of other person',                                # A052-A053
    'point finger at the other person', 'hugging other person',                           # A054-A055
    'giving something to other person', 'touch some person pocket', 'handshaking',        # A056-A058
    'walking towards each other', 'walking apart from each other'                         # A059-A060
]
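
The action label of each clip is encoded in its file name: in S001C001P001R001A055.skeleton, the fields are setup (S), camera (C), performer (P), replication (R), and action (A), so A055 is the 55th action above, 'hugging other person'. A minimal sketch of looking this up (action_from_filename is just an illustrative helper):

import re

def action_from_filename(filename):
    # 'A055' -> 55 -> action_names[54]
    action_id = int(re.search(r'A(\d{3})', filename).group(1))
    return action_id, action_names[action_id - 1]

print(action_from_filename('S001C001P001R001A055.skeleton'))
# (55, 'hugging other person')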

 
