Commit 413d08b — Initial commit
guxiao0822 committed Sep 17, 2021
Showing 13 changed files with 249 additions and 0 deletions.
71 changes: 71 additions & 0 deletions README.md
# ICL-Gait Dataset 1.0
Release Date: Sept 2021

## Primary Contact
**Xiao Gu**, [email protected], Imperial College London

**Yao Guo**, [email protected], Shanghai Jiao Tong University

## Citing
Please cite the following paper when using this dataset:

Gu, Xiao, et al. "Occlusion-Invariant Rotation-Equivariant Semi-Supervised Depth Based Cross-View Gait Pose Estimation." arXiv preprint arXiv:2109.01397 (2021).

## Supplementary Code
* `vis_demo` provides a script to visualize the data from different modalities
* `syn` provides a script to generate synthetic data based on SMPL

## Dataset Details
ICL-Gait is a real-world gait dataset collected from multiple viewpoints.
Please see our [paper](https://arxiv.org/pdf/2109.01397) and the [website](https://xiaogu.site/ICL_gait) for more details.

## Data-Split Experiment Settings
Please follow the settings in our [paper](https://arxiv.org/pdf/2109.01397) to benchmark your algorithms.

### Cross-Subject (CS) Validation (2 loops)
For cross-subject validation, the data were split into two groups: {S01, S02, S04, S07} and {S03, S05, S06, S08}.

In each loop, use one group as the training set and the other as the testing set.

### Cross-View (CV) Validation (5 loops)
For cross-view validation, the data were split based on the five views.

In each loop, use the data from one view as the training set and the data from the other four views as the testing set.

### Cross-Subject Cross-View (CS-CV) Validation (10 loops)
For cross-subject & cross-view validation, the data were split into ten subgroups, combining the CS and CV splits.

For example, one group is {S01-V01, S02-V01, S04-V01, S07-V01}. In each loop, use one group as the training set, and report the results on the remaining nine groups.
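As a minimal sketch (not an official protocol implementation) of how the ten CS-CV training groups could be enumerated — subject groups as above, with view labels V01–V05 assumed from the five-view setup:

```python
from itertools import product

subject_groups = [["S01", "S02", "S04", "S07"],
                  ["S03", "S05", "S06", "S08"]]
views = ["V01", "V02", "V03", "V04", "V05"]

# One loop per (subject group, view) pair: 2 x 5 = 10 loops
for group, view in product(subject_groups, views):
    train_group = ["%s-%s" % (s, view) for s in group]
    # train on train_group; test on the remaining nine groups
    print(train_group)
```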

**You can further split some proportion from the training set as a validation set, but any use of the testing data during training is not allowed.**


## Folder Details
### Dataset folder format
`S##_V##_C##` refers to the data of one trial, indexed by subject, viewpoint, and walking condition:
* `S##`: subject ID
* `V##`: viewpoint
* `C##`: walking condition

Each folder contains 300 consecutive samples from one trial (the remaining samples, which would lead to a much larger data volume, will be released in the future).

Missing trials: S1-C1-V1, S1-C2-V2, S1-C4-V1, S3-C3-V3, S5-C4-V4, S8-C2-V3, S8-C2-V4, S8-C4-V3, S8-C5-V3.
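For illustration, a hypothetical helper (assuming two-digit, zero-padded indices as in the naming scheme above) for parsing these folder names:

```python
import re

def parse_trial(name):
    """Split a trial folder name such as 'S01_V03_C02' into its indices."""
    m = re.fullmatch(r"S(\d{2})_V(\d{2})_C(\d{2})", name)
    if m is None:
        raise ValueError("not a trial folder: %s" % name)
    subject, view, condition = m.groups()
    return subject, view, condition

print(parse_trial("S01_V03_C02"))  # ('01', '03', '02')
```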

* **depth**: contains the depth images recorded by a RealSense D435 (see the back-projection sketch after this list)

```
scale = 0.0010000000474974513;
fx = 928.108; fy = 927.443;
cx = 647.394; cy = 361.699
```
* **mask**: contains the segmentation masks predicted from the RGB images (access suspended) by CDCL (see the colour-lookup sketch after this list)
```
ROI (lower-limb) RGB Value [255,127,127; 0,127,255; 0,0,255; 255,255,127; 127,255,127; 0,255,0]
```
* **point cloud**: contains the point clouds converted from the depth data, the corresponding 3D keypoints, and the root orientation (see the loading sketch after this list)

* **pose_2d**: contains the 2D keypoints predicted by OpenPose

* **kinematics**: contains the kinematics (randomly picked, not synchronized with the modalities above), which can be used for synthetic data generation
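The following are minimal usage sketches, not part of the released tooling; file paths such as `depth/000001.png` are placeholders for the actual folder layout.

First, back-projecting a depth image into a point cloud with the D435 intrinsics listed above, assuming 16-bit depth PNGs readable by `imageio`:

```python
import numpy as np
import imageio.v2 as imageio

scale = 0.0010000000474974513   # raw depth units -> metres
fx, fy = 928.108, 927.443       # focal lengths (pixels)
cx, cy = 647.394, 361.699       # principal point (pixels)

depth = imageio.imread('depth/000001.png').astype(np.float64)  # path assumed
v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]  # row/column pixel grids
z = depth * scale
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
points = points[points[:, 2] > 0]  # drop pixels with no depth reading
```

Next, selecting the lower-limb region of a CDCL mask by the part colours listed above, assuming the masks are stored as RGB images:

```python
import numpy as np
import imageio.v2 as imageio

# Lower-limb part colours (RGB) as listed above
LOWER_LIMB_RGB = np.array([
    [255, 127, 127], [0, 127, 255], [0, 0, 255],
    [255, 255, 127], [127, 255, 127], [0, 255, 0],
])

mask = imageio.imread('mask/000001.png')[..., :3]  # path assumed
roi = np.zeros(mask.shape[:2], dtype=bool)
for colour in LOWER_LIMB_RGB:
    roi |= np.all(mask == colour, axis=-1)  # pixels of this body part
```

Finally, loading a point-cloud `.mat` file; the variable names (`pc_2048`, `keypoint`, `body_orientation`) follow `vis_demo/visualize.m`:

```python
import scipy.io as sio

d = sio.loadmat('pointcloud.mat')  # path assumed
pc, keypoint, body_orientation = d['pc_2048'], d['keypoint'], d['body_orientation']
```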




19 changes: 19 additions & 0 deletions syn/README.md
# ICL-Gait to SMPL

We provide the script [utils.py](smpl_webuser/utils.py) for transferring the gait parameters from our dataset to SMPL.

## Usage
* Please download the [SMPL](https://smpl.is.tue.mpg.de) repository

* Add `models/pose.mat` to the `models` folder of the SMPL repository

* Add `smpl_webuser/hello_world/demo_gait.py` to `smpl_webuser/hello_world/`

* (optional) Replace `smpl_webuser/serialization.py` with a modified version compatible with Python 3

* Run `demo_gait.py`; a synthetic model will be generated automatically





Binary file added syn/models/pose.mat
32 changes: 32 additions & 0 deletions syn/smpl_webuser/hello_world/demo_gait.py
'''
Modified from hello_smpl.py with a mocap2smpl function added
'''

from smpl_webuser.serialization import load_model
import numpy as np
from smpl_webuser.utils import mocap2smpl

## Load SMPL model (here we load the male model)
## Make sure path is correct
m = load_model( '../../models/basicmodel_m_lbs_10_207_0_v1.1.0.pkl' )

## Assign random shape parameters
# m.pose[:] = np.random.rand(m.pose.size) * .2
m.betas[:] = np.random.rand(m.betas.size) * .03

## Assign poses from the ICL-Gait kinematics
import scipy.io as sio
data = sio.loadmat('../../models/pose.mat')['pose'][0]
mocap2smpl(data, m)

## Write to an .obj file
outmesh_path = './gait_smpl.obj'
with open(outmesh_path, 'w') as fp:
    for v in m.r:
        fp.write('v %f %f %f\n' % (v[0], v[1], v[2]))

    for f in m.f+1: # Faces are 1-based, not 0-based in obj files
        fp.write('f %d %d %d\n' % (f[0], f[1], f[2]))

## Print message
print('..Output mesh saved to: ' + outmesh_path)
Empty file.
59 changes: 59 additions & 0 deletions syn/smpl_webuser/utils.py
import transforms3d
import numpy as np

def mocap2smpl(data, m, global_rotation=[0,0,0.00001]):
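    """Transfer one frame of ICL-Gait kinematics onto an SMPL model.

    data: flat kinematics vector; pelvis, femur, tibia, and foot rotations
          are read as axis-angle vectors (magnitudes in degrees) at the
          fixed offsets used below.
    m: loaded SMPL model; the relevant entries of m.pose are written in
       place, with axes re-ordered to SMPL's convention.
    """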
    ### global transform
    p = data[89:92] # Pelvis
    p_eulr = transforms3d.euler.axangle2euler(p/np.linalg.norm(p), np.linalg.norm(p)/180*np.pi, 'sxyz')
    p_transform_woz = transforms3d.euler.euler2mat(p_eulr[0], p_eulr[1], 0)
    p_transform = transforms3d.axangles.axangle2mat(p/np.linalg.norm(p), np.linalg.norm(p)/180*np.pi)
    global_transform = transforms3d.axangles.axangle2mat(global_rotation, np.linalg.norm(global_rotation))
    body_rotation = transforms3d.axangles.mat2axangle(np.dot(global_transform, p_transform_woz))
    m.pose[0:3] = body_rotation[0][[1, 2, 0]]*body_rotation[1]

    ### Left_UpperLeg transform
    lfe = data[17:20] # LFemur
    lfe_transform = transforms3d.axangles.axangle2mat(lfe/np.linalg.norm(lfe), np.linalg.norm(lfe)/180*np.pi)
    lhip_transform = np.dot(np.linalg.inv(p_transform), lfe_transform) # relative rotation
    lhip = transforms3d.axangles.mat2axangle(lhip_transform)
    m.pose[3:6] = lhip[0][[1, 2, 0]]*lhip[1]

    ### Right_UpperLeg transform
    rfe = data[113:116] # RFemur
    rfe_transform = transforms3d.axangles.axangle2mat(rfe/np.linalg.norm(rfe), np.linalg.norm(rfe)/180*np.pi)
    rhip_transform = np.dot(np.linalg.inv(p_transform), rfe_transform)
    rhip = transforms3d.axangles.mat2axangle(rhip_transform)
    m.pose[6:9] = rhip[0][[1, 2, 0]]*rhip[1]

    ### Left_LowerLeg transform
    lti = data[71:74] # LTibia
    lti_transform = transforms3d.axangles.axangle2mat(lti/np.linalg.norm(lti), np.linalg.norm(lti)/180*np.pi)
    lk_transform = np.dot(np.linalg.inv(lfe_transform), lti_transform)
    lk = transforms3d.axangles.mat2axangle(lk_transform)
    m.pose[12:15] = lk[0][[1, 2, 0]]*lk[1]

    ### Right_LowerLeg transform
    rti = data[167:170] # RTibia
    rti_transform = transforms3d.axangles.axangle2mat(rti/np.linalg.norm(rti), np.linalg.norm(rti)/180*np.pi)
    rk_transform = np.dot(np.linalg.inv(rfe_transform), rti_transform)
    rk = transforms3d.axangles.mat2axangle(rk_transform)
    m.pose[15:18] = rk[0][[1, 2, 0]]*rk[1]

    ### Left_Foot (90-degree offset)
    lfo = data[26:29]
    lfo_transform = transforms3d.axangles.axangle2mat(lfo/np.linalg.norm(lfo), np.linalg.norm(lfo)/180*np.pi)
    lfo_transform = np.dot(lfo_transform, transforms3d.axangles.axangle2mat([0,1,0], np.pi/2))
    la_transform = np.dot(np.linalg.inv(lti_transform), lfo_transform)
    la = transforms3d.axangles.mat2axangle(la_transform)
    m.pose[21:24] = la[0][[1, 2, 0]]*la[1]

    ### Right_Foot (90-degree offset)
    rfo = data[122:125]
    rfo_transform = transforms3d.axangles.axangle2mat(rfo/np.linalg.norm(rfo), np.linalg.norm(rfo)/180*np.pi)
    rfo_transform = np.dot(rfo_transform, transforms3d.axangles.axangle2mat([0,1,0], np.pi/2))
    ra_transform = np.dot(np.linalg.inv(rti_transform), rfo_transform)
    ra = transforms3d.axangles.mat2axangle(ra_transform)
    m.pose[24:27] = ra[0][[1, 2, 0]]*ra[1]

###

5 changes: 5 additions & 0 deletions vis_demo/README.md
# Code for visualizing data from different modalities

Run `visualize.m` to visualize the data.

![alt text](demo_misc.png)
Binary file added vis_demo/demo_misc.png
Binary file added vis_demo/depth.png
1 change: 1 addition & 0 deletions vis_demo/keypoint.json
{"version":1.3,"people":[{"person_id":[-1],"pose_keypoints_2d":[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,758.695,170.933,0.461265,786.089,174.877,0.409076,758.723,351.216,0.728424,774.32,515.742,0.714466,733.233,165.084,0.390594,652.908,333.579,0.693749,617.664,507.932,0.813623,0,0,0,0,0,0,0,0,0,0,0,0,551.009,511.862,0.657727,564.709,523.518,0.702532,627.433,525.489,0.676285,723.417,537.273,0.355216,729.271,527.554,0.331402,784.131,533.386,0.575139],"face_keypoints_2d":[],"hand_left_keypoints_2d":[],"hand_right_keypoints_2d":[],"pose_keypoints_3d":[],"face_keypoints_3d":[],"hand_left_keypoints_3d":[],"hand_right_keypoints_3d":[]}]}
Binary file added vis_demo/mask.png
Binary file added vis_demo/pointcloud.mat
62 changes: 62 additions & 0 deletions vis_demo/visualize.m
% this is the script used for visualizing samples from ICL-Gait Dataset
% https://xiaogu.site/ICL_gait/
% Contact: [email protected]

clear all;
close all;
clc;

figure();
%% visualize depth
depth = imread('depth.png');
subplot(2,2,1);
imshow(depth);
colormap('parula');
caxis([0,3000]);
title('depth image');

%% visualize mask
mask = imread('mask.png');
subplot(2,2,2);
imshow(mask);
title('segmentation mask');

%% visualize skeleton from openpose
% load openpose result
fid = fopen('keypoint.json');
raw = fread(fid,inf);
str = char(raw');
fclose(fid);
val = jsondecode(str);
pose2d = reshape(val.people.pose_keypoints_2d, 3, 25)';

% visualize 2d result
j_index = [9,10,11,12,13,14,15,20,21,22,23,24,25];
j_tree = [9,10;10,11;11,12;12,23;12,25;23,24;
9,13;13,14;14,15;15,20;15,22;20,21];
subplot(2,2,3);
plot(pose2d(j_index,1), pose2d(j_index,2),'.','MarkerSize',25);
hold on;

for j = 1:size(j_tree,1)
    plot(pose2d(j_tree(j,:),1), pose2d(j_tree(j,:),2), '-', 'LineWidth', 3);
end

imshow([]);
xlim([400, 1000]);
ylim([50, 600]);
title('2D Keypoint');

%% visualize point cloud
load('pointcloud.mat');
subplot(2,2,4);
pcshow(pc_2048,'VerticalAxis','Y','VerticalAxisDir','down')
hold on;
pcshow(keypoint,'VerticalAxis','Y','VerticalAxisDir','down', 'MarkerSize', 300)
title('\color{black}Point Cloud');
set(gca, 'color', 'w');
set(gcf, 'color', 'w');
axis off;

% visualize the body_orientation (of pelvis)
plotTransforms((keypoint(1,:)+keypoint(5,:))/2, body_orientation, 'FrameSize',0.2);
