# PyTorch Robot Kinematics

- Parallel and differentiable forward kinematics (FK), Jacobian calculation, and damped least squares inverse kinematics (IK)
- Load robot description from URDF, SDF, and MJCF formats
- SDF queries batched across configurations and points via [pytorch-volumetric](https://github.com/UM-ARM-Lab/pytorch_volumetric)

The Jacobian can be used to do inverse kinematics; see this [IK survey](https://www.math.ucsd.edu/~sbuss/ResearchWeb/ikmethods/iksurvey.pdf) for an overview of methods. Note that IK may be better performed through other means, but doing it through the Jacobian gives an end-to-end differentiable method.

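As a self-contained toy sketch (independent of this library), Jacobian-based IK iterates steps of the form `dq = J⁺ (x* − x)`. The 2-link planar arm, starting configuration, and target below are illustrative assumptions, with the Jacobian obtained via autograd:

```python
import torch

# hypothetical 2-link planar arm with unit link lengths (illustrative only)
def fk(q):
    # end-effector position for joint angles q = (q1, q2)
    x = torch.cos(q[0]) + torch.cos(q[0] + q[1])
    y = torch.sin(q[0]) + torch.sin(q[0] + q[1])
    return torch.stack([x, y])

q = torch.tensor([0.3, 0.4])       # initial joint angles (assumed)
target = torch.tensor([1.0, 1.0])  # reachable goal position (assumed)

for _ in range(50):
    J = torch.autograd.functional.jacobian(fk, q)    # 2x2 Jacobian via autograd
    q = q + torch.linalg.pinv(J) @ (target - fk(q))  # dq = J^+ (x* - x)
```

Because every operation is a differentiable torch op, gradients can flow through the whole iteration, which is the end-to-end differentiability mentioned above.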
## Inverse Kinematics (IK)
Inverse kinematics is available via damped least squares (iterative steps with the Jacobian pseudo-inverse, damped to avoid oscillation near singularities).
Compared to other IK libraries, typical advantages of this implementation are:
- no ROS dependency (many IK libraries need the robot description on the ROS parameter server)
- batching in both goal specification and retries from different starting configurations
- support for goal orientation in addition to goal position

See `tests/test_inverse_kinematics.py` for usage, but generally what you need is below:
```python
import os
import torch
import pytorch_kinematics as pk

device = "cuda" if torch.cuda.is_available() else "cpu"

# search_path and urdf point at your robot description
full_urdf = os.path.join(search_path, urdf)
chain = pk.build_serial_chain_from_urdf(open(full_urdf).read(), "lbr_iiwa_link_7")

# goals are specified as Transform3d poses in the **robot frame**
# so if you have the goals specified in the world frame, you also need the robot frame in the world frame
pos = torch.tensor([0.0, 0.0, 0.0], device=device)
rot = torch.tensor([0.0, 0.0, 0.0], device=device)
rob_tf = pk.Transform3d(pos=pos, rot=rot, device=device)

# specify goals as Transform3d poses in world frame
goal_in_world_frame_tf = ...
# convert to robot frame (skip if you have it specified in robot frame already, or if world = robot frame)
goal_in_rob_frame_tf = rob_tf.inverse().compose(goal_in_world_frame_tf)

# get robot joint limits
lim = torch.tensor(chain.get_joint_limits(), device=device)

# create the IK object
# see the constructor for more options and their explanations, such as convergence tolerances
ik = pk.PseudoInverseIK(chain, max_iterations=30, num_retries=10,
                        joint_limits=lim.T,
                        early_stopping_any_converged=True,
                        early_stopping_no_improvement="all",
                        debug=False,
                        lr=0.2)
# solve IK
sol = ik.solve(goal_in_rob_frame_tf)
# num goals x num retries x DOF tensor of joint angles; if not converged, best solution found so far
print(sol.solutions)
# num goals x num retries can check for the convergence of each run
print(sol.converged)
# num goals x num retries can look at errors directly
print(sol.err_pos)
print(sol.err_rot)
```
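The damped least squares update underlying this kind of solver can be sketched in a few lines of plain PyTorch. This is a simplified illustration of the general technique, not this library's actual implementation, and the damping value is an arbitrary assumption:

```python
import torch

def dls_step(J, err, damping=0.1):
    # dq = J^T (J J^T + lambda^2 I)^{-1} err
    # the lambda^2 I term bounds the step near singular configurations,
    # where a plain pseudo-inverse step would blow up
    m = J.shape[0]
    JJt = J @ J.T + (damping ** 2) * torch.eye(m, dtype=J.dtype)
    return J.T @ torch.linalg.solve(JJt, err)

# near-singular Jacobian: the damped step stays small and finite,
# while the undamped pseudo-inverse step would be on the order of 1e6
J = torch.tensor([[1.0, 0.0], [0.0, 1e-6]])
dq = dls_step(J, torch.tensor([0.0, 1.0]))
```

The damping trades off convergence speed against stability: larger values take smaller, safer steps near singularities at the cost of more iterations.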

## SDF Queries
See [pytorch-volumetric](https://github.com/UM-ARM-Lab/pytorch_volumetric) for the latest details; some instructions are reproduced here: