윤이진 edited this page Oct 7, 2021 · 8 revisions

This page briefly describes each node provided by the node-red-contrib-motion-pose module. Follow the link attached to each header for each node's detailed attributes, input values, output values, and code.


1) Body

A simple node that recognizes and visualizes poses using a webcam.

This node provides body tracking in a browser environment to recognize body pose.

By default, the node uses the webcam mounted on the computer. When a person is recognized in the webcam feed, 33 landmark coordinates are predicted and visualized, as shown in the picture below.

[Image: webcam feed with 33 pose landmarks visualized]

Specify a pose to save and press the 'capture' button to record the pose model at that moment. When the pose is captured, the captured image and coordinate information are displayed. Confirm or discard the registration with the 'register' and 'cancel' buttons.
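As a rough illustration, a detected pose might travel through the flow as a message like the one below. The field names and shape here are hypothetical, not the module's actual output schema; check each node's linked page for the real format.

```javascript
// Hypothetical shape of a message carrying one detected pose.
// All field names below are illustrative assumptions.
const msg = {
  payload: {
    pose: "my-pose", // name given when capturing
    keypoints: Array.from({ length: 33 }, (_, i) => ({
      index: i,               // landmark index (0..32)
      x: 0.5, y: 0.5, z: 0.0, // normalized coordinates
      score: 0.9              // detection confidence
    }))
  }
};

console.log(msg.payload.keypoints.length); // 33
```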


A simple node that recognizes and visualizes poses using an external camera device that supports Samsung SmartThings.

This node provides exactly the same function as the pose-detection-webcam node, except that it uses an external camera device.


This node accepts a single set of key points, measures its similarity to the saved poses, and determines whether the pose can be registered.

Please check the following link for the method used to calculate the similarity between poses (described in our project wiki).
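As a minimal sketch of the idea, one common way to compare two poses is cosine similarity between flattened landmark vectors. This is an assumption for illustration only; the module's actual formula is the one described in the project wiki and may differ.

```javascript
// Cosine similarity between two numeric vectors of equal length.
// A value near 1 means the poses point in nearly the same direction.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na  += a[i] * a[i];
    nb  += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Flatten {x, y} keypoints into a single vector before comparing.
const flatten = (kps) => kps.flatMap((p) => [p.x, p.y]);

const saved   = [{ x: 0.1, y: 0.2 }, { x: 0.3, y: 0.4 }];
const current = [{ x: 0.1, y: 0.2 }, { x: 0.3, y: 0.4 }];
console.log(cosineSimilarity(flatten(saved), flatten(current))); // ≈ 1 for identical poses
```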

This node is used when the user wishes to register a new pose; it checks whether the pose has already been registered. The sensitivity can be set through the node's properties.

To do this, the node must receive the array of already-saved poses as input.

If a similar pose exists, the node returns its name along with the similarity score. Check the 'status' field of the output data to see whether a similar pose was found.


This node receives a number of input key points (e.g., a sequence over continuous time), measures their similarity to the saved poses, and returns the most similar pose.

This node finds a saved pose similar to the pose model arriving as real-time input. The sensitivity threshold used to decide similarity can be set through the node's properties.

To do this, the node must receive the array of already-saved poses as input.

If a similar pose exists, the node returns its name and similarity score. Check the 'status' field of the output data to see whether a similar pose was found.
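The matching step described above can be sketched as follows. The toy similarity measure, the function names, and the output shape are all illustrative assumptions, not the module's actual implementation:

```javascript
// Toy similarity: 1 minus the mean absolute coordinate difference.
// The module's real similarity measure is documented in its wiki.
function similarity(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += Math.abs(a[i].x - b[i].x) + Math.abs(a[i].y - b[i].y);
  }
  return 1 - sum / (2 * a.length);
}

// Compare an incoming pose against every saved pose and report the
// best match, flagged "matched" only if it clears the sensitivity.
function findMostSimilar(input, savedPoses, sensitivity) {
  let best = { name: null, similarity: -Infinity };
  for (const saved of savedPoses) {
    const s = similarity(input, saved.keypoints);
    if (s > best.similarity) best = { name: saved.name, similarity: s };
  }
  return {
    status: best.similarity >= sensitivity ? "matched" : "no match",
    ...best
  };
}

const savedPoses = [
  { name: "arms-up", keypoints: [{ x: 0.5, y: 0.1 }] },
  { name: "t-pose",  keypoints: [{ x: 0.5, y: 0.5 }] }
];
const result = findMostSimilar([{ x: 0.5, y: 0.45 }], savedPoses, 0.9);
console.log(result.status, result.name); // matched t-pose
```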


2) Hand

A simple node that recognizes and visualizes hands using a webcam.

This node recognizes hand coordinates by tracking both hands in a browser environment.

By default, the node uses the webcam mounted on the computer. When a hand is recognized in the webcam feed, 20 landmark coordinates are predicted and visualized, as shown in the picture below.

[Image: webcam feed with hand landmarks visualized]

Specify the hand pose to save and press the 'capture' button to record the pose model at that moment. When the hand pose is captured, the captured images and coordinate information are displayed. Confirm or discard the registration with the 'register' and 'cancel' buttons.


A simple node that recognizes and visualizes hands using an external camera device that supports Samsung SmartThings.

This node provides exactly the same function as the hand-detection-webcam node, except that it uses an external camera device. We recommend Samsung's 'SmartThings' IoT camera.


This node receives a number of input key points (e.g., a sequence over continuous time), measures their similarity to the saved hand poses, and determines whether registration is possible.

This node is used when the user wishes to register a new hand pose.

To do this, the node must receive the array of already-saved poses as input.

If a similar pose exists, the node returns its name along with the similarity score. Check the 'status' field of the output data to see whether a similar pose was found.


This node receives a number of input key points (e.g., a sequence over continuous time), measures their similarity to the saved hand poses, and returns the most similar pose.

This node finds a saved pose similar to the hand pose model entered in real time. If the user holds the pose for a certain period of time, the node determines that a specific pose has been taken.

To do this, the node must receive the array of already-saved poses and the array of poses held over the specified time as input.

If a similar pose exists, the node returns its name along with the similarity score. Check the 'status' field of the output data to see whether a similar pose was found.
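The "held for a certain period" idea above can be sketched like this: a hand pose counts as taken only if every frame in a time window matches the same saved pose. The function name, window shape, and threshold are illustrative assumptions, not the module's actual code:

```javascript
// frameMatches: per-frame matched pose names (or null) over a window.
// Returns the pose name if the same pose was held for at least
// minFrames consecutive frames, otherwise null.
function heldPose(frameMatches, minFrames) {
  if (frameMatches.length < minFrames) return null;
  const first = frameMatches[0];
  return first !== null && frameMatches.every((m) => m === first)
    ? first
    : null;
}

console.log(heldPose(["fist", "fist", "fist"], 3)); // "fist"
console.log(heldPose(["fist", "open", "fist"], 3)); // null
```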


3) Monitor

A node for monitoring the pose/hand recognition screen in an external browser.

Through this node, you can view the screen being detected by a pose/hand detection node from an external environment.

Set the 'Server Url' and 'Monitor Port' of the detection node in this node's properties.
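For illustration, the two properties might be filled in like this; the values below are placeholders for your own deployment, not defaults of the module:

```javascript
// Illustrative values for the monitor node's properties.
const monitorConfig = {
  serverUrl: "http://192.168.0.10:1880", // Server Url of the detection node
  monitorPort: 8080                      // Monitor Port of the detection node
};

console.log(monitorConfig.serverUrl, monitorConfig.monitorPort);
```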