This repository has been archived by the owner on Dec 8, 2024. It is now read-only.

Sphinx improvements #21

Merged (5 commits) on Aug 18, 2024
20 changes: 20 additions & 0 deletions Makefile
@@ -0,0 +1,20 @@
+POETRY = poetry run
+SOURCEDIR = docs/source
+BUILDDIR = docs/build
+PACKAGEDIR = client
+
+.PHONY: docs docs-clean docs-live
+
+# Build documentation
+docs:
+	$(POETRY) sphinx-build -a "$(SOURCEDIR)" "$(BUILDDIR)/html"
+
+# Remove documentation outputs
+# This includes the api dir as it's autogenerated with sphinx-build
+docs-clean:
+	rm -r "$(BUILDDIR)" "$(SOURCEDIR)/api"
+
+# Spin up a local server to serve documentation pages
+# Auto-reloads when code changes in PACKAGEDIR are made
+docs-live:
+	$(POETRY) sphinx-autobuild --open-browser --watch "$(PACKAGEDIR)" -a "$(SOURCEDIR)" "$(BUILDDIR)/html"
10 changes: 4 additions & 6 deletions README.md
@@ -86,15 +86,13 @@ poetry run black client/models/pose_detection/classification.py

## Documentation

-To build the docs locally, run from the root of the repo:
-
+We use [Sphinx](https://www.sphinx-doc.org/) for documentation. To view the documentation locally, run the following command:
```bash
-poetry run sphinx-apidoc -f -o docs/source/generated client &&
-cd docs &&
-poetry run make html
+make docs-live
```
+This spins up a local server which serves the documentation pages, and also hot-reloads and auto-rebuilds whenever code changes are made.

-The documentation is generated automatically from the source code, so it isn't stored in the repo (hence why we need to run `sphinx-apidoc`).
+You can build the documentation (without spinning up a server) with `make docs`, and clean the documentation output with `make docs-clean`.

## Downloading ML Models
From top-level directory.
10 changes: 8 additions & 2 deletions client/models/pose_detection/camera.py
@@ -10,8 +10,14 @@


def is_camera_aligned(pose_landmark_result: PoseLandmarkerResult) -> np.bool_:
-    """
-    Returns whether the camera is aligned to capture the person's side view.
+    """Checks whether the camera is aligned to capture the person's side view.
+
+    Args:
+        pose_landmark_result: Landmarker result as returned by a
+            mediapipe.tasks.vision.PoseLandmarker
+
+    Returns:
+        True if the camera is aligned, False otherwise
    """
    landmarks: list[list[Landmark]] = pose_landmark_result.pose_world_landmarks
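For review context (not part of this diff): a minimal sketch of how `is_camera_aligned` could be exercised in IMAGE mode. The import style follows MediaPipe's published tasks examples, and the model and image paths are placeholders, not files from this repo.

```python
import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

from client.models.pose_detection.camera import is_camera_aligned

# Placeholder model bundle; any MediaPipe pose landmarker .task file works here.
options = vision.PoseLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="pose_landmarker_lite.task"),
    running_mode=vision.RunningMode.IMAGE,
)

with vision.PoseLandmarker.create_from_options(options) as landmarker:
    result = landmarker.detect(mp.Image.create_from_file("side_view.jpg"))  # placeholder image
    print(is_camera_aligned(result))  # truthy if the person's side view is captured
```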
35 changes: 24 additions & 11 deletions client/models/pose_detection/classification.py
@@ -13,18 +13,25 @@


def posture_angle(p1: Landmark, p2: Landmark) -> np.float64:
-    """
-    Returns the angle (in degrees) between P2 and P3, where P3 is a point on the
-    vertical axis of P1 (i.e. its x coordinate is the same as P1's), and is the "ideal"
-    location of the P2 landmark for good posture.
+    """Calculates the neck or torso posture angle (in degrees).
+
+    In particular, this calculates the angle (in degrees) between p2 and p3, where p3
+    is a point on the vertical axis of p1 (i.e. same x coordinate as p1), and
+    represents the "ideal" location of the p2 landmark for good posture.
+
+    The y coordinate of p3 is irrelevant but for simplicity we set it to zero.
+
+    For neck posture, take p1 to be the shoulder, p2 to be the ear. For torso posture,
+    take p1 to be the hip, p2 to be the shoulder.

-    The y coordinate of P3 is irrelevant but for simplicity we set it to zero.
+    REF: https://learnopencv.com/wp-content/uploads/2022/03/MediaPipe-pose-neckline-inclination.jpg

-    For a neck inclination calculation, take P1 to be the shoulder location and pivot
-    point, and P2 to be the ear location.
+    Parameters:
+        p1: Landmark for P1 as described above
+        p2: Landmark for P2 as described above

-    For a torso inclination calculation, take P1 to be the hip location and pivot
-    point, and P2 to be the hip location.
+    Returns:
+        Neck or torso posture angle (in degrees)
    """
    x1, y1 = p1.x, p1.y
    x2, y2 = p2.x, p2.y
@@ -33,13 +40,19 @@ def posture_angle(p1: Landmark, p2: Landmark) -> np.float64:


def posture_classify(pose_landmark_result: PoseLandmarkerResult) -> np.bool_:
-    """
-    Returns whether the pose in the image has good (True) or bad (False) posture.
+    """Classifies the pose in the image as either good or bad posture.

    Note: The camera should be aligned to capture the person's side view; the output
    may not be accurate otherwise. See `is_camera_aligned()`.

    REF: https://learnopencv.com/building-a-body-posture-analysis-system-using-mediapipe

+    Parameters:
+        pose_landmark_result: Landmarker result as returned by a
+            mediapipe.tasks.vision.PoseLandmarker
+
+    Returns:
+        True if the pose has good posture, False otherwise
    """
    landmarks: list[list[Landmark]] = pose_landmark_result.pose_world_landmarks
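The body of `posture_angle` is collapsed in this diff, so for review context here is a sketch of the geometry the new docstring describes, taking p3 = (x1, 0) on p1's vertical axis. The helper name and exact formula are illustrative, not the code under review.

```python
import numpy as np

def posture_angle_sketch(x1: float, y1: float, x2: float, y2: float) -> np.float64:
    """Angle at p1 between the segment p1->p2 and the vertical segment p1->p3."""
    to_p2 = np.array([x2 - x1, y2 - y1])  # p1 -> p2
    to_p3 = np.array([0.0, -y1])  # p1 -> p3, where p3 = (x1, 0)
    cos_theta = (to_p2 @ to_p3) / (np.linalg.norm(to_p2) * np.linalg.norm(to_p3))
    return np.degrees(np.arccos(cos_theta))
```

In image coordinates (y grows downward), a p2 directly above p1 gives 0 degrees, and the angle grows as p2 leans away from the vertical, which is what the good/bad posture thresholding relies on.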
26 changes: 14 additions & 12 deletions client/models/pose_detection/landmarking.py
@@ -14,8 +14,10 @@


@dataclass
class AnnotatedImage:
-    """Represents mutable annoted image through data attribute. Can be used to set annotated image
-    within a callback asynchronously without raising an error.
+    """Represents mutable annotated image through data attribute.
+
+    Can be used to set annotated image within a callback asynchronously without raising
+    an error.
    """

    data: Optional[np.ndarray] = None
@@ -26,11 +28,11 @@ def draw_landmarks_on_image(
) -> np.ndarray:
    """
    Args:
-        bgr_image (np.ndarray): CxWxH image array where channels are sorted in BGR.
-        detection_result (PoseLandmarkerResult): Landmarker result as returned by a
-            mediapipe.tasks.vision.PoseLandmarker
+        bgr_image: CxWxH image array where channels are sorted in BGR.
+        detection_result: Landmarker result as returned by a
+            mediapipe.tasks.vision.PoseLandmarker.
    Returns:
-        (np.ndarray): Image with landmarks annotated.
+        Image with landmarks annotated.
    """
    pose_landmarks_list = detection_result.pose_landmarks
    annotated_image = np.copy(bgr_image)
@@ -58,17 +60,17 @@ def display_landmarking(
    result: PoseLandmarkerResult,
    output_image: mp.Image,
    timestamp: int,
-    annotated_image=AnnotatedImage,
+    annotated_image: AnnotatedImage,
) -> None:
    """Mutates annotated image to contain visualization of detected landmarks.

    Also prints debugging info to the standard output.

    Args:
-        result (PoseLandmarkerResult): Landmarker result as returned by a
-            mediapipe.tasks.vision.PoseLandmarker
-        output_image (mp.Image): Raw image used for landmarking.
-        timestamp (int): Video timestamp in milliseconds.
-        annotated_image (AnnotatedImage): Image to mutate.
+        result: Landmarker result as returned by a mediapipe.tasks.vision.PoseLandmarker
+        output_image: Raw image used for landmarking.
+        timestamp: Video timestamp in milliseconds.
+        annotated_image: Image to mutate.
    """
    annotated_image.data = draw_landmarks_on_image(output_image.numpy_view(), result)
    print(
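Why `AnnotatedImage` exists: MediaPipe's live-stream callback returns nothing and only receives `(result, output_image, timestamp_ms)`, so results have to escape the callback by mutation. One plausible wiring, assumed rather than shown in this diff, binds the holder into the callback with `functools.partial`; the option names mirror MediaPipe's tasks API and the model path is a placeholder.

```python
import functools

from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

from client.models.pose_detection.landmarking import AnnotatedImage, display_landmarking

annotated_image = AnnotatedImage()
options = vision.PoseLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path="pose_landmarker_lite.task"),
    running_mode=vision.RunningMode.LIVE_STREAM,
    # annotated_image is pre-bound; the callback mutates annotated_image.data.
    result_callback=functools.partial(display_landmarking, annotated_image=annotated_image),
)
```

The frame loop can then read `annotated_image.data` for the latest annotated frame without the callback ever returning a value.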
7 changes: 4 additions & 3 deletions client/models/pose_detection/routines.py
@@ -38,8 +38,8 @@ def __init__(
        self.video_capture = cv2.VideoCapture(0)

    def track_posture(self) -> None:
-        """Get frame from video capture device and process with pose model, then posture algorithm.
-        Print debugging info and display landmark annotated frame.
+        """Get frame from video capture device and process with pose model, then posture
+        algorithm. Print debugging info and display landmark annotated frame.
        """
        success, frame = self.video_capture.read()
        if not success:
@@ -61,8 +61,9 @@ def __exit__(self, unused_exc_type, unused_exc_value, unused_traceback) -> None:


def create_debug_posture_tracker() -> DebugPostureTracker:
    """Handles config of livestreamed input and model loading.
+
    Returns:
-        (DebugPostureTracker): Tracker object which acts as context manager.
+        Tracker object which acts as context manager.
    """
    annotated_image = AnnotatedImage()
    options = PoseLandmarkerOptions(
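Usage note (a sketch, not from this PR): because the tracker is a context manager, `__exit__` releases the capture device even if the loop raises. The quit-key loop below is an assumption for illustration.

```python
import cv2

from client.models.pose_detection.routines import create_debug_posture_tracker

with create_debug_posture_tracker() as tracker:
    # Each iteration grabs one frame, runs the pose model, and displays the
    # annotated result; exit when "q" is pressed (assumed quit key).
    while cv2.waitKey(1) & 0xFF != ord("q"):
        tracker.track_posture()
```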
16 changes: 13 additions & 3 deletions docs/source/conf.py
@@ -3,6 +3,9 @@
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

+import os
+import sys
+
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

@@ -15,20 +18,27 @@
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

-
+sys.path.insert(0, os.path.abspath("../.."))
+
extensions = [
    "sphinx.ext.duration",
    "sphinx.ext.autodoc",
    "sphinx.ext.autosummary",
+    "sphinx.ext.napoleon",
+    "sphinxcontrib.apidoc",
]

templates_path = ["_templates"]
exclude_patterns = []

+apidoc_module_dir = "../../client"
+apidoc_output_dir = "api"
+
+napoleon_google_docstring = True
+napoleon_numpy_docstring = False
+
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

-
-html_theme = "alabaster"
+html_theme = "piccolo_theme"
html_static_path = ["_static"]
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -17,4 +17,4 @@ Contents

.. toctree::

-   generated/modules
+   api/modules