diff --git a/Writerside/topics/load.dataset.md b/Writerside/topics/load.dataset.md
index b319848c..4462b3ca 100644
--- a/Writerside/topics/load.dataset.md
+++ b/Writerside/topics/load.dataset.md
@@ -93,12 +93,15 @@
 This doesn't immediately download the dataset, but only when you call the get_* functions.
The site, date, version must match the dataset path on GCS. For example if the dataset is at -gs://frdc-scan/my-site/date/90deg/map, +gs://frdc-scan/my-site/20201218/90deg/map, -
  • site: 'my-site'
  • -
  • date: '20201218'
  • -
  • version: '90deg/map'
  • +
  • site='my-site'
  • +
  • date='20201218'
  • +
  • version='90deg/map'
  • +If the dataset doesn't have a "version", for example: +gs://frdc-scan/my-site/20201218, +then you can pass in version=None.
    If you don't want to search up GCS, you can use FRDCDownloader to list all datasets, and their versions with

diff --git a/Writerside/topics/preprocessing.extract_segments.md b/Writerside/topics/preprocessing.extract_segments.md
index fee4d656..ed257e8d 100644
--- a/Writerside/topics/preprocessing.extract_segments.md
+++ b/Writerside/topics/preprocessing.extract_segments.md
@@ -121,14 +121,14 @@
 For example, the label 1 and 2 extracted images will be

-## Usage
-
 - If **cropped is False**, the segments are padded with 0s to the original image size. While this can ensure shape consistency, it can consume more memory for large images.
 - If **cropped is True**, the segments are cropped to the minimum bounding box. This can save memory, but the shape of the segments will be inconsistent.
+## Usage
+
 ### Extract from Bounds and Labels

 Extract segments from bounds and labels.

diff --git a/Writerside/topics/preprocessing.scale.md b/Writerside/topics/preprocessing.scale.md
index 1b543942..17bf8e02 100644
--- a/Writerside/topics/preprocessing.scale.md
+++ b/Writerside/topics/preprocessing.scale.md
@@ -40,8 +40,8 @@
 ar_norm = scale_normal_per_band(ar)
 ar_static = scale_static_per_band(ar, order, BAND_MAX_CONFIG)
 ```
-> The static scaling has a default config infers the max bit depth from the
-> capturing device.
+> The static scaling has a default config, which was inferred from our capturing
+> device.

 ## API

diff --git a/docs/getting-started.html b/docs/getting-started.html
index f1aaf4b0..d305d68c 100644
--- a/docs/getting-started.html
+++ b/docs/getting-started.html
@@ -1,20 +1,20 @@
- Getting Started | Documentation

      + Getting Started | Documentation

      Documentation 0.0.4 Help

      Getting Started

      Installing the Dev. Environment

      1. Ensure that you have the right version of Python. The required Python version can be seen in pyproject.toml

        [tool.poetry.dependencies]
        python = "..."
      2. Start by cloning our repository.

        git clone https://github.com/Forest-Recovery-Digital-Companion/FRDC-ML.git
      3. Then, create a Python Virtual Env pyvenv

        python -m venv venv/
        python3 -m venv venv/
      4. Install Poetry. Then check if it's installed with

        poetry --version
      5. Activate the virtual environment

        # Windows
        cd venv/Scripts
        activate
        cd ../..

        # Linux / macOS
        source venv/bin/activate
      6. Install the dependencies. You should be in the same directory as pyproject.toml

        poetry install --with dev
      7. Install Pre-Commit Hooks

        pre-commit install


    Setting Up Google Cloud

    1. We use Google Cloud to store our datasets. To set up Google Cloud, install the Google Cloud CLI

    2. Then, authenticate your account.

      gcloud auth login
    3. Finally, set up Application Default Credentials (ADC).

      gcloud auth application-default login
    4. To make sure everything is working, run the tests.

    Pre-commit Hooks

    • pre-commit install -


    Running the Tests

    1. Run the tests to make sure everything is working

      pytest -
  • In case of errors:

    google.auth.exceptions.DefaultCredentialsError

    If you get this error, it means that you haven't authenticated your Google Cloud account. See Setting Up Google Cloud

    ModuleNotFoundError

    If you get this error, it means that you haven't installed the dependencies. See Installing the Dev. Environment

  • Our Repository Structure

    Before starting development, take a look at our repository structure. This will help you understand where to put your code.

    FRDC
    ├─ src/frdc/                    Core Dependencies
    │   ├─ ./load/                  Dataset Loaders
    │   ├─ ./preprocess/            Preprocessing Fn.
    │   ├─ ./train/                 Train Deps
    │   └─ ./models/                Model Architectures
    ├─ rsc/                         Resources
    │   └─ ./dataset_name/          Datasets ...
    ├─ pipeline/                    Pipeline
    │   └─ ./model_tests/           Model Training Pipeline
    ├─ tests/                       Tests
    └─ pyproject.toml, poetry.lock  Repo Dependencies
    src/frdc/

    Source Code for our package. These are the unit components of our pipeline.

    rsc/

    Resources. These are usually cached datasets.

    pipeline/

    Pipeline code. These are the full ML tests of our pipeline.

    tests/

    PyTest tests. These are unit tests & integration tests.

    Unit, Integration, and Pipeline Tests

    We have 3 types of tests:

    • Unit Tests are usually small, single function tests.

    • Integration Tests are larger tests that test a mock pipeline.

    • Pipeline Tests are the true production pipeline tests that will generate a model.

    Where Should I contribute?

    Changing a small component

    If you're changing a small component, such as an argument for preprocessing, a new model architecture, or a new configuration for a dataset, take a look at the src/frdc/ directory.

    Adding a test

    When adding a new component, you'll need to add a new test. Take a look at the tests/ directory.

    Changing the pipeline

    If you're an ML Researcher, you'll probably be changing the pipeline. Take a look at the pipeline/ directory.

    Adding a dependency

    If you're adding a new dependency, use poetry add PACKAGE and commit the changes to pyproject.toml and poetry.lock.

    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/docs/load-dataset.html b/docs/load-dataset.html
index bf3dc731..80e3d822 100644
--- a/docs/load-dataset.html
+++ b/docs/load-dataset.html
@@ -1,4 +1,4 @@
- load.dataset | Documentation

    + load.dataset | Documentation

    Documentation 0.0.4 Help

    load.dataset

    Classes

    FRDCDownloader

    This facilitates authentication and downloading from GCS.

    FRDCDataset

    This uses the Downloader to download and load the dataset. It also implements useful helper functions to load FRDC-specific datasets, such as loading our images and labels.

    Usage

    An example loading our Chestnut Nature Park dataset. We retrieve the

    • hyperspectral bands

    • order of the bands

    • bounding boxes

    • labels

    from frdc.load import FRDCDataset

    ds = FRDCDataset(site='chestnut_nature_park',
@@ -6,7 +6,7 @@
                     version=None,
                     )
    ar, order = ds.get_ar_bands()
    bounds, labels = ds.get_bounds_and_labels()


    Custom Authentication & Downloads

    If you need granular control over

    • where the files are downloaded

    • the credentials used

    • the project used

    • the bucket used

    Then pass in a FRDCDownloader object to FRDCDataset.

    from frdc.load import FRDCDownloader, FRDCDataset

    dl = FRDCDownloader(credentials=...,
@@ -19,7 +19,7 @@
                     dl=dl)
    ar, order = ds.get_ar_bands()
    bounds, labels = ds.get_bounds_and_labels()


    If you have a file not easily downloadable by FRDCDataset, you can use FRDCDownloader to download it.

    from frdc.load import FRDCDownloader

    dl = FRDCDownloader(credentials=...,
@@ -28,4 +28,4 @@
                        bucket_name=...)
    dl.download_file(path_glob="path/to/gcs/file")


    API

    FRDCDataset

    FRDCDataset(site, date, version, dl)

    Initializes the dataset downloader.


    This doesn't immediately download the dataset, but only when you call the get_* functions.


    The site, date, version must match the dataset path on GCS. For example, if the dataset is at gs://frdc-scan/my-site/20201218/90deg/map,

    • site='my-site'

    • date='20201218'

    • version='90deg/map'

    If the dataset doesn't have a "version", for example: gs://frdc-scan/my-site/20201218, then you can pass in version=None.


    get_ar_bands()

    Gets the NDArray bands (H x W x C) and channel order as tuple[np.ndarray, list[str]].


    This downloads (if missing) and retrieves the stacked NDArray bands. This wraps around get_ar_bands_as_dict(), thus if you want more control over how the bands are loaded, use that instead.

    get_ar_bands_as_dict()

    Gets the NDArray bands (H x W) as a dict[str, np.ndarray].


    This downloads (if missing) and retrieves the individual NDArray bands as a dictionary. The keys are the band names, and the values are the NDArray bands.
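    A short sketch of the two access patterns, assuming ds is the FRDCDataset constructed in the usage example above ('NIR' is one of the band names, as used in the morphology docs):

    ar, order = ds.get_ar_bands()        # (H, W, C) stack plus the band order
    bands = ds.get_ar_bands_as_dict()    # {'NIR': (H, W) array, ...}
    nir = bands['NIR']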

    get_bounds_and_labels()

    Gets the bounding boxes and labels as tuple[list[Rect], list[str]].


    This downloads (if missing) and retrieves the bounding boxes and labels as a tuple. The first element is a list of bounding boxes, and the second element is a list of labels.


    FRDCDownloader

    list_gcs_datasets(anchor)

    Lists all GCS datasets in the bucket as a DataFrame


    This works by checking which folders have a specific file, which we call the anchor.

    download_file(path_glob, local_exists_ok)

    Downloads a file from GCS.


    This takes in a path glob, a string containing wildcards, and downloads exactly 1 file. If it matches 0 or more than 1 file, it will raise an error.


    If local_exists_ok is True, it will not download the file if it already exists locally. However, if it's False, it will download the file only if the hashes don't match.
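    A sketch of both calls, assuming Application Default Credentials are already set up (see Getting Started) so that the FRDCDownloader constructor and the anchor argument can fall back to their defaults:

    from frdc.load import FRDCDownloader

    dl = FRDCDownloader()
    df = dl.list_gcs_datasets()                     # DataFrame of datasets found via the anchor file
    dl.download_file(path_glob="path/to/gcs/file",  # the glob must match exactly one file
                     local_exists_ok=True)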

    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/docs/overview.html b/docs/overview.html
index e5c0bf72..9238bfd8 100644
--- a/docs/overview.html
+++ b/docs/overview.html
@@ -1 +1 @@
- Overview | Documentation

    + Overview | Documentation

    Documentation 0.0.4 Help

    Overview

    Forest Recovery Digital Companion (FRDC) is an ML-assisted companion for ecologists to automatically classify surveyed trees via an Unmanned Aerial Vehicle (UAV).

    This package, FRDC-ML, is the Machine Learning backbone of this project: a centralized repository of tools and model architectures to be used in the FRDC pipeline.

    Get started here

    Other Projects

    FRDC-UI

    The User Interface Repository for FRDC, a WebApp GUI for ecologists to adjust annotations.

    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/docs/preprocessing-extract-segments.html b/docs/preprocessing-extract-segments.html
index 24a7d73a..a2465dfc 100644
--- a/docs/preprocessing-extract-segments.html
+++ b/docs/preprocessing-extract-segments.html
@@ -1,4 +1,4 @@
- preprocessing.extract_segments | Documentation

    + preprocessing.extract_segments | Documentation

    Documentation 0.0.4 Help

    preprocessing.extract_segments

    Functions

    extract_segments_from_labels

    Extracts segments from a label classification.

    extract_segments_from_bounds

    Extracts segments from Rect bounds.

    remove_small_segments_from_labels

    Removes small segments from a label classification.

    Extract with Boundaries

    A boundary is a Rect object that represents the minimum bounding box of a segment, with x0, y0, x1, y1 coordinates.

    It simply slices the original image to the bounding box. The origin is the top left corner of the image.

    [ASCII diagrams: a 3 x 3 Original Image shown next to the Segmented Image
     produced by a Rect bound (x0, y0, x1, y1); the updated figure shows the
     segment padded with 0s to the original image size rather than cropped.]
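    In NumPy terms this is plain array slicing; a sketch of what a single bound extracts, assuming x indexes columns and y indexes rows with the origin at the top left (the values are illustrative):

    import numpy as np

    ar = np.arange(1, 10).reshape(3, 3)   # stand-in for the original image
    x0, y0, x1, y1 = 1, 0, 3, 2           # one Rect's coordinates
    segment = ar[y0:y1, x0:x1]            # rows y0:y1, columns x0:x1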


    Extract with Labels

    A label classification is a np.ndarray where each pixel is mapped to a segment. The segments are mapped to a unique integer. In our project, the 0th label is the background.

    For example, a label classification of 3 segments will look like this:

    [ASCII diagram: a 3 x 3 Label Classification (e.g. a row of 1, 1, 0) shown
     next to the corresponding Original Image (row 7, 8, 9).]


    The extraction will take the minimum bounding box of each segment and return a list of segments.

    For example, the label 1 and 2 extracted images will be

    [ASCII diagrams: Extracted Segment 1 and Extracted Segment 2, shown cropped to
     their minimum bounding boxes in the old figure and padded with 0s to the
     original image size in the updated figure.]

    • If cropped is False, the segments are padded with 0s to the original image size. While this can ensure shape consistency, it can consume more memory for large images.

    • If cropped is True, the segments are cropped to the minimum bounding box. This can save memory, but the shape of the segments will be inconsistent.

    Usage

    Extract from Bounds and Labels

    Extract segments from bounds and labels.

    import numpy as np

    from frdc.load import FRDCDataset
    from frdc.preprocess.extract_segments import extract_segments_from_bounds
@@ -65,7 +65,7 @@
    bounds, labels = ds.get_bounds_and_labels()

    segments: list[np.ndarray] = extract_segments_from_bounds(ar, bounds)
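    The cropped flag described in the bullets above only changes the shape of each returned segment; a sketch, reusing ar and bounds from the snippet above:

    # Padded with 0s to the original (H, W) size: consistent shapes, more memory.
    segments_padded = extract_segments_from_bounds(ar, bounds, cropped=False)

    # Cropped to each segment's minimum bounding box: less memory, varying shapes.
    segments_cropped = extract_segments_from_bounds(ar, bounds, cropped=True)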


    Extract from Auto-Segmentation

    Extract segments from a label classification.

    from skimage.morphology import remove_small_objects, remove_small_holes

    import numpy as np
@@ -91,4 +91,4 @@
                                             min_height=10,
                                             min_width=10)
    segments: list[np.ndarray] = extract_segments_from_labels(ar, ar_labels)
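    Most of that example is elided by the hunk markers; below is a sketch of the same flow assembled only from calls documented on this page and the morphology page (it skips the skimage clean-up steps and is not a restoration of the hidden lines):

    import numpy as np

    from frdc.load import FRDCDataset
    from frdc.preprocess.morphology import threshold_binary_mask, binary_watershed
    from frdc.preprocess.extract_segments import (
        extract_segments_from_labels,
        remove_small_segments_from_labels,
    )

    ds = FRDCDataset(site='chestnut_nature_park', date=..., version=None)  # fill in the capture date
    ar, order = ds.get_ar_bands()

    # Threshold the NIR band, then watershed it into a label classification.
    mask = threshold_binary_mask(ar, order.index('NIR'), 90 / 256)
    ar_labels = binary_watershed(mask)

    # Drop segments smaller than 10 x 10 px, then extract one array per segment.
    ar_labels = remove_small_segments_from_labels(ar_labels,
                                                  min_height=10,
                                                  min_width=10)
    segments: list[np.ndarray] = extract_segments_from_labels(ar, ar_labels)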


    API

    extract_segments_from_labels(ar, ar_labels, cropped)

    Extracts segments from a label classification.


    ar_labels is a label classification as a np.ndarray

    extract_segments_from_bounds(ar, bounds, cropped)

    Extracts segments from Rect bounds.


    bounds is a list of Rect bounds.

    remove_small_segments_from_labels(ar_labels, min_height, min_width)

    Removes small segments from a label classification.


    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/docs/preprocessing-morphology.html b/docs/preprocessing-morphology.html
index b3da6f18..d09ff367 100644
--- a/docs/preprocessing-morphology.html
+++ b/docs/preprocessing-morphology.html
@@ -1,4 +1,4 @@
- preprocessing.morphology | Documentation

    + preprocessing.morphology | Documentation

    Documentation 0.0.4 Help

    preprocessing.morphology

    Functions

    threshold_binary_mask

    Thresholds a selected NDArray band to yield a binary mask.

    binary_watershed

    Performs watershed on a binary mask to yield a mapped label classification

    Usage

    Perform auto-segmentation on a dataset to yield a label classification.

    from frdc.load import FRDCDataset
    from frdc.preprocess.morphology import (
        threshold_binary_mask, binary_watershed
@@ -10,6 +10,6 @@
    ar, order = ds.get_ar_bands()
    mask = threshold_binary_mask(ar, order.index('NIR'), 90 / 256)
    ar_label = binary_watershed(mask)


    API

    threshold_binary_mask(ar, band_idx, threshold_value)

    Thresholds a selected NDArray band to yield a binary mask as np.ndarray


    This is equivalent to

    ar[..., band_idx] > threshold_value -
    binary_watershed(ar_mask, peaks_footprint, watershed_compactness)

    Performs watershed on a binary mask to yield a mapped label classification as a np.ndarray


    • peaks_footprint is the footprint of skimage.feature.peak_local_max

    • watershed_compactness is the compactness of skimage.morphology.watershed

    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/docs/preprocessing-scale.html b/docs/preprocessing-scale.html
index 2b23cf38..4fcaa6c5 100644
--- a/docs/preprocessing-scale.html
+++ b/docs/preprocessing-scale.html
@@ -1,4 +1,4 @@
- preprocessing.scale | Documentation

    + preprocessing.scale | Documentation

    Documentation 0.0.4 Help

    preprocessing.scale

    Functions

    scale_0_1_per_band

    Scales the NDArray bands to [0, 1] per band.

    scale_normal_per_band

    Scales the NDArray bands to zero mean unit variance per band.

    scale_static_per_band

    Scales the NDArray bands by a predefined configuration.

    Usage

    from frdc.load import FRDCDataset
    from frdc.preprocess.scale import (
        scale_0_1_per_band, scale_normal_per_band, scale_static_per_band
@@ -12,4 +12,4 @@
    ar_01 = scale_0_1_per_band(ar)
    ar_norm = scale_normal_per_band(ar)
    ar_static = scale_static_per_band(ar, order, BAND_MAX_CONFIG)


    API

    scale_0_1_per_band(ar)

    Scales the NDArray bands to [0, 1] per band.


    scale_normal_per_band(ar)

    Scales the NDArray bands to zero mean unit variance per band.
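    Per band these are the usual min-max and standardisation transforms; a NumPy sketch of what the two descriptions above mean for an (H, W, C) array (not the library's actual implementation):

    import numpy as np

    def scale_0_1_sketch(ar: np.ndarray) -> np.ndarray:
        lo = ar.min(axis=(0, 1), keepdims=True)
        hi = ar.max(axis=(0, 1), keepdims=True)
        return (ar - lo) / (hi - lo)

    def scale_normal_sketch(ar: np.ndarray) -> np.ndarray:
        mean = ar.mean(axis=(0, 1), keepdims=True)
        std = ar.std(axis=(0, 1), keepdims=True)
        return (ar - mean) / std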


    scale_static_per_band(ar, order, config)

    Scales the NDArray bands by a predefined configuration.


    The config is a dict[str, tuple[int, int]] where the key is the band name, and the value is a tuple of (min, max). Take a look at frdc.conf.BAND_MAX_CONFIG for an example.
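    A sketch of what such a config looks like; the entry below is illustrative (only 'NIR' is named elsewhere in these docs), and a real config should cover every band in order, as frdc.conf.BAND_MAX_CONFIG does:

    my_band_config: dict[str, tuple[int, int]] = {
        'NIR': (0, 2 ** 14),   # illustrative (min, max) for the NIR band
        # ... one entry per band name in `order`
    }
    ar_static = scale_static_per_band(ar, order, my_band_config)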

    Last modified: 25 October 2023
    \ No newline at end of file
diff --git a/webHelpD2-all.zip b/webHelpD2-all.zip
index 0b62c6ae..a068cf99 100644
Binary files a/webHelpD2-all.zip and b/webHelpD2-all.zip differ