Add documentation for Instruction-Data-Guard classifier (#398)
* add docs
* add python scripts
* update name to instruction-data-guard
* run black
* update readmes
* edit readmes
* update to output_path
* Update docs/user-guide/distributeddataclassification.rst
* remove trailing whitespace

---------

Signed-off-by: Sarah Yurick <[email protected]>
Co-authored-by: Vibhu Jawa <[email protected]>
Co-authored-by: Vibhu Jawa <[email protected]>
sarahyurick and VibhuJawa authored Dec 16, 2024
1 parent 3c3cc98 commit 86830ab
Showing 12 changed files with 265 additions and 10 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -28,7 +28,7 @@ All of our text pipelines have great multilingual support.
- [Heuristic Filtering](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/qualityfiltering.html)
- Classifier Filtering
- [fastText](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/qualityfiltering.html)
-- GPU-Accelerated models: [Domain (English and multilingual), Quality, and Safety Classification](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html)
+- GPU-Accelerated models: [Domain (English and multilingual), Quality, Safety, and Educational Content Classification](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html)
- **GPU-Accelerated Deduplication**
- [Exact Deduplication](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/gpudeduplication.html)
- [Fuzzy Deduplication](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/gpudeduplication.html) via MinHash Locality Sensitive Hashing
@@ -158,7 +158,7 @@ To get started with NeMo Curator, you can follow the tutorials [available here](

- [`tinystories`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/tinystories) which focuses on data curation for training LLMs from scratch.
- [`peft-curation`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/peft-curation) which focuses on data curation for LLM parameter-efficient fine-tuning (PEFT) use-cases.
-- [`distributed_data_classification`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/distributed_data_classification) which focuses on using the quality and domain classifiers to help with data annotation.
+- [`distributed_data_classification`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/distributed_data_classification) which focuses on using the domain and quality classifiers to help with data annotation.
- [`single_node_tutorial`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/single_node_tutorial) which demonstrates an end-to-end data curation pipeline for curating Wikipedia data in Thai.
- [`image-curation`](https://github.com/NVIDIA/NeMo-Curator/blob/main/tutorials/image-curation/image-curation.ipynb) which explores the scalable image curation modules.

3 changes: 3 additions & 0 deletions docs/user-guide/api/classifiers.rst
@@ -16,3 +16,6 @@ Classifiers

.. autoclass:: nemo_curator.classifiers.AegisClassifier
    :members:

.. autoclass:: nemo_curator.classifiers.InstructionDataGuardClassifier
    :members:
2 changes: 1 addition & 1 deletion docs/user-guide/bestpractices.rst
@@ -12,7 +12,7 @@ Here are the methods in increasing order of compute required for them.

#. `fastText <https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/qualityfiltering.html#classifier-filtering>`_ is an n-gram based bag-of-words classifier. It is typically trained on a high quality reference corpus and a low quality corpus (typically unfiltered Common Crawl dumps). While NeMo Curator does not provide pretrained versions of the classifier, training it is incredibly fast and easy. It only requires 100,000 - 1,000,000 text samples to train on, and can complete training in mere seconds. Its small size also allows it to train and run inference on the CPU. Due to these factors, we recommend using fastText classifiers on large scale pretraining datasets where you don't have the compute budget for more sophisticated methods.
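
   As a rough sketch of how lightweight this is (file name and labels hypothetical; fastText's standard supervised API):

   .. code-block:: python

       import fasttext

       # Training file: one sample per line, formatted "__label__<class> <text>",
       # e.g. "__label__hq <high-quality text>" and "__label__lq <low-quality text>".
       model = fasttext.train_supervised(input="quality_train.txt")

       # Predict the top label and its probability for a new document.
       labels, probs = model.predict("An example document to score.")
       print(labels[0], probs[0])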

-#. `BERT-style classifiers <https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html>`_ - NeMo Curator's distributed data classification modules work with many BERT-style classifiers for `domain classification <https://huggingface.co/nvidia/domain-classifier>`_, quality classification, and more. For this comparison, we'll focus on just the text quality classifier. NeMo Curator provides a pretrained version of the classifier on HuggingFace and NGC that can be immediately used. We recommend using these classifiers towards the end of your data filtering pipeline for pretraining.
+#. `BERT-style classifiers <https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html>`_ - NeMo Curator's distributed data classification modules work with many BERT-style classifiers for `domain classification <https://huggingface.co/nvidia/domain-classifier>`_, `quality classification <https://huggingface.co/nvidia/quality-classifier-deberta>`_, and more. For this comparison, we'll focus on just the text quality classifier. NeMo Curator provides a pretrained version of the classifier on HuggingFace and NGC that can be immediately used. We recommend using these classifiers towards the end of your data filtering pipeline for pretraining.

#. `Language model labelling <https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/syntheticdata.html>`_ - Language models can be used to label text as high quality or low quality. NeMo Curator allows you to connect to arbitrary LLM inference endpoints which you can use to label your data. One example of such an endpoint would be Nemotron-4 340B Instruct on `build.nvidia.com <https://build.nvidia.com/explore/discover#nemotron-4-340b-instruct>`_. Due to their size, these models can require a lot of compute and are usually infeasible to run across an entire pretraining dataset. We recommend using these large models on very small amounts of data. Fine-tuning datasets can make good use of them.

2 changes: 2 additions & 0 deletions docs/user-guide/cpuvsgpu.rst
@@ -69,6 +69,8 @@ The following NeMo Curator modules are GPU based.

* Domain Classification (English and multilingual)
* Quality Classification
* AEGIS and Instruction-Data-Guard Safety Models
* FineWeb Educational Content Classification

GPU modules store the ``DocumentDataset`` using a ``cudf`` backend instead of a ``pandas`` one.
To read a dataset into GPU memory, one could use the following function call.
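
The call itself is collapsed in this hunk; a minimal sketch of such a read, consistent with the classifier examples elsewhere in this commit (directory name hypothetical):

.. code-block:: python

    from nemo_curator.datasets import DocumentDataset
    from nemo_curator.utils.file_utils import get_all_files_paths_under

    files = get_all_files_paths_under("books_dataset/")
    # backend="cudf" loads the data as a cudf-backed (GPU) dataframe
    gpu_dataset = DocumentDataset.read_json(files, backend="cudf")
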
35 changes: 32 additions & 3 deletions docs/user-guide/distributeddataclassification.rst
@@ -27,6 +27,8 @@ Here, we summarize why each is useful for training an LLM:

- The **AEGIS Safety Models** are essential for filtering harmful or risky content, which is critical for training models that should avoid learning from unsafe data. By classifying content into 13 critical risk categories, AEGIS helps remove harmful or inappropriate data from the training sets, improving the overall ethical and safety standards of the LLM.

- The **Instruction-Data-Guard Model** is built on NVIDIA's AEGIS safety classifier and is designed to detect LLM poisoning trigger attacks in English instruction-response datasets.

- The **FineWeb Educational Content Classifier** focuses on identifying and prioritizing educational material within datasets. This classifier is especially useful for training LLMs on specialized educational content, which can improve their performance on knowledge-intensive tasks. Models trained on high-quality educational content demonstrate enhanced capabilities on academic benchmarks such as MMLU and ARC, showcasing the classifier's impact on improving the knowledge-intensive task performance of LLMs.

-----------------------------------------
@@ -45,7 +47,7 @@ It is easy to extend ``DistributedDataClassifier`` to your own model.
Check out ``nemo_curator.classifiers.base.py`` for reference.

Domain Classifier
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^

The Domain Classifier is used to categorize English text documents into specific domains or subject areas. This is particularly useful for organizing large datasets and tailoring the training data for domain-specific LLMs.
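
The full usage example is collapsed in this hunk; a minimal sketch of how the classifier is typically invoked (paths and ``filter_by`` labels hypothetical):

.. code-block:: python

    from nemo_curator.classifiers import DomainClassifier
    from nemo_curator.datasets import DocumentDataset

    input_dataset = DocumentDataset.read_json("books_dataset/", backend="cudf")

    # filter_by keeps only documents whose predicted domain is in the list
    domain_classifier = DomainClassifier(filter_by=["Games", "Sports"])
    result_dataset = domain_classifier(dataset=input_dataset)
    result_dataset.to_json("games_and_sports/")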

@@ -90,7 +92,7 @@ Using the ``MultilingualDomainClassifier`` is very similar to using the ``Domain
For more information about the multilingual domain classifier, including its supported languages, please see the `nvidia/multilingual-domain-classifier <https://huggingface.co/nvidia/multilingual-domain-classifier>`_ on Hugging Face.

Quality Classifier
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^

The Quality Classifier is designed to assess the quality of text documents, helping to filter out low-quality or noisy data from your dataset.

@@ -112,7 +114,7 @@ The quality classifier is obtained from `Hugging Face <https://huggingface.co/nv
In this example, it filters the input dataset to include only documents classified as "High" or "Medium" quality.
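
A minimal sketch of that elided usage (paths hypothetical; ``filter_by`` matching the sentence above):

.. code-block:: python

    from nemo_curator.classifiers import QualityClassifier
    from nemo_curator.datasets import DocumentDataset

    input_dataset = DocumentDataset.read_json("web_documents/", backend="cudf")

    # Keep only documents predicted to be High or Medium quality
    quality_classifier = QualityClassifier(filter_by=["High", "Medium"])
    result_dataset = quality_classifier(dataset=input_dataset)
    result_dataset.to_json("high_quality_documents/")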

AEGIS Safety Model
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^

Aegis is a family of content-safety LLMs used to detect whether a piece of text contains content belonging to any of 13 critical risk categories.
There are two variants, `defensive <https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0>`_ and `permissive <https://huggingface.co/nvidia/Aegis-AI-Content-Safety-LlamaGuard-Permissive-1.0>`_, that are useful for filtering harmful data out of your training set.
@@ -159,6 +161,33 @@ The possible labels are as follows: ``"safe", "O1", "O2", "O3", "O4", "O5", "O6"
This will create a column in the dataframe with the raw output of the LLM. You can choose to parse this response however you want.
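
A hedged sketch of such a call (parameter names as documented for the AEGIS classifier; paths and column name hypothetical):

.. code-block:: python

    from nemo_curator.classifiers import AegisClassifier
    from nemo_curator.datasets import DocumentDataset

    input_dataset = DocumentDataset.read_json("web_documents/", backend="cudf")

    token = "hf_1234"  # Replace with your user access token
    safety_classifier = AegisClassifier(
        aegis_variant="nvidia/Aegis-AI-Content-Safety-LlamaGuard-Defensive-1.0",
        token=token,
        # keep_raw_pred retains the LLM's unparsed response in raw_pred_column
        keep_raw_pred=True,
        raw_pred_column="raw_predictions",
    )
    result_dataset = safety_classifier(dataset=input_dataset)
    result_dataset.to_json("safety_labeled/")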

Instruction-Data-Guard Model
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Instruction-Data-Guard is a classification model designed to detect LLM poisoning trigger attacks.
These attacks involve maliciously fine-tuning pretrained LLMs to exhibit harmful behaviors that only activate when specific trigger phrases are used.
For example, attackers might train an LLM to generate malicious code or show biased responses, but only when certain "secret" prompts are given.

As with the ``AegisClassifier``, you must first request access to Llama Guard on Hugging Face here: https://huggingface.co/meta-llama/LlamaGuard-7b.
Afterwards, you should set up a `user access token <https://huggingface.co/docs/hub/en/security-tokens>`_ and pass that token into the constructor of this classifier.
Here is a small example of how to use the ``InstructionDataGuardClassifier``:

.. code-block:: python

    from nemo_curator.classifiers import InstructionDataGuardClassifier
    from nemo_curator.datasets import DocumentDataset
    from nemo_curator.utils.file_utils import get_all_files_paths_under

    # The model expects instruction-response style text data. For example:
    # "Instruction: {instruction}. Input: {input_}. Response: {response}."
    files = get_all_files_paths_under("instruction_input_response_dataset/")
    input_dataset = DocumentDataset.read_json(files, backend="cudf")

    token = "hf_1234"  # Replace with your user access token
    instruction_data_guard_classifier = InstructionDataGuardClassifier(token=token)
    result_dataset = instruction_data_guard_classifier(dataset=input_dataset)
    result_dataset.to_json("labeled_dataset/")

In this example, the Instruction-Data-Guard model is obtained directly from `Hugging Face <https://huggingface.co/nvidia/instruction-data-guard>`_.
The output dataset contains two new columns: (1) a float column called ``instruction_data_guard_poisoning_score``, which contains a probability between 0 and 1, where higher scores indicate a greater likelihood of poisoning, and (2) a boolean column called ``is_poisoned``, which is True when ``instruction_data_guard_poisoning_score`` is greater than 0.5 and False otherwise.
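
The boolean column makes downstream filtering straightforward; a small follow-on sketch (not part of the documented example above):

.. code-block:: python

    # Keep only documents that were not flagged as poisoned
    clean_df = result_dataset.df[~result_dataset.df["is_poisoned"]]
    DocumentDataset(clean_df).to_json("clean_dataset/")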

FineWeb Educational Content Classifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

3 changes: 2 additions & 1 deletion examples/classifiers/README.md
@@ -1,11 +1,12 @@
## Text Classification

-The Python scripts in this directory demonstrate how to run classification on your text data with each of these 5 classifiers:
+The Python scripts in this directory demonstrate how to run classification on your text data with each of these classifiers:

- Domain Classifier
- Multilingual Domain Classifier
- Quality Classifier
- AEGIS Safety Models
- Instruction-Data-Guard Model
- FineWeb Educational Content Classifier

For more information about these classifiers, please see NeMo Curator's [Distributed Data Classification documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html).
70 changes: 70 additions & 0 deletions examples/classifiers/instruction_data_guard_example.py
@@ -0,0 +1,70 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import time

from nemo_curator.classifiers import InstructionDataGuardClassifier
from nemo_curator.datasets import DocumentDataset
from nemo_curator.utils.distributed_utils import get_client
from nemo_curator.utils.script_utils import ArgumentHelper


def main(args):
    global_st = time.time()

    # Input can be a string or list
    input_file_path = "/path/to/data"
    output_file_path = "./"
    huggingface_token = "hf_1234"  # Replace with a HuggingFace user access token

    client_args = ArgumentHelper.parse_client_args(args)
    client_args["cluster_type"] = "gpu"
    client = get_client(**client_args)

    # The model expects instruction-response style text data. For example:
    # "Instruction: {instruction}. Input: {input_}. Response: {response}."
    input_dataset = DocumentDataset.read_json(
        input_file_path, backend="cudf", add_filename=True
    )

    instruction_data_guard_classifier = InstructionDataGuardClassifier(
        token=huggingface_token
    )
    result_dataset = instruction_data_guard_classifier(dataset=input_dataset)

    result_dataset.to_json(output_path=output_file_path, write_to_filename=True)

    global_et = time.time()
    print(
        f"Total time taken for Instruction-Data-Guard classifier inference: {global_et-global_st} s",
        flush=True,
    )

    client.close()


def attach_args(
    parser=argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    ),
):
    argumentHelper = ArgumentHelper(parser)
    argumentHelper.add_distributed_classifier_cluster_args()

    return argumentHelper.parser


if __name__ == "__main__":
    main(attach_args().parse_args())
2 changes: 1 addition & 1 deletion examples/classifiers/multilingual_domain_example.py
@@ -41,7 +41,7 @@ def main(args):
    )
    result_dataset = multilingual_domain_classifier(dataset=input_dataset)

-    result_dataset.to_json(output_file_dir=output_file_path, write_to_filename=True)
+    result_dataset.to_json(output_path=output_file_path, write_to_filename=True)

    global_et = time.time()
    print(
24 changes: 23 additions & 1 deletion nemo_curator/scripts/classifiers/README.md
@@ -1,11 +1,12 @@
## Text Classification

-The Python scripts in this directory demonstrate how to run classification on your text data with each of these 5 classifiers:
+The Python scripts in this directory demonstrate how to run classification on your text data with each of these classifiers:

- Domain Classifier
- Multilingual Domain Classifier
- Quality Classifier
- AEGIS Safety Models
- Instruction-Data-Guard Model
- FineWeb Educational Content Classifier

For more information about these classifiers, please see NeMo Curator's [Distributed Data Classification documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html).
@@ -96,6 +97,27 @@ aegis_classifier_inference \

Additional arguments may be added for customizing a Dask cluster and client. Run `aegis_classifier_inference --help` for more information.

#### Instruction-Data-Guard classifier inference

```bash
# same as `python instruction_data_guard_classifier_inference.py`
instruction_data_guard_classifier_inference \
--input-data-dir /path/to/data/directory \
--output-data-dir /path/to/output/directory \
--input-file-type "jsonl" \
--input-file-extension "jsonl" \
--output-file-type "jsonl" \
--input-text-field "text" \
--batch-size 64 \
--max-chars 6000 \
--device "gpu" \
--token "hf_1234"
```

In the above example, `--token` is your HuggingFace token, which is used when downloading the base Llama Guard model.

Additional arguments may be added for customizing a Dask cluster and client. Run `instruction_data_guard_classifier_inference --help` for more information.

#### FineWeb-Edu classifier inference
