
9.1 Useful Pachyderm Commands


Change between pachd1/pachd2/pachd3 environments

Option 1: Manually edit the config file:

  1. In the terminal: vi ~/.pachyderm/config.json
  2. Type i
  3. Change the active_context field to pachd2 (or whatever is appropriate)
  4. Hit ESC
  5. Type :wq

Option 2: Enter the following command in the terminal (change pachd2 to whatever is appropriate)

  1. pachctl config set active-context pachd2
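Either way, you can verify which context is now active with:

pachctl config get active-context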

Standing up a whole DAG

Rob Markel wrote a Python script to stand up almost an entire DAG, assuming you've created all the pipeline specs and the data_source pipelines have been set up (data_source_<SENSOR>_site and data_source_<SENSOR>_linkmerge). In your terminal window, navigate to your local instance of the NEON-IS-data-processing Git repo, and specifically to the utilities folder. For example:

cd ~/R/NEON-IS-data-processing/utilities

From there, run the following:

python3 -B -m dag.create_dag --spec_dir=<path to the folder where the pipeline specs are> --end_node_spec=<path to the last pipeline spec in the DAG>

For example:

python3 -B -m dag.create_dag --spec_dir=/home/NEON/csturtevant/R/NEON-IS-data-processing-homeDir/pipe/exo --end_node_spec=/home/NEON/csturtevant/R/NEON-IS-data-processing-homeDir/pipe/exo/exo2_named_location_filter.json

Note that the paths you put into the arguments must be absolute paths (don't use, e.g., ~/R/...). If a DAG is complicated, you may get some “Pipeline not found” messages in the output when the script runs. You can run the script repeatedly until these disappear; running it more than once causes no issues.
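If you'd rather not rerun the script by hand, a minimal retry loop like the sketch below works; the paths are placeholders, and it assumes the "Pipeline not found" messages appear in the script's output:

# Rerun create_dag until no "Pipeline not found" messages remain
SPEC_DIR=/absolute/path/to/pipeline/specs
END_NODE=$SPEC_DIR/last_pipeline_spec.json
while python3 -B -m dag.create_dag --spec_dir=$SPEC_DIR --end_node_spec=$END_NODE 2>&1 | grep "Pipeline not found"; do
  echo "Some pipelines were not found yet; retrying..."
done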

If you are working on the som development server, all the Python packages needed to run the script are already installed and you can skip the rest of this section. If not, you'll need python3 installed, and then the dependency packages. To install those, navigate to the utilities/dag folder of your local NEON-IS-data-processing Git repo. Then:

sudo pip3 install -r requirements.txt
sudo python3 -m pip install graphviz
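To sanity-check the installation, try importing the package (assuming the graphviz Python bindings installed cleanly):

python3 -c "import graphviz; print(graphviz.__version__)"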

Deleting a whole DAG

Similar to standing up a whole DAG (above), you can delete a whole DAG. In your terminal window, navigate to your local instance of the NEON-IS-data-processing Git repo, and specifically to the utilities folder. For example:

cd ~/R/NEON-IS-data-processing/utilities

From there, run the following:

python3 -B -m dag.delete_dag --spec_dir=<path to the folder where the pipeline specs are> --end_node_spec=<path to the last pipeline spec in the DAG>

For example:

python3 -B -m dag.delete_dag --spec_dir=/home/NEON/csturtevant/R/NEON-IS-data-processing-homeDir/pipe/exo --end_node_spec=/home/NEON/csturtevant/R/NEON-IS-data-processing-homeDir/pipe/exo/exo2_named_location_filter.json

You'll probably get a bunch of warnings for each pipeline you delete, like:

WARNING: If using the --split-txn flag, this command must run until complete. If a failure or incomplete run occurs, then Pachyderm will be left in an inconsistent state. To resolve an inconsistent state, rerun this command.

Accept the warning with a y each time and rerun the whole command if failures or incomplete deletions occur.
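If you trust the prompts and don't want to type y for each pipeline, piping yes into the script should auto-accept them. This is a sketch; it assumes the script reads its confirmations from stdin:

yes | python3 -B -m dag.delete_dag --spec_dir=<path to specs> --end_node_spec=<path to last spec>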

The notes in the "Standing up a whole DAG" section above about using absolute paths and installing dependencies also apply here.

Show only your jobs/pipelines/repos

You can pipe the jobs/pipelines/repos output through grep, like so:

pachctl list pipeline | grep "SENSOR"
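The same pattern works for jobs and repos, and grep -i makes the match case-insensitive:

pachctl list job | grep "SENSOR"
pachctl list repo | grep -i "exo"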

Look at what your pipeline is doing

Using pachctl inspect pipeline <your_pipeline_name> lets you see the status of your pipeline, how it was configured, and sometimes why it failed.
Failure reasons are only printed if the Docker container for your pipeline doesn't initialize. This is usually because you specified an image in your pipeline spec that doesn't exist, so check the spelling and image version. If everything looks fine, or the reason for failure is not apparent, check the job logs for the pipeline.
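For example (using the pipeline name from the logs example below):

pachctl inspect pipeline par-surfacewater_context_filter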

The following command lists the recent jobs for all pipelines:

pachctl list job

If you see a failure for your pipeline, first look to see if an upstream pipeline failed. This will automatically cause failure for all downstream pipelines. Let's assume that your pipeline is where the problem started, or maybe it was successful but you still want to see what happened in the code when the job ran. Take note of the job ID for your pipeline and see the section below to look at the logs for that job.

Look at logs

Logs are only written for pipelines that have the standby option set to false in the pipeline's JSON file ("standby":false,). Change this value from true as needed, then update your pipeline with:
pachctl update pipeline -f ~/PATH/TO/PIPELINE/FILE/PIPELINE_FILE.json
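If you have jq available (it's used elsewhere on this page), here is a sketch of flipping the flag without opening an editor; the spec path is a placeholder:

# Set standby to false in the spec, then push the update
jq '.standby = false' ~/PATH/TO/PIPELINE_FILE.json > /tmp/pipeline_spec.json
pachctl update pipeline -f /tmp/pipeline_spec.json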

Once your pipeline has started running (or has finished/failed), list the jobs for the pipeline: pachctl list job --pipeline=<insert_pipeline_name>

Example: pachctl list job --pipeline=par-surfacewater_context_filter

This will produce something like:

ID                               PIPELINE                  STARTED       DURATION  RESTART PROGRESS  DL UL STATE   
2efb42c31b444f338991b14be0874ad4 pipeline_name             9 minutes ago 4 seconds 0       0 + 5 / 5 0B 0B failure 

Copy the job ID and use it in the following: pachctl logs --job=<YOUR-JOB-ID-FROM-ABOVE>

If nothing prints, nothing was logged. This could mean things are fine, or it could mean you never set "standby":false.
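You can also pull logs for a pipeline's most recent job directly, without looking up the job ID:

pachctl logs --pipeline=par-surfacewater_context_filter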

REMEMBER: Set standby back to true after you are done!

Reprocessing a pipeline (nominal)

pachctl update pipeline --reprocess -f [PATH TO FILE]

Example: pachctl update pipeline --reprocess -f ~/R/NEON-IS-data-processing/pipe/pqs1/pqs1_merge_data_by_location.json

Reprocessing a pipeline without having to reload the file for the pipeline spec

For pipeline specs in json format: pachctl extract pipeline [pipeline_name] -o json | pachctl update pipeline --reprocess

For pipeline specs in yaml format: pachctl extract pipeline [pipeline_name] -o yaml | pachctl update pipeline --reprocess

To save the current pipeline to a file (instead of reloading it): pachctl extract pipeline [pipeline_name] -o json > [/path/to/new/file.json]
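Combining extract with the jq loop pattern used below, here is a sketch for backing up every pipeline spec at once; the backup folder is a placeholder:

mkdir -p ~/pipeline_spec_backups
for pipe in $(pachctl list pipeline --raw | jq -r '.pipeline.name'); do
  pachctl extract pipeline $pipe -o json > ~/pipeline_spec_backups/$pipe.json
done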

Handling unfinished commits

Because this is a bigger issue, the notes have been moved to their own wiki page.

Putting all pipelines on standby

for pipe in $(pachctl list pipeline --raw | jq -r '. | select(.state=="PIPELINE_RUNNING") | .pipeline.name'); do
  echo "Putting pipeline $pipe on standby"
  pachctl extract pipeline $pipe -o json | jq -r '.standby = true' | pachctl update pipeline
done
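To reverse this later, the same loop can take everything off standby; this sketch assumes standby pipelines report the PIPELINE_STANDBY state:

for pipe in $(pachctl list pipeline --raw | jq -r '. | select(.state=="PIPELINE_STANDBY") | .pipeline.name'); do
  echo "Taking pipeline $pipe off standby"
  pachctl extract pipeline $pipe -o json | jq -r '.standby = false' | pachctl update pipeline
done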

Deleting pipelines and repos indiscriminately (NEVER DO THIS)

Pachyderm, and common sense, really want you to delete repos 'backwards' from the end of the pipeline to the start. However, you may want to circumvent this and wholesale delete a middle repo using the --force switch.

This will immediately delete the pipeline, even if other pipelines depend on it. DON'T DO THIS. Not only will it break downstream pipelines/repos, but it will create a bunch of provenance errors that may not be fixable without wiping away the entire pipeline. I promise, you'll regret using the --force option.