Gets information from a running job, based on its PID, using psutil.
Available on PyPI and easily installed into your environment.
pip install pagurus
usage: pagurus [-h] [-o OUTFILE] [-p PATH] [-d] [-r RATE] [-u USER] [-noh] [-mv] [-l ROLLING] [--json] [--envvar ENVVAR]
options:
-h, --help show this help message and exit
-o OUTFILE, --outfile OUTFILE
File name for csv.
-p PATH, --path PATH Path to put csv file.
-d, --debug Run with debugging info.
-r RATE, --rate RATE Polling rate for process.
-u USER, --user USER Username to get stats for.
-noh, --no-header Turn off writing the header.
-mv, --move Moves file from 'running' to 'done' directories
-l ROLLING, --rolling ROLLING
Time, in ~minutes, after which the output rolls over to a new numbered file name.
--json Output JSON strings instead of CSV lines
--envvar ENVVAR Add an environment variable to the output (can be specified multiple times).
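For example, the JSON and environment-variable options can be combined with a custom polling rate; the values below (the rate of 5, the SLURM_JOB_ID variable, and the file names) are illustrative, not defaults:

# Write JSON records instead of CSV lines, set the polling rate to 5,
# and attach the value of SLURM_JOB_ID to every record
pagurus -u $USER -r 5 --json --envvar SLURM_JOB_ID -p /path/to/output/dir -o stats.json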
# Start running wrapper in the background for username
pagurus -u $USER -mv -p /path/to/output/dir -o test.csv &
# Get the PID of the pagurus process started in the background
export PID=$!
# Sleep for a few seconds to let everything start running
sleep 10
###########################
# Run your desired program as normal
./a.out
# Works with containers
shifter --image=tylern4/memoryhog:latest alloc 2
# and with wrapper scripts
shifter --image=jfroula/aligner-bbmap:2.0.2 bbmap.sh Xmx12g in=sample.fastq.bz2 ref=sample.fasta out=test.sam
###########################
# Kill the pagurus process
kill $PID
# Sleep for a few seconds to let results finish writing
sleep 10
There is an example notebook that shows how to get memory and CPU usage from the output files.
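For a quick look at the collected columns without opening the notebook, standard shell tools are enough; the path below assumes the -mv run shown above moved the finished file into a done directory, so adjust it to wherever your CSV actually landed:

# Show the header plus the first sample, aligned into columns
head -n 2 /path/to/output/dir/done/test.csv | column -s, -t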