If you have questions about using Linux on MARCC, check out the tutorials provided by the MARCC group here: https://www.marcc.jhu.edu/training/tutorial-series/ and here: https://marcc-hpc.github.io/esc/. Core topics include custom environments, SLURM scheduling, Singularity containers, and code profiling and parallelization. Resources for many other common lab questions are available there too, including:
- Getting started
- Upload/download files to/from MARCC via the command line (see Storing and accessing data, and the sketch after this list) or via Globus
- Issues installing packages in R?
- Can't run a program, or need a Docker/Singularity container?
- Interactive sessions in RStudio or Jupyter Notebooks
- Managing multiple jobs on an interactive node (screen/htop)
- Submitted jobs taking too long to start
- Clean up your home directory if you get locked out (via Globus)
- I/O Issues
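For example, one common way to upload or download files from the command line is `scp` or `rsync`. A minimal sketch, using a placeholder login hostname and paths (check the MARCC documentation for the correct address and your own directories):

```bash
# Upload a local file to MARCC (hostname and remote path are placeholders)
scp myfile.txt username@login.marcc.jhu.edu:/work-zfs/abattle4/username/

# Download a results directory from MARCC; -avP preserves attributes,
# shows progress, and can resume partial transfers
rsync -avP username@login.marcc.jhu.edu:/work-zfs/abattle4/username/results/ ./results/
```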
Lab members have compiled other resources that may be useful for various jobs, including:
- Snakemake: A template to develop and run your project on MARCC
- Visualizing images on MARCC via X11 forwarding
- Install PEER using Singularity
- Access to the server: to access this server, log in to MARCC using your normal login, then run `ssh battle-bigmem` using your same password.
- I/O issues: `battle-bigmem` has been known to throw an I/O error occasionally and terminate a job, especially when dealing with large I/O operations. For some ideas on how to address this, refer to the MARCC webpage here: https://www.marcc.jhu.edu/bluecrab-storage-guidelines/. These suggestions primarily have to do with where large files are being written to or read from.
- Interactive sessions: since the standard SLURM job scheduler doesn't work on `bigmem`, to start an RStudio interactive session simply use the direct command `rstudio_server_start` instead of an `sbatch` command.
- Multiplexing/running multiple jobs at once: there are several approaches to doing this, including `nohup` + `&`, or multiplexers like `tmux` or `screen` (see the sketch below). Check out the [Additional Resources Page](https://github.com/battle-lab/battle-lab-guide/blob/master/marcc_guide/additional_resources.md) for some good tutorials on these.
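  A minimal sketch of the `nohup` + `&` approach and a basic `tmux` workflow (the script and log file names are placeholders):

  ```bash
  # Run a long job in the background so it keeps going after you log out
  nohup Rscript my_analysis.R > my_analysis.log 2>&1 &

  # Or run work inside a detachable tmux session
  tmux new -s myjob      # start a named session and run your commands inside it
  # detach with Ctrl-b d; later, reattach with:
  tmux attach -t myjob
  ```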
- Keep bigmem's cores available: To help manage your jobs when using `battle-bigmem`, copy the script from `ninjakiller.sh` into your `~/.bashrc` file. After finishing a session on `battle-bigmem`, use `ninjakiller` to list which jobs you still have running and to cancel the ones you no longer need. For help documentation on using the script, simply call `ninjakiller -h`.
  - For a full listing of jobs running, use the `htop` command. The `F9` button can be used in `htop` to kill jobs there too.
  - To kill all your jobs and log out, run `ninjakiller go`.
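  Putting the commands above together, a typical end of a `battle-bigmem` session might look like this (assuming `ninjakiller.sh` has already been copied into your `~/.bashrc`):

  ```bash
  ninjakiller -h    # help documentation for the script
  ninjakiller       # list your running jobs and cancel the ones you no longer need
  htop              # full listing of running processes; F9 kills a selected job
  ninjakiller go    # kill all of your jobs and log out
  ```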
- Record any data you put in `lab_data` in the README file there: In an effort to keep track of all the data we have accumulated as a lab and to reduce redundancy, any time you add data to the `/work-zfs/abattle4/lab_data` directory, please put a `README` file in the directory with your data and a note in the `/work-zfs/abattle4/lab_data/README` file to document it (see the sketch below). Thanks!
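  A minimal sketch of documenting a new dataset from the command line (the directory name and note text are placeholders):

  ```bash
  # Describe the new data in a README alongside it (hypothetical directory name)
  cd /work-zfs/abattle4/lab_data/my_new_dataset
  nano README    # note what the data are, where they came from, and when

  # Add a one-line entry to the top-level lab_data README as well
  echo "my_new_dataset: downloaded from <source> on $(date +%F) by $USER" \
      >> /work-zfs/abattle4/lab_data/README
  ```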
- How to install software on MARCC (the basic `module load` function, `Singularity` containers, etc.; see the sketch below)
- Workflow tools like `snakemake`
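As a quick illustration of the basic `module` workflow mentioned above (the module name is only an example; run `module avail` to see what is actually installed on MARCC):

```bash
module avail          # list the software available as environment modules
module load gcc       # load a module (example name)
module list           # show which modules are currently loaded
module unload gcc     # unload it when you are done
```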