Merge pull request #66 from CDCgov/dev
v1.2
sateeshperi authored May 12, 2022
2 parents 5d4bb50 + de4d280 commit 0e6e1c8
Showing 22 changed files with 294 additions and 84 deletions.
28 changes: 28 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
Expand Up @@ -3,6 +3,34 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## v1.2 Expecto Patronum - [05/11/2022]

### `Added`

* Increased time limits in `base.config`: `process_low`, `process_medium`, and `process_high` to **72.h**; `process_long` to **120.h**.
* Added a SNP distance matrix as an output.
* Updated `qc_report.txt` to include coverage **mean depth** and **reads mapped**.
* Positions are now masked (`N`) based on **DP**; added a `min_depth` parameter (default: 50).
* Changed the `test` profile to set `min_depth = 2` so it runs to completion.

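
The new SNP distance matrix (produced by `snp-dists` downstream of the `vcf-to-fasta` alignment) is just a pairwise count of differing alignment positions. A minimal Python sketch of that computation, with made-up sample names and sequences:

```python
from itertools import combinations

def snp_distance(seq_a, seq_b):
    """Count positions at which two equal-length aligned sequences differ."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy alignment: three samples, six aligned positions.
alignment = {"sampleA": "ACGTAC", "sampleB": "ACGAAC", "sampleC": "TCGAAC"}
for s1, s2 in combinations(sorted(alignment), 2):
    print(s1, s2, snp_distance(alignment[s1], alignment[s2]), sep="\t")
# sampleA	sampleB	1
# sampleA	sampleC	2
# sampleB	sampleC	1
```

The real tool emits this as a square TSV matrix; the pairwise counts are the same.
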
### `Fixed`

* Fixed a bug causing a downsample mismatch.
* Renamed the configuration variable `vcftools_filter` to `gatkgenotypes_filter`.
* Changed samplesheet creation to accept multiple directories as arguments and to search recursively for sequences.
* Moved the full VCF consensus file to debug output.
* Removed leftover nf-core branding.

### `Dependencies`

### `Deprecated`

### `TODO`

* Update logo

---
## v1.1 Candid Aura - [04/01/2022]

### `Added`
Expand Down
35 changes: 5 additions & 30 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,42 +1,19 @@
[![Open CDCgov/mycosnp-nf in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/CDCgov/mycosnp-nf)

Once the pod launches, it presents a VS Code interface with Nextflow, Conda, and Docker pre-installed.

* To run the pipeline with test data
```bash
nextflow run main.nf -profile docker,test
```

# ![CDCgov/mycosnp-nf](docs/images/nf-core-mycosnp_logo_light.png#gh-light-mode-only) ![CDCgov/mycosnp-nf](docs/images/nf-core-mycosnp_logo_dark.png#gh-dark-mode-only)

[![GitHub Actions CI Status](https://github.com/CDCgov/mycosnp-nf/workflows/nf-core%20CI/badge.svg)](https://github.com/CDCgov/mycosnp-nf/actions?query=workflow%3A%22nf-core+CI%22)
[![GitHub Actions Linting Status](https://github.com/CDCgov/mycosnp-nf/workflows/nf-core%20linting/badge.svg)](https://github.com/CDCgov/mycosnp-nf/actions?query=workflow%3A%22nf-core+linting%22)
[![AWS CI](https://img.shields.io/badge/CI%20tests-full%20size-FF9900?labelColor=000000&logo=Amazon%20AWS)](https://nf-co.re/mycosnp/results)
[![Cite with Zenodo](http://img.shields.io/badge/DOI-10.5281/zenodo.XXXXXXX-1073c8?labelColor=000000)](https://doi.org/10.5281/zenodo.XXXXXXX)

[![Nextflow](https://img.shields.io/badge/nextflow%20DSL2-%E2%89%A521.10.3-23aa62.svg?labelColor=000000)](https://www.nextflow.io/)
[![run with conda](http://img.shields.io/badge/run%20with-conda-3EB049?labelColor=000000&logo=anaconda)](https://docs.conda.io/en/latest/)
[![run with docker](https://img.shields.io/badge/run%20with-docker-0db7ed?labelColor=000000&logo=docker)](https://www.docker.com/)
[![run with singularity](https://img.shields.io/badge/run%20with-singularity-1d355c.svg?labelColor=000000)](https://sylabs.io/docs/)

[![Get help on Slack](http://img.shields.io/badge/slack-nf--core%20%23mycosnp-4A154B?labelColor=000000&logo=slack)](https://nfcore.slack.com/channels/mycosnp)
[![Follow on Twitter](http://img.shields.io/badge/twitter-%40nf__core-1DA1F2?labelColor=000000&logo=twitter)](https://twitter.com/nf_core)
[![Watch on YouTube](http://img.shields.io/badge/youtube-nf--core-FF0000?labelColor=000000&logo=youtube)](https://www.youtube.com/c/nf-core)

## Introduction


**nf-core/mycosnp** is a bioinformatics best-practice analysis pipeline for MycoSNP, a portable workflow for performing whole-genome sequencing analysis of fungal organisms, including *Candida auris*. The workflow prepares the reference, performs quality control, and calls variants against that reference. MycoSNP generates several output files that are compatible with downstream analytic tools, such as those used for phylogenetic tree building and gene variant annotation.

The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from [nf-core/modules](https://github.com/nf-core/modules) in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

<!-- TODO nf-core: Add full-sized test dataset and amend the paragraph below if applicable -->
On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the [nf-core website](https://nf-co.re/mycosnp/results).

## Pipeline summary

<!-- TODO nf-core: Fill in short bullet-pointed list of the default steps in the pipeline -->

### Reference Preparation

> **Prepares a reference FASTA file for BWA alignment and GATK variant calling by masking repeats in the reference and generating the BWA index.**
Expand Down Expand Up @@ -104,7 +81,10 @@ On release, automated continuous integration tests run the pipeline on a full-si
```console
nextflow run CDCgov/mycosnp-nf -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input samplesheet.csv --fasta c_auris.fasta
```

## Pre-configured Nextflow development environment using Gitpod

[![Open CDCgov/mycosnp-nf in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/CDCgov/mycosnp-nf)

Once the pod launches, it presents a VS Code interface with Nextflow, Conda, and Docker pre-installed.

## Documentation

The nf-core/mycosnp pipeline comes with documentation about the pipeline [usage](https://github.com/CDCgov/mycosnp-nf/blob/master/docs/usage.md), [parameters](https://github.com/CDCgov/mycosnp-nf/wiki/Parameter-Docs) and [output](https://github.com/CDCgov/mycosnp-nf/blob/master/docs/output.md).
Expand All @@ -123,16 +103,14 @@ We thank the following people for their extensive assistance in the development
* Lynn Dotrang [@leuthrasp](https://github.com/LeuThrAsp)
* Christopher Jossart [@cjjossart](https://github.com/cjjossart)
* Robert A. Petit III [@rpetit3](https://github.com/rpetit3)

> Special thanks to the **Staph-B** Slack workspace for open-source collaborations and discussions.

## Contributions and Support

If you would like to contribute to this pipeline, please see the [contributing guidelines](.github/CONTRIBUTING.md).

## Citations

<!-- TODO nf-core: Add citation for pipeline after first release. Uncomment lines below and update Zenodo doi and badge at the top of this file. -->
<!-- If you use CDCgov/mycosnp-nf for your analysis, please cite it using the following doi: [10.5281/zenodo.XXXXXX](https://doi.org/10.5281/zenodo.XXXXXX) -->

<!-- TODO nf-core: Add bibliography of tools and data used in your pipeline -->
An extensive list of references for the tools used by the pipeline can be found in the [`CITATIONS.md`](CITATIONS.md) file.

You can cite the `nf-core` publication as follows:
Expand All @@ -149,7 +127,6 @@ You can cite the `nf-core` publication as follows:

**General disclaimer** This repository was created for use by CDC programs to collaborate on public health related projects in support of the [CDC mission](https://www.cdc.gov/about/organization/mission.htm). GitHub is not hosted by the CDC, but is a third party website used by CDC and its partners to share information and collaborate on software. CDC use of GitHub does not imply an endorsement of any one particular service, product, or enterprise.


## Related documents

* [Open Practices](open_practices.md)
Expand All @@ -158,7 +135,6 @@ You can cite the `nf-core` publication as follows:
* [Disclaimer](DISCLAIMER.md)
* [Contribution Notice](CONTRIBUTING.md)
* [Code of Conduct](code-of-conduct.md)

## Public Domain Standard Notice
This repository constitutes a work of the United States Government and is not
Expand All @@ -168,7 +144,6 @@ the work worldwide are waived through the [CC0 1.0 Universal public domain dedic
All contributions to this repository will be released under the CC0 dedication. By
submitting a pull request you are agreeing to comply with this waiver of
copyright interest.

## License Standard Notice
The repository utilizes code licensed under the terms of the Apache Software
License and therefore is licensed under ASL v2 or later.
Expand Down
5 changes: 4 additions & 1 deletion bin/broad-vcf-filter/vcfSnpsToFasta.py
Original file line number Diff line number Diff line change
Expand Up @@ -7,11 +7,13 @@
parser = argparse.ArgumentParser()
parser.add_argument('infile', help='VCF file', type=str)
parser.add_argument('--max_amb_samples', help='maximum number of samples with ambiguous calls for a site to be included', type=int)
parser.add_argument('--min_depth', default=0, help='Replace SNP with "N" if depth is less than minimum (Default: do not check read depth)', type=int)
args = parser.parse_args()

infile = args.infile

max_amb = 1000000
min_depth = args.min_depth
if args.max_amb_samples:
    max_amb = args.max_amb_samples

Expand Down Expand Up @@ -57,11 +59,12 @@
except:
pass
genotype = record.get_genotype(index=header.get_sample_index(translation),min_gq=0)
record_format = dict(zip(record.format.split(':'), record.genotypes[header.get_sample_index(translation)].split(':')))
variant_type = record.get_variant_type(caller,genotype)
### print(genome + " " + str(genotype) + " " + str(pass_or_fail) + " " + str(variant_type)) ###
if pass_or_fail and not variant_type:
pass
elif pass_or_fail and variant_type == 'SNP':
elif pass_or_fail and variant_type == 'SNP' and int(record_format['DP']) >= min_depth:
chrom = record.get_chrom()
pos = int(record.get_pos())

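
The new filter pairs the VCF `FORMAT` keys with the per-sample values before comparing `DP` to `min_depth`. A standalone sketch of that check (the helper name and example fields are illustrative, not the pipeline's actual parser):

```python
def depth_ok(format_field, sample_field, min_depth):
    """Pair VCF FORMAT keys (e.g. 'GT:AD:DP:GQ') with a sample column's
    values and test the DP entry against a minimum depth."""
    fields = dict(zip(format_field.split(":"), sample_field.split(":")))
    return int(fields.get("DP", "0")) >= min_depth

# With a depth-50 cutoff, the first call is kept and the second is masked.
print(depth_ok("GT:AD:DP:GQ", "1/1:0,62:62:99", min_depth=50))   # True
print(depth_ok("GT:AD:DP:GQ", "0/1:30,12:42:80", min_depth=50))  # False
```
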
Expand Down
14 changes: 8 additions & 6 deletions bin/mycosnp_full_samplesheet.sh
Original file line number Diff line number Diff line change
Expand Up @@ -3,9 +3,11 @@
SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
script=$SCRIPT_DIR/mycosnp_create_sample_sheet.pl
echo "sample,r1,r2,r3,r4"
$script -i $1 -f
for DIR in `ls $1`; do
if [[ -d "$1/$DIR" ]]; then
$script -i $1/$DIR -f
fi
done | sort

for VAR in "$@"
do
    "$script" -i "$VAR" -f
    for DIR in $(find "$VAR" -type d); do
        "$script" -i "$DIR" -f
    done
done | sort | uniq
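
The same accept-many-directories, search-recursively, deduplicate pattern can be sketched in Python (the FASTQ suffixes and the sample-name-from-filename convention here are illustrative assumptions, not the Perl helper's actual logic):

```python
import os

def samplesheet_rows(*dirs):
    """Recursively collect FASTQ files under each given directory and
    return deduplicated, sorted (sample, path) rows -- the same
    multi-directory find | sort | uniq pattern as the shell script."""
    rows = set()
    for top in dirs:
        for root, _subdirs, files in os.walk(top):
            for name in files:
                if name.endswith((".fastq.gz", ".fq.gz")):
                    sample = name.split("_")[0]  # naive sample-name guess
                    rows.add((sample, os.path.join(root, name)))
    return sorted(rows)
```

Passing the same directory twice, or nested directories, yields no duplicate rows, matching the `sort | uniq` behaviour.
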
28 changes: 17 additions & 11 deletions bin/qc_report_stats.py
Original file line number Diff line number Diff line change
Expand Up @@ -13,6 +13,7 @@
parser.add_argument("--qual_scores_before_trim", type=argparse.FileType("r"))
parser.add_argument("--qual_scores_after_trim", type=argparse.FileType("r"))
parser.add_argument("--reference", type=argparse.FileType("r"))
parser.add_argument("--bam_coverage", type=argparse.FileType("r"))
args = parser.parse_args()

# Sample name variable
Expand All @@ -25,7 +26,6 @@
# Trim newline
line = line.rstrip()
if line.startswith(">"):
# Skip lines with ">"
if header is not None:
continue
header = line[1:]
Expand Down Expand Up @@ -90,23 +90,29 @@
# Formatting. 2 decimal points
phred_avg_after = "{:.2f}".format(phred_avg_after)

# Parsing coverage (mean depth) and percent coverage of reference sequence from Qualimap bamqc report genome_results.txt file
for line in args.bam_coverage:
    if "mean coverageData" in line:
        mean_depth_coverage = float(line.split("= ")[1].strip("X\n"))
    if "number of mapped reads" in line:
        reads_mapped = line.split("= ")[1].replace(",", "").strip("\n")

# Preparing output list with variables and then reformatting into a string
output_string = ""
output_list = [
sample_name,
reads_before_trim,
str(reads_before_trim),
str(GC_content_before),
str(phred_avg_before),
str(coverage_before),
reads_after_trim_percent,
paired_reads_after_trim,
unpaired_reads_after_trim,
"{:.2f}".format(coverage_before),
str(reads_after_trim_percent),
str(paired_reads_after_trim),
str(unpaired_reads_after_trim),
str(GC_content_after),
str(phred_avg_after),
str(coverage_after),
"{:.2f}".format(coverage_after),
"{:.2f}".format(mean_depth_coverage),
str(reads_mapped)
]

# Creating tab delimited string for qc report generation
for item in output_list:
output_string += str(item) + "\t"
print(output_string)
print('\t'.join(output_list))
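
The two Qualimap fields parsed above can be exercised in isolation; a minimal sketch (the example report lines mimic `genome_results.txt` formatting, simplified):

```python
def parse_bam_coverage(lines):
    """Pull mean depth (trailing 'X' stripped) and mapped-read count
    (thousands separators removed) out of Qualimap bamqc output lines."""
    mean_depth = reads_mapped = None
    for line in lines:
        if "mean coverageData" in line:
            mean_depth = float(line.split("= ")[1].strip("X\n"))
        elif "number of mapped reads" in line:
            reads_mapped = int(line.split("= ")[1].replace(",", "").strip())
    return mean_depth, reads_mapped

report = [
    "     number of mapped reads = 1,204,553\n",
    "     mean coverageData = 36.21X\n",
]
print(parse_bam_coverage(report))  # (36.21, 1204553)
```
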
8 changes: 4 additions & 4 deletions conf/base.config
Original file line number Diff line number Diff line change
Expand Up @@ -27,20 +27,20 @@ process {
withLabel:process_low {
cpus = { check_max( 2 * task.attempt, 'cpus' ) }
memory = { check_max( 6.GB * task.attempt, 'memory' ) }
time = { check_max( 24.h * task.attempt, 'time' ) }
time = { check_max( 72.h * task.attempt, 'time' ) }
}
withLabel:process_medium {
cpus = { check_max( 4 * task.attempt, 'cpus' ) }
memory = { check_max( 6.GB * task.attempt, 'memory' ) }
time = { check_max( 24.h * task.attempt, 'time' ) }
time = { check_max( 72.h * task.attempt, 'time' ) }
}
withLabel:process_high {
cpus = { check_max( 4 * task.attempt, 'cpus' ) }
memory = { check_max( 12.GB * task.attempt, 'memory' ) }
time = { check_max( 48.h * task.attempt, 'time' ) }
time = { check_max( 72.h * task.attempt, 'time' ) }
}
withLabel:process_long {
time = { check_max( 72.h * task.attempt, 'time' ) }
time = { check_max( 120.h * task.attempt, 'time' ) }
}
withLabel:process_high_memory {
memory = { check_max( 64.GB * task.attempt, 'memory' ) }
Expand Down
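
Each retry multiplies the request by `task.attempt`, and `check_max` caps the escalated value at a configured ceiling. The arithmetic behind the `72.h * task.attempt` lines, sketched in Python (the 120-hour ceiling is an illustrative assumption, not the pipeline's actual `--max_time`):

```python
def check_max(requested, limit):
    """Cap an escalating resource request at a configured ceiling
    (conceptual Python stand-in for the pipeline's Groovy helper)."""
    return min(requested, limit)

MAX_TIME_H = 120  # illustrative cluster-wide ceiling
for attempt in (1, 2, 3):
    print(f"attempt {attempt}: {check_max(72 * attempt, MAX_TIME_H)}h")
# attempt 1: 72h
# attempt 2: 120h
# attempt 3: 120h
```
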
35 changes: 23 additions & 12 deletions conf/modules.config
Original file line number Diff line number Diff line change
Expand Up @@ -200,16 +200,6 @@ process {
pattern: "*.{fastq.gz,txt}"
]
}
withName: 'QC_REPORT' {
ext.args = { "" }
ext.when = { }
publishDir = [
enabled: "${params.save_alignment}",
mode: "${params.publish_dir_mode}",
path: { "${params.outdir}/samples/${meta.id}/qc_report" },
pattern: "*.{txt}"
]
}
withName: 'BWA_MEM' {
ext.args = { "" }
ext.when = { }
Expand Down Expand Up @@ -346,6 +336,16 @@ process {
pattern: "*.idxstats"
]
}
withName: 'QC_REPORT' {
ext.args = { "" }
ext.when = { }
publishDir = [
enabled: "${params.save_alignment}",
mode: "${params.publish_dir_mode}",
path: { "${params.outdir}/samples/${meta.id}/qc_report" },
pattern: "*.{txt}"
]
}
}

// Subworkflow: gatk-variants
Expand Down Expand Up @@ -385,7 +385,7 @@ process {
]
}
withName: 'FILTER_GATK_GENOTYPES' {
ext.args = { params.vcftools_filter }
ext.args = { params.gatkgenotypes_filter }
ext.when = { }
ext.prefix = {"combined_genotype_filtered_snps_filtered"}
publishDir = [
Expand Down Expand Up @@ -448,7 +448,7 @@ process {
withName: 'VCF_CONSENSUS' {
ext.when = { }
publishDir = [
enabled: true,
enabled: "${params.save_debug}",
mode: "${params.publish_dir_mode}",
path: { "${params.outdir}/combined/consensus" },
pattern: "*{fasta.gz}"
Expand All @@ -474,6 +474,17 @@ process {
pattern: "vcf-to-fasta.fasta"
]
}
withName: 'SNPDISTS' {
ext.args = { "" }
ext.errorStrategy = { "ignore" }
ext.when = { }
publishDir = [
enabled: true,
mode: "${params.publish_dir_mode}",
path: { "${params.outdir}/combined/snpdists" },
pattern: "*.tsv"
]
}
withName: 'RAPIDNJ' {
ext.args = { "-t d -b 1000 -n" }
ext.errorStrategy = { "ignore" }
Expand Down
2 changes: 2 additions & 0 deletions conf/test.config
Original file line number Diff line number Diff line change
Expand Up @@ -29,5 +29,7 @@ params {
// Genome references
fasta = 'https://raw.githubusercontent.com/nf-core/test-datasets/bactmap/genome/NCTC13799.fna'

    min_depth = 2

}
10 changes: 10 additions & 0 deletions docs/dev_notes.md
Original file line number Diff line number Diff line change
Expand Up @@ -92,6 +92,7 @@ nf-core modules install samtools/view
nf-core modules install samtools/stats
nf-core modules install samtools/idxstats
nf-core modules install samtools/flagstat
nf-core modules install snpdists
```

```bash
Expand Down Expand Up @@ -164,4 +165,13 @@ INFO Modules installed in '.':
│ seqtk/sample │ nf-core/modules │ e20e57f90b6787ac9a010a980cf6ea98bd990046 │ Add when: block (#1261) │ 2022-02-04 │
│ seqtk/seq │ nf-core/modules │ 1016c9bd1a7b23e31b8215b8c33095e733080735 │ Seqtk seq (#1340) │ 2022-02-23 │
└─────────────────────────────────┴─────────────────┴──────────────────────────────────────────┴────────────────────────────────────────────────────┴────────────┘
```
## Bumping a pipeline version number

When releasing a new version of an nf-core pipeline, version numbers have to be updated in several different places. The helper command `nf-core bump-version` automates this for you, avoiding manual errors (and frustration!):

```bash
nf-core bump-version 1.2
```