Encountering Issues When Processing PacBio Revio 16S Data with the DADA2 Pipeline #2074
Could you provide a bit more information about this? In particular, what is your QIIME2 version? And what are the versions of R and the relevant packages in the R environment that is failing to process the same data?
I am using QIIME2 version qiime2-amplicon-2024.10, with R 4.4.2 and dada2 1.32.0.
Sorry, I did not copy the other loaded packages; I have been using RStudio for other scripts, so not all loaded packages are relevant to the dada2 script.
[Truncated sessionInfo() output: Matrix products: default; time zone: America/Toronto; locale and attached-package details omitted.]
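For completeness, a minimal way to capture the environment details requested above, run in the same R session that executes the DADA2 script:

```r
library(dada2)
packageVersion("dada2")   # e.g. 1.32.0
R.version.string          # e.g. "R version 4.4.2 ..."
sessionInfo()             # locale, BLAS, and all attached packages
```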
@emankhalaf see #1892, but note that @benjjneb recently added a new function for dealing with binned quality scores on the 'master' branch. I have found processing any Revio data (especially from Kinnex kits) requires pretty significant compute resources.
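If you want to try that branch, one common way to install the development version is via devtools. This is only a sketch and assumes a working build toolchain; the name of the new binned-quality function is not given in this thread:

```r
# install.packages("devtools")               # if devtools is not already available
devtools::install_github("benjjneb/dada2")   # development ('master') branch
packageVersion("dada2")                      # should now report the development version
```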
@cjfields
Alternatively, I might need to use a value as high as 1e10. I have not been able to find specific updates or recommendations online regarding the DADA2 workflow for processing Revio sequences. The previous issue linked above is one of the most commonly referenced sources when searching for solutions to my problem, but I am still unclear about the exact adjustments needed to process Revio sequences properly. What I understand so far is that the Revio system uses a quality-score binning approach, meaning the platform groups quality scores into predefined categories rather than assigning a distinct quality score to each base call, similar to Illumina's NovaSeq, and that this affects the error-model learning step. However, I am unsure of the precise modifications required to address this issue effectively. Any further guidance or clarification would be greatly appreciated.
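For reference, my current reading is that such an adjustment would go into the error-learning call, roughly as sketched below. This assumes the 1e10 above refers to the nbases argument of learnErrors(); the file path is a placeholder:

```r
library(dada2)
filts <- list.files("filtered", pattern = "\\.fastq\\.gz$", full.names = TRUE)  # hypothetical path
err <- learnErrors(
  filts,
  errorEstimationFunction = PacBioErrfun,  # CCS-specific error function shipped with dada2
  BAND_SIZE   = 32,
  nbases      = 1e10,                      # more data for error learning than the 1e8 default
  multithread = TRUE
)
plotErrors(err, nominalQ = TRUE)
```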
@emankhalaf We are still evolving our functionality for optimally fitting error models from binned quality score data, and so it is our fault we don't have more specific guidance on that yet (although the thread @cjfields linked above is a great resource). To me the larger mystery here is this:
The QIIME2 plugin is simply running the DADA2 R package, so I'm having trouble understanding these qualitatively different outcomes. You provided information about your R script and R environment above. Can you say a bit more about your QIIME2 processing? Was there any pre-processing prior to running the DADA2 plugin?
Thank you very much for your response. I executed the R scripts on an HPC cloud (200 GB RAM) provided by the Digital Research Alliance of Canada (DRAC). However, the QIIME2 script (outlined below) was run on a physical server with significantly lower computational power (64 GB RAM) compared to the resources used on the HPC.
@benjjneb I have important updates regarding the Revio sequence files which may help interpret the above issue. Guided by a bioinformatician from PacBio, I learned that using bamtools to convert BAM files to FASTQ was not the correct approach for handling Revio sequence files. That method produced FASTQ files with duplicated sequences, as confirmed by comparing line counts between the BAM and FASTQ files, and it also caused errors when submitting these sequence files to SRA. To resolve this, do the following steps (sketched below):
1. Install the necessary packages
2. Verify the installation
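A rough sketch of what this conversion can look like, wrapped in R so it can be run from the same session. The tool name (PacBio's bam2fastq from the pbtk package, e.g. installed with conda install -c bioconda pbtk) and the file names are illustrative assumptions, not necessarily the exact commands used here:

```r
bam <- "E1.hifi_reads.bcxxx_CS1_F--PBBC08_CS2_R.bam"   # example HiFi BAM name from this thread
system2("bam2fastq", args = "--version")               # verify the installation
system2("bam2fastq", args = c("-o", "E1", bam))        # writes E1.fastq.gz
# Sanity check: the FASTQ should contain exactly as many reads as the BAM
system2("samtools", args = c("view", "-c", bam))       # read count in the BAM
# zcat E1.fastq.gz | wc -l   (should equal 4x the read count)
```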
After conversion, I rechecked the line counts before and after the process, and they were identical. I applied this method to all my sequence files and ran the DADA2 R script on both DRAC and the physical server last night. The DRAC job is still running. Furthermore, I successfully resubmitted the newly converted sequence files to SRA. I am looking forward to hearing back from you! Much appreciated! Eman
Just so there is a record of this: we find PacBio Kinnex samples take considerable resources even when using binned reads, likely due to the ~15x increase in yield for a typical Revio run. It varies considerably based on the diversity of the samples, of course, but we bump cores for critical steps to 24.

And don't use default-filtered PacBio CCS data, which last I checked was 99% consensus accuracy. It needs to be 99.9%. Check the BAM file; consensus accuracy is included as one of the tags.
@cjfields thanks so much for your input. Yes, the PacBio bioinformatician recommended filtering BAM files based on quality scores (QS30) before use. I'm currently working on this using samtools for general BAM file filtering (please see below). Thanks for confirming!

samtools view -b -q 30 E1.hifi_reads.bcxxx_CS1_F--PBBC08_CS2_R.bam > E1.filtered.bam

But when you say to initially skip pooling, do you mean only at the beginning of the analysis and then apply the pooling parameter later? Pooling has a significant impact on the number of inferred ASVs, especially when working with a novel environmental niche.

I have an additional note: when running DADA2 in R, the error-model learning step generated the following message:
Here is the error model plot. I believe this indicates that the BAM files need to be filtered before processing, correct?

One more question: is this specific to sequences generated from the Revio, or should it also be applied to sequences from the Sequel II? I haven't encountered any issues with sequences generated from the Sequel II.
I don't filter based on the Q score but on the consensus-accuracy tag mentioned above.
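A hypothetical example of such a tag-based filter, again wrapped in R. It assumes a samtools version with filter-expression support (>= 1.12) and that the per-read consensus accuracy is stored in the 'rq' tag (an assumption on my part); the 0.999 cutoff corresponds to the 99.9% consensus accuracy mentioned above:

```r
in_bam  <- "E1.hifi_reads.bcxxx_CS1_F--PBBC08_CS2_R.bam"   # example name from this thread
out_bam <- "E1.rq0.999.bam"
# Keep only reads whose consensus-accuracy tag is >= 0.999; shQuote() protects the
# '>' in the expression from the shell.
system2("samtools",
        args = c("view", "-b", "-e", shQuote("[rq]>=0.999"), in_bam, "-o", out_bam))
```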
@benjjneb has a really nice summary here. You can essentially emulate 'pseudo' pooling by denoising per sample in a first pass, capturing the ASVs from the combined per-sample results that are present in more than one sample to generate 'pseudo' priors, and then using those priors in a second round of per-sample denoising (roughly as sketched below).
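In dada2 terms, that two-pass approach looks roughly like the following sketch; the object names are placeholders:

```r
library(dada2)
# Assumes: 'filts' = vector of filtered FASTQ paths, 'err' = a previously learned error model
drp <- derepFastq(filts)
dd1 <- dada(drp, err = err, multithread = TRUE)           # first pass: independent per-sample denoising
st1 <- makeSequenceTable(dd1)
priors <- colnames(st1)[colSums(st1 > 0) > 1]             # ASVs seen in more than one sample
dd2 <- dada(drp, err = err, priors = priors,              # second pass, now with 'pseudo' priors
            multithread = TRUE)
st2 <- makeSequenceTable(dd2)
```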
Nope. You have binned quality score data where the absurdly high Q93 is no longer present, so the warning tells you it's essentially falling back to using standard error fitting (which it turns out may also be an issue). That is why I ended up testing the binned quality models from the NovaSeq ticket linked to previously, which seemed to help.
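For illustration only, one simplified version of that kind of adjustment is to take the stock loess fit and force the fitted error rates to be non-increasing as quality increases. This is my own sketch, not the function from that ticket nor the new one on 'master':

```r
library(dada2)
# Enforce monotonically non-increasing error rates on top of the stock loessErrfun fit.
monotoneErrfun <- function(trans) {
  err <- dada2::loessErrfun(trans)                 # stock fit: 16 transition rows x quality columns
  nts <- c("A", "C", "G", "T")
  for (nt in nts) {
    mism <- paste0(nt, "2", setdiff(nts, nt))      # e.g. "A2C", "A2G", "A2T"
    err[mism, ] <- t(apply(err[mism, , drop = FALSE], 1,
                           function(x) rev(cummax(rev(x)))))               # non-increasing with quality
    err[paste0(nt, "2", nt), ] <- 1 - colSums(err[mism, , drop = FALSE])   # keep columns summing to 1
  }
  err
}
# err <- learnErrors(filts, errorEstimationFunction = monotoneErrfun,
#                    BAND_SIZE = 32, multithread = TRUE)
```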
Sequel II/IIe I believe still generates full quality scores. You can also configure the Revio to generate full quality scores (our internal core does this), but the file sizes are much larger and the data processing takes time, even beyond what I mentioned above. The Kinnex ticket has an example of what this looks like.

EDIT: I should add, the above error plots don't look terrible to me. See this comment in that ticket on why I switched.
Hi @benjjneb ,
I am currently working on a new batch of 16S data generated using PacBio Revio technology. While I successfully processed the sequences in QIIME2 within a reasonable time, using the DADA2 plugin for the denoising step, I encountered significant challenges when using the DADA2 pipeline in R.
Each step of the pipeline takes an unusually long time, and R has crashed multiple times during the process. After each crash, I resumed the script by loading the most recent saved output. For example, the denoising step alone took several days to process 56 sequence files, which seems unreasonable. Similarly, the alignment step ran for four days before ultimately causing RStudio to crash.
Given these issues, I am wondering whether there might be an incompatibility between Revio sequences and the algorithms used in the DADA2 pipeline in R. I ran the script several times, both on a physical server and in cloud environments with high computational power, but the problems persisted.
Attached, I’ve included the error plots generated from R. These plots appear unusual compared to those typically generated from PacBio Sequel II technology.
I would greatly appreciate your insights on interpreting these issues and any guidance you can provide to address them.
Thank you for your time and assistance.