Hello,
I was a heavy user of your dada2 pipeline until a few years ago. After 4 years, I need to re-run the same data through the pipeline to map the reads against an updated database (2×300 paired-end, V3-V4 region).
My samples present with these quality profiles after removing the primers with Cutadapt:
I've tried to fine-tune the truncLen parameters from 200 to 250 for forward reads and 160 to 190 for reverse reads, in all possible combinations, but I always get this output (or something very similar) after the merging step.
The only combination that raises the merged-read fraction somewhat (to 15%) is 200 forward / 170 reverse. Above and below these values, the merged reads drop back to 1.9%.
I am wondering what I am doing wrong, as a few years back I successfully merged reads with an average output abundance close to 50%. I compared the script used back then and now, and nothing significant has changed.
Could you please provide some guidance on how to resolve this issue? Thank you!
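One quick way to rule out an overlap problem is to compute how much overlap each truncLen pair actually leaves after truncation. Below is a minimal sketch; the ~427 bp amplicon length (V3-V4 after primer removal) and the 12 bp minimum overlap (dada2's `mergePairs()` default) are assumptions here, so adjust them to your own primer pair and settings:

```python
# Sanity-check truncLen pairs against the overlap needed for merging.
# ASSUMPTIONS: amplicon length ~427 bp (V3-V4 after Cutadapt primer
# removal) and minOverlap = 12 (dada2 mergePairs default). Adjust both
# to your own data before drawing conclusions.

AMPLICON_LEN = 427   # assumed post-primer-removal amplicon length (bp)
MIN_OVERLAP = 12     # dada2 mergePairs() default minOverlap

def overlap(trunc_fwd: int, trunc_rev: int,
            amplicon_len: int = AMPLICON_LEN) -> int:
    """Overlap (bp) between truncated forward and reverse reads."""
    return trunc_fwd + trunc_rev - amplicon_len

# Scan the truncLen ranges described above.
for fwd in (200, 225, 250):
    for rev in (160, 170, 190):
        ov = overlap(fwd, rev)
        status = "OK" if ov >= MIN_OVERLAP else "too short"
        print(f"truncLen=c({fwd},{rev}): overlap {ov:4d} bp -> {status}")
```

If most pairs come out "too short", the truncation is removing the overlap region itself, and merging will fail regardless of quality-related tuning.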