Question about the mapping process for yahs #98
I have alignments from Juicer, and I would like to use them with YaHS. I would like to know your opinion regarding:
1. Using the Juicer mapping output for YaHS.
2. Whether it is better to sort the SAM file, convert it to BAM, and mark duplicates (roughly the pipeline sketched just below), or
3. Whether it is better to use the merged_nodups.txt file to generate the BED file for YaHS.
Many thanks,
Eric
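
(Option 2 would typically look something like the following samtools pipeline. This is a minimal sketch with placeholder file names, not a workflow prescribed anywhere in this thread; adapt the thread count and the final sort order to your data and to whatever input YaHS expects.)

    # aligned.sam is a placeholder for the Juicer alignment output
    samtools sort -n -@ 8 -o nsort.bam aligned.sam      # name-sort so fixmate can pair mates
    samtools fixmate -m nsort.bam fixmate.bam           # add mate-score tags used by markdup
    samtools sort -@ 8 -o csort.bam fixmate.bam         # coordinate-sort for duplicate marking
    samtools markdup -r csort.bam dedup.bam             # mark and remove PCR duplicates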

Comments

In the end I ran it by converting merged_nodups.txt to a BED file with lines that look like this:

    h1tg000001l 19256 19406 LH00251_107:7:1201:47093:21339/1 5

The program is running, but I am a little worried about two things in the output: this line,

    [I::dump_links_from_bed_file] 403 million records processed, 201500000 read pairs

and the fraction of reads dropped by the mapping quality filter. When I check the output scaffolded FASTA file, it looks good, with the expected number of chromosomes, each of the expected size. But I am not sure whether I should consider this a failed run because of the two things mentioned above.
Many thanks,
Eric
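
(For anyone converting merged_nodups.txt the same way, an awk sketch along these lines produces the five-column BED shown above. It assumes the Juicer "long" merged_nodups.txt column order; column positions differ between Juicer versions, so verify them against your own file before using it.)

    # Assumed long format: str1 chr1 pos1 frag1 str2 chr2 pos2 frag2
    #                      mapq1 cigar1 seq1 mapq2 cigar2 seq2 name1 name2
    # BED is 0-based half-open, so start = pos - 1 and end = start + read length.
    awk 'BEGIN{OFS="\t"} {
        print $2, $3 - 1, $3 - 1 + length($11), $15 "/1", $9;   # mate 1
        print $6, $7 - 1, $7 - 1 + length($14), $16 "/2", $12;  # mate 2
    }' merged_nodups.txt > hic.bed

Both mates of a pair are printed consecutively so they stay adjacent in the BED file.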

Hi Eric,
Both things are normal. The fraction of reads dropped by MQ filtering varies depending on the repetitiveness of your genome; dropping around half the reads is reasonable. If you are interested, you can change the mapping quality threshold.
Best,
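
(For reference, the MAPQ cutoff can be passed to YaHS with its -q option, which sets the minimum mapping quality. In this sketch the file names are placeholders and the threshold of 20 is arbitrary.)

    # contigs.fa: assembly being scaffolded; hic.bed: the BED file from above
    yahs -q 20 -o yahs_out contigs.fa hic.bed    # -q raises the minimum mapping quality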