Good evening,

Is running HATCHet with phasing still a two-part process, as indicated in the HATCHet README.html file?
http://compbio.cs.brown.edu/hatchet/script/README.html

We are stuck at the cluster_bins step; details are below. Any guidance would be greatly appreciated.

We have run genotype_snps, phase_snps, download_panel, count_alleles, count_reads, and combine_counts independently, setting each step to True one at a time and the others to False.
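For example, for the cluster_bins run that fails, the [run] section of hatchet.7-7-23.ini looked roughly like the sketch below (the key names follow HATCHet's run.ini template and may not match our file exactly; PATH_TO is a placeholder, as in the log):

[run]
# Preprocessing steps already completed, one at a time, in earlier runs
download_panel = False
genotype_snps = False
phase_snps = False
count_alleles = False
count_reads = False
combine_counts = False
# Only the failing step is enabled for this run
cluster_bins = True
output = PATH_TO/HATCHet/output_7-7-23

The command and its output were: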
(hatchet_1.1.1) [username@node HATCHet]$ hatchet run hatchet.7-7-23.ini
[2023-Jul-07 02:57:40]# Parsing and checking input arguments
bbfile: PATH_TO/HATCHet/output_7-7-23/bb/bulk.bb
diploidbaf: 0.1
seed: 0
decoding: map
minK: 2
maxK: 30
exactK: 0
covar: diag
transmat: diag
tau: 1e-06
outsegments: PATH_TO/HATCHet/output_7-7-23/bbc/bulk.seg
outbins: PATH_TO/HATCHet/output_7-7-23/bbc/bulk.bbc
[2023-Jul-07 02:57:40]# Reading the combined BB file
[2023-Jul-07 02:57:42]# Clustering bins by RD and BAF across tumor samples using locality
Model is not converging. Current: 39973.612034460035 is not greater than 39973.64436343625. Delta is -0.03232897621637676
Model is not converging. Current: 41561.64143230467 is not greater than 41561.66624473987. Delta is -0.024812435200146865
Model is not converging. Current: 43591.67327317303 is not greater than 43591.74561723709. Delta is -0.07234406405768823
Model is not converging. Current: 44043.04258445045 is not greater than 44043.104144543446. Delta is -0.06156009299593279
Model is not converging. Current: 45219.111067996564 is not greater than 45219.16396330124. Delta is -0.052895304674166255
Model is not converging. Current: 46000.33344260872 is not greater than 46000.37931941253. Delta is -0.045876803815190215
/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hmmlearn/hmm.py:340: RuntimeWarning: invalid value encountered in divide
self.means_ = ((means_weight * means_prior + stats['obs'])
Traceback (most recent call last):
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/bin/hatchet", line 33, in
sys.exit(load_entry_point('hatchet==1.1.1', 'console_scripts', 'hatchet')())
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hatchet/__main__.py", line 61, in main
globals()[command](args)
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hatchet/utils/run.py", line 316, in main
cluster_bins(
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hatchet/utils/cluster_bins.py", line 47, in main
(best_score, best_model, best_labels, best_K, results,) = hmm_model_select(
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hatchet/utils/cluster_bins.py", line 258, in hmm_model_select
prob, labels = model.decode(X, lengths, algorithm=decode_alg)
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hmmlearn/base.py", line 324, in decode
self._check()
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hmmlearn/hmm.py", line 313, in _check
super()._check()
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hmmlearn/base.py", line 949, in _check
self._check_sum_1("startprob_")
File "/sccc/share/apps/miniconda3/envs/hatchet_1.1.1/lib/python3.9/site-packages/hmmlearn/base.py", line 931, in _check_sum_1
raise ValueError(
ValueError: startprob_ must sum to 1 (got nan)
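In case it is useful for diagnosis: since the failure is a nan in startprob_, would checking the combined BB file for NaN or infinite values be a sensible first step? A minimal sketch of such a check (it only assumes bulk.bb is a tab-separated table with a header row; pandas and numpy as usual):

import numpy as np
import pandas as pd

# Load the combined BB file produced by combine_counts (path taken from the log above).
bb = pd.read_csv('PATH_TO/HATCHet/output_7-7-23/bb/bulk.bb', sep='\t')

# Restrict to numeric columns (read depths, BAFs, etc.) and look for bad values.
numeric = bb.select_dtypes(include=[np.number])

print('rows:', len(bb))
print('columns containing NaN:', numeric.columns[numeric.isna().any()].tolist())
print('columns containing inf:', numeric.columns[np.isinf(numeric).any()].tolist())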
Thank you for your time.
Best,
Pedro