Hi there, awesome work!
While working on improving jackhmmer performance on ebi.ac.uk/Tools/hmmer here at EBI, I realised that there is no way to construct a TopHits object from a binary I/O stream of any kind.
I think it would be a sensible addition to pyhmmer. We definitely need it, since we want to read and deserialise hits across two different machines, and the script that initiates the search and communicates with the daemons is written in Perl, so pickle is not an option at the moment.
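To make it concrete, this is roughly the usage I have in mind. The search setup below uses the existing pyhmmer API (assuming a recent version), while `TopHits.write` / `TopHits.read` are only placeholder names for the feature this issue is asking about, not something that exists today:

```python
import pyhmmer

# Run a search with the existing pyhmmer API to obtain a TopHits object.
with pyhmmer.plan7.HMMFile("query.hmm") as hmm_file:
    hmm = hmm_file.read()
with pyhmmer.easel.SequenceFile("targets.fasta", digital=True) as seq_file:
    sequences = list(seq_file)

pipeline = pyhmmer.plan7.Pipeline(hmm.alphabet)
hits = pipeline.search_hmm(hmm, sequences)

# What this issue is asking for -- placeholder names, not an existing API:
with open("hits.bin", "wb") as dst:
    hits.write(dst)                          # serialise the hits to a binary stream

# ...possibly on a different machine, after the Perl driver has moved the bytes:
with open("hits.bin", "rb") as src:
    hits = pyhmmer.plan7.TopHits.read(src)   # reconstruct the TopHits from the stream
```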
Let me know what you think, and also whether I have simply overlooked a part of the documentation and all of this is unnecessary.
Cheers!
Hi @arajkovic! I visited the group while Nicolo was working on this and implemented the daemon client in Python as well, so you can retrieve the hits you get as a result; but indeed, that may not be the fine-grained solution you're looking for.
The current problem with implementing serialization/deserialization of a TopHits is that, unlike in the HMMER code, a TopHits in Python stores the hits, the pipeline parameters, and a reference to the query, while only the hits are stored in the binary format. So I'm not sure it would be completely possible to implement.
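To illustrate the mismatch, here is a rough sketch of the state involved. These are not pyhmmer's actual classes, just a picture of what lives where:

```python
from dataclasses import dataclass
from typing import Any, List

# Not pyhmmer's actual classes -- just an illustration of what a Python-side
# TopHits conceptually holds versus what the HMMER binary format records.

@dataclass
class PipelineParams:
    """Stand-in for the pipeline parameters kept alongside the hits."""
    Z: float                 # effective number of targets
    domZ: float              # effective number of significant targets
    # ... reporting/inclusion thresholds, bit-score cutoffs, etc.

@dataclass
class PythonTopHits:
    """Stand-in for what pyhmmer's TopHits stores internally."""
    hits: List[Any]          # the only part covered by the binary format
    params: PipelineParams   # not stored in the binary format
    query: Any               # not stored in the binary format either
```

In other words, a TopHits rebuilt from the binary hits alone would be missing the query reference and the pipeline state.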