Hi,
I found the text encoder in your paper and hf repo is "ncbi/MedCPT-Query-Encoder".
However, in your github repo it is "FremyCompany/BioLORD-2023"
So which one did you finally choose?
Thanks for your interest. In fact, they are similar. The text encoder in our first version was MedCPT, fine-tuned on our data. However, we later found that BioLORD is also a good choice and needs no further fine-tuning. You can use BioLORD directly with our code; I think it's an easier way to reproduce the results.
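To make the swap concrete, a minimal sketch of encoding text with either model name from the thread. This assumes the repo uses a standard Hugging Face encoder with mean pooling (BioLORD-2023's documented pooling; whether the repo pools identically is an assumption), and the `embed`/`cosine_sim` helpers are illustrative names, not functions from the repo:

```python
# Sketch: comparing the two text encoders mentioned in the thread.
# Assumption: mean pooling over the last hidden state, as BioLORD-2023 uses;
# the repo's exact pooling may differ.
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def embed(texts, model_name="FremyCompany/BioLORD-2023"):
    """Encode texts with a Hugging Face encoder via mean pooling.

    Swap model_name for "ncbi/MedCPT-Query-Encoder" to compare the two.
    (Helper name is illustrative, not from the repo.)
    """
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    batch = tok(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()    # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)           # mean pooling
    return pooled.numpy()


if __name__ == "__main__":
    embs = embed(["myocardial infarction", "heart attack"])
    print(cosine_sim(embs[0], embs[1]))
```

Since both encoders produce sentence-level embeddings, the rest of the pipeline only needs the model name changed.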
So there's no knowledge enhancement in the model if we choose BioLORD as the text encoder?
But I'm still quite interested in how to inject domain-specific knowledge into a pretrained text encoder, e.g. how to mix in synonyms and explanations, and which loss is used on the ICD tree?
I couldn't find the relevant code in this repo. If possible, could you send this part of the code to my email: [email protected].
Thanks again :)