Hi,

I am using the HTCondorCluster on a regrettably rather unstable computing cluster, and I am facing the issue that when a job is removed, killed, or crashes before its worker even connects to the scheduler (e.g. because the worker side fails to source the Python environment), the failure goes unnoticed and the job is never resubmitted. As far as I can tell, this is not surprising, since there currently seems to be no way of checking whether a job is running, idle, or removed (i.e. by calling `condor_q` with HTCondor) as long as the worker has not yet connected. In principle, the same issue should affect other HPC systems as well, though I haven't checked.
Such a check could probably be implemented by periodically calling `condor_q -nobatch`, parsing the output, and comparing it against the job ids of the workers that are supposed to have been submitted. I am not sure whether there is already a part of the code that is called periodically in this manner and could be reused for this purpose; if there is and you could point me to it, I would be happy to try my hand at it.
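Roughly what I have in mind, as a standalone sketch that does not touch any dask-jobqueue internals: `submitted_job_ids`, `connected_job_ids`, and the `resubmit` callback are made-up placeholders for whatever the cluster object actually tracks, and I am using `condor_q -af ClusterId ProcId` (autoformat output) instead of parsing the human-readable `-nobatch` table, since it is easier to parse:

```python
import subprocess
import time


def condor_job_ids():
    """Return the set of job ids (ClusterId.ProcId) that condor_q still knows about."""
    out = subprocess.run(
        ["condor_q", "-af", "ClusterId", "ProcId"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each output line looks like "1234567 0"; join the two fields into "1234567.0".
    return {"{}.{}".format(*line.split()) for line in out.splitlines() if line.strip()}


def vanished_jobs(submitted_job_ids, connected_job_ids):
    """Jobs that were submitted, never produced a connected worker,
    and are no longer visible in condor_q (i.e. removed/killed/crashed)."""
    return (set(submitted_job_ids) - set(connected_job_ids)) - condor_job_ids()


def watch(submitted_job_ids, connected_job_ids, resubmit, interval=60):
    """Periodically compare submitted jobs against condor_q and resubmit
    anything that disappeared before its worker ever connected."""
    while True:
        for job_id in vanished_jobs(submitted_job_ids, connected_job_ids):
            resubmit(job_id)  # placeholder callback supplied by the caller
        time.sleep(interval)
```

In practice this would presumably be hooked into whatever periodic mechanism the cluster already has, rather than a blocking loop like the one above.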
Cheers and thank you,
Laurids