Prerequisites
Put an X between the brackets on this line to verify you have done the following:
[x] I have checked that this issue isn't already filed.
[x] I am running Ubuntu 18.04.
[x] I installed the environment and GymFC according to these instructions.
Description
I am trying to use GymFC as the environment for my RL algorithms.
As soon as I call the reset method, the following lines are reported:
Starting gzserver with process ID= 2071599
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Timeout communicating with flight control plugin.
Simulation Stats
-----------------
steps 0
packets_dropped 0
time_start_seconds 1697373368.019229
time_lapse_hours 0.016431721183988782
/bin/sh: 1: kill: No such process
Killing Gazebo process with ID= 2071599
Timeout communicating with flight control plugin.
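The `locale::facet::_S_create_c_locale name not valid` message suggests gzserver inherited a locale variable naming a locale that is not generated on the machine. A minimal sketch (plain Python, nothing GymFC-specific; the variable names checked are an assumption about what matters here) to inspect what a child process would inherit:

```python
import locale
import os

# gzserver inherits these variables from the parent process; an LC_* value
# naming a locale that isn't generated on the system is a common cause of
# "locale::facet::_S_create_c_locale name not valid".
for var in ("LC_ALL", "LC_NUMERIC", "LANG"):
    print(var, "=", os.environ.get(var))

# What Python itself resolves for the current process:
print("current locale:", locale.getlocale())
```

If one of these points at a locale that `locale -a` does not list, exporting `LC_ALL=C.UTF-8` before launching is a common workaround (an assumption, not verified against GymFC).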
Procedure to reproduce
Follow the instructions to install GymFC and GymFC_nf, and build the gymfc-aircraft-plugins.
Then run the following Python script; it is expected to create the env, reset it, and finish.
from gymfc_nf.envs import *
import os.path
import time
import datetime
import subprocess
import numpy as np
np.seterr('ignore')
import gym
import argparse


def sample_noise(inst):
    # Experimentally derived for MatekF7 FC, see Chapter 5 of "Flight
    # Controller Synthesis Via Deep Reinforcement Learning" for methodology.
    r_noise = inst.np_random.normal(-0.25465, 1.3373)
    p_noise = inst.np_random.normal(0.241961, 0.9990)
    y_noise = inst.np_random.normal(0.07906, 1.45168)
    return np.array([r_noise, p_noise, y_noise])


class StepCallback:

    def __init__(self, total_timesteps, log_freq=1):
        """
        Args:
            total_timesteps: Total timesteps for training
            log_freq: Number of episodes until an update log message is printed
        """
        self.timesteps = total_timesteps
        self.steps_taken = 0
        self.es = []
        self.sps = []
        self.ep = 1
        self.rewards = []
        self.log_freq = log_freq
        self.log_header = ["Ep",
                           "Done",
                           "Steps",
                           "r",
                           "-ydelta",
                           "+ymin",
                           "+/-e",
                           "-ahigh",
                           "-nothing",
                           "score",
                           "pMAE",
                           "qMAE",
                           "rMAE"]
        header_format = ["{:<5}",
                         "{:<7}",
                         "{:<15}",
                         "{:<15}",
                         "{:<15}",
                         "{:<15}",
                         "{:<15}",
                         "{:<15}",
                         "{:<15}",
                         "{:<10}",
                         "{:<7}",
                         "{:<7}",
                         "{:<7}"]
        self.header_format = "".join(header_format)
        log_format_entries = ["{:<5}",
                              "{:<7.0%}",
                              "{:<15}",
                              "{:<15.0f}",
                              "{:<15.0f}",
                              "{:<15.0f}",
                              "{:<15.0f}",
                              "{:<15.0f}",
                              "{:<15.0f}",
                              "{:<10.2f}",
                              "{:<7.0f}",
                              "{:<7.0f}",
                              "{:<7.0f}"]
        self.log_format = "".join(log_format_entries)

    def callback(self, local, state, reward, done):
        self.es.append(local.true_error)
        self.sps.append(local.angular_rate_sp)
        assert local.ind_rewards[0] <= 0  # oscillation penalty
        assert local.ind_rewards[1] >= 0  # min output reward
        assert local.ind_rewards[3] <= 0  # over saturation penalty
        assert local.ind_rewards[4] <= 0  # do nothing penalty
        self.rewards.append(local.ind_rewards)

        if done:
            if self.ep == 1:
                print(self.header_format.format(*self.log_header))
            # XXX (wfk) Try this new score, we need something normalized to
            # handle the random setpoints. Scale by the setpoint, larger
            # setpoints incur more error. +1 prevents divide by zero.
            mae = np.mean(np.abs(self.es))
            mae_pqr = np.mean(np.abs(self.es), axis=0)
            e_score = mae / (1 + np.mean(np.abs(self.sps)))
            self.steps_taken += local.step_counter
            if self.ep % self.log_freq == 0:
                ave_ind_rewards = np.mean(self.rewards, axis=0)
                ind_rewards = ""
                for r in ave_ind_rewards:
                    ind_rewards += "{:<15.2f} ".format(r)
                log_data = [
                    self.ep,
                    self.steps_taken / self.timesteps,
                    self.steps_taken,
                    np.mean(self.rewards),
                    ave_ind_rewards[0],
                    ave_ind_rewards[1],
                    ave_ind_rewards[2],
                    ave_ind_rewards[3],
                    ave_ind_rewards[4],
                    e_score,
                    mae_pqr[0],
                    mae_pqr[1],
                    mae_pqr[2]
                ]
                print(self.log_format.format(*log_data))
            self.ep += 1
            self.es = []
            self.sps = []
            self.rewards = []


if __name__ == '__main__':
    env_id = 'gymfc_nf-step-v1'
    env = gym.make(env_id)
    env.sample_noise = sample_noise
    env.set_aircraft_model("/remote-home/zzq/12-drone-dreamer/gymfc/examples/gymfc_nf/twins/nf1/model.sdf")
    cb = StepCallback(10e6)
    env.step_callback = cb.callback
    ob = env.reset()
    a = 1
It looks like this problem does not always happen. A few times I was able to run this script to completion, but most of the time it reports the timeout communicating with the flight control plugin.
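Since the failure is intermittent, one way to make the script survive the occasional plugin timeout is a generic retry wrapper. This is a hypothetical helper, not part of GymFC; `fn` would be a closure that recreates and resets the environment, and `cleanup` something like `env.close()` to kill the stale gzserver between attempts:

```python
def retry(fn, attempts=3, cleanup=None):
    """Call fn() up to `attempts` times, re-raising the last error.

    cleanup() runs between attempts, e.g. closing the environment so
    a stale gzserver process does not linger before the next try.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fn()
        except Exception as e:  # e.g. the plugin timeout above
            last_err = e
            if cleanup is not None:
                cleanup()
    raise last_err
```

For example, `env, ob = retry(make_and_reset, cleanup=close_env)` where `make_and_reset` and `close_env` are hypothetical closures over `gym.make(env_id)` and `env.close()`.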