Regarding em_bench_high.py: for the computation of lim_inf and lim_sup and the generation of unif, should we use X (i.e. the concatenation of X_train and X_test) instead of X_ (i.e. X_test only)?
I understand that if the distributions of the testing data and training data are very different, MV would become inaccurate if we draw the Monte Carlo points based on the range of the testing data only, as those MC points cannot reach the range of the training data. It also seems to me that using X instead of X_ would be more accurate for computing Leb(s >= u). Moreover, to follow the same logic as the basic file em_bench.py, shouldn't we use the concatenation instead of the testing data only?
Below are the key lines of the current file em_bench_high.py:
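Since the key lines themselves are not reproduced here, below is only a minimal, self-contained sketch of the two alternatives under discussion (not the actual contents of em_bench_high.py). The names lim_inf, lim_sup, unif, X_train, X_test, X_ and X come from the question above; n_generated, volume_support and the toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.RandomState(42)
X_train = rng.normal(loc=0.0, scale=1.0, size=(200, 5))  # toy training data
X_test = rng.normal(loc=2.0, scale=1.0, size=(100, 5))   # shifted toy test data
X_ = X_test
X = np.concatenate([X_train, X_test], axis=0)

n_generated = 100000
n_features = X.shape[1]

# Current logic (as described in the question): bounding box from X_ (test data) only.
lim_inf_test = X_.min(axis=0)
lim_sup_test = X_.max(axis=0)
volume_support_test = (lim_sup_test - lim_inf_test).prod()
unif_test = rng.uniform(lim_inf_test, lim_sup_test, size=(n_generated, n_features))

# Proposed logic: bounding box from the concatenation, so the uniform Monte Carlo
# points also cover the range of the training data.
lim_inf = X.min(axis=0)
lim_sup = X.max(axis=0)
volume_support = (lim_sup - lim_inf).prod()
unif = rng.uniform(lim_inf, lim_sup, size=(n_generated, n_features))

print("support volume from X_test only:   ", volume_support_test)
print("support volume from X_train + X_test:", volume_support)
```

With the toy shifted test set above, the two bounding boxes (and hence volume_support and the coverage of unif) differ noticeably, which is the accuracy concern raised for the MV and Leb(s >= u) estimates.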