
Custom time values with get_simulation_results() #54

Open
Theo-BRN opened this issue Jun 5, 2024 · 5 comments

Theo-BRN commented Jun 5, 2024

Heya Frank,

I hope you're doing well!

I wondered if it would be possible to add a custom time-values argument to the get_simulation_results() function in task_parameterestimation.py.

I often like to see the simulation as a smooth line against experimental data points, and I can't quite manage that with values_only set to either True or False. I think neither works because I have some equilibration time beforehand.

Happy to give clarification on what I mean if you'd like.

Best,
Theo

fbergmann (Member) commented:

Could you try manually changing the step number of the time course task before calling get_simulation_results, and see whether that gives you the results you want:

set_task_settings(T.TIME_COURSE, settings={'problem': {'StepNumber': 1000}})

where 1000 is the larger number of steps you would like for smoothing. If that works for you, I could release a version that passes that parameter along.

Theo-BRN (Author) commented Jun 6, 2024

Brilliant, that does seem to do it!

The higher StepNumber is set, the more values I get. Are the time values that come out based on some kind of duration / StepNumber calculation?
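If I understand it right (this is my assumption, not something from the docs: I'm guessing the output points are evenly spaced over the duration), the reported times would be StepNumber + 1 points spaced Duration / StepNumber apart:

```python
# Sketch of the assumed relationship: an evenly spaced time course over
# `duration` with `step_number` steps yields step_number + 1 output
# points, each duration / step_number apart.
def expected_times(duration, step_number, start=0.0):
    step = duration / step_number
    return [start + i * step for i in range(step_number + 1)]

times = expected_times(duration=10.0, step_number=1000)
print(len(times))  # 1001 points
print(times[1])    # 0.01, i.e. 10 / 1000
```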

Thank you :-)

Theo-BRN (Author) commented Jun 6, 2024

I have also noticed what I think is a bug, which may explain some of the initial inconsistencies I was getting.

I get different time values from running get_simulation_results with values_only=False if I have at any point run it with values_only=True:

# Run function output, looking only at times
function_output = bsc.get_simulation_results(values_only=False)
df_first_false = pd.concat([df for df in function_output[1]])['Time'].unique()

# Run again, changing values_only to True
function_output = bsc.get_simulation_results(values_only=True)
df_first_true = pd.concat([df for df in function_output[1]])['Time'].unique()

# Run again, changing values_only back to False
function_output = bsc.get_simulation_results(values_only=False)
df_second_false = pd.concat([df for df in function_output[1]])['Time'].unique()

I find that df_first_false does not equal df_second_false. Specifically, df_first_false contains values before my experiment's start time (before my 2e6 s equilibration time), while df_second_false only contains values after it. Personally, I prefer the second behaviour; I'm never interested in the equilibration kinetics when comparing against an experiment.

Hope this is helpful!

fbergmann (Member) commented:

I never considered a use case where you would run it repeatedly, only changing the flag. When you run with values_only=True, the time course settings are changed so that only points at the experiment times are returned.

Running with values_only=False just runs the time course with the duration parameter. However, since the previous run switched the task to using only certain values, this will not work nicely.

To keep it consistent, you can try:

# Save the time course settings, then run with values_only=False
settings = basico.get_task_settings(T.TIME_COURSE)
function_output = bsc.get_simulation_results(values_only=False)
df_first_false = pd.concat([df for df in function_output[1]])['Time'].unique()

# Restore the settings, then run with values_only=True
basico.set_task_settings(T.TIME_COURSE, settings=settings)
function_output = bsc.get_simulation_results(values_only=True)
df_first_true = pd.concat([df for df in function_output[1]])['Time'].unique()

# Restore the settings again, then run with values_only=False
basico.set_task_settings(T.TIME_COURSE, settings=settings)
function_output = bsc.get_simulation_results(values_only=False)
df_second_false = pd.concat([df for df in function_output[1]])['Time'].unique()

To exclude values before your equilibration time when running with values_only=False, you can set the output start time parameter of the time course task.
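Alternatively (a sketch on my side, assuming the result frames carry a 'Time' column as in your snippets above), you can drop the equilibration window after the fact with pandas:

```python
import pandas as pd

# Hypothetical result frame; in practice these come from
# get_simulation_results as in the snippets above.
df = pd.DataFrame({'Time': [0.0, 1e6, 2e6, 2.5e6, 3e6],
                   'A':    [1.0, 0.8, 0.5, 0.4, 0.3]})

experiment_start = 2e6  # your equilibration time
df_after = df[df['Time'] >= experiment_start]
print(df_after['Time'].tolist())  # [2000000.0, 2500000.0, 3000000.0]
```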

In any case, just remember for now that you'll have to manually preserve the settings before multiple runs. I'll be sure to change that in a later version. Thanks for bringing this up!

fbergmann (Member) commented:

Release 0.70 now restores the time course settings, so repeated runs give consistent results.
