Reading lightning data for use with SPITFIRE #562
Thanks for opening this, @slevisconsulting. Tagging @rosiealice for her take as well. This build-up of fuel is also closely related to fire duration (we have a cap), which is in need of development. (On my list of things to do.)
The 2-D simulation with 2-D lightning data fails and shows me that what I thought worked in the 1-D simulation worked only because I was in 1-D and g always equaled 1. @jkshuman, in our call you mentioned some met variables (e.g. precip) used by FATES. I will look at how they are used and will try to mimic that.
Seems to work now and, as Jackie and I hoped, without requiring total_tree_fraction or total_grass_fraction > 0.
@slevisconsulting, how are things going here? Feel free to link the branch that you are using and I can look over the code and see if things make sense to me too. One thing I'm noticing is that by bringing in a spatial lightning dataset, there are CLM data structures that need to be accessed in the fire code. This is something that we could add to the coupling interface (and I could help with it as well) if need be.
I replicated code used in CLM's BGC to read Fang Li's or FMEC's lightning data for use in Fang Li's fire model. I also turned on fates_seed_suppl in fates_params_default.cdl. Initially I turned this on as a test because the model was crashing as a result of the first way that I chose to pass the lightning data to SPITFIRE. Turning on fates_seed_suppl didn't help, but I left it on anyway.
@slevisconsulting glad you checked functionality before we got into a big discussion on the regeneration impacts. And even better news that you fixed it.
1. If the user sets stream_fldfilename_lightng = 'nofile' in user_nl_clm, then FATES will use ED_val_nignitions from the FATES params to calculate NF, i.e. the FATES default lightning value that is constant in time and space.
2. If the user does not set stream_fldfilename_lightng in user_nl_clm, then FATES will read the Li 2016 lightning data set to calculate NF. This is a 1-degree x 1-degree, 3-hourly climatological data set, so it varies in time and space without interannual variability.
3. If the user sets stream_fldfilename_lightng to a lightning data set, then FATES will read the corresponding data set as in (2).
4. If the user sets stream_fldfilename_lightng to a file name that contains the string "ignition", then FATES will do as in (2) and set FDI = 1, because the data represent successful ignitions rather than lightning.
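The four cases above amount to a small dispatch on the namelist value. Here is a minimal Python sketch of that logic (the function, return values, and default file name are illustrative stand-ins, not the actual CLM/FATES implementation):

```python
# Hedged sketch of the stream_fldfilename_lightng dispatch described above.
# Returns (mode, filename, data_is_successful_ignitions).
def lightning_source(stream_fldfilename_lightng):
    DEFAULT_LI_2016 = "li2016_climatology.nc"  # hypothetical stand-in path
    if stream_fldfilename_lightng is None:
        # Case 2: unset -> read the default Li 2016 climatology
        return ("read", DEFAULT_LI_2016, False)
    if stream_fldfilename_lightng == "nofile":
        # Case 1: constant ED_val_nignitions from the FATES params
        return ("constant", None, False)
    # Cases 3 and 4: read the user-supplied file; "ignition" in the
    # name flags successful-ignition data, for which FDI is set to 1.
    is_ignitions = "ignition" in stream_fldfilename_lightng
    return ("read", stream_fldfilename_lightng, is_ignitions)
```

The point of the sketch is that case 4 is distinguished only by the file name, so a dataset of successful ignitions must carry "ignition" in its name to get the FDI = 1 treatment.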
@lmkueppers @jkshuman at Friday's meeting we talked about making a list of fire-related model variables that we might compare to obs if obs were available. As a starting point, here are the fire-related variables that I have found in the CLM-FATES history output: FIRE_AREA (fraction of grid cell)
@slevisconsulting let's open a separate issue for tracking these variables, but sounds good. I can add details as well and clarify a few variables.
@slevisconsulting @lmkueppers @rosiealice the read-lightning data works with the general CLM code across the tropics! The run is still going, but it is definitely different in space and time in the first two years. Thanks @slevisconsulting
Cool. Nice job everyone!
@slevisconsulting @lmkueppers @pollybuotte @rgknox there is a problem with the lightning branch. I am not sure if it is on the CTSM side, where I am using https://github.com/rgknox/ctsm/tree/api_8_works_w_read_lightning, or on the FATES side, where I am using https://github.com/jkshuman/fates/tree/ignition-fixes-tag130_api8. Master tag 130 with api8 was tested with PRs #524 and #561 (both in master already), and these both behave "normally" out to year 10 with respect to fire behavior across the tropics.
@jkshuman here are some additional clues:
Before reading @jkshuman's post I was about to run with my updated lightning branch to determine whether or not the problem was with #572, but you have now confirmed that the updated lightning branch has the same problem. My question to @lmkueppers is this:
I will wait for @lmkueppers's decision on this before I complete additional work.
Hm. This is annoying... I'm hoping that @rgknox or @glemieux have some insights here. @pollybuotte is planning to run with CLM lightning across a transect with fire on, so she would need this to be working. I don't know which version of FATES she needs to use (or the diffs between these options). @slevisconsulting, I think it's worth getting this debugged so we're not moving backwards with Polly's runs. Thanks.
@slevisconsulting @lmkueppers @rgknox I added a few details to my comment last night. Running with the master branch that has PR #561, the fire behavior is normal through 18 years in the tropics. So it is something unique to those other branches. (This is good news that master is still running normally!) There was a conflict with @slevisconsulting's lightning branch, so maybe I didn't resolve it correctly. There is also one commit on those branches (read_lightning and passive crown fire): jkshuman@8b44706
@slevisconsulting @lmkueppers @rgknox my initial testing through 10 years shows that reverting that commit seems to fix the problem. I am testing it within the passive crown fire branch and the read lightning branch: https://github.com/jkshuman/fates/tree/passive_crown_fire
Question: if there is no fuel, this could be because either 1) there is no influx of new fuel, 2) excessive burning of existing fuel, or 3) a bug that is sending it to the ether (which our checks would catch). Do you have a sense of which this might be, or did I miss a possibility? I can't imagine case 1 if there are any live plants left, so the whole ecosystem must have collapsed and burned, right?
@rgknox I have stepped onto the event horizon and uncovered a wormhole into instability; at least that is how this fail feels. @rgknox it seems to be something with the fuels. The system converts to almost all grass in my tests, but the fuels are not mapped correctly. The live grass is disconnected from the fuel pool structure by year 2. There are periodic fires, but not in the style that I am used to. The whole thing is completely strange. A few more simulation years and I will push the changes to my branches (passive crown fire and read lightning).
@slevisconsulting @pollybuotte please test the updates to see if things behave.
@slevisconsulting please check FUEL_MOISTURE_NFSC as another diagnostic of the fire behavior in regards to this issue. So far things have not totally failed for both a lightning test and a crown fire test, but I may be missing something.
@slevisconsulting @pollybuotte sadly the read lightning branch also failed in year 5. I am trying to create a new branch as a merge between PR #561 and @slevisconsulting's branch https://github.com/slevisconsulting/fates/tree/read_lightning My test with master for PR #561 is good through year 33.
@slevisconsulting @pollybuotte @lmkueppers @rgknox
@slevisconsulting yes, I am using Ryan's read_lightning api branch.
Great news @jkshuman! One more question, with the hope of narrowing down my ./case.submit error: Are you running on cheyenne? Or izumi/hobart? The error that I get is on izumi.
hobart.
Got rid of the ./case.submit error. (In case this helps others: I had forgotten to place the three …) @jkshuman, results from my test run look correct in the first few years. Again, thank you for removing the bug (which, if I understood correctly, still remains unidentified in the versions that fail). @pollybuotte if you would like me to double check one of your 2D transect cases before you embark on the multi-thousand ensembles, please let me know.
@slevisconsulting it would be great if you could run the elevation transect grid from bare ground to check. The domain file is domain.lnd.CA_ssn_czo_4km.nc
That is the one that I'm testing, and it continues to work correctly this far.
More good news:
@slevisconsulting that's good. I will implement this fix into passive_crown_fire and test. Good news that I don't need to roll things all the way back in that branch. EDCohortdynamics breaking the world....
If you do believe that there is a bug in master, or something that needs attention (i.e. a vulnerability that could enter master), could either of you (@jkshuman or @slevisconsulting) encapsulate it in a specific issue? I'm not clear on where the problem is. For instance, I'm seeing reference to EDCohortdynamics, but I'm not sure what the reference refers to. Or maybe the bug was not necessarily isolated and identified yet, but was circumvented somehow by using the correct mixture of branches and commits?
@rgknox I am chatting with @glemieux and I will update you on my test. There was one line in zero_cohort that I had deleted and I forgot to revert it yesterday. In my busy, hasty day, I literally didn't see that on the commit until Greg and I looked at it today. Added in the zero for fraction_crown_burned in EDCohortdynamics and testing now.
@rgknox @glemieux rest assured there is not a hidden bug, but there may be a vulnerability? I mistakenly introduced this behavior, so it is not hidden, but we should certainly be aware of why things went haywire. This test addresses that: the test I ran, which reverts all parts of the jkshuman/fates@8b44706 commit, fixes the buggy behavior. (Lesson to self: just revert the commit rather than doing it by hand; I missed an obvious line deletion in my haste...) Specifically, in the bad version the fire area quickly becomes odd (small point values across SA rather than broad areas), fuel moisture shows the same bad point pattern where there should be broad coverage even with burning, and TLAI shows coverage of vegetation suggesting fuel is present, further suggesting this is bad behavior. In that bad commit there were three variables that were removed from zero_cohort (frac_crown_burned, crownfire_mort, cambial_mort). I mistakenly left out frac_crown_burned when I reverted that commit manually. It may be worth figuring out why this one variable (or set of variables) creates this bad behavior. Adding a reference of this issue to these zero_cohort/nan_cohort issues #575 and #231 so we can revisit my trip to the event horizon. Fun times.
[Figure: FIRE_AREA, Yr5_Zero_frac_crown_burn_left_EDCohort_right_SFMain]
@jkshuman and I chatted about this yesterday; my guess is that this behavior is the result of the compiler trying to interpret what to do with …
@glemieux @rgknox I didn't think much of making this change at the time. frac_crown_burned is in the nan_cohort, and it is set to zero inside SFMain. I was not expecting this sort of fail. (Another lesson to self is to test everything...) See fates/biogeochem/EDCohortDynamicsMod.F90, line 537 and lines 903 to 928, at commit 1ad93c3.
@slevisconsulting @pollybuotte @lmkueppers I just returned from the FireMIP conference, and was alerted to the fact that using the LIS lightning data requires a scaling adjustment to account for the fraction of cloud-to-ground flashes (0.20) and the efficiency/energy required to generate fires (0.04). I confirmed in the CLM code that the Li fire model uses 0.22 as a scaling factor on this data. We will need to update this for these simulations, and make a decision to use 0.20, as many fire models do, or both 0.20 and 0.04. With this, the data will give natural fires only; anthropogenic fires would be handled with a different set of equations.
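The arithmetic behind these factors can be sketched as follows (a hedged Python sketch; the constants are the numbers quoted in the comment above, not values read from the FATES or CLM source):

```python
# Illustrative scaling of a total (cloud + ground) flash rate from the LIS
# dataset down to a natural-ignition rate, per the factors discussed above.
CLOUD_TO_GROUND = 0.20      # fraction of total flashes that reach the ground
IGNITION_EFFICIENCY = 0.04  # fraction of ground strikes energetic enough to ignite
LI_CLM_FACTOR = 0.22        # single combined factor the Li model uses in CLM

def natural_ignition_rate(total_flash_rate, use_efficiency=False):
    """Scale total flashes to cloud-to-ground strikes, optionally also
    applying the ignition-efficiency factor."""
    rate = total_flash_rate * CLOUD_TO_GROUND
    if use_efficiency:
        rate *= IGNITION_EFFICIENCY
    return rate
```

For 100 flashes, the 0.20 factor alone leaves 20 ground strikes, while applying both factors leaves 0.8 potential ignitions, which illustrates why the choice between "0.20 only" and "0.20 and 0.04" matters so much for burned area.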
Good catch @jkshuman, thank you!
And welcome back!
@slevisconsulting thanks! I highly recommend visiting South Africa.
Regarding the question of one or both scaling factors, we need to make sure not to double count. Is it possible that the 0.04 factor corresponds to SPITFIRE's FDI?
@slevisconsulting I am testing a few changes, and found a change you made to FDI with read-lightning data that needs to be reverted. The FDI calculation should not be changed. The ignitions dataset only provides strike data, not successful ignitions. Successful ignition is related to fuel conditions and climate. So this new conditional should be removed, and the original equation retained.
@jkshuman I disagree, unless I have misunderstood your comment: the first half of this if-statement corresponds to cases when the input data literally represent successful ignitions rather than lightning strikes, e.g. Bin Chen's data. @lmkueppers's group would like to keep this option as far as I know. In fact, now I realize that we need a similar if-statement to bypass the cloud-to-ground coefficient when using Bin's data...
@slevisconsulting I agree that Bin's successful-strike dataset is a special case, and should use a conditional for FDI and, yes, would need to bypass that cloud-to-ground reduction on lightning. For the FDI conditional on Bin's data, @slevisconsulting @lmkueppers @pollybuotte it would be worth adding a flag or print statement in situations where there is an ignition and FDI is set to 1, but the FDI would have indicated low ignition potential without this data. We could print both the ignition FDI of 1 from Bin's strike data and the calculated FDI, and then look at that alongside the climate and fuel data. Those differences would provide information about how the climate forcing data and the acc_NI or this equation are potentially missing details. That would be a nice evaluation with Bin's data, and a nice evaluation of this part of the fire model. Let's talk about that. (FDI affects area burned and fire duration, so these differences carry through to other parts of the fire code.)
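The diagnostic proposed above could look something like this (a hedged Python sketch; the function name and the low-FDI threshold are hypothetical, and the real check would live in the Fortran fire code):

```python
# Illustrative version of the proposed FDI handling: when the input data are
# successful ignitions (e.g. Bin Chen's dataset), FDI is forced to 1, but we
# also report the FDI that the normal calculation would have produced so the
# two can be compared against climate and fuel conditions.
def effective_fdi(calculated_fdi, data_is_successful_ignitions,
                  low_fdi_threshold=0.1):  # threshold is illustrative only
    if data_is_successful_ignitions:
        if calculated_fdi < low_fdi_threshold:
            # An observed ignition where the model would have said "low
            # ignition potential" is exactly the interesting case to log.
            print(f"ignition observed where calculated FDI = {calculated_fdi:.3f}")
        return 1.0
    return calculated_fdi
```

Logging both values, rather than silently overwriting the calculated FDI, is what would make the comparison with the forcing data and acc_NI possible.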
I'd like to request an additional check that the lightning file has been read. This would prevent a user from running with the wrong CLM branch. As it is now, running with fates_next_api does not cause a fail, but no fire results because there are no ignitions. |
@jkshuman updated me as follows: Also... In src/main/clm_initializeMod.F90, where you see […], need to add […]. @pollybuotte confirmed that the above works.
@slevisconsulting @pollybuotte @ckoven I am testing in the tropics and it seems fine so far. @pollybuotte said she would test in CA. Let me know if there is a problem anywhere. (I hate messing with the api...)
A plan we talked about is to change the use_fates_spitfire namelist parameter on CLM from a binary switch to an integer flag, where:
The CTSM issue this corresponds with is http://github.com/ESCOMP/CTSM/issues/982 |
@rgknox from looking into the CTSM side, I think it probably makes the most sense to keep the use_fates_spitfire namelist parameter, but also pass the fire_method character string into FATES. It will indicate different levels, something like fatesnofiredata and fatesfirelnfm. Later, other data levels could be added: fateslNpopdens, fateslNpNgdp, fateslNpNgNag. Most of this would be on the HLM side; it's just useful for FATES to check it to see what data is available from the HLM.
Actually, now I take that back. You could change it to use one type to signify whether SF is on and what level of data is being sent to it. At first I thought it needed to use the fire_method character control string that is being used in cnfire, but now I see that it doesn't need to be that way.
OK, I've fleshed out a proposal in the CTSM issue. It removes most of what was added into FATES and puts it into the HLM. There it extends an existing class with a FATES-specific fire data class, so there's no duplication of either FATES or CTSM code. It also removes calling CTSM modules from within FATES, which is an important design feature for FATES development (in order to support more than one HLM). The thing that needs to be decided on the FATES side is how to trigger the different levels of fire data usage. You could use an integer, as suggested above by @rgknox, to trigger both SPITFIRE and the fire data level. On the CTSM side I was suggesting a character string called "fire_method", because that's how it's handled by CNFire in CTSM. But it wouldn't have to be handled that way; it just needs to be coordinated between the two.
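A hedged sketch of what the integer-flag approach could look like (the specific values and their meanings are illustrative only, not the eventual CTSM/FATES namelist definition):

```python
# Illustrative mapping only: the actual levels chosen for use_fates_spitfire
# may differ. The idea is that 0 keeps fire off, and positive values turn
# SPITFIRE on while stating which ignition data the HLM streams to FATES.
SPITFIRE_MODES = {
    0: "no fire",
    1: "SPITFIRE on, constant ignitions (ED_val_nignitions)",
    2: "SPITFIRE on, lightning data streamed from the HLM",
    3: "SPITFIRE on, successful-ignition data (FDI forced to 1)",
}

def spitfire_enabled(mode):
    """Fire is active for any positive data level."""
    if mode not in SPITFIRE_MODES:
        raise ValueError(f"unknown use_fates_spitfire value: {mode}")
    return mode > 0
```

A single integer like this lets FATES check one value to learn both whether SPITFIRE is on and what data the HLM is providing, which is the coordination point discussed above.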
This got fixed by #635, so closing.
I have completed two FATES fire simulations with the default lightning parameter ED_val_nignitions ... I encountered a problem, however, when I replaced the scalar ED_val_nignitions in this line

currentPatch%NF = ED_val_nignitions * currentPatch%area/area /365

with the vector lnfm24(g), the latter being the daily average of 3-hourly lightning data from a 2-D dataset. I used these two lines in subroutine area_burnt to locate the index g:

p = currentCohort%pft
g = patch%gridcell(p)

The problem is that fire can eliminate currentCohort%pft entirely, so the model crashes when I ask for it. I have a temporary fix. In subroutine fire_intensity I have replaced

currentPatch%fire = 1 ! Fire... :D

with

if (currentPatch%total_tree_area + currentPatch%total_grass_area > 0._r8) then
   currentPatch%fire = 1 ! Fire... :D
else
   currentPatch%fire = 0
end if

@jkshuman does not like this because this way a bare-ground grid cell full of dead litter cannot burn.
My question @rgknox @ckoven @pollybuotte @lmkueppers @ekluzek is this: Can anyone recommend an alternate way of locating the index g? I think this would eliminate the problem altogether.
Notes:
currentPatch%NF = lnfm24(g) * 24._r8 ! #/km2/hr to #/km2/day
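For reference, the two NF formulations quoted in this comment can be sketched numerically (a hedged Python sketch; the argument names stand in for the Fortran variables and are not part of the FATES API):

```python
# Illustrative versions of the two NF (number of ignitions) paths above.

def nf_default(ed_val_nignitions, patch_area, area):
    # Default path: an annual ignition count, scaled by the patch's share
    # of the grid cell and converted to a per-day rate.
    return ed_val_nignitions * patch_area / area / 365.0

def nf_from_lightning(lnfm24):
    # Data path: lnfm24 is a daily-mean flash rate in #/km2/hr (the mean
    # of eight 3-hourly samples), so multiplying by 24 gives #/km2/day.
    return lnfm24 * 24.0
```

The unit conversion is the only subtlety in the data path: because lnfm24 is already a daily mean of the 3-hourly rates, a single factor of 24 turns the hourly rate into a daily count.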