Reading lightning data for use with SPITFIRE #562

Closed · slevis-lmwg opened this issue Aug 2, 2019 · 55 comments
@slevis-lmwg (Contributor) commented Aug 2, 2019

I have completed two FATES fire simulations with the default lightning parameter ED_val_nignitions...

  1. At the CZ2 site with 100% ponderosa pine initialized with observed stand data (point simulation)
  2. On a 2-D transect that includes CZ2 as one of the grid cells; started from bare ground, ran 20 years without fire, then continued with fire

I encountered a problem, however, when I replaced the scalar ED_val_nignitions in this line
currentPatch%NF = ED_val_nignitions * currentPatch%area/area /365
with the vector lnfm24(g), the latter being the daily average of 3-hourly lightning data from a 2-D dataset. I used these two lines in subroutine area_burnt to locate the index g:
p = currentCohort%pft
g = patch%gridcell(p)
The problem is that fire can eliminate currentCohort%pft entirely, so the model crashes when I ask for a pft that no longer exists.

I have a temporary fix. In subroutine fire_intensity I have replaced...
currentPatch%fire = 1 ! Fire... :D
with
if (currentPatch%total_tree_area + currentPatch%total_grass_area > 0._r8) then
currentPatch%fire = 1 ! Fire... :D
else
currentPatch%fire = 0
end if

@jkshuman does not like this because this way a bare ground grid cell full of dead litter cannot burn.

My question @rgknox @ckoven @pollybuotte @lmkueppers @ekluzek is this: Can anyone recommend an alternate way of locating the index g? I think this would eliminate the problem altogether.

Notes:

  • I have confirmed that this approach does pick up the correct lightning value for the location.
  • In the above line that calculates NF, I converted the lightning units based on a comment in the code stating that NF is in #/km2/day:
    currentPatch%NF = lnfm24(g) * 24._r8 ! #/km2/hr to #/km2/day
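For concreteness, a minimal sketch of the two NF paths discussed here; the two assignments come straight from this thread, while the declarations and the use_lightning_stream switch are illustrative assumptions:

   ! Sketch only: assumes lnfm24 holds daily-mean strike rates per gridcell.
   real(r8), allocatable :: lnfm24(:)       ! lightning, #/km2/hr, by gridcell
   logical :: use_lightning_stream          ! hypothetical switch
   integer :: g                             ! gridcell index for this patch

   if (use_lightning_stream) then
      currentPatch%NF = lnfm24(g) * 24._r8  ! #/km2/hr to #/km2/day
   else  ! FATES default, constant in space and time
      currentPatch%NF = ED_val_nignitions * currentPatch%area/area /365
   end if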
@jkshuman (Contributor) commented Aug 2, 2019

Thanks for opening this @slevisconsulting. Tagging @rosiealice for her take as well.
My concern is that this temporary fix of requiring tree or grass crown area in the patch for fire has implications for regeneration and regrowth. If the ground fuel can support a fire, I think we should burn it. This connects to the larger issue that fire should, in reality, be impacting the soil bed as it relates to regeneration. This is happening at the patch level on a given day, so maybe things work out at the site scale? It is not clear to me how this may alter fire behavior overall. I want to discuss this broadly in terms of long-term behavior and expectations for the model, specifically how this might impact regeneration (short-term) and migration (long-term development).

This build up of fuel is also closely related to fire duration (we have a cap) and that is in need of development. (on my list of things to do.)

@slevis-lmwg (Contributor, Author)
The 2-D simulation with 2-D lightning data fails and shows me that what I thought worked in the 1-D simulation worked only because I was in 1-D and g always equaled 1.

@jkshuman in our call you mentioned some met variables (e.g. precip) used by FATES. I will look at how they are used and will try to mimic that.
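For what it's worth, a hypothetical sketch of what mimicking the met forcing could look like: pass the daily lightning through a site-level boundary-condition field filled on the HLM side of the interface, so SPITFIRE never needs a pft-based gridcell lookup. The lightning24 field name is an assumption, not the actual FATES API:

   ! HLM side of the interface, once per day, for site s on gridcell g:
   bc_in(s)%lightning24 = lnfm24(g)               ! hypothetical bc_in field

   ! FATES side, when the site is updated:
   currentSite%lightning24 = bc_in(s)%lightning24

   ! SPITFIRE then uses the site-level value; no index g needed:
   currentPatch%NF = currentSite%lightning24 * 24._r8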

@slevis-lmwg (Contributor, Author)
Seems to work now and, as Jackie and I hoped, without requiring total_tree_fraction or total_grass_fraction > 0.

@rgknox (Contributor) commented Aug 5, 2019

@slevisconsulting , how are things going here? Feel free to link the branch that you are using and I can look over the code and see if things make sense to me too.

One thing I'm noticing is that, by bringing in a spatial lightning dataset, there are CLM data structures that need to be accessed in the fire code. This is something we could add to the coupling interface (and I could help with that as well) if need be.

slevis-lmwg referenced this issue in slevis-lmwg/slevis_fates_work Aug 5, 2019
I replicated code used in CLM's BGC to read Fang Li's or FMEC's
lightning data for use in Fang Li's fire model.

I also turned on fates_seed_suppl in fates_params_default.cdl.
Initially I turned this on as a test because the model was crashing as a
result of the first way that I chose to pass the lightning data to
SPITFIRE. Turning on fates_seed_suppl didn't help, but I left it on
anyway.
@jkshuman (Contributor) commented Aug 6, 2019

@slevisconsulting glad you checked functionality before we got into a big discussion on the regeneration impacts. And even better news that you fixed it.
@rgknox putting this into the coupler makes sense. Reading a lightning dataset will become more standard for running the fire model as we move forward. (Though I want to retain the ability to just use a static value for testing.)

slevis-lmwg referenced this issue in slevis-lmwg/slevis_fates_work Aug 10, 2019
1. If user sets stream_fldfilename_lightng = 'nofile' in user_nl_clm,
   then FATES will use ED_val_nignitions from FATES params to calculate
   NF, i.e. the FATES default lightning value that is constant in time
   and space.
2. If user does not set stream_fldfilename_lightng in user_nl_clm, then
   FATES will read the Li 2016 lightning data set to calculate NF. This
   is a 1-degree x 1-degree 3-hourly climatological data set, so it
   varies in time and space without interannual variability.
3. If user sets stream_fldfilename_lightng to a lightning data set, then
   FATES will read the corresponding data set as in (2).
4. If user sets stream_fldfilename_lightng to a file name that contains
   the string "ignition", then FATES will do as in (2) and set FDI = 1
   because the data represent successful ignitions rather than
   lightning.
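A minimal sketch of the selection logic the four cases above imply; stream_fldfilename_lightng and ED_val_nignitions are the real names, while the two logicals are illustrative assumptions:

   ! Decode the namelist setting per the four cases above (sketch only).
   if (trim(stream_fldfilename_lightng) == 'nofile') then
      use_lightning_stream = .false.   ! case 1: constant ED_val_nignitions
   else
      use_lightning_stream = .true.    ! cases 2-4: read the stream file
      if (index(stream_fldfilename_lightng, 'ignition') > 0) then
         force_fdi_to_one = .true.     ! case 4: data are successful ignitions
      end if
   end if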
@slevis-lmwg (Contributor, Author)
@lmkueppers @jkshuman at Friday's meeting we talked about making a list of fire-related model variables that we might compare to obs if obs were available. As a starting point, here are the fire-related variables that I have found in the CLM-FATES history output:

FIRE_AREA (fraction of grid cell)
FIRE_INTENSITY (kJ/m/s)
FIRE_NESTEROV_INDEX (unitless)
FIRE_ROS i.e. Rate of spread (m/min)
FIRE_TFC_ROS i.e. Total fuel consumed (unitless)
Fire_Closs (gC/m2/s) Carbon loss due to fire
SCORCH_HEIGHT (m)
SUM_FUEL related to ROS, so omits 1000-hr fuels (gC/m2)
fire_fuel_bulkd (?) Fuel bulk density
fire_fuel_sav (?) Fuel surface-to-vol ratio
fire_fuel_mef (units?) Fuel moisture
FIRE_FUEL_EFF_MOIST (units?) different than previous variable?
FUEL_MOISTURE_NFSC (?) Size-resolved fuel moisture
M5_SCLS (#/ha/yr) Fire mortality by size

@jkshuman (Contributor)
@slevisconsulting let's open a separate issue for tracking these variables, but sounds good. I can add details as well and clarify a few variables.

@jkshuman (Contributor)
@slevisconsulting @lmkueppers @rosiealice the read-lightning code works with the general CLM code across the tropics! The run is still going, but it is definitely different in space and time in the first two years. Thanks @slevisconsulting

@rosiealice (Contributor)
Cool. Nice job everyone!

@jkshuman (Contributor) commented Oct 6, 2019

@slevisconsulting @lmkueppers @pollybuotte @rgknox there is a problem with the lightning branch. I am not sure if it is on the ctsm side, where I am using https://github.com/rgknox/ctsm/tree/api_8_works_w_read_lightning, or on the fates side, where I am using https://github.com/jkshuman/fates/tree/ignition-fixes-tag130_api8.
The runs I did with these branches initially have burned area, but by year 5 there is no fuel and burned area goes to zero. For development of #572 and #573 I suggest using the master branch and then the respective crown fire branches, until the lightning piece is sorted out.

Master tag 130 with api8 was tested with PRs #524 and #561 (both in master already), and these both behave "normally" out to year 10 with respect to fire behavior across the tropics.

@slevis-lmwg (Contributor, Author) commented Oct 7, 2019

@jkshuman here are some additional clues:

  1. The lightning code worked before I updated to newer versions of ctsm/fates. These were the ctsm/fates versions when it worked:
    api_7.3/read_lightning_works_w_api_7.3
    I have shown figures from 1D (CZ2) simulations with established stand initial conditions and 2D (transect) simulations with bare ground initial conditions that ran for 30+ years and kept burning throughout the runs.

  2. On 9/16 I posted that I was encountering the same problem in my active crown fire pull request, which does not include the lightning code. The problem still occurs when I shut off crown fire. I branched my active_crown_fire branch from @jkshuman's passive_crown_fire branch (Passive Crown Fire development, #572).

Before reading @jkshuman's post I was about to run with my updated lightning branch to determine whether or not the problem was with #572, but you have now confirmed that the updated lightning branch has the same problem.

My question to @lmkueppers is this:
Considering that @pollybuotte is about to start thousands of ensemble simulations using the new lightning code, do you prefer that:

  • I help @jkshuman debug this issue? Or that
  • @pollybuotte work with the ctsm/fates version in which everything worked?
  • I'm also open to options that I haven't considered

I will wait for @lmkueppers's decision on this before I complete additional work.

@lmkueppers
Hm. This is annoying... I'm hoping that @rgknox or @glemieux have some insights here.

@pollybuotte is planning to run with CLM lightning across a transect with fire on, so she would need this to be working. I don't know which version of FATES she needs to use (or the diffs between these options). @slevisconsulting, I think it's worth getting this debugged so we're not moving backwards with Polly's runs. Thanks.

@jkshuman (Contributor) commented Oct 7, 2019
@slevisconsulting @lmkueppers @rgknox I added a few details to my comment last night. Running with the master branch that has PR #561, fire behavior is normal through 18 years in the tropics. So it is something unique to those other branches. (This is good news that master is still running normally!)

There was a conflict with @slevisconsulting's lightning branch, so maybe I didn't resolve it correctly. There is also one commit on those branches (read_lightning and passive crown fire), jkshuman@8b44706, that is not on master. Will revert it and test.

@jkshuman (Contributor) commented Oct 7, 2019

@slevisconsulting @lmkueppers @rgknox my initial testing through 10 years shows that reverting that commit seems to fix the problem. I am testing it within the passive crown fire branch and the read lightning branch:
https://github.com/jkshuman/fates/tree/passive_crown_fire
and https://github.com/jkshuman/fates/tree/read_lightning-ignfix-tag130-api8
Perhaps @rgknox can weigh in on why these changes caused this fail, but hopefully this gets us back on track. Will let you know how these tests look; simulations running now.

@rgknox (Contributor) commented Oct 7, 2019

The runs I did with these branches initially have burned area, but then by year 5 there is no fuel and burned area goes to zero. For development of the #572 and #573 I suggest using the master branch and then these respective crown fire branches, until the lightning piece is sorted out.

Question: if there is no fuel, this could be because 1) there is no influx of new fuel, 2) excessive burning of existing fuel, or 3) a bug that is sending it to the ether (which our checks would catch).

Do you have a sense of which this might be, or did I miss a possibility? I can't imagine case 1 if there are any live plants left, so the whole ecosystem must have collapsed and burned, right?

@jkshuman (Contributor) commented Oct 7, 2019

@rgknox I have stepped onto the event horizon and uncovered a wormhole into instability; at least that is how this fail feels. It seems to be something with the fuels. The system converts to almost all grass in my tests, but the fuels are not mapped correctly. The live grass is disconnected from the fuel pool structure by year 2. There are periodic fires, but not in the style that I am used to. The whole thing is completely strange.
Good news is that reverting that commit restores balance to the universe: jkshuman@8b44706

A few more simulation years and I will push the changes to my branches. (passive crown fire and read lightning)

@jkshuman (Contributor) commented Oct 7, 2019

@slevisconsulting @pollybuotte please test the updates to see if things behave.

@jkshuman (Contributor) commented Oct 7, 2019

@slevisconsulting please check FUEL_MOISTURE_NFSC as another diagnostic of fire behavior with regard to this issue. So far things have not totally failed for both a lightning test and a crown fire test, but I may be missing something.

@jkshuman (Contributor) commented Oct 7, 2019

@slevisconsulting @pollybuotte sadly the read lightning branch also failed in year 5. I am trying to create a new branch as a merge between PR #561 and @slevisconsulting's branch https://github.com/slevisconsulting/fates/tree/read_lightning

My test with master for PR #561 is good through year 33.

@jkshuman (Contributor) commented Oct 8, 2019

@slevisconsulting @pollybuotte @lmkueppers @rgknox
I have a NEW functional branch with read lightning based off of PR #561 from master.
https://github.com/jkshuman/fates/tree/read_lightning_pr561
It appears to be reading the lightning file and functioning just fine through year 6 across the tropics (normal fuels extent, fire area, etc.). Please test and let me know. Not sure where the fail is inside those other branches of mine. Will think of a plan for the passive crown fire branch. What a hassle.

@slevis-lmwg (Contributor, Author)
@jkshuman thank you for trying to figure this out. I'm trying to test your new branch, but I'm coming across a different problem: my ./case.submit gives an error. Are you using @rgknox's api8 that was modified for read_lightning, or something else?

@jkshuman (Contributor) commented Oct 8, 2019

@slevisconsulting yes, I am using Ryan's read_lightning api branch.
@lmkueppers @pollybuotte @rgknox happy to report this run was successful through year 21. Please confirm that it is also working in CA.

@slevis-lmwg (Contributor, Author)
Great news @jkshuman! One more question, with the hope of narrowing down my ./case.submit error: are you running on cheyenne or izumi/hobart? The error that I get is on izumi.

@jkshuman (Contributor) commented Oct 8, 2019 via email

@slevis-lmwg (Contributor, Author)
Got rid of the ./case.submit error. (In case this helps others: I had forgotten to place the three user_datm.streams.txt.CLMCRUNCEP.* files needed for our 2D transect runs in my case directory :-))

@jkshuman, results from my test run look correct in the first few years. Again, thank you for removing the bug (which, if I understood correctly, still remains unidentified in the versions that fail).

@pollybuotte if you would like me to double check one of your 2D transect cases before you embark on the multi-thousand ensembles, please let me know.

@pollybuotte
@slevisconsulting it would be great if you could run the elevation transect grid from bare ground to check. Domain file is domain.lnd.CA_ssn_czo_4km.nc
Thanks!

@slevis-lmwg (Contributor, Author)
@slevisconsulting it would be great if you could run the elevation transect grid from bare ground to check.

That is the one I'm testing, and it continues to work correctly so far.

@slevis-lmwg (Contributor, Author)
More good news:
With this bug now removed, I went back and differenced the read_lightning_pr561 branch (https://github.com/jkshuman/fates/tree/read_lightning_pr561) from my active_crown_fire branch/PR (jkshuman#5) and discovered the cause of the no-fire-after-yr-1 problem. I pushed my corrections to that PR with an explanation of the solution.

@jkshuman (Contributor) commented Oct 9, 2019

@slevisconsulting that's good. I will implement this fix into passive_crown_fire and test. Good news that I don't need to roll things all the way back in that branch. EDCohortdynamics breaking the world....

@rgknox (Contributor) commented Oct 9, 2019

If you do believe that there is a bug in master, or something that needs attention (i.e. a vulnerability that could enter master), could either of you (@jkshuman or @slevisconsulting) encapsulate it in a specific issue? I'm not clear on where the problem is. For instance, I'm seeing reference to EDCohortdynamics, but I'm not sure what the reference refers to. Or maybe the bug has not been isolated and identified yet, but was circumvented somehow by using the correct mixture of branches and commits?

@jkshuman (Contributor) commented Oct 9, 2019

@rgknox I am chatting with @glemieux and will update you on my test. There was one line in zero_cohort that I had deleted and forgot to revert yesterday. In my busy, hasty day I literally didn't see it on the commit until Greg and I looked at it today. Added the zero for fraction_crown_burned back into EDCohortdynamics and testing now.

@jkshuman (Contributor) commented Oct 10, 2019

@rgknox @glemieux rest assured there is not a hidden bug, but there may be a vulnerability? I mistakenly introduced this behavior, so it was not hidden, but we should certainly be aware of why things went haywire. This test addresses that: the test I ran, which reverts all parts of the jkshuman/fates@8b44706 commit, fixes the buggy behavior. (Lesson to self: just revert the commit rather than doing it by hand. I missed an obvious line deletion in my haste...)

Specifically, in the bad version the fire area quickly becomes odd (small point values across SA rather than broad areas), fuel moisture shows the same bad point pattern where it should be broad coverage even with burning, and TLAI shows vegetation cover suggesting fuel is present, further suggesting this is bad behavior. In that bad commit three variables were removed from zero_cohort (frac_crown_burned, crownfire_mort, cambial_mort). I mistakenly left out frac_crown_burned when I reverted that commit manually. It may be worth figuring out why this one variable (or set of variables) creates this bad behavior. Adding a reference to this issue in the zero_cohort/nan_cohort issues #575 and #231 so we can revisit my trip to the event horizon. Fun times.
I attach a few screenshots of the simulations (left is correct, right is the buggy version).
simulations on Hobart:
FIX: /scratch/cluster/jkshuman/archive/Fire_zero-frac-crown-test_4x5_9c12e402_0760b91a/lnd/hist
BAD: /scratch/cluster/jkshuman/archive/Fire_lightning_test_junk_4x5_a1a8efe5_bc1d27ed/lnd/hist

[Screenshots at year 5 (left panels: fraction_crown_burned zeroed in EDCohortDynamics, the fix; right panels: zeroed only in SFMain, buggy): FIRE_AREA, FUEL_MOISTURE_NFSC, TLAI.]

@glemieux (Contributor) commented Oct 10, 2019

@jkshuman and I chatted about this yesterday; my guess is that this behavior is the result of the compiler trying to interpret what to do with frac_crown_burned, since it was an uninitialized variable. I'm working on a fix in CTSM for a similar issue (#548) right now. @rgknox if this is the case, it's probably another reason to try and adopt compiler flag options and other diagnostics during testing, as you suggested in ESMCI/cime#3205.

@jkshuman (Contributor)
@glemieux @rgknox I didn't think much of making this change at the time. frac_crown_burned is in the nan_cohort, and it is set to zero inside SFMain. Was not expecting this sort of fail. (another lesson to self is to test everything...)

currentCohort%fraction_crown_burned = nan ! proportion of crown affected by fire

fates/fire/SFMainMod.F90, lines 903 to 928 in 1ad93c3:

do while(associated(currentCohort))
   currentCohort%fraction_crown_burned = 0.0_r8
   if (EDPftvarcon_inst%woody(currentCohort%pft) == 1) then !trees only
      ! Flames lower than bottom of canopy.
      ! c%hite is height of cohort
      if (currentPatch%SH < (currentCohort%hite-currentCohort%hite*EDPftvarcon_inst%crown(currentCohort%pft))) then
         currentCohort%fraction_crown_burned = 0.0_r8
      else
         ! Flames part of way up canopy.
         ! Equation 17 in Thonicke et al. 2010.
         ! flames over bottom of canopy but not over top.
         if ((currentCohort%hite > 0.0_r8).and.(currentPatch%SH >= &
              (currentCohort%hite-currentCohort%hite*EDPftvarcon_inst%crown(currentCohort%pft)))) then
            currentCohort%fraction_crown_burned = (currentPatch%SH-currentCohort%hite*(1.0_r8- &
                 EDPftvarcon_inst%crown(currentCohort%pft)))/(currentCohort%hite* &
                 EDPftvarcon_inst%crown(currentCohort%pft))
         else
            ! Flames over top of canopy.
            currentCohort%fraction_crown_burned = 1.0_r8
         endif
      endif
      ! Check for strange values.
      currentCohort%fraction_crown_burned = min(1.0_r8, max(0.0_r8,currentCohort%fraction_crown_burned))
      ! (excerpt ends at line 928; the if-blocks and loop close further down)
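For context, here is a self-contained sketch of the nan_cohort/zero_cohort pattern at issue. Only fraction_crown_burned comes from this thread; the module scaffolding and type are illustrative, not FATES's actual code:

module cohort_init_sketch
   use shr_kind_mod,   only : r8 => shr_kind_r8
   use shr_infnan_mod, only : nan => shr_infnan_nan, assignment(=)
   implicit none
   type :: cohort_sketch_type
      real(r8) :: fraction_crown_burned
   end type
contains
   subroutine nan_cohort(c)
      type(cohort_sketch_type), intent(inout) :: c
      ! Poison new cohorts so any read-before-write is detectable.
      c%fraction_crown_burned = nan
   end subroutine
   subroutine zero_cohort(c)
      type(cohort_sketch_type), intent(inout) :: c
      ! Any field that downstream code may read before SFMain sets it
      ! must be zeroed here; deleting this line is what let NaN leak
      ! into cohorts the fire loop had not yet touched.
      c%fraction_crown_burned = 0.0_r8
   end subroutine
end module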

@jkshuman (Contributor) commented Oct 28, 2019

@slevisconsulting @pollybuotte @lmkueppers I just returned from the FireMIP conference and was alerted to the fact that using the LIS lightning data requires a scaling adjustment to account for the fraction of cloud-to-ground flashes (0.20) and the efficiency/energy required to generate fires (0.04). I confirmed in the CLM code that the Li fire model uses 0.22 as a scaling factor on this data. We will need to update this for these simulations, and make a decision to use 0.20 as many fire models do, or both 0.20 and 0.04. With this it will give natural fires only; anthropogenic fires would be handled with a different set of equations.

@slevis-lmwg (Contributor, Author)
Good catch @jkshuman thank you!

@slevis-lmwg (Contributor, Author)
And welcome back!

@jkshuman (Contributor)
@slevisconsulting thanks! I highly recommend visiting South Africa.
I am testing this scaler across tropics and SA in a 4x5 run. Realizing that we have no connection between fire and ignitions outside of area burn. Will open a separate issue on this...for better tracking.

@slevis-lmwg (Contributor, Author)
[...] using the LIS lightning data requires a scaling adjustment to account for the fraction of cloud-to-ground flashes (0.20) and the efficiency/energy required to generate fires (0.04). I confirmed in the CLM code that the Li fire model uses 0.22 as a scaling factor on this data. We will need to update this for these simulations, and make a decision to use 0.20 as many fire models do, or both 0.20 and 0.04.

Regarding the question of one or both scaling factors, we need to make sure not to double count. Is it possible that the 0.04 factor corresponds to SPITFIRE's FDI?
AB = size_of_fire * NF * currentSite%FDI   (in subr. area_burnt)
where AB is area burned, NF is the number of ignitions, and FDI is the ignition potential.
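To make the double-counting question concrete, a sketch of where each factor would act, assuming (as above) lnfm24 in total flashes/km2/hr; the coefficient names are illustrative:

   real(r8), parameter :: cg_frac = 0.20_r8  ! cloud-to-ground fraction of total flashes
   real(r8), parameter :: ign_eff = 0.04_r8  ! efficiency/energy to generate fires

   ! Scale total flashes down to cloud-to-ground strikes:
   currentPatch%NF = lnfm24(g) * 24._r8 * cg_frac   ! strikes/km2/day

   ! Open question: does ign_eff also belong here, or is that suppression
   ! already what FDI represents in AB = size_of_fire * NF * currentSite%FDI?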

@jkshuman (Contributor) commented Nov 9, 2019

@slevisconsulting I am testing a few changes and found a change you made to FDI with read lightning data that needs to be reverted. The FDI calculation should not be changed. The ignitions dataset only provides strike data, not successful ignitions; successful ignition is related to fuel conditions and climate. So this new conditional should be removed and the original equation retained.

https://github.com/slevisconsulting/fates/blob/dc8f83a24164ea0f28ad8b385cb86e563214deb8/fire/SFMainMod.F90#L1037-L1041

@slevis-lmwg (Contributor, Author)
@slevisconsulting I am testing a few changes and found a change you made to FDI with read lightning data that needs to be reverted. The FDI calculation should not be changed. The ignitions dataset only provides strike data, not successful ignitions; successful ignition is related to fuel conditions and climate. So this new conditional should be removed and the original equation retained.

https://github.com/slevisconsulting/fates/blob/dc8f83a24164ea0f28ad8b385cb86e563214deb8/fire/SFMainMod.F90#L1037-L1041

@jkshuman I disagree, unless I have misunderstood your comment:

The first half of this if-statement corresponds to cases where the input data literally represent successful ignitions rather than lightning strikes, e.g. Bin Chen's data. @lmkueppers's group would like to keep this option, as far as I know. In fact, now I realize that we need a similar if-statement to bypass the cloud-to-ground coefficient when using Bin's data...

@jkshuman (Contributor)
@slevisconsulting I agree that Bin's successful-strike dataset is a special case; it should use a conditional for FDI and, yes, would need to bypass that cloud-to-ground reduction on lightning. For the FDI conditional on Bin's data, @slevisconsulting @lmkueppers @pollybuotte, it would be worth adding a flag or print statement in situations where there is an ignition and FDI is set to 1, but FDI would have indicated low ignition potential without this data. We could print both the FDI of 1 from Bin's strike data and the calculated FDI, and then look at that alongside the climate and fuel data. Those differences would provide information about how the climate forcing data and the acc_NI equation are potentially missing details. That would be a nice evaluation with Bin's data, and a nice evaluation of this part of the fire model. Let's talk about that. (FDI affects area burned and fire duration, so these differences carry through to other parts of the fire code.)

@pollybuotte
I'd like to request an additional check that the lightning file has been read. This would prevent a user from running with the wrong CLM branch. As it is now, running with fates_next_api does not cause a fail, but no fire results because there are no ignitions.
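A sketch of the kind of guard being requested, on the CLM side; spval and endrun are standard CTSM utilities, while forc_lnfm and the surrounding logic are assumptions:

   ! Abort early if SPITFIRE expects lightning but the stream was never read
   ! (spval marks data that was never filled).
   if (use_fates_spitfire .and. all(this%forc_lnfm(begg:endg) == spval)) then
      call endrun(msg='SPITFIRE lightning requested but no lightning stream was read')
   end if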

@slevis-lmwg (Contributor, Author) commented Mar 26, 2020

@jkshuman updated me as follows:
The lightning work was updated to a more recent FATES branch here:
FATES branch: https://github.com/jkshuman/fates/tree/fire-threshold-tag131-api80-lightning
Corresponding CTSM branch: https://github.com/rgknox/ctsm/tree/api_8_works_w_read_lightning

Also...
@pollybuotte encountered an error when setting use_spitfire = .false. I asked her to try the following:

In src/main/clm_initializeMod.F90, where you see

   if (use_fates) then
      call sfmain_inst%initAccVars(bounds_proc)

change the first line to say

   if (use_fates .and. use_fates_spitfire) then

Do the same in src/main/clm_driver.F90, where you see

   if (use_fates) then
      call sfmain_inst%UpdateAccVars(bounds_proc)

use_fates_spitfire also needs to be added to the corresponding use statements:

   use clm_varctl, only: ...

@pollybuotte confirmed that the above works.

@jkshuman (Contributor) commented Apr 2, 2020

@slevisconsulting @pollybuotte @ckoven
In the interim, while this development happens, I merged the old lightning work into a more recent FATES tag:
FATES tag: https://github.com/jkshuman/fates/tree/tag_sci.1.33.1_api.8.1.0-lightning
CTSM tag: https://github.com/jkshuman/CTSM/tree/api_8.1.0_works_w_read_lightning

I am testing in the tropics and it seems fine so far. @pollybuotte said she would test in CA. Let me know if there is a problem anywhere. (I hate messing with the api...)

@slevis-lmwg (Contributor, Author)
Inviting @ekluzek to #562.

@ekluzek will open a corresponding issue on the ctsm side with his proposed approach. Thank you for your help with this, Erik.

@rgknox (Contributor) commented Apr 17, 2020

A plan we talked about is to change the use_fates_spitfire namelist parameter in CLM from a binary switch to an integer flag (sketched below), where:
0 means it's off
1 means SPITFIRE should be active, but without external datasets
2, 3, ... mean various dataset combinations are available to FATES SPITFIRE from the HLM and should be expected (lightning strikes, human density, GDP, etc.)

@jkshuman @rosiealice @lmkueppers @ckoven
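A minimal sketch of how the integer flag could be decoded; levels 0 and 1 follow the list above, and the rest is illustrative:

   select case (use_fates_spitfire)
   case (0)
      ! fire off
   case (1)
      ! SPITFIRE active, constant ED_val_nignitions, no external datasets
   case (2)
      ! SPITFIRE active, HLM supplies a lightning dataset
   case default
      ! further levels: lightning plus population density, GDP, etc.
   end select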

@ekluzek (Collaborator) commented Apr 19, 2020

The CTSM issue this corresponds with is http://github.com/ESCOMP/CTSM/issues/982

@ekluzek (Collaborator) commented Apr 19, 2020

@rgknox from looking into the CTSM side, I think it probably makes the most sense to keep the use_fates_spitfire namelist parameter, but also pass the fire_method character string into FATES. It would indicate different data levels, something like fatesnofiredata and fatesfirelnfm; later, other data levels could be added: fateslNpopdens, fateslNpNgdp, fateslNpNgNag. Most of this would be on the HLM side; it's just useful for FATES to check it to see what data is available from the HLM.

@ekluzek (Collaborator) commented Apr 20, 2020

Actually, now I take that back. You could change it to use one type to signify whether SPITFIRE is on and what level of data is being sent to it. At first I thought it needed to use the fire_method character control string that is used in cnfire, but now I see that it doesn't need to be that way.

@ekluzek (Collaborator) commented Apr 20, 2020

OK, I've fleshed out a proposal in the CTSM issue. It removes most of what was added into FATES and puts it into the HLM. There, it extends an existing class with a FATES-specific fire data class, so there's no duplication of either FATES or CTSM code. It also removes calling CTSM modules from within FATES, which is an important design feature for FATES development (in order to support more than one HLM).

The thing that needs to be decided on the FATES side is how to trigger the different levels of fire data usage. You could use an integer, as suggested above by @rgknox, to trigger both SPITFIRE and the fire data level. On the CTSM side I was suggesting a character string called "fire_method", because that's how it's handled by CNFire in CTSM. But it wouldn't have to be handled that way; it just needs to be coordinated between the two.

@ckoven (Contributor) commented Aug 13, 2020

This got fixed by #635, so closing.
