diff --git a/.github/workflows/deploy-ss3-docs.yml b/.github/workflows/deploy-ss3-docs.yml index 91375e90..220cf01e 100644 --- a/.github/workflows/deploy-ss3-docs.yml +++ b/.github/workflows/deploy-ss3-docs.yml @@ -37,8 +37,8 @@ jobs: - name: render the rmd files run: | - rmarkdown::render("User_Guides/ss_model_tips/ss_model_tips.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs") - rmarkdown::render("User_Guides/getting_started/Getting_Started_SS.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs") + rmarkdown::render("User_Guides/ss3_model_tips/ss3_model_tips.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs") + rmarkdown::render("User_Guides/getting_started/Getting_Started_SS3.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs") shell: Rscript {0} - name: Deploy to GitHub pages diff --git a/10optional_inputs.tex b/10optional_inputs.tex index 8b916e5b..a47a1d90 100644 --- a/10optional_inputs.tex +++ b/10optional_inputs.tex @@ -2,17 +2,17 @@ \section{Optional Inputs} \hypertarget{WAA}{} \subsection{Empirical Weight-at-Age (wtatage.ss)} -The model has the capability to read empirical body weight at age for the population and each fleet, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. Selection of this option is done by setting an explicit switch near the top of the control file. The values are read from a separate file named, wtatage.ss. This file is only required to exist if this option is selected. +The model has the capability to read empirical body weight at age for the population and each fleet, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. Selection of this option is done by setting an explicit switch near the top of the control file. The values are read from a separate file named wtatage.ss. This file is only required to exist if this option is selected. The first value read is a single integer for the maximum age used in reading this file. So if the maximum age is 40, there will be 41 columns of weight-at-age entries to read, with the first column being for age 0. If the number of ages specified in this table is greater than the maximum age in the model, the extra weight-at-age values are ignored. If the number of ages in this table is less than the maximum age in the model, the weight-at-age data for the ages in the file are filled in for all unread ages out to the maximum age. The format of this input file is: -\begin{tabular}{l l l l l l l l l } +\begin{tabular}{l l l l l l l l l} \hline - 40 & \multicolumn{8}{l}{Maximum Age}\\ + 40 & \multicolumn{8}{l}{Maximum Age} \\ \hline - & & & Growth & Birth & & & & \Tstrut\\ + & & & Growth & Birth & & & & \Tstrut\\ Year & Season & Sex & Pattern & Season & Fleet & Age-0 & Age-1 & ... \Tstrut\Bstrut\\ \hline \-1971 & 1 & 1 & 1 & 1 & -2 & 0 & 0 & 0.1003 \Tstrut\\ @@ -59,14 +59,14 @@ \subsection{Empirical Weight-at-Age (wtatage.ss)} \subsection{runnumber.ss} -This file contains a single integer value. It is read when the program starts, incremented by 1, used when processing the profile value inputs (see below), used as an identifier in the batch output, then saved with the incremented value. Note that this incrementation may not occur if a run crashes. +This file contains a single integer value.
It is read when the program starts, incremented by 1, used when processing the profile value inputs (see below), used as an identifier in the batch output, then saved with the incremented value. Note that this incrementation may not occur if a run crashes. \subsection{profilevalues.ss} -This file contains information for changing the value of selected parameters for each run in a batch. In the control file, each parameter that will be subject to modification by profilevalues.ss is designated by setting its phase to -9999. +This file contains information for changing the value of selected parameters for each run in a batch. In the control file, each parameter that will be subject to modification by profilevalues.ss is designated by setting its phase to -9999. -The first value in profilevalues.ss is the number of parameters to be batched. This value MUST match the number of parameters with phase set equal to -9999 in the control file. The program performs no checks for this equality. If the value is zero in the first field, then nothing else will be read. Otherwise, the model will read runnumber * Nparameters values and use the last Nparameters of these to replace the initial values of parameters designated with phase = --9999 in the control file. +The first value in profilevalues.ss is the number of parameters to be batched. This value MUST match the number of parameters with phase set equal to -9999 in the control file. The program performs no checks for this equality. If the value is zero in the first field, then nothing else will be read. Otherwise, the model will read runnumber * Nparameters values and use the last Nparameters of these to replace the initial values of parameters designated with phase = -9999 in the control file. -Usage Note: If one of the batch runs crashes before saving the updated value of runnumber.ss, then the processing of the profilevalue.ss will not proceed as expected. Check the output carefully until a more robust procedure is developed. Also, this options was created before use of R became widespread. You probably can create a more flexible approach using R today. +Usage Note: If one of the batch runs crashes before saving the updated value of runnumber.ss, then the processing of profilevalues.ss will not proceed as expected. Check the output carefully until a more robust procedure is developed. Also, this option was created before use of R became widespread. You can probably create a more flexible approach using R today. \pagebreak \ No newline at end of file diff --git a/12runningSS.tex b/12runningSS3.tex similarity index 68% rename from 12runningSS.tex rename to 12runningSS3.tex index d38a71cf..b4614946 100644 --- a/12runningSS.tex +++ b/12runningSS3.tex @@ -1,13 +1,13 @@ \section{Running Stock Synthesis} \label{sec:RunningSS} \subsection{Command Line Interface} -The name of the SS3 executable files often contains the phrase ``safe'' or ``opt'' (for optimized). The safe version includes checking for out of bounds values and should always be used whenever there is a change to the data file. The optimized version runs slightly faster but can result in data not being included in the model as intended if the safe version has not been run first. A file named ``ss.exe'' is typically the safe version unless the result of renaming by the user. +The name of the SS3 executable file often contains the phrase ``safe'' or ``opt'' (for optimized).
The safe version includes checking for out of bounds values and should always be used whenever there is a change to the data file. The optimized version runs slightly faster but can result in data not being included in the model as intended if the safe version has not been run first. A file named ``ss3.exe'' is typically the safe version unless it is the result of renaming by the user. On Mac and Linux computers, the executable does not include an extension (like .exe on Windows). Running the executable from the DOS command line in Windows simply requires typing the executable name (without the .exe extension): \begin{quote} \begin{verbatim} - > ss + > ss3 \end{verbatim} \end{quote} @@ -16,8 +16,8 @@ \subsection{Command Line Interface} \begin{quote} \begin{verbatim} - > chmod a+x ss - > ./ss + > chmod a+x ss3 + > ./ss3 \end{verbatim} \end{quote} @@ -30,13 +30,13 @@ \subsection{Command Line Interface} As of ADMB 12.3, a new command called ``-hess\_step'' is available to use and documented in the \hyperlink{hess-step}{Using -hess\_step to do additional Newton steps using the inverse Hessian} \subsubsection{Example of DOS batch input file} -One file management approach is to put ss.exe in its own folder (example: C:\textbackslash SS\_model) and to put your input files in separate folder (example: C:\textbackslash My Documents \textbackslash SS\_runs). Then a DOS batch file in the SS\_runs folder can be run at the command line to start ss.exe. All output will appear in SS\_runs folder. +One file management approach is to put ss3.exe in its own folder (example: C:\textbackslash SS3\_model) and to put your input files in a separate folder (example: C:\textbackslash My Documents \textbackslash SS3\_runs). Then a DOS batch file in the SS3\_runs folder can be run at the command line to start ss3.exe. All output will appear in the SS3\_runs folder. -A DOS batch file (e.g., SS.bat) might contain some explicit ADMB commands, some implicit commands, and some DOS commands: +A DOS batch file (e.g., SS3.bat) might contain some explicit ADMB commands, some implicit commands, and some DOS commands: \begin{quote} \begin{verbatim} - c:\SS_model\ss.exe -cbs 5000000000 -gbs 50000000000 \%1 \%2 \%3 \%4 + c:\SS3_model\ss3.exe -cbs 5000000000 -gbs 50000000000 \%1 \%2 \%3 \%4 del ss.r0* del ss.p0* del ss.b0* @@ -44,9 +44,9 @@ \subsubsection{Example of DOS batch input file} \end{verbatim} \end{quote} -In this batch file, the -cbs and -gbs arguments allocate a large amount of memory for SS3 to use (you may need to edit these for your computer and SS3 configuration), and the \%1, \%2 etc. allows passing of command line arguments such as -nox or -nohess. You add more items to the list of \% arguments as needed. +In this batch file, the -cbs and -gbs arguments allocate a large amount of memory for SS3 to use (you may need to edit these for your computer and SS3 configuration), and the \%1, \%2, etc., allow passing of command line arguments such as -nox or -nohess. You can add more items to the list of \% arguments as needed. -An easy way to start a command line in your current directory (SS\_runs) is to create a shortcut to the DOS command line prompt. The shortcut's target would be: +An easy way to start a command line in your current directory (SS3\_runs) is to create a shortcut to the DOS command line prompt.
The shortcut's target would be: \begin{quote} \begin{verbatim} @@ -62,7 +62,7 @@ \subsubsection{Example of DOS batch input file} \end{verbatim} \end{quote} -An alternative shortcut is to have the executable within the model folder then use Ctrl+Shift+Right Click and then select either ``Open powershell window here'' or ``Open command window here'', depending upon your computer. From the command window the executable name can be typed along with additional inputs (e.g., -nohess) and the model run. If using the powershell type cmd and then hit enter prior to calling the model (ss). +An alternative shortcut is to have the executable within the model folder, then use Ctrl+Shift+Right Click and select either ``Open powershell window here'' or ``Open command window here'', depending upon your computer. From the command window the executable name can be typed along with additional inputs (e.g., -nohess) and the model run. If using the powershell, type cmd and hit enter prior to calling the model (ss3). \subsubsection{Simple Batch} @@ -74,7 +74,7 @@ \subsubsection{Simple Batch} del ss.cor del ss.std copy starter.r01 starter.ss - c:\admodel\ss\ss.exe -sdonly + c:\admodel\ss3\ss3.exe -sdonly copy ss.std ss-std01.txt \end{verbatim} \end{quote} @@ -82,7 +82,7 @@ \subsubsection{Simple Batch} The commands could be repeated again, except the output should be copied to a different file, e.g., ss-std02.txt. This sequence can be repeated an unlimited number of times. \subsubsection{Complicated Batch} -This second example processes 25 dat files from a different directory, each time using the same ctl and nam file. The loop index is used in the file names, and the output is searched for particular keywords to accumulate a few key results into the file SUMMARY.TXT. Comparable batch processing can be accomplished by using R or other script processing programs. +This second example processes 25 dat files from a different directory, each time using the same ctl and nam file. The loop index is used in the file names, and the output is searched for particular keywords to accumulate a few key results into the file SUMMARY.TXT. Comparable batch processing can be accomplished by using R or other script processing programs. \begin{quote} \begin{verbatim} @@ -94,7 +94,7 @@ \subsubsection{Complicated Batch} del ss.std del ss.cor del ss.par - c:\admodel\ss\ss.exe + c:\admodel\ss3\ss3.exe copy /Y ss.par A1-D1-A1-%%i.par copy /Y ss.std A1-D1-A1-%%i.std find ``Number'' A1-D1-A1-%%i.par >> Summary.txt @@ -111,7 +111,7 @@ \subsubsection{Running Without Estimation} \begin{quote} \begin{verbatim} - ss -nohess + ss3 -nohess \end{verbatim} \end{quote} @@ -119,24 +119,24 @@ \subsubsection{Running Without Estimation} \begin{quote} \begin{verbatim} - ss -maxfn 0 -phase 50 -nohess + ss3 -maxfn 0 -phase 50 -nohess \end{verbatim} \end{quote} where -maxfn specifies the number of function calls and -phase is the maximum phase for the model to start estimation, where the number should be greater than the maximum phase for estimating parameters within the model. -However, the approaches above differ in subtle ways. First, if the maximum phase is set to 0 in the starter file the total likelihood will differ by a small amount (0.25 likelihood units) compared to the second approach which sets the maxfun and phase in the command window. This small difference is due a dummy parameter which is evaluated by the objective function when maximum phase in the starter is set to 0, resulting in a small contribution to the total likelihood of 0.25.
However, all other likelihood components should not change. +However, the approaches above differ in subtle ways. First, if the maximum phase is set to 0 in the starter file, the total likelihood will differ by a small amount (0.25 likelihood units) compared to the second approach, which sets -maxfn and -phase in the command window. This small difference is due to a dummy parameter which is evaluated by the objective function when the maximum phase in the starter is set to 0, resulting in a small contribution to the total likelihood of 0.25. However, all other likelihood components should not change. -The second difference between the two no estimation approaches is the reported number of ``Active\_count'' of parameters in the Report file. If the command line approach is used (ss -maxfn 0 -phase 50 -nohess) then the active number of parameters will equal the number of parameters with positive phases, but because the model is started in a phase greater than the maximum phase in the model, these parameters do not move from the initial values in the control file (or the par file). The first approach where the maximum phase is changed in the starter file will report the number of ``Active\_count'' parameters as 0. +The second difference between the two no-estimation approaches is the reported ``Active\_count'' of parameters in the Report file. If the command line approach is used (ss3 -maxfn 0 -phase 50 -nohess), then the active number of parameters will equal the number of parameters with positive phases, but because the model is started in a phase greater than the maximum phase in the model, these parameters do not move from the initial values in the control file (or the par file). The first approach, where the maximum phase is changed in the starter file, will report the ``Active\_count'' of parameters as 0. -The final thing to consider when running a model without estimation is whether you are starting from the par file or the control file. If you start from the par file (specified in the starter file: 1=use ss.par) then all parameters, including parameter deviations, will be fixed at the estimated values. However, if the model is not run with the par file, any parameter deviations (e.g., recruitment deviations) will not be included in the model run (a user could paste in the estimated recruitment deviations into the control file). +The final thing to consider when running a model without estimation is whether you are starting from the par file or the control file. If you start from the par file (specified in the starter file: 1=use ss.par), then all parameters, including parameter deviations, will be fixed at the estimated values. However, if the model is not run with the par file, any parameter deviations (e.g., recruitment deviations) will not be included in the model run (a user could paste the estimated recruitment deviations into the control file). \myparagraph{Generate .ss\_new files} -There may be times a user would like to generate the .ss\_new files without running the model, with or without estimation. There are two approaches that a user can take. The first is to manually change the maxphase in the starter.ss file to -1 and running the model as normal will generate these files without running through the model dynamics (e.g., no Report file will be created). The maxphase in the starter.ss\_new file will be set to -1 and will need to be manually changed back if the intent is the replace the original (i.e., starter.ss) file with the new files (i.e., starter.ss\_new).
The second approach is to modify the maxphase via the command line or power shell input. +There may be times a user would like to generate the .ss\_new files without running the model, with or without estimation. There are two approaches that a user can take. The first is to manually change the maxphase in the starter.ss file to -1; running the model as normal will then generate these files without running through the model dynamics (e.g., no Report file will be created). The maxphase in the starter.ss\_new file will be set to -1 and will need to be manually changed back if the intent is to replace the original (i.e., starter.ss) file with the new files (i.e., starter.ss\_new). The second approach is to modify the maxphase via the command line or power shell input. Calling the model using the commands: \begin{quote} \begin{verbatim} - ss -stopph -1 + ss3 -stopph -1 \end{verbatim} \end{quote} @@ -157,43 +157,43 @@ \subsubsection{Running Parameter Profiles} The first option is to use functions within \texttt{r4ss} to run the profile, summarize quantities across runs, and plot the output. The \texttt{SS\_profile()} function will run the profile based on function inputs, \texttt{SSgetoutput()} will read quantities from each run Report file, \texttt{SSsummarize()} will summarize key model quantities, and the \texttt{SSplotProfile()} and \texttt{PinerPlot()} functions can be used to visualize results. Additional information regarding \texttt{r4ss} can be found in the \hyperref[sec:r4ss]{r4ss section}. -The second way is to create and run a batch file to profile over parameters. This example will run a profile on natural mortality and spawner-recruitment steepness, of course. Edit the control file so that the natural mortality parameter and steepness parameter lines have the phase set to -9999. Edit starter.ss to refer to this control file and the appropriate data file. +The second way is to create and run a batch file to profile over parameters. This example will run a profile on natural mortality and spawner-recruitment steepness. Edit the control file so that the natural mortality parameter and steepness parameter lines have the phase set to -9999. Edit starter.ss to refer to this control file and the appropriate data file. %\begin{center} \begin{longtable}{p{0.5cm} p{16cm}} - & Create a profilevalues.ss file\\ - & 2 \# number of parameters using profile feature\\ - & 0.16 \# value for first selected parameter when runnumber equals 1\\ - & 0.35 \# value for second selected parameter when runnumber equals 1\\ - & 0.16 \# value for first selected parameter when runnumber equals 2\\ - & 0.40 \# value for second selected parameter when runnumber equals 2\\ - & 0.18 \# value for first selected parameter when runnumber equals 3\\ - & 0.40 \# value for second selected parameter when runnumber equals 3\\ - & etc.; make it as long as you like.\\ + & Create a profilevalues.ss file \\ + & 2 \# number of parameters using profile feature \\ + & 0.16 \# value for first selected parameter when runnumber equals 1 \\ + & 0.35 \# value for second selected parameter when runnumber equals 1 \\ + & 0.16 \# value for first selected parameter when runnumber equals 2 \\ + & 0.40 \# value for second selected parameter when runnumber equals 2 \\ + & 0.18 \# value for first selected parameter when runnumber equals 3 \\ + & 0.40 \# value for second selected parameter when runnumber equals 3 \\ + & etc.; make it as long as you like.
\\ \end{longtable} -Create a batch file that looks something like this. Or make it more complicated as in the example above. +Create a batch file that looks something like this, or make it more complicated as in the example above. \begin{quote} \begin{verbatim} del cumreport.sso copy /Y runnumber.zero runnumber.ss % so you will start with runnumber=0 - C:\SS330\ss.exe - C:\SS330\ss.exe - C:\SS330\ss.exe + C:\SS330\ss3.exe + C:\SS330\ss3.exe + C:\SS330\ss3.exe \end{verbatim} \end{quote} Repeat as many times as you have set up conditions in the profilevalues.ss file. -The summary results will all be collected in the cumreport.sso file. Each step of the profile will have an unique run number and its output will include the values of the natural mortality and steepness parameters for that run. +The summary results will all be collected in the cumreport.sso file. Each step of the profile will have a unique run number, and its output will include the values of the natural mortality and steepness parameters for that run. \subsubsection{Re-Starting a Run} -Model runs can be restarted from a previously estimated set of parameter values. In the starter.ss file, enter a value of 1 on the first numeric input line. This will cause the model to read the file ss.par and use these parameter values in place of the initial values in the control file. This option only works if the number of parameters to be estimated in the new run is the same as the number of parameters in the previous run because only actively estimated parameters are saved to the file ss.par. The file ss.par can be edited with a text editor, so values can be changed and rows can be added or deleted. However, if the resulting number of elements does not match the setup in the control file, then unpredictable results will occur. Because ss.par is a text file, the values stored in it will not give exactly the same initial results as the run just completed. To achieve greater numerical accuracy, the model can also restart from ss.bar which is the binary version of ss.par. In order to do this, the user must make the change described above to the starter.ss file and must also enter -binp ss.bar as one of the command line options. +Model runs can be restarted from a previously estimated set of parameter values. In the starter.ss file, enter a value of 1 on the first numeric input line. This will cause the model to read the file ss.par and use these parameter values in place of the initial values in the control file. This option only works if the number of parameters to be estimated in the new run is the same as the number of parameters in the previous run, because only actively estimated parameters are saved to the file ss.par. The file ss.par can be edited with a text editor, so values can be changed and rows can be added or deleted. However, if the resulting number of elements does not match the setup in the control file, then unpredictable results will occur. Because ss.par is a text file, the values stored in it will not give exactly the same initial results as the run just completed. To achieve greater numerical accuracy, the model can also restart from ss.bar, which is the binary version of ss.par. In order to do this, the user must make the change described above to the starter.ss file and must also enter -binp ss.bar as one of the command line options. \subsubsection{Optional Output Subfolders} -As of 3.30.19, users can optionally send .sso and .ss\_new extension files to subfolders. To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model.
To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model. To send files with a .ss\_new extension to a separate subfolder, create a folder called ssnew before running the model. +As of v.3.30.19, users can optionally send .sso and .ss\_new extension files to subfolders. To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model. To send files with a .ss\_new extension to a separate subfolder, create a folder called ssnew before running the model. \subsection{Putting Stock Synthesis in your PATH} @@ -201,24 +201,24 @@ \subsection{Putting Stock Synthesis in your PATH} \subsubsection{For Unix (OS X and Linux)} -To check if SS3 is in your path, assuming the binary is named SS: open a Terminal window and type \texttt{which SS} and hit enter. If you get nothing returned, then SS3 (named SS or SS.exe) is not in your path. The easiest way to fix this is to move the SS3 binary to a folder that's already in your path. To find existing path folders type \texttt{echo \$PATH} in the terminal and hit enter. Now move the SS3 binary to one of these folders. +To check if SS3 is in your path, assuming the binary is named SS3: open a Terminal window and type \texttt{which SS3} and hit enter. If you get nothing returned, then SS3 (named SS3 or SS3.exe) is not in your path. The easiest way to fix this is to move the SS3 binary to a folder that's already in your path. To find existing path folders type \texttt{echo \$PATH} in the terminal and hit enter. Now move the SS3 binary to one of these folders. For example, in a Terminal window type: \begin{quote} \begin{verbatim} - sudo cp ~/Downloads/SS /usr/bin/ + sudo cp ~/Downloads/SS3 /usr/bin/ \end{verbatim} \end{quote} -to move an binary called SS from the Downloads folder to \texttt{/usr/bin}. You will need to use \texttt{sudo} and enter your password after to have permission to move a file to a folder like \texttt{/usr/bin/}, because doing so edits the system for other users also. +to move an binary called SS3 from the Downloads folder to \texttt{/usr/bin}. You will need to use \texttt{sudo} and enter your password after to have permission to move a file to a folder like \texttt{/usr/bin/}, because doing so edits the system for other users also. -Also note that you may need to add executable permissions to the SS binary after downloading it. You can do that by switching to the folder where you placed the binary +Also note that you may need to add executable permissions to the SS3 binary after downloading it. You can do that by switching to the folder where you placed the binary (\texttt{cd /usr/bin/} if you followed the instructions above), and running the command: \begin{quote} \begin{verbatim} - sudo chmod +x SS + sudo chmod +x SS3 \end{verbatim} \end{quote} @@ -226,7 +226,7 @@ \subsubsection{For Unix (OS X and Linux)} \begin{quote} \begin{verbatim} - which SS + which SS3 \end{verbatim} \end{quote} @@ -234,7 +234,7 @@ \subsubsection{For Unix (OS X and Linux)} \begin{quote} \begin{verbatim} - /usr/bin/SS + /usr/bin/SS3 \end{verbatim} \end{quote} @@ -248,21 +248,21 @@ \subsubsection{For Unix (OS X and Linux)} \subsubsection{For Windows} -To check if SS3 is in your path for Windows, open a DOS prompt (either Command Prompt or Powershell should work) and type \texttt{SS -?} and hit enter. 
If the prompt returns a message like \texttt{SS is not recognized...}, then SS3 is not in your path (assuming the SS3 executable is called SS.exe). +To check if SS3 is in your path for Windows, open a DOS prompt (either Command Prompt or Powershell should work) and type \texttt{SS3 -?} and hit enter. If the prompt returns a message like \texttt{SS3 is not recognized...}, then SS3 is not in your path (assuming the SS3 executable is called SS3.exe). To add the SS3 binary file to your path, follow these steps: \begin{enumerate} - \item Find the correct version of the SS.exe binary on your computer (or download from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases}{SS3 releases}). - \item Move to and note the folder location. E.g., \texttt{C:/SS/} + \item Find the correct version of the SS3.exe binary on your computer (or download from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases}{SS3 releases}). + \item Move to and note the folder location. E.g., \texttt{C:/SS3/} \item Click on the start menu and type \texttt{environment} \item Choose \texttt{Edit environment variables for your account} under Control Panel \item Click on \texttt{PATH} if it exists, create it if it does not exist \item Choose `PATH` and click edit \item In the \texttt{Edit User Variable} window add to the end of the \texttt{Variable value} section a semicolon and the SS3 folder location you recorded earlier. - E.g., \texttt{;C:/SS}. Do not overwrite what was previously in the \texttt{PATH} variable. + E.g., \texttt{;C:/SS3}. Do not overwrite what was previously in the \texttt{PATH} variable. \item Restart your computer - \item Go back to the DOS prompt and try typing \texttt{SS -?} and hitting return again. + \item Go back to the DOS prompt and try typing \texttt{SS3 -?} and hitting return again. \end{enumerate} @@ -285,23 +285,26 @@ \subsection{Running Stock Synthesis from R} Running SS3 from within R may be desirable for setting up simulations where many runs of SS3 models are required (e.g., \href{https://github.com/ss3sim/ss3sim}{ss3sim}) or if \texttt{r4ss} is already used to read model output. \subsection{The Stock Synthesis GUI (SSI)} -\href{https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/5042951}{Stock Synthesis Interface} (SSI or the SS3 GUI) provides an interface for loading, editing, and running model files, and also can link to r4ss to generate plots. +\href{https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/5042951}{Stock Synthesis Interface} (SSI or the SS3 GUI) provides an interface for loading, editing, and running model files, and can also link to r4ss to generate plots. Note that SSI is not maintained for Stock Synthesis versions after v.3.30.21. + +\subsection{The Stock Assessment Continuum Tool} +\href{https://github.com/shcaba/SS-DL-tool}{The Stock Assessment Continuum Tool} (previously known as the Stock Synthesis Data-limited Tool) is a Shiny-based application that uses SS3 as the flexible framework to apply a variety of model types depending on the available data (catch time-series, age composition, length composition, abundance index data). It is meant to make SS3 accessible to users, open up many features and tools associated with Stock Synthesis, provide an easy way to enter data in the model, and make model specification and uncertainty exploration easier.
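+As a concrete illustration of the R-based workflow described above, the following minimal sketch runs SS3 and then reads the output with \texttt{r4ss}. This is a hedged example, not part of the official documentation: it assumes the executable is named ss3 and is on the PATH, that the \texttt{r4ss} package is installed, and that function arguments may differ between \texttt{r4ss} versions:
+
+\begin{quote}
+\begin{verbatim}
+# run SS3 in the current working directory, skipping the Hessian
+system("ss3 -nohess")
+
+# read Report.sso and generate the standard r4ss plots
+library(r4ss)
+replist <- SS_output(dir = getwd())
+SS_plots(replist)
+\end{verbatim}
+\end{quote}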
\subsection{Debugging Tips} -When input files are causing the program to crash or fail to produce sensible results, there are a few steps that can be taken to diagnose the problem. Before trying the steps below, examine the echoinput.sso file. It is highly annotated, so you should be able to see if the model is interpreting your input files as you intended. Additionally, users should check the warning.sso file when attempting to debug a non-running model. +When input files are causing the program to crash or fail to produce sensible results, there are a few steps that can be taken to diagnose the problem. Before trying the steps below, examine the echoinput.sso file. It is highly annotated, so you should be able to see if the model is interpreting your input files as you intended. Additionally, users should check the warning.sso file when attempting to debug a non-running model. \begin{enumerate} - \item Set the turn\_off\_phase switch to 0 in the starter.ss file. This will cause the mode to not attempt to adjust any parameters and simply converges a dummy parameter. It will still produce a Report.sso file, which can be examined to see what has been calculated from the initial parameter values. - \item Turn the verbosity level to 2 in the starter.ss file. This will cause the program to display the value of each likelihood component to the screen on each iteration. So it the program is creating an illegal computation (e.g., divide by zero), it may show you which likelihood component contains the problematic calculation. If the program is producing a Report.sso file, you may then see which observation is causing the illegal calculation. - \item Run the program with the command ss >>SSpipe.txt. This will cause all screen display to go to the specified text file (note, delete this file before running because it will be appended to). Examination of this file will show detailed statements produced during the reading and preprocessing of input files. - \item If the model fails to achieve a proper Hessian it exits without writing the detailed outputs in the FINAL\_SECTION. If this happens, you can do a run with the -nohess option so you can view the Report.sso to attempt to diagnose the problem. + \item Set the turn\_off\_phase switch to 0 in the starter.ss file. This will cause the model to not attempt to adjust any parameters and to simply converge a dummy parameter. It will still produce a Report.sso file, which can be examined to see what has been calculated from the initial parameter values. + \item Turn the verbosity level to 2 in the starter.ss file. This will cause the program to display the value of each likelihood component to the screen on each iteration. So if the program is creating an illegal computation (e.g., divide by zero), it may show you which likelihood component contains the problematic calculation. If the program is producing a Report.sso file, you may then see which observation is causing the illegal calculation. + \item Run the program with the command ss3 >>SSpipe.txt. This will cause all screen display to go to the specified text file (note, delete this file before running because it will be appended to). Examination of this file will show detailed statements produced during the reading and preprocessing of input files. + \item If the model fails to achieve a proper Hessian, it exits without writing the detailed outputs in the FINAL\_SECTION. If this happens, you can do a run with the -nohess option so you can view the Report.sso to attempt to diagnose the problem.
\item If the problem is with reading one or more of the input files, please note that certain Mac line endings cannot be read by the model (although this is a rare occurrence). Be sure to save the text files with Windows or Linux style line endings so that the executable can parse them. \end{enumerate} \subsection{Keyboard Tips} Typing ``N'' during a run will cause ADMB to immediately advance to the next phase of estimation. -Typing ``Q'' during a run will cause ADMB to immediately go to the final phase. This bypasses estimation of the Hessian and will produce all of the model outputs, which are coded in the FINAL\_SECTION. +Typing ``Q'' during a run will cause ADMB to immediately go to the final phase. This bypasses estimation of the Hessian and will produce all of the model outputs, which are coded in the FINAL\_SECTION. \subsection{Running MCMC} Running SS3 with MCMC can be done through command line options using the default ADMB MCMC algorithm (described below). Another possibility is using the R package adnuts. See the \href{https://cran.r-project.org/web/packages/adnuts/vignettes/adnuts.html}{adnuts vignette} for more information. The \href{https://www.admb-project.org/developers/mcmc/mcmc-guide-for-admb.pdf}{MCMC guide for ADMB} provides the most comprehensive guidance available for using MCMC with ADMB models (such as SS3). Additional guidance is available in \citet{monnahan2019overcoming}. @@ -315,7 +318,7 @@ \subsection{Running MCMC} \item Recommended: Remove existing .psv files in run directory to generate a new chain. \item Recommended: Before running, set the run detail switch in starter file to 0 to limit printing to the screen; reporting to screen will slow MCMC progress. \item Optional: Add \texttt{-nohess} to use the existing Hessian file without re-estimating. - \item Optional: To start the MCMC chain from specific values change the par file: run the model with estimation, adjust the par file to the values that the chain should start from, change within the starter file for the model to begin from the par file, and call the MCMC function using \texttt{ss -mcmc xxxx - mcsave yyyy -nohess -noest}. + \item Optional: To start the MCMC chain from specific values, change the par file: run the model with estimation, adjust the par file to the values that the chain should start from, change the starter file so the model begins from the par file, and call the MCMC function using \texttt{ss3 -mcmc xxxx -mcsave yyyy -nohess -noest}. \end{itemize} \noindent Run SS3 with argument -mceval to get more summaries diff --git a/13output.tex b/13output.tex index 246fb895..79f085cb 100644 --- a/13output.tex +++ b/13output.tex @@ -1,42 +1,42 @@ \section{Output Files} \subsection{Custom Reporting} -\hypertarget{custom}{Additional} user control for what is included in the Report.sso file was added in v.3.30.16. This approach allows for full customizing of what is printed to the Report file by selecting custom reporting (option = 3) in the starter file where specific items now can be included or excluded depending upon a list passed to SS3 from the starter file. The numbering system for each item in the Report file is as follows: +\hypertarget{custom}{Additional} user control for what is included in the Report.sso file was added in v.3.30.16.
This approach allows for full customizing of what is printed to the Report file by selecting custom reporting (option = 3) in the starter file where specific items now can be included or excluded depending upon a list passed to SS3 from the starter file. The numbering system for each item in the Report file is as follows: \begin{center} \begin{longtable}{p{1cm} p{6.5cm}p{1cm} p{6cm}} \hline - Num. & Report Item & Num. & Report Item\Tstrut\Bstrut\\ + Num. & Report Item & Num. & Report Item \Tstrut\Bstrut\\ \hline -1 & DEFINITIONS & 31 & LEN SELEX \\ -2 & LIKELIHOOD & 32 & AGE SELEX \\ -3 & Input Variance Adjustment & 33 & ENVIRONMENTAL DATA \\ -4 & Parm devs detail & 34 & TAG Recapture \\ -5 & PARAMETERS & 35 & NUMBERS-AT-AGE \\ -6 & DERIVED QUANTITIES & 36 & BIOMASS-AT-AGE \\ -7 & MGparm By Year after adjustments & 37 & NUMBERS-AT-LENGTH \\ -8 & selparm(Size) By Year after adjustments & 38 & BIOMASS-AT-LENGTH \\ -9 & selparm(Age) By Year after adjustments & 39 & F-AT-AGE \\ -10 & RECRUITMENT DIST & 40 & CATCH-AT-AGE \\ -11 & MORPH INDEXING & 41 & DISCARD-AT-AGE \\ +1 & DEFINITIONS & 31 & LEN SELEX \\ +2 & LIKELIHOOD & 32 & AGE SELEX \\ +3 & Input Variance Adjustment & 33 & ENVIRONMENTAL DATA \\ +4 & Parm devs detail & 34 & TAG Recapture \\ +5 & PARAMETERS & 35 & NUMBERS-AT-AGE \\ +6 & DERIVED QUANTITIES & 36 & BIOMASS-AT-AGE \\ +7 & MGparm By Year after adjustments & 37 & NUMBERS-AT-LENGTH \\ +8 & selparm(Size) By Year after adjustments & 38 & BIOMASS-AT-LENGTH \\ +9 & selparm(Age) By Year after adjustments & 39 & F-AT-AGE \\ +10 & RECRUITMENT DIST & 40 & CATCH-AT-AGE \\ +11 & MORPH INDEXING & 41 & DISCARD-AT-AGE \\ 12 & SIZEFREQ TRANSLATION & 42 & BIOLOGY \\ -13 & MOVEMENT & 43 & Natural Mortality \\ -14 & EXPLOITATION & 44 & AGE SPECIFIC K \\ -15 & CATCH & 45 & Growth Parameters \\ +13 & MOVEMENT & 43 & Natural Mortality \\ +14 & EXPLOITATION & 44 & AGE SPECIFIC K \\ +15 & CATCH & 45 & Growth Parameters \\ 16 & TIME SERIES & 46 & Seas Effects \\ -17 & SPR SERIES & 47 & Biology at age in endyr \\ -18 & Kobe Plot & 48 & MEAN BODY WT(Begin) \\ -19 & SPAWN RECRUIT & 49 & MEAN SIZE TIMESERIES \\ +17 & SPR SERIES & 47 & Biology at age in endyr \\ +18 & Kobe Plot & 48 & MEAN BODY WT(Begin) \\ +19 & SPAWN RECRUIT & 49 & MEAN SIZE TIMESERIES \\ 20 & SPAWN RECR CURVE & 50 & AGE LENGTH KEY \\ 21 & INDEX 1 & 51 & AGE AGE KEY \\ -22 & INDEX 2 & 52 & COMPOSITION DATABASE \\ -23 & INDEX 3 & 53 & SELEX database \\ +22 & INDEX 2 & 52 & COMPOSITION DATABASE \\ +23 & INDEX 3 & 53 & SELEX database \\ 24 & DISCARD SPECIFICATION & 54 & SPR/YPR Profile \\ 25 & DISCARD OUTPUT & 55 & GLOBAL MSY \\ 26 & MEAN BODY WT OUTPUT & 56 & SS\_summary.sso \\ -27 & FIT LEN COMPS & 57 & rebuilder.sso \\ -28 & FIT AGE COMPS & 58 & SIStable.sso \\ -29 & FIT SIZE COMPS & 59 & Dynamic Bzero \\ +27 & FIT LEN COMPS & 57 & rebuilder.sso \\ +28 & FIT AGE COMPS & 58 & SIStable.sso \\ +29 & FIT SIZE COMPS & 59 & Dynamic Bzero \\ 30 & OVERALL COMPS & 60 & wtatage.ss\_new \\ \hline \end{longtable} @@ -45,21 +45,21 @@ \subsection{Custom Reporting} \subsection{Standard ADMB output files} Standard ADMB files are created by SS3. These are: -ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS]{Running Stock Synthesis} for more info). +ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. 
This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS]{Running Stock Synthesis} for more info). -ss.std - This file has the parameter values and their estimated standard deviation for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also report in the covar.sso. Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations. +ss.std - This file has the parameter values and their estimated standard deviations for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also reported in the covar.sso. Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations. ss.rep - This report file is created between phases so, unlike Report.sso, it will be created even if the Hessian fails. It does not contain as much output as shown in Report.sso. -ss.cor - This is the standard ADMB report for parameter and standard deviation report correlations. It is in matrix form and challenging to interpret. This same information is reported in covar.sso. +ss.cor - This is the standard ADMB report for parameter and standard deviation report correlations. It is in matrix form and challenging to interpret. This same information is reported in covar.sso. \subsection{Stock Synthesis Summary} -The ss\_summary.sso file (available for versions 3.30.08.03 and later) is designed to put key model outputs all in one concise place. It is organized as a list. At the top of the file are descriptors, followed by the 1) likelihoods for each component, 2) parameters and their standard errors, and 3) derived quantities and their standard errors. Total biomass, summary biomass, and catch were added to the quantities reported in this file in version 3.30.11 and later. +The ss\_summary.sso file (available for versions 3.30.08.03 and later) is designed to put key model outputs all in one concise place. It is organized as a list. At the top of the file are descriptors, followed by the 1) likelihoods for each component, 2) parameters and their standard errors, and 3) derived quantities and their standard errors. Total biomass, summary biomass, and catch were added to the quantities reported in this file in v.3.30.11 and later. -Before 3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME\_SERIES table of Report.sso. The report file should be used instead of ss\_summary.sso for correct calculation of these quantities before 3.30.17. Care should be taken when using the TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in report.sso for calculations done at times other than January 1. +Before v.3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME\_SERIES table of Report.sso.
The report file should be used instead of ss\_summary.sso for correct calculation of these quantities before v.3.30.17. Care should be taken when using the TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in report.sso for calculations done at times other than January 1. \subsection{SIS table} -The SIS\_table.sso is deprecated as of SS3 v.3.30.17. Please use the \hyperref[sec:r4ss]{r4ss} function \texttt{get\_SIS\_info()} instead. +The SIS\_table.sso is deprecated as of v.3.30.17. Please use the \hyperref[sec:r4ss]{r4ss} function \texttt{get\_SIS\_info()} instead. The SIS\_table.sso file contains model output formatted for reading into the NMFS Species Information System (\href{https://www.st.nmfs.noaa.gov/sis/}{SIS}). This file includes an assessment summary for categories of information (abundance, recruitment, spawners, catch estimates) that are input into the SIS database. A time-series of estimated quantities which aggregates estimates across multiple areas and seasons is provided to summarize model results. Access to the SIS database is granted to all NOAA employees. @@ -68,7 +68,7 @@ \subsection{Derived Quantities} \hypertarget{VirginUnfished}{} \subsubsection{Virgin Spawning Biomass vs Unfished Spawning Biomass} -Unfished is the condition for which reference points (benchmark) are calculated. Virgin Spawning Biomass (B0) is the initial condition on which the start of the time-series depends.If biology or spawner-recruitment parameters are time-varying, then the benchmark year input in the forecast file tells the model which years to average in order to calculate ``unfished''. In this case, virgin recruitment and/or the virgin spawning biomass will differ from their unfished counterparts. Virgin recruitment and spawning biomass are reported in the mgmt\_quant portion of the sd\_report and are now labeled as ``unfished'' for clarity. Note that if ln(R0) is time-varying, then this will cause unfished to differ from virgin. However, if regime shift parameter is time-varying, then unfished will remain the same as virgin because the regime shift is treated as a temporary offset from virgin. Virgin spawning biomass is denoted as SPB\_virgin and spawning biomass unfished is denoted as SPB\_unf in the report file. +Unfished is the condition for which reference points (benchmarks) are calculated. Virgin Spawning Biomass (B0) is the initial condition on which the start of the time-series depends. If biology or spawner-recruitment parameters are time-varying, then the benchmark year input in the forecast file tells the model which years to average in order to calculate ``unfished''. In this case, virgin recruitment and/or the virgin spawning biomass will differ from their unfished counterparts. Virgin recruitment and spawning biomass are reported in the mgmt\_quant portion of the sd\_report and are now labeled as ``unfished'' for clarity. Note that if ln(R0) is time-varying, then this will cause unfished to differ from virgin. However, if the regime shift parameter is time-varying, then unfished will remain the same as virgin because the regime shift is treated as a temporary offset from virgin. Virgin spawning biomass is denoted as SPB\_virgin and spawning biomass unfished is denoted as SPB\_unf in the report file.
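+As a quick check on a fitted model, these two labels can be pulled directly out of the report file, e.g., in R (a minimal sketch; the exact label strings may vary by SS3 version):
+
+\begin{quote}
+\begin{verbatim}
+# find the virgin and unfished spawning biomass lines in Report.sso
+rep <- readLines("Report.sso")
+rep[grep("SPB_virgin|SPB_unf", rep)]
+\end{verbatim}
+\end{quote}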
Virgin Spawning Biomass (B0) is used in four ways within SS3: \begin{enumerate} @@ -83,9 +83,9 @@ \subsubsection{Metric for Fishing Mortality} A generic single metric of annual fishing mortality is difficult to define in a generalized model that admits multiple areas, multiple biological cohorts, dome-shaped selectivity in size and age for each of many fleets. Several separate indices are provided and others could be calculated by a user from the detailed information in Report.sso. \subsubsection{Equilibrium SPR} -This index focuses on the effect of fishing on the spawning potential of the stock. It is calculated as the ratio of the equilibrium reproductive output per recruit that would occur with the current year's F intensities and biology, to the equilibrium reproductive output per recruit that would occur with the current year's biology and no fishing. Thus it internalizes all seasonality, movement, weird selectivity patterns, and other factors. Because this index moves in the opposite direction than F intensity itself, it is usually reported as 1-SPR. A benefit of this index is that it is a direct measure of common proxies used for F\textsubscript{MSY}, such as F\textsubscript {40\%}. A shortcoming of this index is that it does not directly demonstrate the fraction of the stock that is caught each year. The SPR value is also calculated in the benchmarks (see below). +This index focuses on the effect of fishing on the spawning potential of the stock. It is calculated as the ratio of the equilibrium reproductive output per recruit that would occur with the current year's F intensities and biology, to the equilibrium reproductive output per recruit that would occur with the current year's biology and no fishing. Thus it internalizes all seasonality, movement, weird selectivity patterns, and other factors. Because this index moves in the opposite direction than F intensity itself, it is usually reported as 1-SPR. A benefit of this index is that it is a direct measure of common proxies used for F\textsubscript{MSY}, such as F\textsubscript {40\%}. A shortcoming of this index is that it does not directly demonstrate the fraction of the stock that is caught each year. The SPR value is also calculated in the benchmarks (see below). -The derived quantities report shows an annual SPR statistic. The options, as specified in the starter.ss file, are: +The derived quantities report shows an annual SPR statistic. The options, as specified in the starter.ss file, are: \begin{itemize} \item 0 = skip \item 1 = (1-SPR)/(1-SPR\textsubscript{TGT}) @@ -94,27 +94,27 @@ \subsubsection{Equilibrium SPR} \item 4 = raw SPR \end{itemize} -The SPR approach to measuring fishing intensity was implemented because the concept of a single annual F does not exist in SS3 because F varies by age, sex, and growth morph and season and area. There is no single F value that is applied to all ages unless you create a very simple model setup with knife-edge selectivity. So, what you see in the options are various ways to calculate annual fishing intensity. They can be broken down into three categories. One is exploitation rate calculated simply as total catch divided by biomass from a defined age range. Another is SPR, which is a single measure of the equilibrium effect of fishing according to the F. The third category are various ways to calculate an average F. Some measures of fishing intensity will be misleading if applied inappropriately. 
For example, the sum of the apical F's will be misleading if different fleets have very different selectivities or, worse, if they occur in different areas. The F=Z-M approach to getting fishing intensity is a way to have a single F that represents a number's weighted value across multiple areas, sexes, morphs, ages. An important distinction is that the exploitation rate and F-based approaches directly relate to the fraction of the population removed each year by fishing; whereas the SPR approach represents the cumulative effect of fishing so it's equivalent in F-space depends on M. +The SPR approach to measuring fishing intensity was implemented because the concept of a single annual F does not exist in SS3: F varies by age, sex, growth morph, season, and area. There is no single F value that is applied to all ages unless you create a very simple model setup with knife-edge selectivity. So, what you see in the options are various ways to calculate annual fishing intensity. They can be broken down into three categories. One is exploitation rate, calculated simply as total catch divided by biomass from a defined age range. Another is SPR, which is a single measure of the equilibrium effect of fishing according to the F. The third category comprises various ways to calculate an average F. Some measures of fishing intensity will be misleading if applied inappropriately. For example, the sum of the apical F's will be misleading if different fleets have very different selectivities or, worse, if they occur in different areas. The F=Z-M approach to getting fishing intensity is a way to have a single F that represents a numbers-weighted value across multiple areas, sexes, morphs, and ages. An important distinction is that the exploitation rate and F-based approaches directly relate to the fraction of the population removed each year by fishing, whereas the SPR approach represents the cumulative effect of fishing, so its equivalent in F-space depends on M. \subsubsection{F std} -This index provides a direct measure of fishing mortality. The options are: +This index provides a direct measure of fishing mortality. The options are: \begin{itemize} \item 0 = skip \item 1 = exploitation(Bio) \item 2 = exploitation(Num) \item 3 = sum(Frates) \end{itemize} -The exploitation rates are calculated as the ratio of the total annual catch (in either biomass or numbers as specified) to the summary biomass or summary numbers on January 1. The sum of the F rates is simply the sum of all the apical Fs. This makes sense if the F method is in terms of instantaneous F (not Pope's approximation) and if there are not fleets with widely different size/age at peak selectivity, and if there is no seasonality, and especially if there is only one area.
In the derived quantities, there is an annual statistic that is the ratio of the can be annual F\_std value to the corresponding benchmark statistic. The available options for the denominator are: +The exploitation rates are calculated as the ratio of the total annual catch (in either biomass or numbers as specified) to the summary biomass or summary numbers on January 1. The sum of the F rates is simply the sum of all the apical Fs. This makes sense if the F method is in terms of instantaneous F (not Pope's approximation) and if there are not fleets with widely different size/age at peak selectivity, and if there is no seasonality, and especially if there is only one area. In the derived quantities, there is an annual statistic that is the ratio of the annual F\_std value to the corresponding benchmark statistic. The available options for the denominator are: \begin{itemize} \item 0 = raw \item 1 = F/F\textsubscript {SPR} \item 2 = F/F\textsubscript {MSY} \item 3 = F/F\textsubscript {Btarget} - \item >= 11 A new option to allow for the calculation of a multi-year trailing average in F was implemented in v. 3.30.16. This option is triggered by appending the number of years to calculate the average across where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average. + \item >= 11 A new option to allow for the calculation of a multi-year trailing average in F was implemented in v.3.30.16. This option is triggered by appending the number of years to calculate the average across, where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively, a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average. \end{itemize} \subsubsection{F-at-Age} -Because the annual F is so difficult to interpret as a sum of individual F components, an indirect calculation of F-at-age is reported at the end of the report.sso file. This section of the report calculates Z-at-age simply as $ln(N_{a+1,t+1}/N_{a,t})$. This is done on an annual basis and summed over all areas. It is done once using the fishing intensities as estimated (to get Z), and once with the F intensities set to 0.0 to get M-at-age. This latter sequence also provides a measure of dynamic Bzero. The user can then subtract the table of M-at-age/year from the table of Z-at-age/year to get a table of F-at-age/year. From this apical F, average F over a range of ages, or other user-desired statistics could be calculated. Further work within SS3 with this table of values is anticipated. +Because the annual F is so difficult to interpret as a sum of individual F components, an indirect calculation of F-at-age is reported at the end of the report.sso file. This section of the report calculates Z-at-age simply as $ln(N_{a,t}/N_{a+1,t+1})$. This is done on an annual basis and summed over all areas. It is done once using the fishing intensities as estimated (to get Z), and once with the F intensities set to 0.0 to get M-at-age. This latter sequence also provides a measure of dynamic Bzero. The user can then subtract the table of M-at-age/year from the table of Z-at-age/year to get a table of F-at-age/year. From this, apical F, average F over a range of ages, or other user-desired statistics could be calculated. Further work within SS3 with this table of values is anticipated. \subsubsection{MSY and other Benchmark Items} The following quantities are included in the sdreport vector mgmt\_quantities and so have estimates of variance. Some additional quantities can be found in the benchmarks section of the forecast\_report.sso.
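+Because these quantities are in the sdreport vector, their estimates and standard errors also appear in the ADMB ss.std file and can be examined outside of SS3, e.g., in R (a minimal sketch; it assumes the default ss.std file name and the usual four-column ADMB .std layout, and the exact row naming may vary by ADMB version):
+
+\begin{quote}
+\begin{verbatim}
+# read the ADMB std file, skipping its header line
+std <- read.table("ss.std", skip = 1,
+                  col.names = c("index", "name", "value", "stddev"))
+# rows of the mgmt_quantities sdreport vector
+# (one per benchmark item in the table below)
+std[grep("mgmt_quant", std$name), ]
+\end{verbatim}
+\end{quote}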
@@ -122,12 +122,12 @@ \subsubsection{MSY and other Benchmark Items} \begin{center} \begin{longtable}{p{4cm} p{11cm}} \hline - Benchmark Item & Description\Tstrut\Bstrut\\ + Benchmark Item & Description \Tstrut\Bstrut\\ \hline \endfirsthead \hline - Benchmark Item & Description\Tstrut\Bstrut\\ + Benchmark Item & Description \Tstrut\Bstrut\\ \hline \endhead @@ -135,22 +135,22 @@ \subsubsection{MSY and other Benchmark Items} \hline \endlastfoot - SSB\_Unfished \Tstrut& Unfished reproductive potential (SSB is commonly female mature spawning biomass).\\ - TotBio\_Unfished \Tstrut& Total age 0+ biomass on January 1.\\ - SmryBio\_Unfished \Tstrut& Biomass for ages at or above the summary age on January 1.\\ - Recr\_Unfished \Tstrut& Unfished recruitment.\\ - SSB\_Btgt \Tstrut& SSB at user specified SSB target.\\ - SPR\_Btgt \Tstrut& Spawner potential ratio (SPR) at F intensity that produces user specified SSB target.\\ - Fstd\_Btgt \Tstrut& F statistic at F intensity that produces user specified SSB target.\\ - TotYield\_Btgt \Tstrut& Total yield at F intensity that produces user specified SSB target.\\ - SSB\_SPRtgt \Tstrut& SSB at user specified SPR target (but taking into account the spawner-recruitment relationship).\\ - Fstd\_SPRtgt \Tstrut& F intensity that produces user specified SPR target.\\ - TotYield\_SPRtgt \Tstrut& Total yield at F intensity that produces user specified SPR target.\\ - SSB\_MSY \Tstrut& SSB at F intensity that is associated with MSY; this F intensity may be directly calculated to produce MSY, or can be mapped to F\_SPR or F\_Btgt.\\ - SPR\_MSY \Tstrut& Spawner potential ratio (SPR) at F intensity associated with MSY.\\ - Fstd\_MSY \Tstrut& F statistic at F intensity associated with MSY.\\ - TotYield\_MSY \Tstrut& Total yield (biomass) at MSY.\\ - RetYield\_MSY \Tstrut& Retained yield (biomass) at MSY.\Bstrut\\ + SSB\_Unfished \Tstrut & Unfished reproductive potential (SSB is commonly female mature spawning biomass). \\ + TotBio\_Unfished \Tstrut & Total age 0+ biomass on January 1. \\ + SmryBio\_Unfished \Tstrut & Biomass for ages at or above the summary age on January 1. \\ + Recr\_Unfished \Tstrut & Unfished recruitment. \\ + SSB\_Btgt \Tstrut & SSB at user specified SSB target. \\ + SPR\_Btgt \Tstrut & Spawner potential ratio (SPR) at F intensity that produces user specified SSB target. \\ + Fstd\_Btgt \Tstrut & F statistic at F intensity that produces user specified SSB target. \\ + TotYield\_Btgt \Tstrut & Total yield at F intensity that produces user specified SSB target. \\ + SSB\_SPRtgt \Tstrut & SSB at user specified SPR target (but taking into account the spawner-recruitment relationship). \\ + Fstd\_SPRtgt \Tstrut & F intensity that produces user specified SPR target. \\ + TotYield\_SPRtgt \Tstrut & Total yield at F intensity that produces user specified SPR target. \\ + SSB\_MSY \Tstrut & SSB at F intensity that is associated with MSY; this F intensity may be directly calculated to produce MSY, or can be mapped to F\_SPR or F\_Btgt. \\ + SPR\_MSY \Tstrut & Spawner potential ratio (SPR) at F intensity associated with MSY. \\ + Fstd\_MSY \Tstrut & F statistic at F intensity associated with MSY. \\ + TotYield\_MSY \Tstrut & Total yield (biomass) at MSY. \\ + RetYield\_MSY \Tstrut & Retained yield (biomass) at MSY. 
\Bstrut\\ \end{longtable} \end{center} @@ -159,14 +159,14 @@ \subsection{Brief cumulative output} \hypertarget{bootstrap}{} \subsection{Bootstrap Data Files} -It is possible to create bootstrap data files for SS3 where an internal parametric bootstrap function generates a simulated data set by parametric bootstrap sampling the expected values given the input observation error. Starting in version 3.30.19, bootstrap data files are output separated in single numbered files (e.g., data\_boot\_001.ss). In version prior to version 3.30.19 a single file called data.ss\_new was output that contained multiple sections: the original data echoed out, the expected data values based on the model fit, and then subsequent bootstrap data files. +It is possible to create bootstrap data files for SS3 where an internal parametric bootstrap function generates a simulated data set by parametric bootstrap sampling the expected values given the input observation error. Starting in v.3.30.19, bootstrap data files are output as separate numbered files (e.g., data\_boot\_001.ss). In versions prior to v.3.30.19, a single file called data.ss\_new was output that contained multiple sections: the original data echoed out, the expected data values based on the model fit, and then subsequent bootstrap data files. -Specifying the number of bootstrap data files has remained the same across model versions. Creating bootstrap data files is specified in the starter file via the ``Number of datafiles to produce'' line where a value of 3 or greater will create three files: the original data file, data\_echo.ss\_new, a data file with the model expected values, data\_expval.ss, and single bootstrap data file, data\_boot\_001.ss. The first output provides the unaltered input data file (with annotations added). The second provides the expected values for only the data elements used in the model run. The third and subsequent outputs provide parametric bootstraps around the expected values. +Specifying the number of bootstrap data files has remained the same across model versions. Creating bootstrap data files is specified in the starter file via the ``Number of datafiles to produce'' line, where a value of 3 or greater will create at least three files: an echo of the original data file, data\_echo.ss\_new, a data file with the model expected values, data\_expval.ss, and a single bootstrap data file, data\_boot\_001.ss. The first output provides the unaltered input data file (with annotations added). The second provides the expected values for only the data elements used in the model run. The third and subsequent outputs provide parametric bootstraps around the expected values. The bootstrapping procedure within SS3 is done via the following steps: \begin{itemize} - \item Expected values of all input data are calculated (these are also used in the likelihood which compares observed to expected values for all data). The calculation of these expected values is described in detail under the ``Observation Model'' section of the appendix to \citet{methotstock2013}. \ + \item Expected values of all input data are calculated (these are also used in the likelihood which compares observed to expected values for all data). The calculation of these expected values is described in detail under the ``Observation Model'' section of the appendix to \citet{methotstock2013}.
\item Parametric bootstrap data are calculated for each observation by sampling from a probability distribution corresponding to the likelihood for that data type using the expected values noted above. Examples of how this happens include the following: @@ -193,35 +193,35 @@ \subsection{Bootstrap Data Files} \item Often there is need to explore the removal (not include in the model fitting) of specific years in a data set which can be done by specifying a negative fleet number. If bootstrapping a data file, note that specifying a negative fleet in the data inputs for indices, length composition, or age composition will include the ``observation'' in the model (hence generating predicted values and bootstrap data sets for the data), but not in the negative log likelihood. The ``observation values'' used with negative fleet do not influence the predicted values, except when using tail compression with length or age composition. Non-zero values greater than the minimum tail compression should be used for the observation values when tail compression is being used, as using zeros or values smaller than the minimum tail compression can cause the predicted values to be reported as zero and shift predictions to other bins. - \item As of SS3 v.3.30.15, age and length composition data that use the Dirichlet-Multinomial distribution in the model are generated using the Dirichlet-Multinomial in bootstrap data sets. + \item As of v.3.30.15, age and length composition data that use the Dirichlet-Multinomial distribution in the model are generated using the Dirichlet-Multinomial in bootstrap data sets. \end{itemize} \subsection{Forecast and Reference Points (Forecast-report.sso)} -The Forecast-report file contains output of fishery reference points and forecasts. It is designed to meet the needs of the Pacific Fishery Management Council's Groundfish Fishery Management Plan, but it should be quite feasible to develop other regionally specific variants of this output. +The Forecast-report file contains output of fishery reference points and forecasts. It is designed to meet the needs of the Pacific Fishery Management Council's Groundfish Fishery Management Plan, but it should be quite feasible to develop other regionally specific variants of this output. -The vector of forecast recruitment deviations is estimated during an additional model estimation phase. This vector includes any years after the end of the recruitment deviation time series and before or at the end year. When this vector starts before the ending year of the time series, then the estimates of these recruitments will be influenced by the data in these final years. This is problematic, because the original reason for not estimating these recruitments at the end of the time series was the poor signal/noise ratio in the available data. It is not that these data are worse than data from earlier in the time series, but the low amount of data accumulated for each cohort allows an individual datum to dominate the model's fit. Thus, an additional control is provided so that forecast recruitment deviations during these years can receive an extra weighting in order to counter-balance the influence of noisy data at the end of the time series. +The vector of forecast recruitment deviations is estimated during an additional model estimation phase. This vector includes any years after the end of the recruitment deviation time series and before or at the end year. 
When this vector starts before the ending year of the time series, then the estimates of these recruitments will be influenced by the data in these final years. This is problematic, because the original reason for not estimating these recruitments at the end of the time series was the poor signal/noise ratio in the available data. It is not that these data are worse than data from earlier in the time series, but the low amount of data accumulated for each cohort allows an individual datum to dominate the model's fit. Thus, an additional control is provided so that forecast recruitment deviations during these years can receive an extra weighting in order to counter-balance the influence of noisy data at the end of the time series. -An additional control is provided for the fraction of the log-bias adjustment to apply to the forecast recruitments. Recall that R is the expected mean level of recruitment for a particular year as specified by the spawner-recruitment curve and R' is the geometric mean recruitment level calculated by discounting R with the log-bias correction factor $e-0.5s^2$. Thus a lognormal distribution of recruitment deviations centered on R' will produce a mean level of recruitment equal to R. During the modeled time series, the virgin recruitment level and any recruitments prior to the first year of recruitment deviations are set at the level of R, and the lognormal recruitment deviations are centered on the R' level. For the forecast recruitments, the fraction control can be set to 1.0 so that 100\% of the log-bias correction is applied and the forecast recruitment deviations will be based on the R' level. This is certainly the configuration to use when the model is in MCMC mode. Setting the fraction to 0.0 during maximum likelihood forecasts would center the recruitment deviations, which all have a value of 0.0 in maximum likelihood mode, on R. Thus would provide a mean forecast that would be more comparable to the mean of the ensemble of forecasts produced in MCMC mode. Further work on this topic is underway. +An additional control is provided for the fraction of the log-bias adjustment to apply to the forecast recruitments. Recall that R is the expected mean level of recruitment for a particular year as specified by the spawner-recruitment curve and R' is the geometric mean recruitment level calculated by discounting R with the log-bias correction factor $e^{-0.5\sigma_R^2}$. Thus a lognormal distribution of recruitment deviations centered on R' will produce a mean level of recruitment equal to R. During the modeled time series, the virgin recruitment level and any recruitments prior to the first year of recruitment deviations are set at the level of R, and the lognormal recruitment deviations are centered on the R' level. For the forecast recruitments, the fraction control can be set to 1.0 so that 100\% of the log-bias correction is applied and the forecast recruitment deviations will be based on the R' level. This is certainly the configuration to use when the model is in MCMC mode. Setting the fraction to 0.0 during maximum likelihood forecasts would center the recruitment deviations, which all have a value of 0.0 in maximum likelihood mode, on R. This would provide a mean forecast that would be more comparable to the mean of the ensemble of forecasts produced in MCMC mode. Further work on this topic is underway.
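For reference, the relationship being scaled by this fraction control can be written out explicitly (using $\sigma_R$ for the standard deviation of the recruitment deviations; this restates the text above rather than adding a new option):
\begin{equation}
R' = R \cdot e^{-0.5\sigma_R^2}, \qquad E\left[R' \cdot e^{\tilde{r}}\right] = R \cdot e^{-0.5\sigma_R^2} \cdot e^{0.5\sigma_R^2} = R \text{ for } \tilde{r} \sim N(0,\sigma_R^2),
\end{equation}
so lognormal deviations centered on the geometric mean level R' produce an arithmetic mean recruitment of R.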
Note: \begin{itemize} \item Cohorts continue growing according to their specific growth parameters in the forecast period rather than staying static at the end year values. - \item Environmental data entered for future years can be used to adjust expected recruitment levels. However, environmental data will not affect growth or selectivity parameters in the forecast. + \item Environmental data entered for future years can be used to adjust expected recruitment levels. However, environmental data will not affect growth or selectivity parameters in the forecast. \end{itemize} The top of the Forecast-report file shows the search for F\textsubscript {SPR} and the search for F\textsubscript {MSY}, allowing the user to verify convergence. Note: if the STD file shows aberrant results, such as all the standard deviations being the same value for all recruitments, then check the F\textsubscript {MSY} search for convergence. The F\textsubscript {MSY} can be calculated, or set equal to one of the other F reference points per the selection made in starter.ss. \subsection{Main Output File, Report.sso} -This is the primary output file. Its major sections (as of SS3 v.3.30.16) are listed below. +This is the primary output file. Its major sections (as of v.3.30.16) are listed below. The sections of the output file are: \begin{itemize} - \item SS3 version number with date compiled. Time and date of model run. This info appears at the top of all output files. + \item SS3 version number with date compiled. Time and date of model run. This info appears at the top of all output files. \item Comments \begin{itemize} - \item Input file lines starting with \#C are echoed here. + \item Input file lines starting with \#C are echoed here. \end{itemize} \item Keywords \begin{itemize} @@ -245,7 +245,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Parameters \begin{itemize} - \item The parameters are listed here. For the estimated parameters, the display shows: Num (count of parameters), Label (as internally generated by SS3), Value, Active\_Cnt, Phase, Min, Max, Init, Prior, Prior\_type, Prior\_SD, Prior\_Like, Parm\_StD (standard deviation of parameter as calculated from inverse Hessian), Status (e.g., near bound), and Pr\_atMin (value of prior penalty if parameter was near bound). The Active\_Cnt entry is a count of the parameters in the same order they appear in the ss.cor file. + \item The parameters are listed here. For the estimated parameters, the display shows: Num (count of parameters), Label (as internally generated by SS3), Value, Active\_Cnt, Phase, Min, Max, Init, Prior, Prior\_type, Prior\_SD, Prior\_Like, Parm\_StD (standard deviation of parameter as calculated from inverse Hessian), Status (e.g., near bound), and Pr\_atMin (value of prior penalty if parameter was near bound). The Active\_Cnt entry is a count of the parameters in the same order they appear in the ss.cor file. \end{itemize} \item Derived Quantities \begin{itemize} @@ -258,7 +258,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \end{itemize} -Then the time series of output, with standard deviation of estimates, are produced with internally generated labels. Note that these time series extend through the forecast era. The order of the output is: spawning biomass, recruitment, SPRratio, Fratio, Bratio, management quantities, forecast catch (as a target level), forecast catch as a limit level (OFL), Selex\_std, Grow\_std, NatAge\_std. 
For the three ``ratio'' quantities, there is an additional column of output showing a Z-score calculation of the probability that the ratio differs from 1.0. The ``management quantities'' section is designed to meet the terms of reference for west coast groundfish assessments; other formats could be made available upon request. The standard deviation quantities at the end are set up according to specifications at the end of the control input file. In some cases, a user may specify that no derived quantity output of a certain type be produced. In those cases, SS3 substitutes a repeat output of the virgin spawning biomass so that vectors of null length are not created. +Then the time series of output, with standard deviation of estimates, are produced with internally generated labels. Note that these time series extend through the forecast era. The order of the output is: spawning biomass, recruitment, SPRratio, Fratio, Bratio, management quantities, forecast catch (as a target level), forecast catch as a limit level (OFL), Selex\_std, Grow\_std, NatAge\_std. For the three ``ratio'' quantities, there is an additional column of output showing a Z-score calculation of the probability that the ratio differs from 1.0. The ``management quantities'' section is designed to meet the terms of reference for west coast groundfish assessments; other formats could be made available upon request. The standard deviation quantities at the end are set up according to specifications at the end of the control input file. In some cases, a user may specify that no derived quantity output of a certain type be produced. In those cases, SS3 substitutes a repeat output of the virgin spawning biomass so that vectors of null length are not created. \begin{itemize} \item Mortality and growth parameters by year after adjustments @@ -279,11 +279,11 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Growth Morph Indexing \begin{itemize} - \item This block shows the internal index values for various quantities. It can be a useful reference for complex model setups. The vocabulary is: Bio\_Pattern refers to a collection of cohorts with the same defined growth and natural mortality parameters; sex is the next main index. If recruitment occurs in multiple seasons, then birth season is the index for that factor. The index labeled ``Platoon'' is used as a continuous index across all the other factor-specific indices. If sub-platoons are used, they are nested within the Bio\_Pattern x Sex x Birth Season platoon. However, some of the output tables use the column label ``platoon'' as a continuous index across platoons and sub-platoons. Note that there is no index here for area. Each of the cohorts is distributed across areas and they retain their biological characteristics as they move among areas. + \item This block shows the internal index values for various quantities. It can be a useful reference for complex model setups. The vocabulary is: Bio\_Pattern refers to a collection of cohorts with the same defined growth and natural mortality parameters; sex is the next main index. If recruitment occurs in multiple seasons, then birth season is the index for that factor. The index labeled ``Platoon'' is used as a continuous index across all the other factor-specific indices. If sub-platoons are used, they are nested within the Bio\_Pattern x Sex x Birth Season platoon. However, some of the output tables use the column label ``platoon'' as a continuous index across platoons and sub-platoons. 
Note that there is no index here for area. Each of the cohorts is distributed across areas and they retain their biological characteristics as they move among areas. \end{itemize} \item Size Frequency Translation \begin{itemize} - \item If the generalized size frequency approach is used, this block shows the translation probabilities between population length bins and the units of the defined size frequency method. If the method uses body weight as the accumulator, then output is in corresponding units. + \item If the generalized size frequency approach is used, this block shows the translation probabilities between population length bins and the units of the defined size frequency method. If the method uses body weight as the accumulator, then output is in corresponding units. \end{itemize} \item Movement \begin{itemize} @@ -349,7 +349,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Environmental Data \begin{itemize} - \item The input values of environmental data are echoed here. In the future, the summary biomass in the previous year will be mirrored into environmental column -2 and that the age zero recruitment deviation into environmental column -1. Once so mirrored, they may enable density-dependent effects on model parameters. + \item The input values of environmental data are echoed here. Density-dependence can be implemented by linking to population quantities that have already been calculated at the start of the year. These include summary biomass, spawning biomass, and recruitment deviations. These three quantities are mapped into the -1, -2, and -3 columns of the environmental data matrix where they can be used as if they were environmental data input. \end{itemize} \item Tag Recapture Information \item Numbers at Age @@ -365,7 +365,7 @@ \subsection{Main Output File, Report.sso} \item F at Age \item Catch at Age \begin{itemize} - \item The output is shown for each fleet. It is not necessary to show by area because each fleet operates in only one area. + \item The output is shown for each fleet. It is not necessary to show by area because each fleet operates in only one area. \end{itemize} \item Discard at Age \item Biology @@ -381,7 +381,7 @@ \subsection{Main Output File, Report.sso} \item Seasonal Effects \item Biology at Age \begin{itemize} - \item This section shows derived size-at-age and other quantities. As of v3.30.21 sex ratio is reported by area in this output table. + \item This section shows derived size-at-age and other quantities. As of v.3.30.21 sex ratio is reported by area in this output table. \end{itemize} \item Mean Body Wt (begin) \begin{itemize} @@ -401,7 +401,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Composition Database \begin{itemize} - \item Contains the length composition, age composition, and mean size-at-age observed and expected values. It is arranged in a database format, rather than an array of vectors. + \item Contains the length composition, age composition, and mean size-at-age observed and expected values. It is arranged in a database format, rather than an array of vectors.
\end{itemize} \item Selectivity Database \begin{itemize} diff --git a/14r4ss.tex b/14r4ss.tex index 74d0b390..1c818f09 100644 --- a/14r4ss.tex +++ b/14r4ss.tex @@ -36,15 +36,19 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss} \hline Core Functions & \Tstrut\Bstrut\\ \hline - SS\_output \Tstrut& A function to create a list object for the output from Stock Synthesis\\ - SS\_plots \Tstrut& Plot many quantities related to output from Stock Synthesis\\ + SS\_output \Tstrut & A function to create a list object for the output from Stock Synthesis \\ + SS\_plots \Tstrut & Plot many quantities related to output from Stock Synthesis \\ \hline + \multicolumn{2}{l}{Download the SS3 Executable:} \Tstrut\Bstrut\\ + \hline + get\_ss3\_exe \Tstrut & Download the latest version or a specified version of the SS3 executable \\ + \hline \multicolumn{2}{l}{Model comparisons and other diagnostics:} \Tstrut\Bstrut\\ \hline - SSsummarize \Tstrut & Read output from multiple SS3 models\\ - SStableComparison \Tstrut & Make table comparing quantities across models\\ + SSsummarize \Tstrut & Read output from multiple SS3 models \\ + SStableComparison \Tstrut & Make table comparing quantities across models \\ SSplotComparison \Tstrut & Plot output from multiple SS3 models \\ SSplotPars \Tstrut & Plot distributions of priors, posteriors, and estimates \\ SS\_profile \Tstrut & Run likelihood parameter profiles \\ @@ -52,12 +56,12 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss} PinerPlot \Tstrut & Plot fleet-specific contributions to likelihood profile \\ SS\_RunJitter \Tstrut & Run multiple model jitters to determine best model fit \\ SS\_doRetro \Tstrut & Run retrospective analysis \\ - SSmohnsrho \Tstrut & Calculate Mohn's Rho values\\ + SSmohnsrho \Tstrut & Calculate Mohn's Rho values \\ SSplotRetroRecruits \Tstrut & Make retrospective pattern of recruitment estimates (a.k.a. squid plot) as seen in Pacific hake assessments\Bstrut \\ SS\_fitbiasramp \Tstrut& Estimate bias adjustment for recruitment deviates \Bstrut\\ \hline - \multicolumn{2}{l}{File manipulation for inputs:}\Tstrut\Bstrut\\ + \multicolumn{2}{l}{File manipulation for inputs:} \Tstrut\Bstrut\\ \hline SS\_readdat \Tstrut & Read data file \\ SS\_readctl \Tstrut & Read control file \\ @@ -77,9 +81,9 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss} NegLogInt\_Fn \Tstrut& Calculated variances of time-varying parameters using SS3 implementation of the Laplace Approximation \Bstrut\\ \hline - \multicolumn{2}{l}{File manipulations for outputs:}\Tstrut\Bstrut\\ + \multicolumn{2}{l}{File manipulations for outputs:} \Tstrut\Bstrut\\ \hline - SS\_recdevs \Tstrut & Insert a vector of recruitment deviations into the control file \\ + SS\_recdevs \Tstrut & Insert a vector of recruitment deviations into the control file \\ \hline \end{longtable} diff --git a/15special.tex b/15special.tex index f9317792..fd8b85d6 100644 --- a/15special.tex +++ b/15special.tex @@ -6,7 +6,7 @@ \subsection{Using Time-Varying Parameters} \hypertarget{tvOrder}{} \subsubsection{Time-Varying Parameters} -Starting in SS3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of SS3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: +Starting in v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. 
Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: \begin{enumerate} \item Environmental or Density dependent Linkages: Links the base parameter with environmental data or a model derived quantity. \item Parameter deviations: Creates annual deviations from the base parameter during a user-specified range of years. @@ -30,7 +30,7 @@ \subsubsection{Time-Varying Parameters} \subsubsection{Time-Varying Growth Considerations} When time-varying growth is used, there are some additional considerations to be aware of: \begin{itemize} - \item Growth in the forecast with time blocks: Growth deviations propagate into the forecast because growth is by cohort according to the current year's growth parameters. The user can select which growth parameters get used during the forecast by setting the end year of the last block, if using time blocks. If the last block ends in the model's end year, then the growth parameters in effect during the forecast will be the base parameters. By setting the end year of the last block to one year past the model end year (endyr), the model will continue the last block's growth parameter levels throughout the forecast. + \item Growth in the forecast with time blocks: Growth deviations propagate into the forecast because growth is by cohort according to the current year's growth parameters. The user can select which growth parameters get used during the forecast by setting the end year of the last block, if using time blocks. If the last block ends in the model's end year, then the growth parameters in effect during the forecast will be the base parameters. By setting the end year of the last block to one year past the model end year (endyr), the model will continue the last block's growth parameter levels throughout the forecast. \item The equilibrium benchmark quantities (MSY, F40\%, etc.) previously used the model end year's (endyr) body size-at-age, which is not in equilibrium. Through the forecast file, it is possible to specify a range of years over which to average the size-at-age used in the benchmark calculations. An option to create equilibrium growth from averaged growth parameters would be a more realistic option and is under consideration, but is not yet available. % Which input in forecast?? The benchmark years input? I couldn't find this option... % Details about a potentially better solution. @@ -116,11 +116,11 @@ \subsection{Parameterizing the Two-Dimensional Autoregressive Selectivity} Second, fix $\sigma_s$ at the value iteratively tuned in the previous step and estimate $\epsilon_{a,t}$. Plot both Pearson residuals and $\epsilon_{a,t}$ out on the age-year surface to check their 2D dimensions. If their distributions seems to be not random but rather be autocorrelated (deviation estimates have the same sign several ages and/or years in a row), users should consider estimating and then including the autocorrelations in $\epsilon_{a,t}$. -Third, extract the estimated selectivity deviation samples from the previous step for estimating $\rho_a$ and $\rho_t$ externally by fitting the samples to a stand-alone model written in Template-Model Builder (TMB). In this model, both $\rho_a$ and $\rho_t$ are bounded between 0 and 1 via applying a logic transformation. 
If at least one of the two AR1 coefficients are notably different from 0, the model should be run one more time by fixing the two AR1 coefficients at their values externally estimated from deviation samples. The Pearson residuals and $\epsilon_{a,t}$ from this run are expected to distribute more randomly as the autocorrelations in selectivity deviations can be at least partially included in the 2D AR1 process. +Third, extract the estimated selectivity deviation samples from the previous step for estimating $\rho_a$ and $\rho_t$ externally by fitting the samples to a stand-alone model written in Template-Model Builder (TMB). In this model, both $\rho_a$ and $\rho_t$ are bounded between 0 and 1 via a logit transformation. If at least one of the two AR1 coefficients is notably different from 0, the model should be run one more time by fixing the two AR1 coefficients at their values externally estimated from deviation samples. The Pearson residuals and $\epsilon_{a,t}$ from this run are expected to be distributed more randomly, as the autocorrelations in selectivity deviations can be at least partially included in the 2D AR1 process. \hypertarget{continuous-seasonal-recruitment-sec}{} \subsection{Continuous seasonal recruitment} -Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons as years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons as years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years. +Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons as years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons as years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years. Other adjustments to make when using seasons as years include: @@ -144,7 +144,7 @@ \section{Detailed Information on Stock Synthesis Processes} \subsection{Jitter} \hypertarget{Jitter}{} -The jitter function has been updated with SS3.30. The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}): +The following steps are performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}): \begin{enumerate} \item A normal distribution is calculated such that the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. \item A jitter shift value, termed ``\textit{K}'', is calculated from the distribution equal to pr(P\textsubscript{CURRENT}). @@ -174,15 +174,15 @@ \subsection{Jitter} \hypertarget{PriorDescrip}{} \subsection{Parameter Priors} -Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 is receiving additional information about the true value of the parameter. This information works with the information in the data through the overall log likelihood function to arrive at the final parameter estimate.
Second, diffuse priors provide only weak information about the value of a prior and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior would then steer them towards a central value and avoid them crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad of a bound, then early in a model run it could drift into this tail and potentially get into a situation where the gradient with respect that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range where it presumably will be more influential and estimable. +Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 is receiving additional information about the true value of the parameter. This information works with the information in the data through the overall log likelihood function to arrive at the final parameter estimate. Second, diffuse priors provide only weak information about the value of a parameter and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior would then steer them towards a central value and avoid them crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad of a bound, then early in a model run it could drift into this tail and potentially get into a situation where the gradient with respect to that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range where it presumably will be more influential and estimable. The options for parameter priors are described as a function of $Pval$, the value of the parameter for which a prior is being calculated, as well as the parameter bounds in the case of the beta distribution ($Pmax$ and $Pmin$), and the input values for $Prior$ and $Pr\_SD$, which in some cases are the mean and standard deviation, but interpretation depends on the prior type. The Prior Likelihoods below represent the negative log likelihood in all cases. \myparagraph{Prior Types} -Note that the numbering in SS3 v.3.30 is different from that used in SS3 v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior).
The calculation of the negative log likelihood is provided below for each prior types, as a function of the following inputs: +Note that the numbering in v.3.30 is different from that used in v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior type, as a function of the following inputs: \begin{tabular}{ll} - $P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be\\ + $P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be \\ & the initial un-estimated value or the estimated value (3rd column in control or \\ & control.ss\_new file) \\ $P_\text{LB}$ & The lower bound of the parameter (1st column in control file) \\ @@ -196,7 +196,7 @@ \subsection{Parameter Priors} In a Bayesian context this is equivalent to a uniform prior between the parameter bounds. \item \textbf{Prior Type = 1 = Symmetric beta prior} \\ - The symmetric beta is scaled between parameter bounds, imposing a larger penalty near the bounds. Prior standard deviation of 0.05 is very diffuse and a value of 5.0 provides a smooth U-shaped prior. The prior input is ignored for this prior type. + The symmetric beta is scaled between parameter bounds, imposing a larger penalty near the bounds. A prior standard deviation of 0.05 is very diffuse, and a value of 5.0 provides a smooth U-shaped prior. The prior input is ignored for this prior type. \begin{equation} \mu = -P_\text{PRSD} \cdot ln\left(\frac{P_\text{UB}+P_\text{LB}}{2} - P_\text{LB} \right) - P_\text{PRSD} \cdot ln(0.5) \end{equation} @@ -216,7 +216,7 @@ \subsection{Parameter Priors} \end{figure} - \item \textbf{Prior Type = 2 = Beta prior} \\ + \item \textbf{Prior Type = 2 = Beta prior} \\ The definition of $\mu$ is consistent with CASAL's formulation with the $\beta_\text{PR}$ and $\alpha_\text{PR}$ corresponding to the $m$ and $n$ parameters. \begin{equation} \mu = \frac{P_\text{PR}-P_\text{LB}}{P_\text{UB}-P_\text{LB}} diff --git a/1_4sections.tex b/1_4sections.tex index d848e8b7..3ab3e21b 100644 --- a/1_4sections.tex +++ b/1_4sections.tex @@ -7,7 +7,7 @@ \section{Introduction}\label{sec:intro} Assessment models are loosely coupled to other models. For example, an ocean-temperature or circulation model or benthic-habitat map may be directly included in the pre-processing of the fish abundance survey. A time series of a derived ocean factor, like the North Atlantic Oscillation, can be included as an indicator of a change in a population process. Output of a multi-decadal time series of derived fish abundance can be an input to ecosystem and economic models to better understand cumulative impacts and benefits. -Stock Synthesis is an age- and size-structured assessment model in the class of models termed integrated analysis models. Stock Synthesis has evolved since its initial inception in order to model a wide range of fish populations and dynamics. The most recent major revision to Stock Synthesis occurred in 2016, when version 3.30 was introduced. This new version of Stock Synthesis required major revisions to the input files relative to earlier versions (see the \hypertarget{ConvIssues}{Converting Files} section for more information). The acronym for Stock Synthesis has evolved over time with earlier versions being referred to as SS2 (Stock Synthesis v.2.xx) and older versions as SS3 (Stock Synthesis v.3.xx).
+Stock Synthesis is an age- and size-structured assessment model in the class of models termed integrated analysis models. Stock Synthesis has evolved since its initial inception in order to model a wide range of fish populations and dynamics. The most recent major revision to Stock Synthesis occurred in 2016, when v.3.30 was introduced. This new version of Stock Synthesis required major revisions to the input files relative to earlier versions (see the \hyperlink{ConvIssues}{Converting Files} section for more information). The acronym for Stock Synthesis has evolved over time with earlier versions being referred to as SS2 (Stock Synthesis v.2.xx) and newer versions as SS3 (Stock Synthesis v.3.xx). SS3 has a population sub-model that simulates a stock's growth, maturity, fecundity, recruitment, movement, and mortality processes, an observation sub-model estimates expected values for various types of data, a statistical sub-model characterizes the data's goodness of fit and obtains best-fitting parameters with associated variance, and a forecast sub-model projects needed management quantities. SS3 outputs the quantities, with confidence intervals, needed to implement risk-averse fishery control rules. The model is coded in C++ with parameter estimation enabled by automatic differentiation (\href{http://www.admb-project.org}{admb}). Windows, Linux, and iOS versions are available. Output processing and associated tools are in R, and a graphical interface is in QT. SS3 executables and support material is available on \href{https://github.com/nmfs-stock-synthesis}{GitHub}. The rich feature set in SS3 allows it to be configured for a wide range of situations. SS3 has become the basis for a large fraction of U.S. assessments and many other assessments around the world. @@ -44,16 +44,16 @@ \section{File Organization}\label{FileOrganization} \subsection{Output Files} \begin{enumerate} - \item data\_echo.ss\_new: Contains the input data as read by the model. In model versions prior to 3.30.19 a single data.ss\_new file was created that included the echoed data, the expected data values (data\_expval.ss), and any bootstap data files selected (data\_boot\_x.ss). + \item data\_echo.ss\_new: Contains the input data as read by the model. In model versions prior to v.3.30.19, a single data.ss\_new file was created that included the echoed data, the expected data values (data\_expval.ss), and any bootstrap data files selected (data\_boot\_x.ss). \item data\_expval.ss: Contains the expected data values given the model fit. This file is only created if the value for ``Number of datafiles to produce'' in the starter file is set to 2 or greater. \item data\_boot\_x.ss: A new data file filled with bootstrap data based on the original input data and variances. This file is only created if the value in the ``Number of datafiles to produc'' in the starter file is set to 3 or greater. A separate bootstrap data file will be written for the number of bootstrap data file requests where x in the file name indicates the bootstrap simulation number (e.g., data\_boot\_001.ss, data\_boot\_002.ss,...). \item control.ss\_ new: Updated version of the control file with final parameter values replacing the initial parameter values. \item starter.ss\_ new: New version of the starter file with annotations. \item Forecast.ss\_ new: New version of the forecast file with annotations. - \item warning.sso: This file contains a list of warnings generated during program execution.
Starting in SS3 v.3.30.20 warnings are categorized into either Note or Warning. An item marked as a not denotes settings that the user may want to revise but do not require any additional changes for the model to run. Items marked with Warning are items that may or may not have allowed the model to finish running. Items with a fatal warning caused the model to fail during either reading input files or calculations. Warnings classified as error or adjustment may be causing calculation issues, even if the model was able to finish reading file and running, and should be addressed the user. + \item warning.sso: This file contains a list of warnings generated during program execution. Starting in v.3.30.20, warnings are categorized into either Note or Warning. An item marked as a Note denotes settings that the user may want to revise but that do not require any additional changes for the model to run. Items marked with Warning are items that may or may not have allowed the model to finish running. Items with a fatal warning caused the model to fail during either reading input files or calculations. Warnings classified as error or adjustment may be causing calculation issues, even if the model was able to finish reading files and running, and should be addressed by the user. \item echoinput.sso: This file is produced while reading the input files and includes an annotated echo of the input. The sole purpose of this output file is debugging input errors. \item Report.sso: This file is the primary report file. - \item ss\_summary.sso: Output file that contains all the likelihood components, parameters, derived quantities, total biomass, summary biomass, and catch. This file offers an abridged version of the report file that is useful for quick model evaluation. This file is only available in SS3 v.3.30.08.03 and greater. + \item ss\_summary.sso: Output file that contains all the likelihood components, parameters, derived quantities, total biomass, summary biomass, and catch. This file offers an abridged version of the report file that is useful for quick model evaluation. This file is only available in v.3.30.08.03 and greater. \item CompReport.sso: Observed and expected composition data in a list-based format. \item Forecast-report.sso: Output of management quantities and for forecasts. \item CumReport.sso: This file contains a brief version of the run output, output is appended to current content of file so results of several runs can be collected together. This is useful when a batch of runs is being processed. @@ -61,7 +61,7 @@ \section{File Organization}\label{FileOrganization} \item ss.par: This file contains all estimated and fixed parameters from the model run. \item ss.std, ss.rep, ss.cor etc.: Standard ADMB output files. \item checkup.sso: Contains details of selectivity parameters and resulting vectors. This is written during the first call of the objective function. - \item Gradient.dat: New for SS3 v.3.30, this file shows parameter gradients at the end of the run. + \item Gradient.dat: New for v.3.30, this file shows parameter gradients at the end of the run. \item rebuild.dat: Output formatted for direct input to Andre Punt's rebuilding analysis package. Cumulative output is output to REBUILD.SS (useful when doing MCMC or profiles). \item SIS\_table.sso: Output formatted for reading into the NMFS Species Information System. \item Parmtrace.sso: Parameter values at each iteration.
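Because these report files are keyword-structured text, in practice they are most conveniently read with the r4ss functions listed in the r4ss section of this manual; a minimal sketch (the directory path here is illustrative) is:

```r
# Parse a finished run's output files (Report.sso and companions) into a
# list object, then write the standard suite of diagnostic plots.
library(r4ss)

replist <- SS_output(dir = "path/to/model_run")
SS_plots(replist)
```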
@@ -71,8 +71,8 @@ \section{File Organization}\label{FileOrganization} \pagebreak \section{Starting Stock Synthesis} -SS3 is typically run through the command line interface, although it can also be called from another program, R, the Stock Synthesis Interface, or a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See \hyperref[sec:RunningSS]{Running Stock Synthesis} for additional notes on methods of running SS3. +SS3 is typically run through the command line interface, although it can also be called from another program, R, the Stock Synthesis Interface, or a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See \hyperref[sec:RunningSS3]{Running Stock Synthesis} for additional notes on methods of running SS3. -Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run. The file starter.ss contains required input information plus references to other required input files, as described in the \hyperref[FileOrganization]{File Organization section}. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to setup a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as the SSI, Excel, or R can search for these keywords and parse the specific information located below that keyword in the text file. +Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run. The file starter.ss contains required input information plus references to other required input files, as described in the \hyperref[FileOrganization]{File Organization section}. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to set up a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as Excel or R, can search for these keywords and parse the specific information located below that keyword in the text file. \pagebreak diff --git a/5converting.tex b/5converting.tex index e5e3d200..00d62a98 100644 --- a/5converting.tex +++ b/5converting.tex @@ -1,6 +1,6 @@ \hypertarget{ConvIssues}{} -\section{Converting Files from SS3 v.3.24} -Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes 3.24 files as input and will output 3.30 input and output files. SS\_trans executables are available for v. 3.30.01 - 3.30.17.
The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v. 3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. +\section{Converting Files from Stock Synthesis v.3.24} +Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes v.3.24 files as input and will output v.3.30 input and output files. SS\_trans executables are available for v.3.30.01 - v.3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{v.3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. The following file structure and steps are recommended for converting model files: \begin{enumerate} @@ -10,14 +10,14 @@ \section{Converting Files from SS3 v.3.24} \item Review the control (control.ss\_new) file to determine that all model functions converted correctly. The structural changes and assumptions for a couple of the advanced model features are too complicated to convert automatically. See below for some known features that may not convert. When needed, it is recommended to modify the control.ss\_new file, the converted control file, for only the features that failed to convert properly. - \item Change the max phase to a value greater than the last phase in which the a parameter is set to estimated within the control file. Run the new SS3 v.3.30 executable (ss.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable. + \item Change the max phase to a value greater than the last phase in which a parameter is set to be estimated within the control file. Run the new v.3.30 executable (ss3.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable. - \item Compare likelihood and model estimates between the SS3 v.3.24 and SS3 v.3.30 model versions. + \item Compare likelihood and model estimates between the v.3.24 and v.3.30 model versions. - \item If desired, update to versions of SS3 > v.3.30.17 by running the new v.3.30 input files with the higher executable. + \item If desired, update to versions of Stock Synthesis > v.3.30.17 by running the new v.3.30 input files with the newer executable. \end{enumerate} -\noindent There are some options that have been substantially changed in SS3 v.3.30, which impedes the automatic converting of SS3 v.3.24 model files. Known examples of SS3 v.3.24 options that cannot be converted, but for which better alternatives are available in SS3 v.3.30 are: +\noindent There are some options that have been substantially changed in v.3.30, which impedes the automatic converting of v.3.24 model files.
Known examples of v.3.24 options that cannot be converted, but for which better alternatives are available in v.3.30 are: \begin{enumerate} \item The use of Q deviations, \item Complex birth seasons, diff --git a/6starter.tex b/6starter.tex index 00a8bf30..99d0513d 100644 --- a/6starter.tex +++ b/6starter.tex @@ -14,12 +14,12 @@ \subsection{Starter File Options (starter.ss)} \begin{longtable}{p{1.5cm} p{7.2cm} p{12.3cm}} \hline - \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut \\ + \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut\\ \hline \endfirsthead \hline - \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut \\ + \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut\\ \hline \endhead @@ -27,7 +27,7 @@ \subsection{Starter File Options (starter.ss)} \endfoot \hline - \multicolumn{3}{ c }{ \textbf{End of Starter File}}\Tstrut\Bstrut\\ + \multicolumn{3}{c}{\textbf{End of Starter File}} \Tstrut\Bstrut\\ \hline \endlastfoot @@ -40,7 +40,7 @@ \subsection{Starter File Options (starter.ss)} control\_ file.ctl & & File name of the control file \Tstrut\\ \hline - 0 & Initial Parameter Values: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Do not set equal to 1 if there have been any changes to the control file that would alter the number or order of parameters stored in the ss.par file. Values in ss.par can be edited, carefully. Do not run ss\_trans.exe from a ss.par from SS3 v.3.24.}}\Tstrut\\ + 0 & Initial Parameter Values: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Do not set equal to 1 if there have been any changes to the control file that would alter the number or order of parameters stored in the ss.par file. Values in ss.par can be edited, carefully. Do not run ss\_trans.exe from a ss.par from v.3.24.}}\Tstrut\\ & 0 = use values in control file; and& \\ & 1 = use ss.par after reading setup in the control file. & \\ @@ -58,18 +58,18 @@ \subsection{Starter File Options (starter.ss)} & 3 = custom output & \\ \pagebreak - \multicolumn{2}{l}{COND: Detailed age-structure report = 3 } & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Custom report options: First value: -100 start with minimal items or -101 start with all items; Next Values: A list of items to add or remove where negative number items are removed and positive number items added, -999 to end. The \hyperlink{custom}{reporting numbers} for each item that can be selected or omitted are shown in the Report file next to each section key word.}} \Tstrut\\ + \multicolumn{2}{l}{COND: Detailed age-structure report = 3} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Custom report options: First value: -100 start with minimal items or -101 start with all items; Next Values: A list of items to add or remove where negative number items are removed and positive number items added, -999 to end. The \hyperlink{custom}{reporting numbers} for each item that can be selected or omitted are shown in the Report file next to each section key word.}} \Tstrut\\ \multicolumn{1}{r}{-100} & & \\ \multicolumn{1}{r}{ -5} & & \\ \multicolumn{1}{r}{ 9} & & \\ \multicolumn{1}{r}{ 11} & & \\ \multicolumn{1}{r}{ 15} & & \\ - \multicolumn{1}{r}{-999} & & \\ + \multicolumn{1}{r}{-999} & & \Bstrut\\ \hline - 0 & Write 1st iteration details: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This output is largely unformatted and undocumented and is mostly used by the developer. 
}} \Tstrut\\ + 0 & Write 1st iteration details: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This output is largely unformatted and undocumented and is mostly used by the developer.}} \Tstrut\\ & 0 = omit; and & \\ - & 1 = write detailed intermediate calculations to echoinput.sso during first call. & \\ + & 1 = write detailed intermediate calculations to echoinput.sso during first call. & \Bstrut\\ \hline 0 & Parameter Trace: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This controls the output to parmtrace.sso. The contents of this output can be used to determine which values are changing when a model approaches a crash condition. It also can be used to investigate patterns of parameter changes as model convergence slowly moves along a ridge. In order to access parameter gradients option 4 should be selected which will write the gradient of each parameter with respect to each likelihood component}} \Tstrut\\ @@ -77,23 +77,22 @@ \subsection{Starter File Options (starter.ss)} & 1 = write good iteration and active parameters; & \\ & 2 = write good iterations and all parameters; & \\ & 3 = write every iteration and all parameters; and & \\ - & 4 = write every iteration and active parameters. & \\ + & 4 = write every iteration and active parameters. & \Bstrut\\ \hline \pagebreak - 1 & Cumulative Report: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Controls reporting to the file Cumreport.sso. - This cumulative report is most useful when accumulating summary information from likelihood profiles or when simply accumulating a record of all model runs within the current subdirectory}}\Tstrut\\ + 1 & Cumulative Report: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Controls reporting to the file Cumreport.sso. This cumulative report is most useful when accumulating summary information from likelihood profiles or when simply accumulating a record of all model runs within the current subdirectory}} \Tstrut\\ & 0 = omit; & \\ & 1 = brief; and & \\ - & 2 = full. & \\ + & 2 = full. & \\ \hline - 1 & Full Priors: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Turning this option on (1) adds the log likelihood contribution from all prior values for fixed and estimated parameters to the total negative log likelihood. With this option off (0), the total negative log likelihood will include the log likelihood for priors for only estimated parameters.}} \Tstrut\\ + 1 & Full Priors: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Turning this option on (1) adds the log likelihood contribution from all prior values for fixed and estimated parameters to the total negative log likelihood. With this option off (0), the total negative log likelihood will include the log likelihood for priors for only estimated parameters.}} \Tstrut\\ & 0 = only calculate priors for active parameters; and & \\ & 1 = calculate priors for all parameters that have a defined prior. & \\ \hline - 1 & Soft Bounds: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This option creates a weak symmetric beta penalty for the selectivity parameters. This becomes important when estimating selectivity functions in which the values of some parameters cause other parameters to have negligible gradients, or when bounds have been set too widely such that a parameter drifts into a region in which it has negligible gradient. The soft bound creates a weak penalty to move parameters away from the bounds.}} \Tstrut\\ + 1 & Soft Bounds: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This option creates a weak symmetric beta penalty for the selectivity parameters. 
This becomes important when estimating selectivity functions in which the values of some parameters cause other parameters to have negligible gradients, or when bounds have been set too widely such that a parameter drifts into a region in which it has negligible gradient. The soft bound creates a weak penalty to move parameters away from the bounds.}}\Tstrut\Bstrut\\ & 0 = omit; and & \\ & 1 = use. & \\ & & \\ @@ -101,31 +100,32 @@ \subsection{Starter File Options (starter.ss)} & & \\ \pagebreak - %\hline - 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and will need to be parsed by the user into separate data files. The output of the input data file makes no changes, so retains the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19 the output file names have changed were now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstap data files (data\_boot\_x.ss where x is the bootstrap number) In versions prior to 3.30.19 each of these outputs was printed to a single file called data.ss\_new. }}\Tstrut\\ - & 0 = none; As of 3.30.16, none of the .ss\_new files will be produced;& \\ - & 1 = output an annotated replicate of the input data file; & \\ - & 2 = add a second data file containing the model's expected values with no added error. ; and & \\ - & 3+ = add N-2 parametric bootstrap data files. & \\ - & & \\ - & & \\ +% \hline + 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and need to be parsed by the user into separate data files. The output of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19, the output file names have changed; now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstrap data files (data\_boot\_x.ss where x is the bootstrap number). In versions before v.3.30.19, each of these outputs was printed to a single file called data.ss\_new.}} \Tstrut\Bstrut\\ + & 0 = none; as of v.3.30.16, none of the .ss\_new files will be produced; & \Bstrut\\ + & 1 = output an annotated replicate of the input data file; & \Tstrut\Bstrut\\ + & 2 = add a second data file containing the model's expected values with no added error; and & \Tstrut\Bstrut\\ + & 3+ = add N-2 parametric bootstrap data files. & \Tstrut\\ + & & \Bstrut\\ + % & & \\ \hline %\pagebreak - 8 & Turn off estimation: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The 0 option is useful for (1) quickly reading in a messy set of input files and producing the annotated control.ss\_new and data\_echo.ss\_new files, or (2) examining model output based solely on input parameter values. Similarly, the value option allows examination of model output after completing a specified phase. 
Also see usage note for restarting from a specified phase.}}\Tstrut\\ + 8 & Turn off estimation: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Option -1 is useful for quickly reading in a messy set of input files, and option 0 for producing the annotated control.ss\_new and data\_echo.ss\_new files and examining model output based solely on input parameter values. Similarly, the value option allows examination of model output after completing a specified phase. Also see usage note for restarting from a specified phase.}} \Tstrut\\ & -1 = exit after reading input files; & \\ & 0 = exit after one call to the calculation routines and production of sso and ss\_new files; and & \\ - & = exit after completing this phase. & \\ + & = exit after completing this phase. & \Bstrut\\ \hline - 1000 & MCMC burn interval & Number of iterations to discard at the start of an MCMC run. \Tstrut\\ + 1000 & MCMC burn interval & Number of iterations to discard at the start of an MCMC run. \Tstrut\Bstrut\\ - %\hline - \pagebreak + \hline + %\pagebreak 200 & MCMC thin interval & Number of iterations to remove between the main period of the MCMC run. \Tstrut\\ - - \hline - 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with SS3 v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values and particularly -999 or 999 should not be used to define bounds for estimated parameters.}}\Tstrut\\ + + \pagebreak +% \hline + 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values; in particular, -999 or 999 should not be used to define bounds for estimated parameters.}} \Tstrut\\ & 0 = no jitter done to starting values; and & \\ & >0 starting values will vary with larger jitter values resulting in larger changes from the parameter values in the control or par file. & \\ & & \\ @@ -133,17 +133,17 @@ \subsection{Starter File Options (starter.ss)} \hline -1 & SD Report Start: & \Tstrut\\ & -1 = begin annual SD report in start year; and & \\ - & = begin SD report this year. & \\ + & = begin SD report this year. & \Bstrut\\ \hline - %\pagebreak +% \pagebreak -1 & SD Report End: & \Tstrut\\ & -1 = end annual SD report in end year; & \\ & -2 = end annual SD report in last forecast year; and & \\ - & = end SD report in this year. & \\ + & = end SD report in this year. & \Bstrut\\ \hline - 2 & Extra SD Report Years: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{In a long time series application, the model variance calculations will be smaller and faster if not all years are included in the SD reporting. 
For example, the annual SD reporting could start in 1960 and the extra option could select reporting in each decade before then.}}\Tstrut\\ + 2 & Extra SD Report Years: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{In a long time series application, the model variance calculations will be smaller and faster if not all years are included in the SD reporting. For example, the annual SD reporting could start in 1960 and the extra option could select reporting in each decade before then.}} \Tstrut\\ & 0 = none; and & \\ & = number of years to read. & \\ & & \\ @@ -153,118 +153,117 @@ \subsection{Starter File Options (starter.ss)} \multicolumn{3}{l}{COND: If Extra SD report years > 0} \Tstrut\\ %\pagebreak - \hline - \multicolumn{1}{r}{1940 1950} & \multirow{1}{1cm}[-0.25cm]{\parbox{19.5cm}{Vector of years for additional SD reporting. The number of years need to equal the value specified in the above line (Extra SD Reporting). }} \Tstrut\\ - & & \\ - + %\hline + \multicolumn{1}{r}{1940 1950} & \multirow{1}{1cm}[-0.25cm]{\parbox{19.5cm}{Vector of years for additional SD reporting. The number of years must equal the value specified in the above line (Extra SD Reporting).}} \Tstrut\\ + & & \\ \hline - 0.0001 & Final convergence & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This is a reasonable default value for the change in log likelihood denoting convergence. For applications with much data and thus a large total log likelihood value, a larger convergence criterion may still provide acceptable convergence}}\Tstrut\\ - & & \\ - & & \\ - & & \\ + 0.0001 & Final convergence & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{This is a reasonable default value for the change in log likelihood denoting convergence. For applications with much data and thus a large total log likelihood value, a larger convergence criterion may still provide acceptable convergence.}} \Tstrut\Bstrut\\ + & & \Bstrut\\ + & & \Bstrut\\ + % & & \\ \hline - 0 & Retrospective year: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Adjusts the model end year and disregards data after this year. May not handle time varying parameters completely.}} \Tstrut\\ + 0 & Retrospective year: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Adjusts the model end year and disregards data after this year. May not handle time-varying parameters completely.}} \Tstrut\\ & 0 = none; and & \\ - & -x = retrospective year relative to end year. & \\ + & -x = retrospective year relative to end year. & \Bstrut\\ \hline - 0 & Summary biomass min age & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Minimum integer age for inclusion in the summary biomass used for reporting and for calculation of total exploitation rate.}}\Tstrut\\ + 0 & Summary biomass min age & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Minimum integer age for inclusion in the summary biomass used for reporting and for calculation of total exploitation rate.}} \Tstrut\\ & & \\ \hline - %\pagebreak - 1 & Depletion basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the basis for the denominator when calculating degree of depletion in SB. The calculated values are reported to the SD report.}}\Tstrut\\ +% \pagebreak + 1 & Depletion basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the basis for the denominator when calculating degree of depletion in SB. 
The calculated values are reported to the SD report.}} \Tstrut\\ & 0 = skip; & \\ - & 1 = X*SB0; & Relative to virgin spawning biomass.\\ - & 2 = X*SB\textsubscript{MSY}; & Relative to spawning biomass that achieves MSY.\\ - & 3 = X*SB\textsubscript{styr}; and & Relative to model start year spawning biomass.\\ - & 4 = X*SB\textsubscript{endyr}. & Relative to spawning biomass in the model end year.\\ + & 1 = X*SB0; & Relative to virgin spawning biomass. \\ + & 2 = X*SB\textsubscript{MSY}; & Relative to spawning biomass that achieves MSY. \\ + & 3 = X*SB\textsubscript{styr}; and & Relative to model start year spawning biomass. \\ + & 4 = X*SB\textsubscript{endyr}. & Relative to spawning biomass in the model end year. \\ & 5 = X*Dynamic SB0 & Relative to the calculated dynamic SB0. \\ & use tens digit (1-9) to invoke multi-year (up to 9 yrs) & \\ - & use 1 as hundreds digit to invoke log(ratio) & \\ + & use 1 as hundreds digit to invoke log(ratio) & \Bstrut\\ \hline - 1 & Fraction (X) for depletion denominator & Value for use in the calculation of the ratio for SB\textsubscript{y}/(X*SB0).\Tstrut\\ + 1 & Fraction (X) for depletion denominator & Value for use in the calculation of the ratio for SB\textsubscript{y}/(X*SB0). \Tstrut\Bstrut\\ - %\hline +% \hline \pagebreak - 1 & SPR report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{SPR is the equilibrium SB per recruit that would result from the current year's pattern and intensity of F's. The quantities identified by 1, 2, and 3 here are all calculated in the benchmarks section. Then the one specified here is used as the selected }}\Tstrut\\ + 1 & SPR report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{SPR is the equilibrium SB per recruit that would result from the current year's pattern and intensity of F's. The quantities identified by 1, 2, and 3 here are all calculated in the benchmarks section. Then the one specified here is used as the selected denominator.}} \Tstrut\\ & 0 = skip; & \\ & 1 = use 1-SPR\textsubscript{target}; & \\ & 2 = use 1-SPR at MSY; & \Tstrut\\ - & 3 = use 1-SPR at B\textsubscript{target}; and & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Denominator in a ratio with the annual value of (1 - SPR). This ratio (and its variance) is reported to the SD report output for the years selected above in the SD report year selection.}}\Tstrut\\ + & 3 = use 1-SPR at B\textsubscript{target}; and & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Denominator in a ratio with the annual value of (1 - SPR). This ratio (and its variance) is reported to the SD report output for the years selected above in the SD report year selection.}} \Tstrut\\ & 4 = no denominator, so report actual 1-SPR values. & \\ - %\pagebreak - \hline - 4 & Annual F units: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{In addition to SPR, an additional proxy for annual F can be specified here. As with SPR, the selected quantity will be calculated annually and in the benchmarks section. The ratio of the annual value to the selected (see F report basis below) benchmark value is reported to the SD report vector. Options 1 and 2 use total catch for the year and summary abundance at the beginning of the year, so combines seasons and areas. But if most catch occurs in one area and there is little movement between areas, this ratio is not informative about the F in the area where the catch is occurring. 
Option 3 is a simple sum of the full F's by fleet, so may provide non-intuitive results when there are multi areas or seasons or when the selectivities by fleet do not have good overlap in age. Option 4 is a real annual F calculated as a numbers weighted F for a specified range of ages (read below). The F is calculated as Z-M where Z and M are each calculated an ln(N\textsubscript{t+1}/N\textsubscript{t}) with and without F active, respectively. The numbers are summed over all biology morphs and all areas for the beginning of the year, so subsumes any seasonal pattern.}}\Tstrut\\ +% \pagebreak +\hline + 4 & Annual F units: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{In addition to SPR, an additional proxy for annual F can be specified here. As with SPR, the selected quantity will be calculated annually and in the benchmarks section. The ratio of the annual value to the selected (see F report basis below) benchmark value is reported to the SD report vector. Options 1 and 2 use total catch for the year and summary abundance at the beginning of the year, so they combine seasons and areas. But if most catch occurs in one area and there is little movement between areas, this ratio is not informative about the F in the area where the catch is occurring. Option 3 is a simple sum of the full F's by fleet, so may provide non-intuitive results when there are multiple areas or seasons or when the selectivities by fleet do not have good overlap in age. Option 4 is a real annual F calculated as a numbers-weighted F for a specified range of ages (read below). The F is calculated as Z-M, where Z and M are each calculated as ln(N\textsubscript{t+1}/N\textsubscript{t}) with and without F active, respectively. The numbers are summed over all biology morphs and all areas for the beginning of the year, so it subsumes any seasonal pattern.}} \Tstrut\Bstrut\\ & 0 = skip; & \\ & 1 = exploitation rate in biomass; & \\ & 2 = exploitation rate in numbers; & \\ - & 3 = sum(apical F's by fleet); & \\ + & 3 = sum(apical F's by fleet); & \\ & 4 = population F for range of ages; and & \\ & 5 = unweighted average F for range of ages. & \\ & & \\ & & \\ - & & \\ - & & \\ - & & \\ - & & \\ + & & \Bstrut\\ + & & \Bstrut\\ + & & \Bstrut\\ + % & & \\ \hline %\pagebreak \multicolumn{2}{l}{COND: If F std reporting $\geq$ 4} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify range of ages. Upper age must be less than max age because of incomplete handling of the accumulator age for this calculation.}} \Tstrut\\ + \multicolumn{1}{r}{3 7} & Age range if F std reporting = 4. & \Tstrut\Bstrut\\ - \multicolumn{1}{r}{3 7} & Age range if F std reporting = 4. & \Tstrut\\ - - \hline - %\pagebreak - 1 & F report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the denominator to use when reporting the F std report values. A new option to allow for the calculation of a multi-year trailing average in F was implemented in v. 3.30.16. This option is triggered by appending the number of years to calculate the average across where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.}}\Tstrut\\ +% \hline + \pagebreak + 1 & F report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the denominator to use when reporting the F std report values. A new option to allow for the calculation of a multi-year trailing average in F was implemented in v.3.30.16. 
This option is triggered by appending the number of years to calculate the average across, where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively, a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.}} \Tstrut\\ & 0 = not relative, report raw values; & \\ & 1 = use F std value relative to SPR\textsubscript{target}; & \\ & 2 = use F std value relative to F\textsubscript{MSY}; and & \\ & 3 = use F std value relative to F\textsubscript{Btarget}. & \\ & use tens digit (1-9) to invoke multi-year (up to 9 yrs) F std & \\ - & use 1 as hundreds digit to invoke log(ratio) & \Tstrut\\ + & use 1 as hundreds digit to invoke log(ratio) & \Bstrut\\ \hline %\pagebreak - 0.01 & MCMC output detail: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify format of MCMC output. This input requires the specification of two items; the output detail and a bump value to be added to the ln(R0) in the first call to MCMC. A bias adjustment of 1.0 is applied to recruitment deviations in the MCMC phase, which could result in reduced recruitment estimates relative to the MLE when a lower bias adjustment value is applied. A small value, called the ``bump'', is added to the ln(R0) for the first call to MCMC in order to prevent the stock from hitting the lower bounds when switching from MLE to MCMC. If you wanted to select the default output option and apply a bump value of 0.01 this is specified by 0.01 where the integer value represents the output detail and the decimal is the bump value.}} \Tstrut\\ + 0.01 & MCMC output detail: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify format of MCMC output. This input requires the specification of two items: the output detail and a bump value to be added to the ln(R0) in the first call to MCMC. A bias adjustment of 1.0 is applied to recruitment deviations in the MCMC phase, which could result in reduced recruitment estimates relative to the MLE when a lower bias adjustment value is applied. A small value, called the ``bump'', is added to the ln(R0) for the first call to MCMC in order to prevent the stock from hitting the lower bounds when switching from MLE to MCMC. To select the default output option and apply a bump value of 0.01, enter 0.01, where the integer value represents the output detail and the decimal is the bump value.}} \Tstrut\Bstrut\\ & 0 = default; & \\ - & 1 = output likelihood components and associated lambda values; & \\ - & 2 = write report for each mceval; and & \\ - & 3 = make output subdirectory for each MCMC vector. & \\ - & & \\ + & 1 = output likelihood components and associated lambda values; & \\ + & 2 = write report for each mceval; and & \\ + & 3 = make output subdirectory for each MCMC vector. & \Bstrut\\ + & & \Tstrut\Bstrut\\ & & \\ - & & \\ + % & & \\ \hline - \hypertarget{ALK}{0} & Age-length-key (ALK) tolerance level, 0 >= values required & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Value of 0 will not apply any compression. Values > 0 (e.g., 0.0001) will apply compression to the ALK which will increase the speed of calculations. The size of this value will impact the run time of your model, but one should be careful to ensure that the value used does not appreciably impact the estimated quantities relative to no compression of the ALK. 
The suggested value if applied is 0.0001.}} \Tstrut\\ - & & \\ - & & \\ - & & \Bstrut\\ - - \hline - \multicolumn{2}{l}{COND: Seed Value (i.e., 1234)}& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify a seed for data generation. This feature is not available in versions prior to 3.30.15. This is an optional input value which allows for the specification of a random number seed value. If you do not want to specify a seed, skip this input line and end the reading of the starter file with the check value (3.30).}} \Tstrut\\ - & & \\ - & & \\ + \hypertarget{ALK}{0} & Age-length-key (ALK) tolerance level & This feature is disabled in the code; enter 0. \Tstrut\Bstrut\\ + % \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Value of 0 will not apply any compression. Values > 0 (e.g., 0.0001) will apply compression to the ALK which will increase the speed of calculations. The size of this value will impact the run time of your model, but one should be careful to ensure that the value used does not appreciably impact the estimated quantities relative to no compression of the ALK. The suggested value if applied is 0.0001.}} \Tstrut\Bstrut\\ + % & & \\ + % & & \Tstrut\\ + % & & \Tstrut\Bstrut\\ + + \pagebreak + % \hline + \multicolumn{2}{l}{COND: Seed Value (i.e., 1234)}& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify a seed for data generation. This feature is not available in versions prior to v.3.30.15. This is an optional input value allowing for the specification of a random number seed value. If you do not want to specify a seed, skip this input line and end the starter file with the check value (3.30).}} \Tstrut\Bstrut\\ & & \\ + & & \Bstrut\\ & & \\ - \pagebreak +% \pagebreak \hline - \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v3.30 format and a value of 999 indicates that the control and data file are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the 3.24 version files to the new format in the control.ss\_new and data\_echo.ss\_new files. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line. The mortality-growth parameter section has a new sequence, so SS3 v.3.30 cannot read a ss.par file produced by SS3 v.3.24 and earlier, so please ensure that read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}}\Tstrut\\ + \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in v.3.30 format. A value of 999 indicates that the control and data files are in a previous v.3.24 version. The ss\_trans.exe executable should be used and will convert the v.3.24 files, writing the converted files to control.ss\_new and data\_echo.ss\_new in the new format. All ss\_new files are in the v.3.30 format, so starter.ss\_new has v.3.30 on the last line. The mortality-growth parameter section has a new sequence and v.3.30 cannot read a ss.par file produced by v.3.24 and earlier, so ensure that the read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from Stock Synthesis v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\ & & \\ & & \\ - & & \\ & & \\ & & \\ & & \\ & & \\ - & & \\ + & & \\ + \end{longtable} \end{landscape} }
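+\noindent Edits to starter.ss can also be scripted rather than made by hand. The following is a minimal sketch, assuming the r4ss package; the element name shown follows r4ss conventions and may differ between r4ss versions:
+\begin{verbatim}
+library(r4ss)
+# read starter.ss into a named list
+starter <- SS_readstarter("starter.ss")
+# e.g., request a small jitter of the starting values
+starter$jitter_fraction <- 0.1
+# write the modified file back out
+SS_writestarter(starter, dir = ".", overwrite = TRUE)
+\end{verbatim}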
diff --git a/7forecast.tex b/7forecast.tex index 70b2b9ce..f84a84f5 100644 --- a/7forecast.tex +++ b/7forecast.tex @@ -1,7 +1,7 @@ \section{Forecast File} The specification of options for forecasts is contained in the mandatory input file named forecast.ss. See \hyperref[sec:forecast]{Forecast Module: Benchmark and Forecasting Calculations} for additional details. -The term COND appears in the ``Typical Value'' column of this documentation (it does not actually appear in the model files), it indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss\_new files. +The term COND appears in the ``Typical Value'' column of this documentation (it does not actually appear in the model files) and indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss\_new files. \begin{landscape} @@ -11,7 +11,7 @@ \subsection{Forecast File Options (forecast.ss)} { \setlength\extrarowheight{4pt} - \begin{longtable}{p{3.2cm} p{7cm} p{10.8cm}} + \begin{longtable}{p{2cm} p{7cm} p{12cm}} \hline \textbf{Value} & \textbf{Options} & \textbf{Description} \Tstrut\Bstrut\\ @@ -31,15 +31,14 @@ \subsection{Forecast File Options (forecast.ss)} \hline \endlastfoot - - 1 & \hyperlink{Benchmark}{Benchmarks/Reference Points:} & \multirow{1}{1cm}[-0.1cm]{\parbox{11cm}{SS3 checks for consistency of the Forecast specification and the benchmark specification. It will turn benchmarks on if necessary and report a warning.}} \Tstrut\\ - & 0 = omit; & \\ + 1 & \hyperlink{Benchmark}{Benchmarks/Reference Points}:\hypertarget{Bmark_RefPoints}{} & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{SS3 checks for consistency of the forecast specification and the benchmark specification. 
It will turn on benchmarks if necessary and report a warning.}} \Tstrut\\ + & 0 = skip/omit; & \\ & 1 = calculate F\textsubscript{SPR}, F\textsubscript{Btarget}, and F\textsubscript{MSY}; & \\ & 2 = calculate F\textsubscript{SPR}, F\textsubscript{MSY}, F\textsubscript{0.10}; and & \\ - & 3 = add F at B\textsubscript{LIMIT} \Bstrut\\ + & 3 = add F at B\textsubscript{LIMIT} \\ \hline - 1 & MSY Method: & \multirow{1}{1cm}[-0.1cm]{\parbox{11cm}{Specifies basis for calculating a single population level F\textsubscript{MSY} value.}} \Tstrut\\ + 1 & MSY Method: & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Specifies basis for calculating a single population level F\textsubscript{MSY} value.}} \Tstrut\\ & 1 = F\textsubscript{SPR} as proxy; & \\ & 2 = calculate F\textsubscript{MSY}; & \\ & 3 = F\textsubscript{Btarget} as proxy or F\textsubscript{0.10}; & \\ @@ -56,7 +55,7 @@ \subsection{Forecast File Options (forecast.ss)} \pagebreak - \multicolumn{1}{r}{1 0 0 1} & MEY options - Fleet, Cost/F, Price/F, and Include F\textsubscript{MEY} in Optimization & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{To calculate the F\textsubscript{MEY} enter fleet number, the cost per fishing mortality, price per mt, and whether optimization should adjust the fleet's F or keep it at the mean from the benchmark years (0 = no, 1= yes). Care should taken when scaling the values used for cost/F and price/mt. Units in the example show cost=0 and price = 1, so will be identical to MSY in weight. Note, if a fleet's catch is excluded from the F\textsubscript{MEY} search, its catch or profits are still included in the MSY value using historical F levels from benchmark years}} \Tstrut\\ + \multicolumn{1}{r}{1 0 0 1} & MEY options - Fleet, Cost/F, Price/F, and Include F\textsubscript{MEY} in Optimization & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{To calculate the F\textsubscript{MEY} enter fleet number, the cost per fishing mortality, price per mt, and whether optimization should adjust the fleet's F or keep it at the mean from the benchmark years (0 = no, 1= yes). Take care when scaling the values used for cost/F and price/mt. Units in the example show cost = 0 and price = 1, so it will be identical to MSY in weight. Note, if a fleet's catch is excluded from the F\textsubscript{MEY} search, its catch or profits are still included in the MSY value using historical F levels from benchmark years.}} \Tstrut\Bstrut\\ \multicolumn{1}{r}{-9999 0 0 0} & & \\ & & \\ & & \\ @@ -64,68 +63,119 @@ \subsection{Forecast File Options (forecast.ss)} & & \\ \hline - 0.45 & SPR\textsubscript{target} & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{SS3 searches for F multiplier that will produce this level of spawning biomass per recruit (reproductive output) relative to unfished value.}} \Tstrut\\ - & & \\ + 0.45 & SPR\textsubscript{target} & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{SS3 searches for the F multiplier that will produce this level of spawning biomass per recruit (reproductive output) relative to the unfished value.}} \Tstrut\Bstrut\\ + & & \Bstrut\\ \hline - 0.40 & Relative Biomass Target & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{SS3 searches for F multiplier that will produce this level of spawning biomass relative to unfished value. This is not ``per recruit'' and takes into account the spawner-recruitment relationship.}} \Tstrut\\ + 0.40 & Relative Biomass Target & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{SS3 searches for the F multiplier that will produce this level of spawning biomass relative to unfished value. 
This is not ``per recruit'' and takes into account the spawner-recruitment relationship.}} \Tstrut\Bstrut\\ & & \\ & & \\ - + \hline - \multicolumn{2}{l}{COND: Do Benchmark = 3} & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{B\textsubscript{LIMIT} as a fraction of the B\textsubscript{MSY} where a negative value will be applied as a fraction of B0}} \Tstrut\\ - & -0.25 & \Bstrut\\ + \multicolumn{2}{l}{COND: \hyperlink{Bmark_RefPoints}{Benchmarks} = 3} & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{B\textsubscript{LIMIT} as a fraction of the B\textsubscript{MSY} where a negative value will be applied as a fraction of B0}} \Tstrut\\ + & -0.25 & \\ \hline - 0 0 0 0 0 0 0 0 0 0 & Benchmark Years: & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{Requires 10 values, beginning and ending years for (1,2) biology (e.g., growth, natural mortality, maturity, fecundity), (3,4) selectivity, (5,6) relative Fs, (7,8) movement and recruitment distribution; (9,10) stock-recruitment parameters for averaging years in calculating benchmark quantities. If there is no time-varying biology it is recommend to select the first model year for the beginning year for biology.}} \Tstrut\\ + \multirow{1}{1cm}[-0.15cm]{\parbox{2cm}{0 0 0 0 0 0 0 0 0 0}} & Benchmark Years: & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Requires 5 pairs of year values over which the mean of derived vectors will be calculated to use in the benchmark (e.g., MSY) calculations. First pair of years is for biology (e.g., growth, natural mortality, maturity, fecundity); second is selectivity; third is relative Fs among fleets; fourth is movement and recruitment distribution; fifth is stock-recruitment (as the parameters, not as derived quantities). If a factor is not time-varying, select the first model year for the beginning year for the factor; otherwise the variance will be artificially reduced.}} \Tstrut\\ & -999: start year; & \\ & >0: absolute year; and & \\ - & <= 0: year relative to end year. & \\ + & <= 0: year relative to end year. & \Bstrut\\ + & & \\ & & \\ & & \\ \pagebreak - %\hline + % \hline 1 & Benchmark Relative F Basis: & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{The specification does not affect year range for selectivity and biology.}} \Tstrut\\ & 1 = use year range; and & \\ - & 2 = set range for relF same as forecast below. & \\ + & 2 = set range for relF same as \hyperlink{Fcast}{Forecast}. & \Bstrut\\ - %\pagebreak \hline - 2 & Forecast: & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{This input is required but is ignored if benchmarks are turned off. This determines how forecast catches are calculated and removed from the population which is separate from the ``MSY Method'' above. If F\textsubscript{MSY} is selected, it uses whatever proxy (e.g., F\textsubscript{SPR} or F\textsubscript{BTGT}) is selected in the ``MSY Method'' row.}} \Tstrut\\ + 2 & \hypertarget{Fcast}{Forecast}: & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{This input is required but is ignored if benchmarks are turned off. This determines how forecast catches are calculated and removed from the population which is separate from the ``MSY Method'' above. 
If F\textsubscript{MSY} is selected, it uses whatever proxy (e.g., F\textsubscript{SPR} or F\textsubscript{BTGT}) is selected in the ``MSY Method'' row.}} \Tstrut\\ & -1 = none, no forecast years; & \\ & 0 = simple, single forecast year calculated; & \\ & 1 = use F\textsubscript{SPR}; & \\ & 2 = use F\textsubscript{MSY}; & \\ & 3 = use F\textsubscript{Btarget} or F\textsubscript{0.10}; & \\ - & 4 = set to average F scalar for the forecast relative F years below; and & \\ + & 4 = set to mean F scalar for the forecast relative F years below; and & \\ & 5 = input annual F scalar. & \Bstrut\\ \hline - 10 & N forecast years (must be >= 1) & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{At least one forecast year now required if the Forecast option above is >=0 (Note: SS3 v.3.24 allowed zero forecast years).}} \Tstrut\\ - & & \Bstrut\\ + 10 & N forecast years (must be >= 1) & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{At least one forecast year now required if the Forecast option above is >=0 (Note: v.3.24 allowed zero forecast years).}} \Tstrut\\ + & & \\ \hline - 1 & F scalar & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{Only used if Forecast option = 5 (input annual F scalar), but is a required line in the forecast file.}} \Tstrut\Bstrut\\ + 1 & F scalar/multiplier & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Only used if Forecast option = 5 (input annual F scalar), but is a required line in the forecast file.}} \Tstrut\\ & & \\ - + + % \pagebreak \hline - 0 0 0 0 0 0 & Forecast Years: & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{Requires 6 values: beginning and ending years for selectivity, relative Fs, and recruitment distribution that will be used to create averages to use in forecasts. In future, hope to allow random effects to propagate into forecast. Please note, relative F for bycatch only fleets is scaled just like other fleets. More options for this in future.}} \Tstrut\\ - & -999 = start year; & \\ - & >0 = absolute year; and & \\ - & <= 0 = year relative to end year. & \\ - & & \Bstrut\\ + \multicolumn{3}{l}{There are 2 options for entering \hypertarget{FcastYears}{Forecast Years}:} \Tstrut\\ + Option 1: & \multicolumn{2}{l}{\multirow{1}{1cm}[-0.15cm]{\parbox{18.5cm}{This approach for forecast year ranges is no longer recommended because blocks, random effects, and other time-varying parameter changes can now operate on forecast years and the new approach provides better control of averaging.}}} \Tstrut\Bstrut\\ + & & \Tstrut\\ \pagebreak - %\hline - 0 & Forecast Selectivity Option: & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{Determines the selectivity used in the forecast years. Selecting 1 will allow for application of time-varying selectivity parameters (e.g., random walk) to continue into the forecast period.}} \Tstrut\\ - & 0 = forecast selectivity is mean from year range; and & \\ - & 1 = forecast selectivity from annual time-varying parameters. & \Bstrut\\ - %\pagebreak - \hline - 1 & Control Rule: & \multirow{1}{1cm}[-0.15cm]{\parbox{11cm}{Used to apply reductions (``buffer'') to either the catch or F based on the control rule during the forecast period. The buffer value is specified below via the Control Rule Buffer.}} \Tstrut\\ + 0 0 0 0 0 0 & Enter 6 Forecast Year Values & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{To continue to use this pre-v.3.30.22 approach, enter 6 values: beginning and ending years for selectivity, relative Fs, and recruitment distribution. These are used to create means over the specified range of years. 
Values can be entered as the actual year, -999 for the start year, or values of 0 or a negative integer for years relative to the end year. It is important to note:}} \Tstrut\Bstrut\\ + & & \\ + & & \\ + & & \Bstrut\\ + % \pagebreak + & & -- Relative F for bycatch only fleets is scaled just like other fleets.\Tstrut\\ + & & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{-- For selectivity averaging with the new approach, the method code is ``1'', whereas with the old Forecast Selectivity Option, code ``1'' selected time-varying parameters. SS3 accounts for this change internally.}} \Bstrut\\ + & & \\ + & & \\ + & & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{-- Whenever means are calculated, the resulting mean will have an artificially lower variance than if a minimal range of years is selected.}} \\ + & & \\ + 0 & \hypertarget{FcastSelectivity}{Forecast Selectivity Option}: & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Determines selectivity used in the forecast years. Selecting 1 will allow for application of time-varying selectivity parameters (e.g., random walk) to continue into the forecast period. This setting is not included in Option 2.}} \\ + & 0 = forecast selectivity is mean from year range; and & \\ + & 1 = forecast selectivity from annual time-varying parameters. & \\ + % \hline + Option 2: & \multicolumn{2}{l}{\multirow{1}{1cm}[-0.15cm]{\parbox{18.5cm}{To use the new approach, enter -12345 and omit entry of the \hyperlink{FcastSelectivity}{Forecast Selectivity Option}.}}} \\ + % \pagebreak + -12345 & Invoke New Forecast Format & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Biology and selectivity vectors are updated annually in the forecast according to their time-varying parameters. Be sure to check the end year of the blocks and the deviation vectors. Input in this section directs the creation of means over historical years to override any time-varying changes. To invoke taking the mean of a range of historical recruitments after all adjustments and deviations were applied, see the \hyperlink{FcastRecruitment}{Base recruitment in forecast} option. See the Example New Forecast Format Input below.}} \Tstrut\Bstrut\\ + & & \\ + & & \Bstrut\\ + & & \Tstrut\Bstrut\\ + & & \Tstrut\Bstrut\\ + \pagebreak + % \hline + \multicolumn{2}{l}{Example New Forecast Format Input:} & \\ + Factor & Method \hspace{15mm} Start Year & End Year \\ + 1 & 1 \hspace{26mm} 2002 & 2003 \hspace{24mm} \# natural mortality \\ + 4 & 1 \hspace{26mm} 2016 & 2018 \hspace{24mm} \# recruitment distribution \\ + 10 & 1 \hspace{26mm} -999 & 0 \hspace{30mm} \# selectivity \\ + 11 & 1 \hspace{26mm} -3 & 0 \hspace{30mm} \# relative F\\ + 12 & 1 \hspace{26mm} 2006 & 2014 \hspace{24mm} \# recruitment\\ + -9999 & -1 \hspace{25mm} -1 & -1 \Bstrut\\ + % \hline + & Factor & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Factors implemented thus far. Terminate with -9999.}} \\ + & 1 = natural mortality (M); & \\ + & 4 = recruitment distribution; & \\ + & 5 = migration; & \\ + & 10 = selectivity; & \\ + & 11 = relative F; and & \\ + & 12 = recruitment. & \\ + % \pagebreak + % \hline + & Method & \Tstrut\\ + & 0 (or omitted) = continue using time\_vary parameters; & \\ + & 1 = use means of derived factor; & \\ + & 2 (future) = mean of parameter, then apply as if time\_vary & \\ + & Start Year & Enter the actual year, 0 or -999 for styr, or a negative integer for a year relative to endyr. \\ + & End Year & Enter the actual year, or 0 or a negative integer for a year relative to endyr. \\
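+ \multicolumn{3}{l}{\parbox{21cm}{Forecast file edits can likewise be scripted; this one-line sketch assumes the r4ss package, whose function and element names may differ between r4ss versions: \texttt{fore <- SS\_readforecast("forecast.ss"); fore\$Btarget <- 0.40; SS\_writeforecast(fore, dir = ".", overwrite = TRUE)}}} \Tstrut\Bstrut\\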
 \pagebreak % \hline 1 & Control Rule Method: & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{Used to apply reductions (``buffer'') to either the catch or F based on the control rule during the forecast period. The buffer value is specified below via the Control Rule Buffer.}} \Tstrut\\ & 0 = none (additional control rule inputs will be ignored); & \\ & 1 = catch as function of SSB, buffer on F; & \\ & 2 = F as function of SSB, buffer on F; & \\ @@ -133,105 +183,87 @@ \subsection{Forecast File Options (forecast.ss)} & 4 = F is a function of SSB, buffer on catch. & \Bstrut\\ \hline + 0.40 \Tstrut & Control Rule Inflection & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{Relative biomass level to unfished biomass above which F is constant at control rule F\textsubscript{target}. If set to -1, the ratio of B\textsubscript{MSY} to the unfished spawning biomass will automatically be used.}} \Bstrut\\ + & & \Tstrut\Bstrut\\ - 0.40 \Tstrut & Control Rule Inflection & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{Relative biomass level to unfished biomass above which F is constant at control rule F. If set to -1 the ratio of B\textsubscript{MSY} to the unfished spawning biomass will automatically be used.\textsubscript{target}.}} \Bstrut\\ - & & \\ + %\pagebreak + \hline + 0.10 \Tstrut & Control Rule Cutoff & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{Relative biomass level to unfished biomass below which F is set to 0 (management threshold).}} \\ & & \\ - %\pagebreak \hline - 0.10 \Tstrut & Control Rule Cutoff & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{Relative biomass level to unfished biomass below which F is set to 0 (management threshold).}} \\ + % \pagebreak + 0.75 \Tstrut & Control Rule Buffer (multiplier between 0-1.0 or -1) & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Control rule catch or F\textsubscript{target} as a fraction of selected catch or F\textsubscript{MSY} proxy. The buffer will be applied to reduce catch from the estimated overfishing limit. The buffer value is a value between 0-1.0, where a value of 1.0 would set catch equal to the overfishing limit. For example, if the buffer is applied to catch (Control Rule option 3 or 4 above), the catch will equal the buffer times the overfishing limit. Alternatively, a value of -1 will allow the user to input a forecast year specific control rule fraction (added in v.3.30.13).}} \Bstrut\\ + & & \Bstrut\\ + & & \Bstrut\\ & & \Bstrut\\ - %\hline - \pagebreak - 0.75 \Tstrut & Control Rule Buffer (multiplier between 0-1.0 or -1) & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Control rule catch or F\textsubscript{target} as a fraction of selected catch or F\textsubscript{MSY} proxy. The buffer will be applied to reduce catch from the estimated overfishing limit. The buffer value is a value between 0-1.0 where a value of 1.0 would set catch equal to the overfishing limit. As example if the buffer is applied to catch (Control Rule option 3 or 4 above) the catch will equal the buffer times the overfishing limit. Alternatively a value of -1 will allow the user to input a forecast year specific control rule fraction (added in v. 3.30.13).}} \\ - & & \\ - & & \\ - & & \\ - & & \\ & & \Bstrut\\ - %\pagebreak % + \pagebreak %\hline - \multicolumn{2}{l}{COND -1: Conditional input for annual control rule buffer} & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Year and control rule buffer value. Can enter a value for each year, or starting sequence of years. 
The final control rule buffer value will apply to all sequent forecast years.}} \Tstrut\\ + \multicolumn{2}{l}{COND -1: Conditional input for annual control rule buffer} & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Year and control rule buffer value. Can enter a value for each year, or a starting sequence of years. The final control rule buffer value will apply to all subsequent forecast years.}} \\ \multicolumn{1}{r}{2019 0.8} & & \\ \multicolumn{1}{r}{2020 0.6} & & \\ \multicolumn{1}{r}{2021 0.5} & & \\ - \multicolumn{1}{r}{-9999 0} & & \Bstrut\\ + \multicolumn{1}{r}{-9999 0} & & \\ \hline %\pagebreak - 3 \Tstrut & Number of forecast loops (1,2,3) & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{SS3 sequentially goes through the forecast up to three times. Maximum number of forecast loops: 1 = OFL only, 2 = ABC control rule and buffers, 3 = set catches equal to control rule or input catch and redo forecast implementation error.}} \\ - & & \\ - & & \\ - & & \Bstrut\Bstrut\\ + 3 \Tstrut & Number of forecast loops & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{SS3 sequentially goes through the forecast up to three times.}} \\ + & 1 = OFL only; & \\ + & 2 = ABC control rule and buffers; & \\ + & 3 = set catches equal to control rule or input catch and redo forecast implementation error. & \Bstrut\\ \hline - 3 \Tstrut & \hyperlink{appendB}{First forecast loop with stochastic recruitment} & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{If this is set to 1 or 2, then OFL and ABC will be calculated as if there was perfect knowledge about recruitment deviations in the future. If running a long forecast (e.g., 10-100 years) it is recommended to run without recruitment deviations since running long forecasts with recruitment deviations not turned on until loop 3 may have poor results (e.g., crashed stock), especially if below average forecast recruitment is assumed (via ``Forecast recruitment'' option, next input line).}} \Bstrut\\ + % \pagebreak + 3 \Tstrut & \hyperlink{appendB}{First forecast loop with stochastic recruitment} & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{If this is set to 1 or 2, then OFL and ABC will be calculated as if there were perfect knowledge about future recruitment deviations. If running a long forecast (e.g., 10-100 years), it is recommended to run without recruitment deviations because running long forecasts in which recruitment deviations are not turned on until loop 3 may have poor results (e.g., crashed stock), especially if below mean forecast recruitment is assumed (via \hyperlink{FcastRecruitment}{Base recruitment in forecast} option, next input line).}} \Bstrut\\ & & \\ & & \\ & & \\ & & \\ & & \\ - %\hline + % \hline \pagebreak - 1 \Tstrut & \hyperlink{ForeSpawn}{Base recruitment in forecast:} & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{This option controls the base R (to which devs are applied) in the forecast, or the averaging of a range of historical Rs after all adjustments and devs were applied. For options 1 and 2, the next value read is a scalar applied to the base. Option 3 also averages the distribution of R among areas and morphs, but this is now redundant with the Control Averaging input two boxes down. 
Options 3 and 4 require that the user set the forecast R\_dev phase to negative and that the last year of rec\_devs is the end year.}} \\ + 1 \Tstrut & \hyperlink{ForeSpawn}{Base recruitment in forecast:} \hypertarget{FcastRecruitment}{} & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{This option controls the base recruitment (to which deviations are applied) in the forecast, or taking the mean of a range of historical recruitments after all adjustments and deviations were applied. For options 1 and 2, the next value read is a scalar applied to the base. Option 4 requires that the user set the \hyperlink{FcastRecDevPhase}{forecast recruitment deviation phase} to negative (specifically -1 to get a constant mean in MCMC) and that the \hyperlink{RecDevEndYear}{last year of recruitment deviations} is the \hyperlink{EndYear}{end year}.}} \\ & 0 = spawner recruit curve; & \\ & 1 = value*(spawner recruit curve); & \\ - & 2 = value*(virgin R); & \\ - & 3 = mean R from Forecast Year range above, R distribution vectors averaged same way; & \\ - & 4 = mean R from Forecast Year range above, R distribution not affected. & \Bstrut\\ - \hline - 0.7 \Tstrut & Scalar applied to base & \multirow{1}{1cm}[-0.05cm]{\parbox{11cm}{Scalar is ignored unless option 1 and 2 is selected}} \Bstrut\\ + & 2 = value*(virgin recruitment); & \\ + & 3 = deprecated; and & \\ + & 4 = mean recruitment from Forecast Year range above, recruitment distribution not affected. & \Bstrut\\ \hline - 1 & Control Averaging & \multirow{4}{1cm}{\parbox{11cm}{This option was introduced in 3.20.22 to provide averaging of various biology vectors in forecast. It invokes reading a list of sub-options. It works like the averaging of selectivity by averaging the annual vectors, not the parameters used to create the vectors.}} \Tstrut\\ - & 0 = ignore option & \\ - & 1 = use option & \\ - & & \\ \hline - \multicolumn{3}{l}{\multirow{5}{1cm}{\parbox{22cm}{If Control Averaging is selected, then enter a list of biology types and their selected year range for averaging. For all others and those not selected here, biology during forecast will be created annually from parameters, which can be time-varying. Obviously, these averaging options and time-varying parameter options are incompatible. The initial set of types released with 3.30.22 includes those shown in the example below: 1 = M, 4 = Recruitment Distribution, 5 = movement. The Average\_Method is a future option and must be ``1'' at this time. 
The min year and max year values can use -999 just as the Forecast Years entered above.}}} \\ - & & \\ - & & \\ - & & \\ - & & \\ + 0.7 \Tstrut & Scalar/multiplier applied to base & \multirow{1}{1cm}[-0.05cm]{\parbox{12cm}{Scalar is ignored unless option 1 or 2 is selected.}} \Bstrut\\ \hline + 0 & Not used & \Tstrut\Bstrut\\ + \hline - Biology type & Average\_Method & Min year of average \hspace{6mm} Max year of average \Tstrut\\ - 1 & 1 & 1976 \hspace{30mm} 2001 \\ - 4 & 1 & 1986 \hspace{30mm} 1989 \\ - 5 & 1 & 1997 \hspace{30mm} 2005 \\ - -9999 & 0 & 0 \hspace{36mm} 0 \Bstrut\\ - \pagebreak - 2015 \Tstrut & First year for caps and allocations & \multirow{1}{1cm}[-0.10cm]{\parbox{11cm}{Should be after years with fixed inputs.}} \Bstrut\\ + %\pagebreak + 2015 \Tstrut & First year for caps and allocations & \multirow{1}{1cm}[-0.10cm]{\parbox{12cm}{Should be after years with fixed inputs.}} \Bstrut\\ %\pagebreak \hline - 0 \Tstrut & Implementation Error & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{The standard deviation of the log of the ratio between the realized catch and the target catch in the forecast. (set value >0.0 to cause implementation error deviations to be an estimated parameter that will add variance to forecast).}} \Bstrut\\ + 0 \Tstrut & Implementation Error & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{The standard deviation of the log of the ratio between the realized catch and the target catch in the forecast. (Set value > 0.0 to make implementation error deviations an estimated parameter that will add variance to the forecast.)}} \Bstrut\\ & & \Bstrut\\ & & \Bstrut\\ %\pagebreak \hline - 0 \Tstrut & Rebuilder: &\multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{Creates a rebuild.dat file to be used for U.S. West Coast groundfish rebuilder program.}} \\ + 0 \Tstrut & Do West Coast Groundfish Rebuilder Output: &\multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{Creates a rebuild.dat file to be used for U.S. West Coast groundfish rebuilder program.}} \\ & 0 = omit U.S. West Coast rebuilder output; and & \\ & 1 = do abbreviated U.S. West Coast rebuilder output. \Bstrut\\ - \hline - %\pagebreak - 2004 & Rebuilder catch (Year Declared): & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{Input line is required even if Rebuilder = 0, specified in the line above.}} \Tstrut\\ - & >0 = year first catch should be set to zero; and & \\ + % \hline + \pagebreak + 2004 & Rebuilder catch (Year Declared): & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{Input line is required even if Rebuilder = 0, specified in the line above.}} \Tstrut\\ + & > 0 = year first catch should be set to zero; and & \\ & -1 = set to 1999. & \Bstrut\\ \hline %\pagebreak - 2004 & Rebuilder start year (Year Initial): & \multirow{1}{1cm}[-0.2cm]{\parbox{11cm}{Input line is required even if Rebuilder = 0, specified two line above.}} \Tstrut\\ - & >0 = year for current age structure; and & \\ + 2004 & Rebuilder start year (Year Initial): & \multirow{1}{1cm}[-0.2cm]{\parbox{12cm}{Input line is required even if Rebuilder = 0, specified two lines above.}} \Tstrut\\ + & > 0 = year for current age structure; and & \\ & -1 = set to end year +1. & \Bstrut\\ \hline @@ -239,9 +271,9 @@ \subsection{Forecast File Options (forecast.ss)} & 1 = use first-last allocation year; and & \\ & 2 = read season(row) x fleet (column) set below. & \Bstrut\\ - %\hline - \pagebreak - 2 & Basis for maximum forecast catch: & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{The maximum basis for forecasted catch will be implemented for the for the ``First year for caps and allocations'' selected above. 
The maximum catch (biomass or numbers) by fleet is specified below on the ``Maximum total forecast catch by fleet'' line.}} \Tstrut\\ + \hline + % \pagebreak + 2 & Basis for maximum forecast catch: & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{The maximum basis for forecasted catch will be implemented for the ``First year for caps and allocations'' selected above. The maximum catch (biomass or numbers) by fleet is specified below on the ``Maximum total forecast catch by fleet'' line.}} \Tstrut\\ & 2 = total catch biomass; & \\ & 3 = retained catch biomass; & \\ & 5 = total catch numbers; and & \\ @@ -250,37 +282,40 @@ \subsection{Forecast File Options (forecast.ss)} \hline %\pagebreak \multicolumn{3}{l}{COND 2: Conditional input for fleet relative F (Enter: Season, Fleet, Relative F)} \Tstrut\\ - \multicolumn{1}{r}{1 1 0.6} & Fleet allocation by relative F fraction. & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{The fraction of the forecast F value. For a multiple area model user must define a fraction for each fleet and each area. The total fractions must sum to one over all fleets and areas.}} \\ + \multicolumn{1}{r}{1 1 0.6} & Fleet allocation by relative F fraction. & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{The fraction of the forecast F value. For a multiple-area model, the user must define a fraction for each fleet and each area. The total fractions must sum to one over all fleets and areas.}} \\ \multicolumn{1}{r}{1 2 0.4} & & \\ \multicolumn{1}{r}{-9999 0 0} & Terminator line & \Bstrut\\ - \hline - 1 50 & Maximum total forecast catch by fleet (in units specified above total catch/numbers, retained catch/numbers) & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Enter fleet number and its maximum value. Last line of the entry must have fleet number = -9999.}} \Tstrut\\ + \pagebreak + % \hline + 1 50 & \multirow{1}{1cm}[-0.25cm]{\parbox{7cm}{Maximum total forecast catch by fleet (in units specified above total catch/numbers, retained catch/numbers)}} & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Enter fleet number and its maximum value. Last line of the entry must have fleet number = -9999.}} \Tstrut\Bstrut\\ -9999 -1 & & \Bstrut\\ + & & \Bstrut\\ \hline - -9999 -1 & Maximum total catch by area & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Enter area number and its max. Last line of the entry must have area number = -9999.}} \Tstrut\\ + -9999 -1 & Maximum total catch by area & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Enter area number and its max. Last line of the entry must have area number = -9999.}} \Tstrut\\ & -1 = no maximum & \Bstrut\\ \hline - 1 1 & Fleet assignment to allocation group & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Enter list of fleet number and its allocation group number if it is in a group. Last line of the entry must have fleet number = -9999.}} \Tstrut\\ - -9999 -1 & & \Bstrut\\ + 1 1 & Fleet assignment to allocation group & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Enter list of fleet number and its allocation group number if it is in a group. Last line of the entry must have fleet number = -9999.}} \Tstrut\\ + -9999 -1 & & \\ %\pagebreak %\hline - \multicolumn{2}{l}{COND: if N allocation groups is > 0} & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{Enter a year and the allocation fraction to each group for that year. SS3 will fill those values to the end of the forecast, then read another year from this list. Terminate with -9999 in year field. 
Annual values are rescaled to sum to 1.0.}}} \Tstrut \\ + \multicolumn{2}{l}{COND: if N allocation groups is > 0} & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Enter a year and the allocation fraction to each group for that year. SS3 will fill those values to the end of the forecast, then read another year from this list. Terminate with -9999 in year field. Annual values are rescaled to sum to 1.0.}} \\ \multicolumn{1}{r}{2002 1} & Allocation to each group for each year of the forecast & \\ \multicolumn{1}{r}{-9999 1} & & \Bstrut\\ - %\hline - \pagebreak - -1 & Basis for forecast catch: & \multirow{1}{1cm}[-0.25cm]{\parbox{11cm}{The dead or retained value in the forecast catch inputs will be interpreted in terms of numbers or biomass based on the units of the input catch for each fleet.}} \Tstrut\\ + \hline + % \pagebreak + -1 & Basis for forecast catch: & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{The dead or retained value in the forecast catch inputs will be interpreted in terms of numbers or biomass based on the units of the input catch for each fleet.}} \Tstrut\\ & -1 = Read basis with each observation, allows for a mixture of dead, retained, or F basis by different fleets for the fixed catches below; & \\ & 2 = Dead catch (retained + discarded); & \\ & 3 = Retained catch; and & \\ & 99 = Input apical F (the apical F value for the model years can be found in the EXPLOITATION section in the Report file). & \Bstrut\\ - - \hline + + \pagebreak + % \hline \multicolumn{1}{l}{COND: == -1} & \multicolumn{2}{l}{Forecasted catches -- enter one line for each fixed forecast catch} \Tstrut\\ \multicolumn{1}{r}{2012 1 1 1200 2} & \multicolumn{2}{l}{Year \& Season \& Fleet \& Catch or F value \& Basis} \\ \multicolumn{1}{r}{2013 1 1 1400 3} & \multicolumn{2}{l}{Year \& Season \& Fleet \& Catch or F value \& Basis} \\ @@ -322,7 +357,6 @@ \subsection{Benchmark Calculations} \myparagraph{Calculations} The calculation of equilibrium biomass and catch uses the same code that is used to calculate the virgin conditions and the initial equilibrium conditions. This equilibrium calculation code takes into account all morph, timing, biology, selectivity, and movement conditions as they apply while doing the time series calculations. You can verify this by running SS3 to calculate F\textsubscript{MSY}, then hardwiring initial F to equal this value, using F\_method approach 2 so that each annual F is equal to F\textsubscript{MSY}, and then setting forecast F to the same F\textsubscript{MSY}. Then run SS3 without estimation and no recruitment deviations. You should see that the population has an initial equilibrium abundance equal to B\textsubscript{MSY} and stays at this level during the time series and forecast. -\pagebreak \myparagraph{Catch Units} For each fleet, SS3 always calculates catch in terms of biomass (mt) and numbers (1000s) for encountered (selected) catch, dead catch, and retained catch. These three categories differ only when some fleets have discarding or are designated as a bycatch fleet. SS3 uses total dead catch biomass as the quantity that is principally reported and the quantity that is optimized when searching for F\textsubscript{MSY}. The quantity ``dead catch'' may occasionally be referred to as ``yield''. @@ -348,7 +382,7 @@ \subsection{Benchmark Calculations} \item Unfished spawning biomass can be calculated for any year or range of years, so it can change over time as R0, steepness, or biological parameters change.
- \item In the reference points calculation, the Benchmark Years input specifies the range of time over which various quantities are averaged to calculate the reference points. For biology, selectivity, F's, and movement the values being averaged are the year-specific derived quantities. But for the stock-recruitment parameters (R0 and steepness), the parameter values themselves are averaged over time. + \item In the reference points calculation, the Benchmark Years input specifies the range of time over which the means of various quantities are taken to calculate the reference points. For biology, selectivity, F's, and movement, the means are taken over the year-specific derived quantities. But for the stock-recruitment parameters (R0 and steepness), the mean of the parameter values themselves is calculated over time. \item During the time series or forecast, the current year's unfished spawning output (SSB\_unf) is used as the basis for the spawner-recruitment curve against which deviations from the spawner-recruitment curve are applied. So, if R0 is made time-varying, then the spawner-recruit curve itself is changed. However, if the regime shift parameter is time-varying, then this is an offset from the spawner-recruitment curve and not a change in the curve itself. Changes in R0 will change year-specific reference points and change the expected value for annual recruitments, but changes in the regime shift parameter only change the expected value for annual recruitments. diff --git a/8data.tex b/8data.tex index 3b4febc8..000d9ea3 100644 --- a/8data.tex +++ b/8data.tex @@ -75,15 +75,15 @@ \subsubsection{Subseasons and Timing of Events} The treatment of subseasons in SS3 provides more precision in the timing of events compared to earlier model versions. In early versions, v.3.24 and before, there were effectively only two subseasons per season because the age-length-key (ALK) for each observation used the mid-season mean length-at-age and spawning occurred at the beginning of a specified season. Time steps can be broken into subseasons and the ALK can be calculated multiple times over the course of a year: - +\vspace*{-\baselineskip} \begin{center} - \begin{tabular}{| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| } + \begin{tabular}{|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|} \hline ALK & ALK* & ALK* & ALK & ALK* & ALK \Tstrut\Bstrut\\ \hline Subseason 1 & Subseason 2 & Subseason 3 & Subseason 4 & Subseason 5 & Subseason 6 \Tstrut\Bstrut\\ \hline - \multicolumn{6}{l}{ALK* only re-calculated when there is a survey that subseason }\Tstrut\Bstrut\\ + \multicolumn{6}{l}{ALK* only re-calculated when there is a survey in that subseason} \Tstrut\Bstrut\\ \end{tabular} \end{center} @@ -103,7 +103,7 @@ \subsubsection{Subseasons and Timing of Events} \item Survey body weight and size composition is calculated using the nearest subseason. \item Reproductive output now has specified spawn timing (in months fraction) and interpolates growth to that timing. \item Survey numbers calculated at cruise survey timing using $e^{-z}$. - \item Continuous Z for entire season. Same as applied in version v.3.24. + \item Continuous Z for entire season. Same as applied in v.3.24.
\end{itemize} \subsection{Terminology} @@ -111,28 +111,28 @@ \subsection{Model Dimensions} \begin{center} - \begin{longtable}{p{4cm} p{12cm}} + \begin{longtable}{p{3cm} p{12cm}} \hline \textbf{Value} & \textbf{Description} \Tstrut\Bstrut\\ \hline - \#V3.30.XX.XX & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Model version number. This is written by SS3 in the new files and a good idea to keep updated in the input files.}} \Tstrut\\ - & \Bstrut\\ + \#V3.30.XX.XX & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Model version number. This is written by SS3 in the new files, and it is a good idea to keep it updated in the input files.}} \Tstrut\\ + & \Bstrut\\ \hline - \#C data using new survey & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Data file comment. Must start with \#C to be retained then written to top of various output files. These comments can occur anywhere in the data file, but must have \#C in columns 1-2.}} \Tstrut\\ + \#C data using new survey & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Data file comment. Must start with \#C to be retained and then written to the top of various output files. These comments can occur anywhere in the data file, but must have \#C in columns 1-2.}} \Tstrut\\ & \Bstrut\\ \hline 1971 & Start year \Tstrut\Bstrut\\ \hline - 2001 & End year \Tstrut\Bstrut\\ + 2001 & \hypertarget{EndYear}{End year} \Tstrut\Bstrut\\ \hline 1 & Number of seasons per year \Tstrut\Bstrut\\ \hline - 12 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Vector with the number of months in each season. These do not need to be integers. Note: If the sum of this vector is close to 12.0, then it is rescaled to sum to 1.0 so that season duration is a fraction of a year. If the sum is not equal to 12.0, then the entered values are summed and rescaled to 1. So, with one season per year and 3 months per season, the calculated season duration will be 0.25, which allows a quarterly model to be run as if quarters are years. All rates in SS3 are calculated by season (growth, mortality, etc.) using annual rates and season duration.}} \Tstrut\\ + 12 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Vector with the number of months in each season. These do not need to be integers. Note: If the sum of this vector is close to 12.0, then it is rescaled to sum to 1.0 so that season duration is a fraction of a year. If the sum is not equal to 12.0, then the entered values are summed and rescaled to 1. So, with one season per year and 3 months per season, the calculated season duration will be 0.25, which allows a quarterly model to be run as if quarters are years. All rates in SS3 are calculated by season (growth, mortality, etc.) using annual rates and season duration.}} \Tstrut\\ & \\ & \\ & \\ @@ -142,12 +142,12 @@ \subsection{Model Dimensions} & \Bstrut\\ \hline - 2 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{The number of subseasons. Entry must be even and the minimum value is 2.
This is for the purpose of finer temporal granularity in calculating growth and the associated age-length key.}}\Tstrut\\ + 2 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{The number of subseasons. Entry must be even and the minimum value is 2. This is for the purpose of finer temporal granularity in calculating growth and the associated age-length key.}} \Tstrut\\ & \\ & \Bstrut\\ \hline - \hypertarget{RecrTiminig}{1.5} & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Spawning month; spawning biomass is calculated at this time of year (1.5 means January 15) and used as basis for the total recruitment of all settlement events resulting from this spawning.}}\Tstrut\\ + \hypertarget{RecrTiminig}{1.5} & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Spawning month; spawning biomass is calculated at this time of year (1.5 means January 15) and used as the basis for the total recruitment of all settlement events resulting from this spawning.}} \Tstrut\\ & \\ & \Bstrut\\ @@ -158,7 +158,7 @@ \subsection{Model Dimensions} & -1 = one sex and multiply the spawning biomass by the fraction female in the control file. \Bstrut\\ \hline - 20 \Tstrut & Number of ages. The value here will be the plus-group age. SS3 starts at age 0. \\ + 20 \Tstrut & Number of ages. The value here will be the plus-group age. SS3 starts at age 0. \\ \hline 1 & Number of areas \Tstrut\Bstrut\\ @@ -167,15 +167,16 @@ \subsection{Model Dimensions} 2 \Tstrut & Total number of fishing and survey fleets (which now can be in any order).\\ \hline \end{longtable} + \vspace*{-1.7\baselineskip} \end{center} -\subsection{Fleet Definitions } -\hypertarget{GenericFleets}{The} catch data input has been modified to improve the user flexibility to add/subtract fishing and survey fleets to a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (SS3 v.3.24 and earlier) required that fishing fleets be listed first followed by survey only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data, this will be corrected in future versions). Available types are; catch fleet, bycatch only fleet, or survey. +\subsection{Fleet Definitions} +\hypertarget{GenericFleets}{The} catch data input has been modified to give the user more flexibility to add or remove fishing and survey fleets in a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (v.3.24 and earlier) required that fishing fleets be listed first, followed by survey-only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data; this will be corrected in future versions). Available types are: catch fleet, bycatch-only fleet, or survey. \begin{center} - \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{4cm} } - \multicolumn{6}{l}{Inputs that define the fishing and survey fleets:}\\ + \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{4cm}} + \multicolumn{6}{l}{Inputs that define the fishing and survey fleets:} \\ \hline 2 & \multicolumn{5}{l}{Number of fleets which includes survey in any order} \Tstrut\Bstrut\\ @@ -195,8 +196,8 @@ \subsection{Fleet Definitions } \begin{itemize} \item 1 = fleet with input catches; \item 2 = bycatch fleet (all catch discarded) and invokes extra input for treatment in equilibrium and forecast; - \item 3 = survey: assumes no catch removals even if associated catches are specified below.
If you would like to remove survey catch set fleet type to option = 1 with specific month timing for removals (defined below in the ``Timing'' section); and - \item 4 = predator (M2) fleet that adds additional mortality without a fleet F (added in version 3.30.18). Ideal for modeling large mortality events such as fish kills or red tide. Requires additional long parameter lines for a second mortality component (M2) in the control file after the natural mortality/growth parameter lines (entered immediately after the fraction female parameter line). + \item 3 = survey: assumes no catch removals even if associated catches are specified below. If you would like to remove survey catch, set fleet type to option = 1 with specific month timing for removals (defined below in the ``Timing'' section); and + \item 4 = predator (M2) fleet that adds additional mortality without a fleet F (added in v.3.30.18). Ideal for modeling large mortality events such as fish kills or red tide. Requires additional long parameter lines for a second mortality component (M2) in the control file after the natural mortality/growth parameter lines (entered immediately after the fraction female parameter line). \end{itemize} \hypertarget{ObsTiming}{} @@ -224,33 +225,35 @@ \subsection{Fleet Definitions} \hypertarget{CatchMult}{} \myparagraph{Catch Multiplier} -Invokes use of a catch multiplier, which is then entered as a parameter in the mortality-growth parameter section. The estimated value or fixed value of the catch multiplier is used to adjust the observed catch: +Invokes use of a catch multiplier, which is then entered as a parameter in the mortality-growth parameter section. The estimated value or fixed value of the catch multiplier is used to adjust the observed catch: \begin{itemize} \item 0 = No catch multiplier used; and \item 1 = Apply a catch multiplier which is defined as an estimable parameter in the control file after the cohort growth deviation in the biology parameter section. The model's estimated retained catch will be multiplied by this factor before being compared to the observed retained catch. \end{itemize} -A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and has the ability to be time-varying. +A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and has the ability to be time-varying.
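+As a hedged sketch of what this comparison looks like (the notation here is ours, not SS3 output names): if $\hat{C}_t$ is the model's estimated retained catch in year $t$ and $c_{mult}$ is the catch multiplier parameter, the quantity fit to the observed retained catch $C^{obs}_t$ is \[ c_{mult}\,\hat{C}_t \approx C^{obs}_t, \] so a fitted $c_{mult}$ below 1.0 corresponds to model-estimated removals that exceed the recorded catch.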
\subsection{Bycatch Fleets} -The option to include bycatch fleets was introduced in SS3 v.3.30.10. This is an optional input and if no bycatch is to be included in to the catches this section can be ignored. +The option to include bycatch fleets was introduced in v.3.30.10. This is an optional input and, if no bycatch is to be included in the catches, this section can be ignored. -A fishing fleet is designated as a bycatch fleet by indicating that its fleet type is 2. A bycatch fleet creates a fishing mortality, same as a fleet of type 1, but a bycatch fleet has all catch discarded so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because catch multiplier works on retained catch. SS3 will expect that the retention function for this fleet will be set in the selectivity section to type 3, indicating that all selected catch is discarded dead. It is necessary to specify a selectivity pattern for the bycatch fleet and, due to generally lack of data, to externally derive values for the parameters of this selectivity. +A fishing fleet is designated as a bycatch fleet by indicating that its fleet type is 2. A bycatch fleet creates a fishing mortality, same as a fleet of type 1, but a bycatch fleet has all catch discarded so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because the catch multiplier works on retained catch. SS3 will expect that the retention function for this fleet will be set in the selectivity section to type 3, indicating that all selected catch is discarded dead. It is necessary to specify a selectivity pattern for the bycatch fleet and, due to a general lack of data, to externally derive values for the parameters of this selectivity. -All catch from a bycatch fleet is discarded, so one option to use a discard fleet is to enter annual values for the amount (not proportion) that is discarded in each time step. However, it is uncommon to have such data for all years. An alternative approach that has been used principally in the U.S. Gulf of Mexico is to input a time series of effort data for this fleet in the survey section (e.g., effort is a ``survey'' of F, for example, the shrimp trawl fleet in the Gulf of Mexico catches and discards small finfish and an effort time series is available for this fleet) and to input in the discard data section an observation for the average discard over time using the super year approach. Another use of bycatch fleet is to use it to estimate effect of an external source of mortality, such as a red tide event. +All catch from a bycatch fleet is discarded, so one option to use a discard fleet is to enter annual values for the amount (not proportion) that is discarded in each time step. However, it is uncommon to have such data for all years. An alternative approach that has been used principally in the U.S. Gulf of Mexico is to input a time series of effort data for this fleet in the survey section (effort serves as a ``survey'' of F; for example, the shrimp trawl fleet in the Gulf of Mexico catches and discards small finfish, and an effort time series is available for this fleet) and to input in the discard data section an observation for the average discard over time using the super year approach. Another use of a bycatch fleet is to estimate the effect of an external source of mortality, such as a red tide event.
In this usage, there may be no data on the magnitude of the discards and SS3 will then rely solely on the contrast in other data to attempt to estimate the magnitude of the red tide kill that occurred. The benefit of doing this as a bycatch fleet, and not a block on natural mortality, is that the selectivity of the effect can be specified. -Bycatch fleets are not expected to be under the same type of fishery management controls as the retained catch fleets included in the model. This means that when SS3 enters into the reference point equilibrium calculations, it would be incorrect to have SS3 re-scale the magnitude of the F for the bycatch fleet as it searches for the F that produces, for example, F35\%. Related issues apply to the forecast. Consequently, a separate set of controls is provided for bycatch fleets (defined below). Input is required for each fleet designated as fleet type = 2. +Bycatch fleets are not expected to be under the same type of fishery management controls as the retained catch fleets included in the model. This means that when SS3 enters into the reference point equilibrium calculations, it would be incorrect to have SS3 re-scale the magnitude of the F for the bycatch fleet as it searches for the F that produces, for example, F35\%. Related issues apply to the forecast. Consequently, a separate set of controls is provided for bycatch fleets (defined below). Input is required for each fleet designated as fleet type = 2. \noindent If a fleet above was set as a bycatch fleet (fleet type = 2), the following line is required: \begin{center} - \begin{tabular}{p{2.25cm} p{2.65cm} p{2.25cm} p{2.5cm} p{2.5cm} p{2cm} } + \vspace*{-\baselineskip} + \begin{tabular}{p{2.25cm} p{2.5cm} p{2.25cm} p{2.5cm} p{2.5cm} p{2cm}} - \multicolumn{6}{l}{Bycatch fleet input controls:}\\ + \multicolumn{6}{l}{Bycatch fleet input controls:} \\ \hline - a: & b: & c: & d: & e: & f: \Tstrut\\ - Fleet Index & Include in MSY & Fmult & F or First Year & Last Year & Not used \Bstrut\\ + a: & b: & c: & d: & e: & f: \Tstrut\\ + Fleet Index & Include in MSY & Fmult & F or First Year & Last Year & Not used \Bstrut\\ \hline - 2 & 2 & 3 & 1982 & 2010 & 0 \Tstrut\Bstrut\\ + 2 & 2 & 3 & 1982 & 2010 & 0 \Tstrut\Bstrut\\ \hline \end{tabular} \end{center} @@ -313,7 +316,7 @@ \subsection{Bycatch Fleets} \end{enumerate} \end{enumerate} -In version 3.30.14 it was identified that there can be an interaction between the use of bycatch fleets and the search for the $F_{0.1}$ reference point which may results in the search failing. Changes to the search feature were implemented to make the search more robust, however, issue may still be encountered. In these instances it is recommended to not select the $F_{0.1}$ reference point calculation in the forecast file. +In v.3.30.14 it was identified that there can be an interaction between the use of bycatch fleets and the search for the $F_{0.1}$ reference point which may result in the search failing. Changes to the search feature were implemented to make the search more robust; however, issues may still be encountered. In these instances it is recommended not to select the $F_{0.1}$ reference point calculation in the forecast file. \subsection{Predator Fleets} @@ -339,13 +342,13 @@ \subsection{Catch} \hypertarget{ListBased}{There} is no longer a need to specify the number of records to be read; instead the list is terminated by entering a record with the value of -9999 in the year field.
The updated list based approach extends throughout the data file (e.g., catch, length- and age-composition data), the control file (e.g., lambdas), and the forecast file (e.g., total catch by fleet, total catch by area, allocation groups, forecasted catch). -In addition, it is possible to collapse the number of seasons. So, if a season value is greater than the number of seasons for a particular model, that catch is added to the catch for the final season. This is one way to easily collapse a seasonal model into an annual model. The alternative option is to the use of season = 0. This will cause SS3 to distribute the input value of catch equally among the number of seasons. SS3 assumes that catch occurs continuously over seasons and hence is not specified as month in the catch data section. However, all other data types will need to be specified by month. +In addition, it is possible to collapse the number of seasons. So, if a season value is greater than the number of seasons for a particular model, that catch is added to the catch for the final season. This is one way to easily collapse a seasonal model into an annual model. The alternative option is to use season = 0. This will cause SS3 to distribute the input value of catch equally among the number of seasons. SS3 assumes that catch occurs continuously over seasons and hence is not specified as month in the catch data section. However, all other data types will need to be specified by month. -The format for a 2 season model with 2 fisheries looks like the table below. Example is sorted by fleet, but the sort order does not matter. In data.ss\_new, the sort order is fleet, year, season. +The format for a 2 season model with 2 fisheries looks like the table below. The example is sorted by fleet, but the sort order does not matter. In data.ss\_new, the sort order is fleet, year, season. \begin{center} - \begin{tabular}{p{3cm} p{3cm} p{3cm} p{3cm} p{2cm}} - \multicolumn{5}{l}{Catches by year, season for every fleet:}\\ + \begin{tabular}{p{3cm} p{3cm} p{2cm} p{3cm} p{3cm}} + \multicolumn{5}{l}{Catches by year, season for every fleet:} \\ \hline Year & Season & Fleet & Catch & Catch SE \Tstrut\Bstrut\\ \hline @@ -353,12 +356,14 @@ \subsection{Catch} -999 & 2 & 1 & 62 & 0.05 \\ 1975 & 1 & 1 & 876 & 0.05 \\ 1975 & 2 & 1 & 343 & 0.05 \\ - ... & ... & ... & ... & ... \\ + ... & ... & ... & ... & ... \\ + ... & ... & ... & ... & ... \\ -999 & 1 & 2 & 55 & 0.05 \\ -999 & 2 & 2 & 22 & 0.05 \\ 1975 & 1 & 2 & 555 & 0.05 \\ 1975 & 2 & 2 & 873 & 0.05 \\ - ... & ... & ... & ... & ... \\ + ... & ... & ... & ... & ... \\ + ... & ... & ... & ... & ... \\ -9999 & 0 & 0 & 0 & 0 \Bstrut\\ \hline \end{tabular} @@ -366,21 +371,21 @@ \subsection{Catch} \begin{itemize} \item Catch can be in terms of biomass or numbers for each fleet, but cannot be mixed within a fleet. - \item Catch is retained catch (aka landings). If there is discard also, then it is handled in the discard section below.
This is the recommended setup which results in a model estimated retention curve based upon the discard data (specifically discard composition data). However, there may be instances where the data do not support estimation of retention curves. In these instances catches can be specified as all dead (retained + discard estimates). + \item Catch is retained catch (aka landings). If there is discard also, then it is handled in the discard section below. This is the recommended setup, which results in a model-estimated retention curve based upon the discard data (specifically discard composition data). However, there may be instances where the data do not support estimation of retention curves. In these instances, catches can be specified as all dead (retained + discard estimates). \item If there are challenges to estimating discards within the model, catches can be input as total dead without the use of discard data and retention curves. - \item If there is reason to believe that the retained catch values underestimate the true catch, then it is possible in the retention parameter set up to create the ability for the model to estimate the degree of unrecorded catch. However, this is better handled with the new catch multiplier option. + \item If there is reason to believe that the retained catch values underestimate the true catch, then it is possible in the retention parameter set-up to create the ability for the model to estimate the degree of unrecorded catch. However, this is better handled with the new catch multiplier option. \end{itemize} \subsection{Indices} -Indices are data that are compared to aggregate quantities in the model. Typically the index is a measure of selected fish abundance, but this data section also allows for the index to be related to a fishing fleet's F, or to another quantity estimated by the model. The first section of the ``Indices'' setup contains the fleet number, units, error distribution, and whether additional output (SD Report) will be written to the Report file for each fleet that has index data. +Indices are data that are compared to aggregate quantities in the model. Typically the index is a measure of selected fish abundance, but this data section also allows for the index to be related to a fishing fleet's F, or to another quantity estimated by the model. The first section of the ``Indices'' setup contains the fleet number, units, error distribution, and whether additional output (SD Report) will be written to the Report file for each fleet that has index data. \begin{center} - \begin{tabular}{p{3cm} p{2cm} p{3cm} p{6cm}} - \multicolumn{4}{l}{Catch-per-unit-effort (CPUE) and Survey Abundance Observations:}\\ + \begin{tabular}{p{3cm} p{3cm} p{4cm} p{4cm}} + \multicolumn{4}{l}{Catch-per-unit-effort (CPUE) and Survey Abundance Observations:} \\ \hline - Fleet/ & & Error & \Tstrut\\ - Survey & Units & Distribution & SD Report \Bstrut\\ + Fleet/ & & Error & \Tstrut\\ + Survey & Units & Distribution & SD Report \Bstrut\\ \hline 1 & 1 & 0 & 0 \Tstrut\\ 2 & 1 & 0 & 0 \\ @@ -465,10 +470,10 @@ \subsection{Indices} \item If the fishery or survey has time-varying selectivity, then this changing selectivity will be taken into account when calculating expected values for the CPUE or survey index. \item Year values that are before start year or after end year are excluded from the model, so the easiest way to include provisional data in a data file is to put a negative sign on its year value. \item Duplicate survey observations for the same year are not allowed. - \item Observations that are to be included in the model but not included in the negative log likelihood need to have a negative sign on their fleet ID. Previously the code for not using observations was to enter the observation itself as a negative value. However, that old approach prevented use of a Z-score environmental index as a ``survey''. This approach is best for single or select years from an index rather than an approach to remove a whole index.
Removing a whole index from the model should be done through the use of lambdas at the bottom of the control file which will eliminate the index from model fitting. + \item Observations that are to be included in the model but not included in the negative log likelihood need to have a negative sign on their fleet ID. Previously the code for not using observations was to enter the observation itself as a negative value. However, that old approach prevented use of a Z-score environmental index as a ``survey''. This approach is best for single or select years from an index rather than an approach to remove a whole index. Removing a whole index from the model should be done through the use of lambdas at the bottom of the control file, which will eliminate the index from model fitting. \item Observations can be entered in any order, except if the super-year feature is used. - \item Super-periods are turned on and then turned back off again by putting a negative sign on the season. Previously, super-periods were started and stopped by entering -9999 and the -9998 in the SE field. See the \hyperlink{SuperPeriod}{Data Super-Period} section of this manual for more information. - \item If the statistical analysis used to create the CPUE index of a fishery has been conducted in such a way that its inherent size/age selectivity differs from the size/age selectivity estimated from the fishery's size and age composition, then you may want to enter the CPUE as if it was a separate survey and with a selectivity that differs from the fishery's estimated selectivity. The need for this split arises because the fishery size and age composition should be derived through a catch-weighted approach (to appropriately represent the removals by the fishery) and the CPUE should be derived through an area-weighted approach to better serve as a survey of stock abundance. + \item Super-periods are turned on and then turned back off again by putting a negative sign on the season. Previously, super-periods were started and stopped by entering -9999 and then -9998 in the SE field. See the \hyperlink{SuperPeriod}{Data Super-Period} section of this manual for more information. + \item If the statistical analysis used to create the CPUE index of a fishery has been conducted in such a way that its inherent size/age selectivity differs from the size/age selectivity estimated from the fishery's size and age composition, then you may want to enter the CPUE as if it was a separate survey and with a selectivity that differs from the fishery's estimated selectivity. The need for this split arises because the fishery size and age composition should be derived through a catch-weighted approach (to appropriately represent the removals by the fishery) and the CPUE should be derived through an area-weighted approach to better serve as a survey of stock abundance.
\end{itemize} \subsection{Discard} @@ -487,13 +492,13 @@ \subsection{Discard} \begin{center} \begin{tabular}{p{2cm} p{3cm} p{3cm} p{3cm} p{3cm}} \hline - 1 & \multicolumn{4}{l}{Number of fleets with discard observations}\Tstrut\Bstrut\\ + 1 & \multicolumn{4}{l}{Number of fleets with discard observations} \Tstrut\Bstrut\\ \hline - Fleet & Units & \multicolumn{3}{l}{Error Distribution}\Tstrut\Bstrut\\ + Fleet & Units & \multicolumn{3}{l}{Error Distribution} \Tstrut\Bstrut\\ \hline - 1 & 2 & \multicolumn{3}{l}{-1}\Tstrut\Bstrut\\ + 1 & 2 & \multicolumn{3}{l}{-1} \Tstrut\Bstrut\\ \hline - Year & Month & Fleet & Observation & Standard Error \Tstrut\Bstrut\\ + Year & Month & Fleet & Observation & Standard Error \Tstrut\Bstrut\\ \hline 1980 & 7 & 1 & 0.05 & 0.25 \Tstrut\\ 1991 & 7 & 1 & 0.10 & 0.25 \\ @@ -502,7 +507,7 @@ \subsection{Discard} \end{tabular} \end{center} -Note that although the user must specify a month for the observed discard data, the unit for discard data is in terms of a season rather than a specific month. So, if using a seasonal model, the input month values must corresponding to some time during the correct season. The actual value will not matter because the discard amount is calculated for the entirety of the season. However, discard length or age observations will be treated by entered observation month. +Note that although the user must specify a month for the observed discard data, the unit for discard data is in terms of a season rather than a specific month. So, if using a seasonal model, the input month values must correspond to some time during the correct season. The actual value will not matter because the discard amount is calculated for the entirety of the season. However, discard length or age observations will be treated according to the entered observation month. \myparagraph{Discard Units} The options are: @@ -515,11 +520,11 @@ \subsection{Discard} \myparagraph{Discard Error Distribution} The four options for discard error are: \begin{itemize} - \item >0 = degrees of freedom for Student's t-distribution used to scale mean body weight deviations. Value of error in data file is interpreted as CV of the observation; + \item >0 = degrees of freedom for Student's t-distribution used to scale mean body weight deviations. Value of error in data file is interpreted as CV of the observation; \item 0 = normal distribution, value of error in data file is interpreted as CV of the observation; \item -1 = normal distribution, value of error in data file is interpreted as standard error of the observation; \item -2 = lognormal distribution, value of error in data file is interpreted as standard error of the observation in log space; and - \item -3 = truncated normal distribution (new with SS3 v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates. + \item -3 = truncated normal distribution (new with v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates. \end{itemize} \myparagraph{Discard Notes} @@ -529,27 +534,27 @@ \subsection{Discard} \item Zero (0.0) is a legitimate discard observation, unless lognormal error structure is used. \item Duplicate discard observations from a fleet for the same year are not allowed. \item Observations can be entered in any order, except if the super-period feature is used.
- \item Note that in the control file you will enter information for retention such that 1-retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with SS3 v.3.30). + \item Note that in the control file you will enter information for retention such that 1 - retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with v.3.30). \end{itemize} \myparagraph{Cautionary Note} -The use of CV as the measure of variance can cause a small discard value to appear to be overly precise, even with the minimum standard error of the discard observation set to 0.001. In the control file, there is an option to add an extra amount of variance. This amount is added to the standard error, not to the CV, to help correct this problem of underestimated variance. +The use of CV as the measure of variance can cause a small discard value to appear to be overly precise, even with the minimum standard error of the discard observation set to 0.001. In the control file, there is an option to add an extra amount of variance. This amount is added to the standard error, not to the CV, to help correct this problem of underestimated variance. \subsection{Mean Body Weight or Length} -This is the overall mean body weight or length across all selected sizes and ages. This may be useful in situations where individual fish are not measured but mean weight is obtained by counting the number of fish in a specified sample, e.g., a 25 kg basket. +This is the overall mean body weight or length across all selected sizes and ages. This may be useful in situations where individual fish are not measured but mean weight is obtained by counting the number of fish in a specified sample (e.g., a 25 kg basket). \begin{center} - \begin{tabular}{p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{2cm} p{2.8cm}} - \multicolumn{7}{l}{Mean Body Weight Data Section:}\\ + \begin{tabular}{p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{2cm} p{1cm}} + \multicolumn{7}{l}{Mean Body Weight Data Section:} \\ \hline - 1 & \multicolumn{6}{l}{Use mean body size data (0/1) } \Tstrut\Bstrut\\ + 1 & \multicolumn{6}{l}{Use mean body size data (0/1)} \Tstrut\Bstrut\\ \hline \multicolumn{7}{l}{COND > 0:}\Tstrut\\ - 30 & \multicolumn{6}{l}{Degrees of freedom for Student's t-distribution used to evaluate mean body weight } \\ - & \multicolumn{6}{l}{deviation.}\Bstrut\\ + 30 & \multicolumn{6}{l}{Degrees of freedom for Student's t-distribution used to evaluate mean body weight} \\ + & \multicolumn{6}{l}{deviation.} \Bstrut\\ \hline - Year & Month & Fleet & Partition & Type & Observation & CV\Tstrut\Bstrut\\ + Year & Month & Fleet & Partition & Type & Observation & CV \Tstrut\Bstrut\\ \hline 1990 & 7 & 1 & 0 & 1 & 4.0 & 0.95 \Tstrut\\ 1990 & 7 & 1 & 0 & 1 & 1.0 & 0.95 \\ @@ -561,8 +566,9 @@ \subsection{Mean Body Weight or Length} \myparagraph{Partition} Mean weight data and composition data require specification of what group the sample originated from (e.g., discard, retained, discard + retained). +Note: if retention is not defined in the selectivity section, observations with Partition = 2 will be changed to Partition = 0. \begin{itemize} - \item 0 = whole catch in units of weight (discard + retained); + \item 0 = combined catch in units of weight (whole, i.e.,
discard + retained); \item 1 = discarded catch in units of weight; and \item 2 = retained catch in units of weight. \end{itemize} @@ -575,29 +581,29 @@ \subsection{Mean Body Weight or Length} \myparagraph{Observation - Units} -Units must correspond to the units of body weight, normally in kilograms, (or mean length in cm). The expected value of mean body weight (or mean length) is calculated in a way that incorporates effect of selectivity and retention. +Units must correspond to the units of body weight, normally in kilograms (or mean length in cm). The expected value of mean body weight (or mean length) is calculated in a way that incorporates the effect of selectivity and retention. \myparagraph{Error} Error is entered as the CV of the observed mean body weight (or mean length). \subsection{Population Length Bins} -The first part of the length composition section sets up the bin structure for the population. These bins define the granularity of the age-length key and the coarseness of the length selectivity. Fine bins create smoother distributions, but a larger and slower running model. +The first part of the length composition section sets up the bin structure for the population. These bins define the granularity of the age-length key and the coarseness of the length selectivity. Fine bins create smoother distributions, but a larger and slower running model. First read a single value to select one of three population length bin methods, then any conditional input for options 2 and 3: \begin{center} \begin{tabular}{p{2cm} p{5cm} p{8cm}} \hline - 1 & \multicolumn{2}{l}{Use data bins to be read later. No additional input here.} \Tstrut\Bstrut\\ + 1 & \multicolumn{2}{l}{Use data bins to be read later. No additional input here.} \Tstrut\Bstrut\\ \hline 2 & \multicolumn{2}{l}{Generate from bin width, min, and max; read next:} \Tstrut\\ \multirow{4}{2cm}[-0.1cm]{} & 2 & Bin width \\ - & 10 & Lower size of first bin\\ - & 82 & Lower size of largest bin\\ + & 10 & Lower size of first bin \\ + & 82 & Lower size of largest bin \\ \multicolumn{3}{l}{The number of bins is then calculated from: (max Lread - min Lread)/(bin width) + 1}\Bstrut\\ \hline 3 & \multicolumn{2}{l}{Read 1 value for number of bins, and then read vector of bin boundaries} \Tstrut\\ - \multirow{2}{2cm}[-0.1cm]{} & 37 & Number of population length bins to be read\\ - & 10 12 14 ... 82 & Vector containing lower edge of each population size bin \Bstrut\\ + \multirow{2}{2cm}[-0.1cm]{} & 37 & Number of population length bins to be read \\ + & 10 12 14 ... 82 & Vector containing lower edge of each population size bin \Bstrut\\ \hline \end{tabular}
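+As a worked check of the option 2 formula using the values shown above: $(82 - 10)/2 + 1 = 37$ population length bins, which matches the 37 bins read directly under option 3.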
@@ -606,42 +612,42 @@ \subsection{Population Length Bins} \myparagraph{Notes} There are some items for users to consider when setting up population length bins: \begin{itemize} - \item For option 2, bin width should be a factor of min size and max size. For options 2 and 3, the data length bins must not be wider than the population length bins and the boundaries of the bins do not have to align. The transition matrix between population and data length bins is output to echoinput.sso. + \item For option 2, bin width should be a factor of min size and max size. For options 2 and 3, the data length bins must not be wider than the population length bins and the boundaries of the bins do not have to align. The transition matrix between population and data length bins is output to echoinput.sso. \item The mean size at settlement (virtual recruitment age) is set equal to the min size of the first population length bin. \item When using more and finer population length bins, the model will create smoother length selectivity curves and smoother length distributions in the age-length key, but run more slowly (more calculations to do). - \item The mean weight-at-length, maturity-at-length and size-selectivity are based on the mid-length of the population bins. So these quantities will be rougher approximations if broad bins are defined. + \item The mean weight-at-length, maturity-at-length and size-selectivity are based on the mid-length of the population bins. So these quantities will be rougher approximations if broad bins are defined. - \item Provide a wide enough range of population size bins so that the mean body weight-at-age will be calculated correctly for the youngest and oldest fish. If the growth curve extends beyond the largest size bin, then these fish will be assigned a length equal to the mid-bin size for the purpose of calculating their body weight. + \item Provide a wide enough range of population size bins so that the mean body weight-at-age will be calculated correctly for the youngest and oldest fish. If the growth curve extends beyond the largest size bin, then these fish will be assigned a length equal to the mid-bin size for the purpose of calculating their body weight. - \item While exploring the performance of models with finer bin structure, a potentially pathological situation has been identified. When the bin structure is coarse (note that some applications have used 10 cm bin widths for the largest fish), it is possible for a selectivity slope parameter or a retention parameter to become so steep that all of the action occurs within the range of a single size bin. In this case, the model will see zero gradient of the log likelihood with respect to that parameter and convergence will be hampered. + \item While exploring the performance of models with finer bin structure, a potentially pathological situation has been identified. When the bin structure is coarse (note that some applications have used 10 cm bin widths for the largest fish), it is possible for a selectivity slope parameter or a retention parameter to become so steep that all of the action occurs within the range of a single size bin. In this case, the model will see zero gradient of the log likelihood with respect to that parameter and convergence will be hampered. - \item A value read near the end of the starter.ss file defines the degree of tail compression used for the age-length key, called ALK tolerance. If this is set to 0.0, then no compression is used and all cells of the age-length key are processed, even though they may contain trivial (e.g., 1 e-13) fraction of the fish at a given age. With tail compression of, say 0.0001, the model, at the beginning of each phase, will calculate the min and max length bin to process for each age of each morphs ALK and compress accordingly. Depending on how many extra bins are outside this range, you may see speed increases near 10-20\%. Large values of ALK tolerance, say 0.1, will create a sharp end to each distribution and likely will impede convergence. It is recommended to start with a value of 0 and if model speed is an issue, explore values greater than 0 and evaluate the trade-off between model estimates and run time. The user is encouraged to explore this feature.
+ \item A value read near the end of the starter.ss file defines the degree of tail compression used for the age-length key, called ALK tolerance. If this is set to 0.0, then no compression is used and all cells of the age-length key are processed, even though they may contain a trivial (e.g., 1e-13) fraction of the fish at a given age. With tail compression of, say 0.0001, the model, at the beginning of each phase, will calculate the min and max length bin to process for each age of each morph's ALK and compress accordingly. Depending on how many extra bins are outside this range, you may see speed increases near 10-20\%. Large values of ALK tolerance, say 0.1, will create a sharp end to each distribution and likely will impede convergence. It is recommended to start with a value of 0 and, if model speed is an issue, explore values greater than 0 and evaluate the trade-off between model estimates and run time. The user is encouraged to explore this feature. \end{itemize} \subsection{Length Composition Data Structure} \begin{tabular}{p{2cm} p{13cm}} - \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used:\Tstrut\Bstrut}\\ + \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used:} \Tstrut\Bstrut\\ \hline - 1 & Use length composition data (0/1/2)\Tstrut\Bstrut\\ + 1 & Use length composition data (0/1/2) \Tstrut\Bstrut\\ \hline \end{tabular} -If the value 0 is entered, then skip all length related inputs below and skip to the age data setup section. If value 1 is entered, all data weighting options for composition data apply equally to all partitions within a fleet. If value 2 is entered, then the data weighting options are applied by the partition specified. Note that the partitions must be entered in numerical order within each fleet. +If the value 0 is entered, then skip all length-related inputs below and skip to the age data setup section. If value 1 is entered, all data weighting options for composition data apply equally to all partitions within a fleet. If value 2 is entered, then the data weighting options are applied by the partition specified. Note that the partitions must be entered in numerical order within each fleet. If the value for fleet is negative, then the vector of inputs is copied to all partitions (0 = combined, 1 = discard, and 2 = retained) for that fleet and all higher numbered fleets. This is a good practice so that the user controls the values used for all fleets. -\begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{1.5cm} p{1.7cm}} - \multicolumn{7}{l}{Example table of length composition settings when ``Use length composition data'' = 1 (where here }\\ - \multicolumn{7}{l}{the first fleet has multinomial error structure with no associated parameter, and the second fleet}\\ - \multicolumn{7}{l}{uses Dirichlet-multinomial structure):}\\ +\begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{1cm}} + \multicolumn{7}{l}{Example table of length composition settings when ``Use length composition data'' = 1 (where here} \\ + \multicolumn{7}{l}{the first fleet has multinomial error structure with no associated parameter, and the second fleet} \\ + \multicolumn{7}{l}{uses Dirichlet-multinomial structure):} \\ \hline - Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\ - Tail & added & males \& & Compress. & Error & Param. & Sample\\ - Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\ + Min. & Constant & Combine & & Comp. & & Min.
\Tstrut\\ Tail & added & males \& & Compress. & Error & Param. & Sample \\ Compress. & to prop. & females & Bins & Dist. & Select & Size \Bstrut\\ \hline 0 & 0.0001 & 0 & 0 & 0 & 0 & 0.1 \Tstrut\\ 0 & 0.0001 & 0 & 0 & 1 & 1 & 0.1 \Bstrut\\ @@ -649,14 +655,14 @@ \subsection{Length Composition Data Structure} \end{tabular} -\begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}} - \multicolumn{9}{l}{Example table of length composition settings when ``Use length composition data'' = 2 (where here}\\ - \multicolumn{9}{l}{the -1 in the fleet column applies the first parameter to all partitions for fleet 1 while fleet 2 has}\\ - \multicolumn{9}{l}{separate parameters for discards and retained fish):}\\ +\begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}} + \multicolumn{9}{l}{Example table of length composition settings when ``Use length composition data'' = 2 (where here} \\ + \multicolumn{9}{l}{the -1 in the fleet column applies the first parameter to all partitions for fleet 1 while fleet 2 has} \\ + \multicolumn{9}{l}{separate parameters for discards and retained fish):} \\ \hline - & & Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\ - & & Tail & added & males \& & Compress. & Error & Param. & Sample\\ - Fleet & Partition & Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\ + & & Min. & Constant & Combine & & Comp. & & Min. \Tstrut\\ + & & Tail & added & males \& & Compress. & Error & Param. & Sample \\ + Fleet & Partition & Compress. & to prop. & females & Bins & Dist. & Select & Size \Bstrut\\ \hline -1 & 0 & 0 & 0.0001 & 0 & 0 & 1 & 1 & 0.1 \Tstrut\\ 2 & 1 & 0 & 0.0001 & 0 & 0 & 1 & 2 & 0.1 \\ @@ -669,16 +675,16 @@ \subsection{Length Composition Data Structure} %\pagebreak \myparagraph{Minimum Tail Compression} -Compress tails of composition until observed proportion is greater than this value; negative value causes no compression; Advise using no compression if data are very sparse, and especially if the set-up is using age composition within length bins because of the sparseness of these data. +Compress tails of composition until the observed proportion is greater than this value; a negative value causes no compression. No compression is advised if data are very sparse, especially if the set-up uses age composition within length bins, because of the sparseness of these data. A single fish being observed with tail compression on will cause the entire vector to be collapsed to that bin. \myparagraph{Added Constant to Proportions} -Constant added to observed and expected proportions at length and age to make logL calculations more robust. Tail compression occurs before adding this constant. Proportions are renormalized to sum to 1.0 after constant is added. +Constant added to observed and expected proportions at length and age to make logL calculations more robust. Tail compression occurs before adding this constant. Proportions are renormalized to sum to 1.0 after the constant is added. \myparagraph{Combine Males \& Females} -Combine males into females at or below this bin number. This is useful if the sex determination of very small fish is doubtful so allows the small fish to be treated as combined sex. If Combine Males \& Females > 0, then add males into females for bins 1 through this number, zero out the males, set male data to start at the first bin above this bin.
Note that Combine Males \& Females > 0 is entered as a bin index, not as the size associated with that bin. Comparable option is available for age composition data. +Combine males into females at or below this bin number. This is useful if the sex determination of very small fish is doubtful, as it allows the small fish to be treated as combined sex. If Combine Males \& Females > 0, then add males into females for bins 1 through this number, zero out the males, and set the male data to start at the first bin above this bin. Note that Combine Males \& Females > 0 is entered as a bin index, not as the size associated with that bin. A comparable option is available for age composition data. \myparagraph{Compress Bins} -This option allows for the compression of length or age bins beyond a specific length or age by each data source. As an example, a value of 5 in the compress bins column would condense the final five length bins for the specified data source. +This option allows for the compression of length or age bins beyond a specific length or age by each data source. As an example, a value of 5 in the compress bins column would condense the final five length bins for the specified data source. \myparagraph{Composition Error Distribution} The options are: @@ -693,13 +699,15 @@ \subsection{Length Composition Data Structure} \begin{itemize} \item This parameterization of the Dirichlet-multinomial Error has not been tested, so this option should be used with caution. The Dirichlet Multinomial Error data weighting approach will calculate the effective sample size based on equation 12 from \citet{thorson-model-based-2017} where the estimated parameter will now be in terms of $\beta$. The application of this method should follow the same steps detailed above for option 1. \end{itemize} - \item 3 = Multivariate Tweedie. + % \item 3 = Multivariate Tweedie. (add when MV Tweedie is implemented) \end{itemize} %\pagebreak \myparagraph{Parameter Select} -Value that indicates the groups of composition data for estimation of the Dirichlet or Multivariate Tweedie parameter for weighting composition data. +Value that indicates the groups of composition data for estimation of the Dirichlet +% or Multivariate Tweedie (add when MV Tweedie is implemented) +parameter for weighting composition data. \begin{itemize} \item 0 = Default; and @@ -707,14 +715,14 @@ \subsection{Length Composition Data Structure} \end{itemize} \myparagraph{Minimum Sample Size} -The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. SS3 v.3.24 had an implicit minimum sample size value of 1. +The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. Version 3.24 had an implicit minimum sample size value of 1. \myparagraph{Additional information on Dirichlet Parameter Number and Effective Sample Sizes} -If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. So each fleet could use a unique Dirichlet-multinomial parameter, or all could share the same, or any combination of unique and shared. The requested number of Dirichlet-multinomial parameters are specified as parameter lines in the control file immediately after the selectivity parameter section.
Please note that age-compositions Dirichlet-multinomial parameters are continued after length-compositions, so a model with one fleet and both data types would presumably require two new Dirichlet-multinomial parameters. +If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. So each fleet could use a unique Dirichlet-multinomial parameter, or all could share the same, or any combination of unique and shared. The requested Dirichlet-multinomial parameters are specified as parameter lines in the control file immediately after the selectivity parameter section. Please note that age-composition Dirichlet-multinomial parameters are continued after the length-composition parameters, so a model with one fleet and both data types would presumably require two new Dirichlet-multinomial parameters. -The Dirichlet estimates the effective sample size as $N_{eff}=\frac{1}{1+\theta}+\frac{N\theta}{1+\theta}$ where $\theta$ is the estimated parameter and $N$ is the input sample size. Stock Synthesis estimates the log of the Dirichlet-multinomial parameter such that $\hat{\theta}_{\text{fishery}} = e^{-0.6072} = 0.54$ where assuming $N=100$ for the fishery would result in an effective sample size equal to 35.7. +The Dirichlet estimates the effective sample size as $N_{eff}=\frac{1}{1+\theta}+\frac{N\theta}{1+\theta}$, where $\theta$ is the estimated parameter and $N$ is the input sample size. For example, if Stock Synthesis estimates the log of the Dirichlet-multinomial parameter such that $\hat{\theta}_{\text{fishery}} = e^{-0.6072} = 0.54$, then assuming $N=100$ for the fishery would result in an effective sample size equal to 35.7. -This formula for effective sample size implies that, as the Stock Synthesis parameter ln(DM\_theta) goes to large values (i.e., 20), then the adjusted sample size will converge to the input sample size. In this case, small changes in the value of the ln(DM\_theta) parameter has no action, and the derivative of the negative log-likelihood is zero with respect to the parameter, which means the Hessian will be singular and cannot be inverted. To avoid this non-invertible Hessian when the ln(DM\_theta) parameter becomes large, turn it off while fixing it at the high value. This is equivalent to turning off down-weighting of fleets where evidence suggests that the input sample sizes are reasonable. +This formula for effective sample size implies that, as the Stock Synthesis parameter ln(DM\_theta) goes to large values (e.g., 20), the adjusted sample size will converge to the input sample size. In this case, small changes in the value of the ln(DM\_theta) parameter have no effect, and the derivative of the negative log-likelihood is zero with respect to the parameter, which means the Hessian will be singular and cannot be inverted. To avoid this non-invertible Hessian when the ln(DM\_theta) parameter becomes large, turn it off while fixing it at the high value. This is equivalent to turning off down-weighting for fleets where evidence suggests that the input sample sizes are reasonable. For additional information about the Dirichlet-multinomial please see \citet{thorson-model-based-2017} and the detailed \hyperlink{DataWeight}{Data Weighting} section. @@ -722,15 +730,15 @@ \subsection{Length Composition Data} Composition data can be entered as proportions, numbers, or values of observations by length bin based on data expansions.
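For users preparing composition inputs outside of SS3, converting raw numbers-at-length into proportions is a simple renormalization. A minimal base-R sketch, using hypothetical counts (all values here are illustrative assumptions; SS3 accepts counts, proportions, or expanded values directly, so this step is optional):
\begin{verbatim}
# Hypothetical raw counts of fish by length data bin
counts <- c(0, 3, 12, 25, 40, 31, 18, 7, 2)

# Renormalize so the observation is expressed as proportions summing to 1
props <- counts / sum(counts)
round(props, 4)
\end{verbatim}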
-The data bins do not need to cover all observed lengths. The selection of data bin structure should be based on the observed distribution of lengths and the assumed growth curve. If growth asymptotes at larger lengths, having additional length bins across these sizes may not contribute information to the model and may slow model run time. Additionally, the lower length bin selection should be selected such that, depending on the size selection, to allow for information on smaller fish and possible patterns in recruitment. While set separately users should ensure that the length and age bins align. It is recommended to explore multiple configurations of length and age bins to determine the impact of this choice on model estimation. +The data bins do not need to cover all observed lengths. The selection of data bin structure should be based on the observed distribution of lengths and the assumed growth curve. If growth asymptotes at larger lengths, having additional length bins across these sizes may not contribute information to the model and may slow model run time. Additionally, the lowest length bin should be chosen, given the size selectivity, so that information on smaller fish and possible patterns in recruitment can enter the model. While the length and age bins are set separately, users should ensure that they align. It is recommended to explore multiple configurations of length and age bins to determine the impact of this choice on model estimation. Specify the length composition data as: \begin{center} \begin{tabular}{p{4cm} p{10cm}} \hline - 28 & Number of length bins for data \\ + 28 & Number of length bins for data \\ \hline - 26 28 30 ... 80 & Vector of length bins associated with the length data\\ + 26 28 30 ... 80 & Vector of length bins associated with the length data \\ \hline \end{tabular} \end{center} @@ -748,14 +756,15 @@ \subsection{Length Composition Data} \end{center} Example of a single length composition observation: +\vspace*{-1cm} % used this because the spacing was off in the pdf \begin{center} \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{5cm}} \multicolumn{7}{l}{} \\ \hline - Year & Month & Fleet & Sex & Partition & Nsamp & data vector\Tstrut\Bstrut\\ + Year & Month & Fleet & Sex & Partition & Nsamp & data vector \Tstrut\Bstrut\\ \hline 1986 & 1 & 1 & 3 & 0 & 20 & \Tstrut\\ - ... & ...& ... & ... & ...& ... & ... \\ + ... & ... & ... & ... & ... & ... & ... \\ -9999 & 0 & 0 & 0 & 0 & 0 & <0 repeated for each element of the data vector above> \Bstrut\\ \hline @@ -774,9 +783,10 @@ \subsection{Length Composition Data} \end{itemize} \myparagraph{Partition} -Partition indicates samples from either discards,retained, or combined. +Partition indicates samples from either combined, discard, or retained catch. +Note: if retention is not defined in the selectivity section, observations with Partition = 2 will be changed to Partition = 0. \begin{itemize} - \item 0 = combined; + \item 0 = combined (whole, e.g., discard + retained); \item 1 = discard; and \item 2 = retained. \end{itemize} @@ -790,18 +800,19 @@ \subsection{Length Composition Data} \myparagraph{Note} When processing data to be input into SS3, all observed fish of sizes smaller than the first bin should be added to the first bin and all observed fish larger than the last bin should be condensed into the last bin. -The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data.
Starting in SS3 v.3.30, the model will continue to read length composition data until an pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 indicates to the model the end of length composition lines to be read. +The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in v.3.30, the model will continue to read length composition data until a pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 indicates to the model the end of the length composition lines to be read. -Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector and the male vector could each be on a separate line because ADMB reads values consecutively from the input file and will move to the next line as necessary to read additional values. +Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector, and the male vector could each be on a separate line, because ADMB reads values consecutively from the input file and will move to the next line as necessary to read additional values. -The composition observations can be in any order and replicate observations by a year for a fleet are allowed (unlike survey and discard data). However, if the super-period approach is used, then each super-periods' observations must be contiguous in the data file. +The composition observations can be in any order, and replicate observations within a year for a fleet are allowed (unlike survey and discard data). However, if the super-period approach is used, then each super-period's observations must be contiguous in the data file. \subsection{Age Composition Option} -The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, then skips reading the bin structure and all reading of other age composition data inputs. +The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, then the model skips reading the bin structure and all other age composition data inputs. \begin{center} - \begin{tabular}{p{2cm} p{13cm} } + \vspace*{-\baselineskip} + \begin{tabular}{p{3cm} p{13cm}} \hline - 17 \Tstrut & Number of age bins; can be equal to 0 if age data are not used; do not include a vector of agebins if the number of age bins is set equal to 0.\Bstrut\\ + 17 \Tstrut & Number of age bins; can be equal to 0 if age data are not used; do not include a vector of agebins if the number of age bins is set equal to 0. \Bstrut\\ \hline \end{tabular} \end{center} @@ -810,45 +821,47 @@ \subsection{Age Composition Option} \subsubsection{Age Composition Bins} If a positive number of age bins is read, then the bin definition is read next. \begin{center} - \begin{tabular}{p{3cm} p{12cm} } + \vspace*{-\baselineskip} + \begin{tabular}{p{3cm} p{13cm}} \hline - 1 2 3 ... 20 25 & Vector of ages\Tstrut\Bstrut\\ + 1 2 3 ... 20 25 & Vector of ages \Tstrut\Bstrut\\ \hline \end{tabular} \end{center} -The bins are in terms of observed age (here age) and entered as the lower edge of each bin. Each ageing imprecision definition is used to create a matrix that translates true age structure into age structure.
The first and last age' bins work as accumulators. So in the example any age 0 fish that are caught would be assigned to the age = 1 bin. +The bins are in terms of observed age (here age$'$) and entered as the lower edge of each bin. Each ageing imprecision definition is used to create a matrix that translates true age structure into age$'$ structure. The first and last age$'$ bins work as accumulators. So, in the example, any age 0 fish that are caught would be assigned to the age$'$ = 1 bin. \subsubsection{Ageing Error} Here, the capability to create a distribution of age (e.g., age with possible bias and imprecision) from true age is created. One or many ageing error definitions can be created. For each, the model will expect an input vector of mean age and a vector of standard deviations associated with the mean age. \begin{center} - \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{3.5cm} p{2.5cm} } + \begin{longtable}{p{2cm} p{2cm} p{2cm} p{1cm} p{4.5cm} p{2.5cm}} \hline - \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate}\Tstrut\Bstrut\\ - \hline\\ - \multicolumn{6}{l}{Example with no bias and very little uncertainty at age:}\Tstrut\Bstrut\\ + \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate} \Tstrut\Bstrut\\ + \hline \\ + \multicolumn{6}{l}{Example with no bias and very little uncertainty at age:} \Tstrut\Bstrut\\ \hline - Age-0 & Age-1 & Age-2 & ... & Max Age & \Tstrut\Bstrut\\ + Age-0 & Age-1 & Age-2 & ... & Max Age & \Tstrut\Bstrut\\ \hline - -1 & -1 & -1 & ... & -1 & \#Mean Age\Tstrut\\ - 0.001 & 0.001 & 0.001 & ... & 0.001 & \#SD\Bstrut\\ - \hline\\ - \multicolumn{6}{l}{Example with no bias and some uncertainty at age:}\Tstrut\Bstrut\\ + -1 & -1 & -1 & ... & -1 & \#Mean Age \Tstrut\\ + 0.001 & 0.001 & 0.001 & ... & 0.001 & \#SD \Bstrut\\ + \hline \\ + \multicolumn{6}{l}{Example with no bias and some uncertainty at age:} \Tstrut\Bstrut\\ \hline - 0.5 & 1.5 & 2.5 & ... & Max Age + 0.5 & \#Mean Age\Tstrut\\ - 0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age\Bstrut\\ - \hline\\ - \multicolumn{6}{l}{Example with bias and uncertainty at age:}\Tstrut\Bstrut\\ + 0.5 & 1.5 & 2.5 & ... & Max Age + 0.5 & \#Mean Age \Tstrut\\ + 0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age \Bstrut\\ + \hline \\ + \multicolumn{6}{l}{Example with bias and uncertainty at age:} \Tstrut\Bstrut\\ \hline - 0.5 & 1.4 & 2.3 & ... & Max Age + Age Bias & \#Mean Age\Tstrut\\ - 0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age\Bstrut\\ + 0.5 & 1.4 & 2.3 & ... & Max Age + Age Bias & \#Mean Age \Tstrut\\ + 0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age \Bstrut\\ \hline - \end{tabular} + \end{longtable} \end{center} +\vspace*{-1.2cm} -In principle, one could have year or laboratory specific matrices for ageing error. For each matrix, enter a vector with mean age for each true age; if there is no ageing bias, then set age equal to true age + 0.5. Alternatively, -1 value for mean age means to set it equal to true age plus 0.5. The addition of +0.5 is needed so that fish will get assigned to the intended integer age. The length of the input vector is equal to the population maximum age plus one (0-max age), with the first entry being for age 0 fish and the last for fish of population maximum age even if the maximum age bin for the data is lower than the population maximum age. The following line is a a vector with the standard deviation of age for each true age with a normal distribution assumption. +In principle, one could have year- or laboratory-specific matrices for ageing error. For each matrix, enter a vector with mean age for each true age; if there is no ageing bias, then set the mean age equal to true age + 0.5. Alternatively, a value of -1 for mean age means to set it equal to true age plus 0.5. The addition of +0.5 is needed so that fish will get assigned to the intended integer age. The length of the input vector is equal to the population maximum age plus one (0 to max age), with the first entry being for age 0 fish and the last for fish of population maximum age, even if the maximum age bin for the data is lower than the population maximum age. The following line is a vector with the standard deviation of age for each true age, assuming a normal distribution.
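As an illustration of the mean age and standard deviation vectors described above, a minimal base-R sketch that builds an unbiased ageing error definition (mean age = true age + 0.5); the maximum age and the standard deviations are hypothetical values for illustration only:
\begin{verbatim}
max_age  <- 20                        # hypothetical population maximum age
true_age <- 0:max_age                 # max_age + 1 entries, ages 0 through max age
mean_age <- true_age + 0.5            # no ageing bias; entering -1 has the same effect
sd_age   <- 0.001 + 0.05 * true_age   # hypothetical imprecision increasing with age
\end{verbatim}
Writing mean_age and sd_age as two consecutive rows yields one ageing error definition in the format of the table above.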
-The model is able to create one ageing error matrix from parameters, rather than from an input vector. The range of conditions in which this new feature will perform well has not been evaluated, so it should be considered as a preliminary implementation and subject to modification. To invoke this option, for the selected ageing error vector, set the standard deviation of ageing error to a negative value for age 0. This will cause creation of an ageing error matrix from parameters and any age or size-at-age data that specify use of this age error pattern will use this matrix. Then in the control file, add a full parameter line below the cohort growth deviation parameter (or the movement parameter lines if used) in the mortality growth parameter section. These parameters are described in the control file section of this manual. +The model is able to create one ageing error matrix from parameters, rather than from an input vector. The range of conditions in which this new feature will perform well has not been evaluated, so it should be considered as a preliminary implementation and subject to modification. To invoke this option, for the selected ageing error vector, set the standard deviation of ageing error to a negative value for age 0. This will cause creation of an ageing error matrix from parameters, and any age or size-at-age data that specify use of this age error pattern will use this matrix. Then, in the control file, add a full parameter line below the cohort growth deviation parameter (or the movement parameter lines, if used) in the mortality growth parameter section. These parameters are described in the control file section of this manual. Code for ageing error calculation can be found in \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/blob/main/SS_miscfxn.tpl}{SS\_miscfxn.tpl}, search for function ``get\_age\_age'' or ``SS\_Label\_Function 45''. @@ -856,11 +869,11 @@ \subsubsection{Age Composition Specification} If age data are included in the model, the following set-up is required, similar to the length data section. \begin{tabular}{p{2cm} p{2cm} p{2cm} p{1.5cm} p{1.5cm} p{2cm} p{2cm}} - \multicolumn{7}{l}{Specify bin compression and error structure for age composition data for each fleet:}\\ + \multicolumn{7}{l}{Specify bin compression and error structure for age composition data for each fleet:} \\ \hline - Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\ - Tail & added & males \& & Compress. & Error & Param. & Sample\\ - Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\ + Min. & Constant & Combine & & Comp. & & Min. \Tstrut\\ + Tail & added & males \& & Compress. & Error & Param. & Sample \\ + Compress. & to prop. & females & Bins & Dist.
& Select & Size \Bstrut\\ \hline 0 & 0.0001 & 1 & 0 & 0 & 0 & 1 \Tstrut\\ 0 & 0.0001 & 1 & 0 & 0 & 0 & 1 \Bstrut\\ @@ -870,42 +883,42 @@ \subsubsection{Age Composition Specification} \begin{tabular}{p{1cm} p{14cm}} & \\ - \multicolumn{2}{l}{Specify method by which length bin range for age obs will be interpreted:}\\ + \multicolumn{2}{l}{Specify method by which length bin range for age obs will be interpreted:} \\ \hline 1 & Bin method for age data \Tstrut\\ - & 1 = value refers to population bin index\\ - & 2 = value refers to data bin index\\ - & 3 = value is actual length (which must correspond to population length bin \\ - & boundary)\Bstrut\\ + & 1 = value refers to population bin index \\ + & 2 = value refers to data bin index \\ + & 3 = value is actual length (which must correspond to population length bin \\ + & boundary) \Bstrut\\ \hline \end{tabular} -\begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{2.1cm}} - \multicolumn{10}{l}{ }\\ - \multicolumn{10}{l}{An example age composition observation:}\\ +\begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{2.5cm}} + \multicolumn{10}{l}{} \\ + \multicolumn{10}{l}{An example age composition observation:} \\ \hline Year & Month & Fleet & Sex & Partition & Age Err & Lbin lo & Lbin hi & Nsamp & Data Vector \Tstrut\\ \hline - 1987 & 1 & 1 & 3 & 0 & 2 & -1 & -1 & 79 & \Tstrut\\ - -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\Bstrut\\ + 1987 & 1 & 1 & 3 & 0 & 2 & -1 & -1 & 79 & \Tstrut\\ + -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \Bstrut\\ \hline \end{tabular} The syntax for Sex, Partition, and the data vector is the same as for length data. The data vector has female values then male values, just as for the length composition data. -\pagebreak +% \pagebreak \myparagraph{Age Error} Age error (Age Err) identifies which ageing error matrix to use to generate the expected value for this observation. \myparagraph{Lbin Low and Lbin High} -Lbin lo and Lbin hi are the range of length bins that this age composition observation refers to. Normally these are entered with a value of -1 and -1 to select the full size range. Whether these are entered as population bin number, length data bin number, or actual length is controlled by the value of the length bin range method above. +Lbin lo and Lbin hi are the range of length bins that this age composition observation refers to. Normally these are entered with a value of -1 and -1 to select the full size range. Whether these are entered as population bin number, length data bin number, or actual length is controlled by the value of the length bin range method above. \begin{itemize} \item Entering value of 0 or -1 for Lbin lo converts Lbin lo to 1; \item Entering value of 0 or -1 for Lbin hi converts Lbin hi to Maxbin; - \item It is strongly advised to use the -1 codes to select the full size range. If you use explicit values, then the model could unintentionally exclude information from some size range if the population bin structure is changed. + \item It is strongly advised to use the -1 codes to select the full size range (see the sketch after this list). If you use explicit values, then the model could unintentionally exclude information from some size range if the population bin structure is changed. \item In reporting to the comp\_report.sso, the reported Lbin\_lo and Lbin\_hi values are always converted to actual length. \end{itemize}
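The Lbin lo and Lbin hi conversion rules in the list above can be mimicked outside the model as a quick sanity check on input values. A minimal base-R sketch (illustration only; SS3 performs this conversion internally):
\begin{verbatim}
# Convert entered Lbin lo/hi codes to the bin range the model will use
convert_lbin <- function(lbin_lo, lbin_hi, maxbin) {
  if (lbin_lo %in% c(0, -1)) lbin_lo <- 1       # 0 or -1 selects the first bin
  if (lbin_hi %in% c(0, -1)) lbin_hi <- maxbin  # 0 or -1 selects the last bin
  c(lbin_lo, lbin_hi)
}
convert_lbin(-1, -1, maxbin = 28)  # full size range: bins 1 through 28
\end{verbatim}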
@@ -914,47 +927,46 @@ \subsubsection{Age Composition Specification} \subsection{Conditional Age-at-Length} -Use of conditional age-at-length will greatly increase the total number of age composition observations and associated model run time but there can be several advantages to inputting ages in this fashion. First, it avoids double use of fish for both age and size information because the age information is considered conditional on the length information. Second, it contains more detailed information about the relationship between size and age so provides stronger ability to estimate growth parameters, especially the variance of size-at-age. Lastly, where age data are collected in a length-stratified program, the conditional age-at-length approach can directly match the protocols of the sampling program. +Use of conditional age-at-length will greatly increase the total number of age composition observations and associated model run time, but there can be several advantages to inputting ages in this fashion. First, it avoids double use of fish for both age and size information because the age information is considered conditional on the length information. Second, it contains more detailed information about the relationship between size and age and so provides a stronger ability to estimate growth parameters, especially the variance of size-at-age. Lastly, where age data are collected in a length-stratified program, the conditional age-at-length approach can directly match the protocols of the sampling program. -However, simulation research has shown that the use of conditional age-at-length data can result in biased growth estimates in the presence of unaccounted for age-based movement when length-based selectivity is assumed \citep{lee-effects-2017}, when other age-based processes (e.g., mortality) are not accounted for \citep{lee-use-2019}, or based on the age sampling protocol \citep{piner-evaluation-2016}. Understanding how data are collected (e.g., random, length-conditioned samples) and the biology of the stock is important when using conditional age-at-length data for a fleet. +However, simulation research has shown that the use of conditional age-at-length data can result in biased growth estimates in the presence of unaccounted-for age-based movement when length-based selectivity is assumed \citep{lee-effects-2017}, when other age-based processes (e.g., mortality) are not accounted for \citep{lee-use-2019}, or based on the age sampling protocol \citep{piner-evaluation-2016}. Understanding how data are collected (e.g., random, length-conditioned samples) and the biology of the stock is important when using conditional age-at-length data for a fleet. -In a two sex model, it is best to enter these conditional age-at-length data as single sex observations (sex = 1 for females and = 2 for males), rather than as joint sex observations (sex = 3). Inputting joint sex observations comes with a more rigid assumption about sex ratios within each length bin. Using separate vectors for each sex allows 100\% of the expected composition to be fit to 100\% observations within each sex, whereas with the sex = 3 option, you would have a bad fit if the sex ratio were out of balance with the model expectation, even if the observed proportion at age within each sex exactly matched the model expectation for that age. Additionally, inputting the conditional age-at-length data as single sex observations isolates the age composition data from any sex selectivity as well.
+In a two sex model, it is best to enter these conditional age-at-length data as single sex observations (sex = 1 for females and = 2 for males), rather than as joint sex observations (sex = 3). Inputting joint sex observations comes with a more rigid assumption about sex ratios within each length bin. Using separate vectors for each sex allows 100\% of the expected composition to be fit to 100\% of the observations within each sex, whereas with the sex = 3 option, you would have a bad fit if the sex ratio were out of balance with the model expectation, even if the observed proportion at age within each sex exactly matched the model expectation for that age. Additionally, inputting the conditional age-at-length data as single sex observations isolates the age composition data from any sex selectivity as well. -Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets of other years within a fleet. To treat age data as conditional on length, Lbin\_lo and Lbin\_hi are used to select a subset of the total size range. This is different than setting Lbin\_lo and Lbin\_hi both to -1 to select the entire size -range, which treats the data entered on this line within the age composition data section as marginal age -composition data. +Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets or for other years within a fleet. To treat age data as conditional on length, Lbin\_lo and Lbin\_hi are used to select a subset of the total size range. This is different than setting Lbin\_lo and Lbin\_hi both to -1 to select the entire size range, which treats the data entered on this line within the age composition data section as marginal age composition data. -\begin{tabular}{p{0.9cm} p{1cm} p{0.9cm} p{0.9cm} p{1.5cm} p{0.9cm} p{0.9cm} p{0.9cm} p{1cm} p{2.4cm}} - \multicolumn{10}{l}{ }\\ - \multicolumn{10}{l}{An example conditional age-at-length composition observations:}\\ +\vspace*{-\baselineskip} +\begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{2.5cm}} + \multicolumn{10}{l}{} \\ + \multicolumn{10}{l}{An example of conditional age-at-length composition observations:} \\ \hline Year & Month & Fleet & Sex & Partition & Age Err & Lbin lo & Lbin hi & Nsamp & Data Vector \Tstrut\\ \hline - 1987 & 1 & 1 & 1 & 0 & 2 & 10 & 10 & 18 & \Tstrut\\ - 1987 & 1 & 1 & 1 & 0 & 2 & 12 & 12 & 24 & \Tstrut\\ - 1987 & 1 & 1 & 1 & 0 & 2 & 14 & 14 & 16 & \Tstrut\\ - 1987 & 1 & 1 & 1 & 0 & 2 & 16 & 16 & 30 & \Tstrut\\ - -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\Bstrut\\ + 1987 & 1 & 1 & 1 & 0 & 2 & 10 & 10 & 18 & \Tstrut\\ + 1987 & 1 & 1 & 1 & 0 & 2 & 12 & 12 & 24 & \Tstrut\\ + 1987 & 1 & 1 & 1 & 0 & 2 & 14 & 14 & 16 & \Tstrut\\ + 1987 & 1 & 1 & 1 & 0 & 2 & 16 & 16 & 30 & \Tstrut\\ + -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \Bstrut\\ \hline \end{tabular} In this example observation, the age data are treated as being conditional on the 2 cm length bins of 10--11.99, 12--13.99, 14--15.99, and 16--17.99 cm. If there are no observations of ages for a specific sex within a length bin for a specific year, that entry may be omitted. \subsection{Mean Length or Body Weight-at-Age} -The model also accepts input of mean length-at-age or mean body weight-at-age. This is done in terms of observed age, not true age, to take into account the effects of ageing imprecision on expected mean size-at-age.
If the value of the Age Error column is positive, then the observation is interpreted as mean length-at-age. If the value of the Age Error column is negative, then the observation is interpreted as mean body weight-at-age and the abs(Age Error) is used as Age Error. +The model also accepts input of mean length-at-age or mean body weight-at-age. This is done in terms of observed age, not true age, to take into account the effects of ageing imprecision on expected mean size-at-age. If the value of the Age Error column is positive, then the observation is interpreted as mean length-at-age. If the value of the Age Error column is negative, then the observation is interpreted as mean body weight-at-age and the abs(Age Error) is used as Age Error. \begin{center} - \begin{tabular}{p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{1cm} p{3.2cm} p{3.2cm} } + \begin{tabular}{p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{1cm} p{3.2cm} p{3.2cm}} \hline 1 & \multicolumn{8}{l}{Use mean size-at-age observation (0 = none, 1 = read data matrix)} \Tstrut\\ - \multicolumn{9}{l}{An example observation:}\Bstrut\\ + \multicolumn{9}{l}{An example observation:} \Bstrut\\ \hline & & & & & Age & & Data Vector & Sample Size \Tstrut\\ Yr & Month & Fleet & Sex & Part. & Err. & Ignore & (Female - Male) & (Female - Male) \Bstrut\\ \hline 1989 & 7 & 1 & 3 & 0 & 1 & 999 & & \Tstrut\\ ... & & & & & & & & \\ - -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 \Bstrut\\ + -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 \Bstrut\\ \hline \end{tabular} \end{center} @@ -962,14 +974,9 @@ \subsection{Mean Length or Body Weight-at-Age} \myparagraph{Note} \begin{itemize} - \item Negatively valued mean size entries with be ignored in fitting. This - feature allows the user to see the fit to a provisional observation without having that - observation affect the model. - \item A number of fish value of 0 will cause mean size value to be ignored in fitting. This - feature allows the user to see the fit to a provisional observation without having that - observation affect the model. - \item Negative value for year causes observation to not be included in the working matrix. This feature is the easiest way to include observations in a data file but not to use them in a - particular model scenario. + \item Negatively valued mean size entries will be ignored in fitting. This feature allows the user to see the fit to a provisional observation without having that observation affect the model. + \item A number-of-fish value of 0 will cause the mean size value to be ignored in fitting. If the number of fish is zero, a non-zero mean size or body weight-at-age value, such as 0.01 or -999, still needs to be entered. This feature allows the user to see the fit to a provisional observation without having that observation affect the model. + \item A negative value for year causes the observation to not be included in the working matrix. This feature is the easiest way to include observations in a data file but not use them in a particular model scenario. \item Each sex's data vector and N fish vector has length equal to the number of age bins. \item The ``Ignore'' column is not used (set aside for future options) but still needs to have default values in that column (any value). \item Where age data are being entered as conditional age-at-length and growth parameters are being estimated, it may be useful to include a mean length-at-age vector with nil emphasis to provide another view on the model's estimates.
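Where the mean size-at-age observations described above are assembled from individual age-length records, the data vector and sample size vector can be computed directly. A minimal base-R sketch with hypothetical fish records (a real file would also need the header fields and sex structure shown in the table above):
\begin{verbatim}
# Hypothetical individual records: assigned (observed) age and length (cm)
age <- c(1, 1, 2, 2, 2, 3, 3, 4)
len <- c(21, 24, 30, 32, 31, 38, 40, 44)

mean_len_at_age <- tapply(len, age, mean)    # entries for the data vector
n_fish_at_age   <- tapply(len, age, length)  # entries for the sample size vector
\end{verbatim}
Per the notes above, age bins with no fish would receive a sample size of 0 together with a placeholder (non-zero) mean size value.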
@@ -979,15 +986,16 @@ \subsection{Mean Length or Body Weight-at-Age} \hypertarget{env-dat}{} \subsection{Environmental Data} -The model accepts input of time series of environmental data. Parameters can be made to be time-varying by making them a function of one of these environmental time series. In v.3.30.16 the option to specify the centering of environmental data by either using the mean of the by mean and the z-score. +The model accepts input of time series of environmental data. Parameters can be made to be time-varying by making them a function of one of these environmental time series. In v.3.30.16, an option was added to specify the centering of the environmental data, either by subtracting the mean or by subtracting the mean and dividing by the standard deviation (z-score). \begin{center} - \begin{tabular}{p{1cm} p{3cm} p{3cm} p{7.5cm}} - \multicolumn{4}{l}{Parameter values can be a function of an environmental data series: }\\ + \vspace*{-\baselineskip} + \begin{tabular}{p{1cm} p{3cm} p{3cm} p{3cm}} + \multicolumn{4}{l}{Parameter values can be a function of an environmental data series:} \\ \hline - 1 & \multicolumn{3}{l}{Number of environmental variables}\Tstrut\Bstrut\\ - \multicolumn{4}{l}{ The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or }\\ - \multicolumn{4}{l}{ by subtracting the mean of the environmental variable (-2) based on the year column value. }\\ + 1 & \multicolumn{3}{l}{Number of environmental variables} \Tstrut\Bstrut\\ + \multicolumn{4}{l}{The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or} \\ + \multicolumn{4}{l}{by subtracting the mean of the environmental variable (-2) based on the year column value.} \\ \hline \multicolumn{4}{l}{COND > 0 Example of 2 environmental observations:} \Tstrut\\ & Year & Variable & Value \Bstrut\\ @@ -1001,12 +1009,12 @@ \subsection{Environmental Data} \end{tabular} \end{center} -The final two lines in the example above indicate in that variable series 1 will be centered by subtracting the mean and dividing by the standard deviation (indicated by the -1 value in the year column). The environmental variable series 2 will be centered by subtracting the mean of the time series (indicated by the -2 value in the year column). The input in the ``value'' column for both of the final two lines specifying the centering of the time series is ignored by the model. The control file also will need to be modified to in the long parameter line column ``env-var'' for the selected parameter. This feature was added in v.3.30.16. +The final two lines in the example above indicate that variable series 1 will be centered by subtracting the mean and dividing by the standard deviation (indicated by the -1 value in the year column). The environmental variable series 2 will be centered by subtracting the mean of the time series (indicated by the -2 value in the year column). The input in the ``value'' column for both of the final two lines specifying the centering of the time series is ignored by the model. The control file will also need to be modified in the ``env-var'' column of the long parameter line for the selected parameter. This feature was added in v.3.30.16. \myparagraph{Note} \begin{itemize} - \item Any years for which environmental data are not read are assigned a value of 0.0. None of the current link functions contain a link parameter that acts as an offset. Therefore, you should subtract the mean from your data. This lessens the problem with missing observations, but does not eliminate it.
A better approach for dealing with missing observations is to use a different approach for the environmental effect on the parameter. Set up the parameter to have random deviations for all years, then enter the zero-centered environmental information as a \hyperlink{SpecialSurvey}{special survey of type 35} and set up the catchability of that survey to be a link to the deviation vector. This is a more complex approach, but it is superior in treatment of missing values and superior in allowing for error in the environmental relationship. + \item Any years for which environmental data are not read are assigned a value of 0.0. None of the current link functions contain a link parameter that acts as an offset. Therefore, you should subtract the mean from your data. This lessens the problem with missing observations, but does not eliminate it. A better approach for dealing with missing observations is to model the environmental effect on the parameter differently. Set up the parameter to have random deviations for all years, then enter the zero-centered environmental information as a \hyperlink{SpecialSurvey}{special survey of type 35} and set up the catchability of that survey to be a link to the deviation vector. This is a more complex approach, but it is superior in treatment of missing values and superior in allowing for error in the environmental relationship. \item Users can assign environmental conditions for the initial equilibrium year by including environmental data for one year before the start year. However, this works only for recruitment parameters, not biology or selectivity parameters. \item Environmental data can be read for up to 100 years after the end year of the model. Then, if the recruitment-environment link has been activated, the future recruitments will be influenced by any future environmental data. This could be used to create a future ``regime shift'' by setting historical values of the relevant environmental variable equal to zero and future values equal to 1, in which case the magnitude of the regime shift would be dictated by the value of the environmental linkage parameter. Note that only future recruitment and growth can be modified by the environmental inputs; there are no options to allow environmentally-linked selectivity in the forecast years. \end{itemize} @@ -1019,46 +1027,50 @@ \subsection{Generalized Size Composition Data} \item Each method has ``units'' so the frequencies can be in units of biomass or numbers. \item Each method has ``scale'' so the bins can be in terms of weight or length (including ability to convert bin definitions in pounds or inches to kg or cm). \item The composition data is input as females then males, just like all other composition data in SS3. In a two-sex model, the new composition data can be combined sex, single sex, or both sexes. - \item The generalized size composition data can be from the combined discard and retained, discard only, or retained only. + \item The generalized size composition data can be from the combined discard and retained (i.e., whole), discard only, or retained only. \item There are two options for treating fish that are in population size bins smaller than the smallest size frequency bin. \begin{itemize} - \item Option 1: By default, these fish are excluded (unlike length composition data where the small fish are automatically accumulated up into the first bin.)
+ \item Option 1: By default, these fish are excluded (unlike length composition data where the small fish are automatically accumulated up into the first bin). \item Option 2: If the first size bin is given as a negative value, then accumulation is turned on and the absolute value of the entered value is used as the lower edge of the first size bin. \end{itemize} \end{itemize} \begin{center} \begin{tabular}{p{1.4cm} p{0.7cm} p{12.8 cm}} - \multicolumn{3}{l}{Example entry:}\\ + \multicolumn{3}{l}{Example entry:} \\ \hline - 2 & & Number (N) of size frequency methods to be read. If this value is 0, then omit all entries below. A value of -1 (or any negative value) triggers expanded optional inputs below that allow for either Dirichlet of two parameter Multivariate (MV) Tweedie likelihood for fitting these data. \Tstrut\Bstrut\\ + 2 & & Number (N) of size frequency methods to be read. If this value is 0, then omit all entries below. A value of -1 (or any negative value) triggers expanded optional inputs below that allow for a Dirichlet likelihood + % or two parameter Multivariate (MV) Tweedie likelihood (add when MV Tweedie is implemented) + for fitting these data. \Tstrut\Bstrut\\ \hline - \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\ + \multicolumn{3}{l}{COND < 0 - Number of size frequency methods} \Tstrut\\ \multicolumn{2}{l}{2} & Number of size frequency methods to read \Tstrut\\ - \multicolumn{3}{l}{END COND < 0} \Bstrut\\ + \multicolumn{3}{l}{END COND < 0} \Bstrut\\ \hline - \multicolumn{2}{r}{25 15} & Number of bins per method\Tstrut\\ - \multicolumn{2}{r}{2 2} & Units per each method (1 = biomass, 2 = numbers)\\ - \multicolumn{2}{r}{3 3} & Scale per each method (1 = kg, 2 = lbs, 3 = cm, 4 = inches)\\ - \multicolumn{2}{r}{1e-9 1e-9} & Min compression to add to each observation (entry for each method)\\ + \multicolumn{2}{r}{25 15} & Number of bins per method \Tstrut\\ + \multicolumn{2}{r}{2 2} & Units per each method (1 = biomass, 2 = numbers) \\ + \multicolumn{2}{r}{3 3} & Scale per each method (1 = kg, 2 = lbs, 3 = cm, 4 = inches) \\ + \multicolumn{2}{r}{1e-9 1e-9} & Min compression to add to each observation (entry for each method) \\ \multicolumn{2}{r}{2 2} & Number of observations per weight frequency method \Bstrut\\ \hline - \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\ - \multicolumn{2}{r}{1 1} & Composition error structure (0 = multinomial, 1 = Dirichlet using Theta*n, 2 = Dirichlet using beta, 3 = MV Tweedie)\Tstrut\\ - \multicolumn{2}{r}{1 1} & Parameter select consecutive index for Dirichlet or MV Tweedie composition error\Bstrut\\ - \multicolumn{3}{l}{END COND < 0} \Tstrut\\ + \multicolumn{3}{l}{COND < 0 - Number of size frequency methods} \Tstrut\\ + \multicolumn{2}{r}{1 1} & Composition error structure (0 = multinomial, 1 = Dirichlet using Theta*n, 2 = Dirichlet using beta) \Tstrut\\ + % , 3 = MV Tweedie (add when MV Tweedie is implemented) + \multicolumn{2}{r}{1 1} & Parameter select consecutive index for Dirichlet + % or MV Tweedie (add when MV Tweedie is implemented) + composition error \Bstrut\\ + \multicolumn{3}{l}{END COND < 0} \Tstrut\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.25cm}} - \multicolumn{18}{l}{Then enter the lower edge of the bins for each method.
The two row vectors shown}\\ - \multicolumn{18}{l}{below contain the bin definitions for methods 1 and 2 respectively:}\\ + \multicolumn{18}{l}{Then enter the lower edge of the bins for each method. The two row vectors shown} \\ + \multicolumn{18}{l}{below contain the bin definitions for methods 1 and 2 respectively:} \\ \hline - -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & ... & 60 & 62 & 64 & 68 & 72 & 76 & 80 & 90\Tstrut\\ - -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & 44 & 46 & 48 & 50 & 52 & \multicolumn{4}{l}{54} \ - \Bstrut\\ + -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & ... & 60 & 62 & 64 & 68 & 72 & 76 & 80 & 90 \Tstrut\\ + -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & 44 & 46 & 48 & 50 & 52 & \multicolumn{4}{l}{54} \Bstrut\\ \hline \end{tabular} \end{center} @@ -1069,7 +1081,7 @@ \subsection{Generalized Size Composition Data} \begin{tabular}{p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{5cm}} \hline & & & & & & Sample & \Bstrut\\ + Method & Year & Month & Fleet & Sex & Part & Size & females then males> \Bstrut\\ \hline 1 & 1975 & 1 & 1 & 3 & 0 & 43 & \Tstrut\\ 1 & 1977 & 1 & 1 & 3 & 0 & 43 & \\ @@ -1098,39 +1110,40 @@ \subsection{Tag-Recapture Data} \begin{center} \begin{tabular}{p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{3cm}} - \multicolumn{9}{l}{Example set-up for tagging data:}\\ + \multicolumn{9}{l}{Example set-up for tagging data:} \\ \hline - 1 & & \multicolumn{7}{l}{Do tags - 0/1/2. If this value is 0, then omit all entries below.}\\ - & & \multicolumn{7}{l}{If value is 2, read 1 additional input.}\Tstrut\Bstrut\\ + 1 & & \multicolumn{7}{l}{Do tags - 0/1/2. If this value is 0, then omit all entries below.} \\ + & & \multicolumn{7}{l}{If value is 2, read 1 additional input.} \Tstrut\Bstrut\\ \hline \multicolumn{9}{l}{COND > 0 All subsequent tag-recapture entries must be omitted if ``Do Tags'' = 0} \Tstrut\\ - & 3 & \multicolumn{7}{l}{Number of tag groups}\Bstrut\\ + & 3 & \multicolumn{7}{l}{Number of tag groups} \Bstrut\\ \hline - & 7 & \multicolumn{7}{l}{Number of recapture events}\Tstrut\Bstrut\\ + & 7 & \multicolumn{7}{l}{Number of recapture events} \Tstrut\Bstrut\\ \hline - & 2 & \multicolumn{7}{l}{Mixing latency period: N periods to delay before comparing observed}\Tstrut\\ - & & \multicolumn{7}{l}{to expected recoveries (0 = release period). }\Bstrut\\ + & 2 & \multicolumn{7}{l}{Mixing latency period: N periods to delay before comparing observed} \Tstrut\\ + & & \multicolumn{7}{l}{to expected recoveries (0 = release period).} \Bstrut\\ \hline - & 10 & \multicolumn{7}{l}{Max periods (seasons) to track recoveries, after which tags enter}\Tstrut\\ - & & \multicolumn{7}{l}{ accumulator}\Bstrut\\ + & 10 & \multicolumn{7}{l}{Max periods (seasons) to track recoveries, after which tags enter} \Tstrut\\ + & & \multicolumn{7}{l}{accumulator} \Bstrut\\ \hline \multicolumn{9}{l}{COND = 2} \Tstrut\\ - & 2 & \multicolumn{7}{l}{Minimum recaptures. The number of recaptures >= mixperiod must be}\\ + & 2 & \multicolumn{7}{l}{Minimum recaptures. 
The number of recaptures >= mixperiod must be} \\ & & \multicolumn{7}{l}{>= min tags recaptured specified to include tag group in log likelihood}\Bstrut\\ \hline - & \multicolumn{8}{l}{Release Data} \Tstrut\\ - & TG & Area & Year & Season & & Sex & Age & N Release\Bstrut\\ + & \multicolumn{8}{l}{Release Data} \Tstrut\\ + & TG & Area & Year & Season & & Sex & Age & N Release \Bstrut\\ \hline & 1 & 1 & 1980 & 1 & 999 & 0 & 24 & 2000 \Tstrut\\ & 2 & 1 & 1995 & 1 & 999 & 1 & 24 & 1000 \\ & 3 & 1 & 1985 & 1 & 999 & 2 & 24 & 10 \Bstrut\\ \hline - & \multicolumn{8}{l}{Recapture Data}\Tstrut\\ - & TG & & Year& & Season & & Fleet & Number\Bstrut\\ - \hline + & \multicolumn{8}{l}{Recapture Data} \Tstrut\\ + & TG & & Year & & Season & & Fleet & Number \Bstrut\\ + % \hline + \pagebreak & 1 & & 1982 & & 1 & & 1 & 7 \Tstrut\\ & 1 & & 1982 & & 1 & & 2 & 5 \\ & 1 & & 1985 & & 1 & & 2 & 0 \\ @@ -1148,24 +1161,24 @@ \subsection{Tag-Recapture Data} \item values are placeholders and are replaced by program-generated values for model time. \item Analysis of the tag-recapture data has one negative log likelihood component for the distribution of recaptures across areas and another negative log likelihood component for the decay of tag recaptures from a group over time. Note that the decay of tag recaptures from a group over time suggests information about mortality is available in the tag-recapture data. More on this is in the \hyperlink{tagrecapture}{control file documentation}. \item Do tags option 2 adds an additional input compared to do tags option 1, minimum recaptures. Minimum recaptures allows the user to exclude tag groups that have few recaptures after the mixing period from the likelihood. This may be useful when few tags from a group have been recaptured, as an alternative to manually removing the groups with these low numbers of recaptured tags from the tagging data. - \item Warning for earlier versions of SS3: A shortcoming in the recapture calculations when also using Pope's F approach was identified and corrected in version 3.30.14. + \item Warning for earlier versions of SS3: A shortcoming in the recapture calculations when also using Pope's F approach was identified and corrected in v.3.30.14. \end{itemize} \subsection{Stock (Morph) Composition Data} -It is sometimes possible to observe the fraction of a sample that is composed of fish from different stocks. These data could come from genetics, otolith microchemistry, tags, or other means. The growth pattern feature allows definition of cohorts of fish that have different biological characteristics and which are independently tracked as they move among areas.
SS3 now incorporates the capability to calculate the expected proportion of a sample of fish that come from different growth patterns, ``morphs''. In the inaugural application of this feature, there was a 3 area model with one stock spawning and recruiting in area 1, the other stock in area 3, then seasonally the stocks would move into area 2 where stock composition observations were collected, then they moved back to their natal area later in the year. +It is sometimes possible to observe the fraction of a sample that is composed of fish from different stocks. These data could come from genetics, otolith microchemistry, tags, or other means. The growth pattern feature allows definition of cohorts of fish that have different biological characteristics and which are independently tracked as they move among areas. SS3 now incorporates the capability to calculate the expected proportion of a sample of fish that come from different growth patterns, ``morphs''. In the inaugural application of this feature, there was a 3 area model with one stock spawning and recruiting in area 1 and the other stock in area 3; seasonally, the stocks would move into area 2, where stock composition observations were collected, and then move back to their natal areas later in the year. \begin{center} \begin{tabular}{p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{3.5cm}} - \multicolumn{8}{l}{Stock composition by growth pattern (morph) data can be entered in as follows:}\\ + \multicolumn{8}{l}{Stock composition by growth pattern (morph) data can be entered as follows:} \\ \hline - 1 & \multicolumn{7}{l}{Do morph composition, if zero, then do not enter any further input below.}\Tstrut\Bstrut\\ + 1 & \multicolumn{7}{l}{Do morph composition; if zero, then do not enter any further input below.} \Tstrut\Bstrut\\ \hline - \multicolumn{8}{l}{COND = 1}\Tstrut\\ - & 3 & \multicolumn{6}{l}{Number of observations}\Bstrut\\ + \multicolumn{8}{l}{COND = 1} \Tstrut\\ + & 3 & \multicolumn{6}{l}{Number of observations} \Bstrut\\ \hline - & 2 & \multicolumn{6}{l}{Number of morphs}\Tstrut\Bstrut\\ + & 2 & \multicolumn{6}{l}{Number of morphs} \Tstrut\Bstrut\\ \hline - & 0.0001 & \multicolumn{6}{l}{Minimum Compression}\Tstrut\Bstrut\\ + & 0.0001 & \multicolumn{6}{l}{Minimum Compression} \Tstrut\Bstrut\\ \hline & Year & Month & Fleet & Null & Nsamp & \multicolumn{2}{l}{Data by N Morphs} \Tstrut\Bstrut\\ \hline @@ -1183,18 +1196,18 @@ \subsection{Stock (Morph) Composition Data} \item The expected value is combined across sexes. The entered data values will be normalized to sum to one within SS3. \item The ``null'' flag is included here in the data input section and is a reserved spot for future features. \item Note that there is a specific value of minimum compression to add to all values of observed and expected. - \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in version 3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length which already was accounting for the accumulation by morph. + \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in v.3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length, which already was accounting for the accumulation by morph. \end{itemize} \subsection{Selectivity Empirical Data (future feature)} -It is sometimes possible to conduct field experiments or other studies to provide direct information about the selectivity of a particular length or age relative to the length or age that has peak selectivity, or to have a prior for selectivity that is more easily stated than a prior on a highly transformed selectivity parameter. This section provides a way to input data that would be compared to the specified derived value for selectivity. This is a placeholder at this time, required to include in the data file and will be fully implemented soon.
+It is sometimes possible to conduct field experiments or other studies to provide direct information about the selectivity of a particular length or age relative to the length or age that has peak selectivity, or to have a prior for selectivity that is more easily stated than a prior on a highly transformed selectivity parameter. This section provides a way to input data that would be compared to the specified derived value for selectivity. This is a placeholder at this time; it is still required to be included in the data file and will be fully implemented in the future. \begin{center} - \begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}} - \multicolumn{9}{l}{Selectivity data feature is under development for a future option and is not yet implemented. }\\ - \multicolumn{9}{l}{The input line still must be specified in as follows:}\\ + \begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{2.5cm} p{2.5cm} p{2.5cm}} + \multicolumn{9}{l}{Selectivity data feature is under development for a future option and is not yet implemented.} \\ + \multicolumn{9}{l}{The input line still must be specified as follows:} \\ \hline - 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)}\Tstrut\Bstrut\\ + 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)} \Tstrut\Bstrut\\ \hline %& Year & Month & Fleet & Age/Size & Bin \# & Datum & Datum SE\Tstrut\Bstrut\\ %\hline @@ -1202,17 +1215,17 @@ \subsection{Selectivity Empirical Data (future feature)} \end{center} \begin{center} - \begin{tabular}{p{2cm} p{14cm}}\\ - \multicolumn{2}{l}{End of Data File}\\ + \begin{tabular}{p{2cm} p{14cm}} \\ + \multicolumn{2}{l}{End of Data File} \\ \hline - 999 & \#End of data file marker\Tstrut\Bstrut\\ + 999 & \#End of data file marker \Tstrut\Bstrut\\ \hline \end{tabular} \end{center} \subsection{Excluding Data} -Data that are before the model start year or greater than the retrospective year are not moved into the internal working arrays at all. So if you have any alternative observations that are used in some model runs and not in others, you can simply give them a negative year value rather than having to comment them out. The first output to data.ss\_new has the unaltered and complete input data. Subsequent reports to data.ss\_new produce expected values or bootstraps only for the data that are being used. Additional information on bootstrapping is available in \hyperlink{bootstrap}{Bootstrap Data Files Section}. +Data that are before the model start year or greater than the retrospective year are not moved into the internal working arrays at all. So if you have any alternative observations that are used in some model runs and not in others, you can simply give them a negative year value rather than having to comment them out. The first output to data.ss\_new has the unaltered and complete input data. Subsequent reports to data.ss\_new produce expected values or bootstraps only for the data that are being used. Additional information on bootstrapping is available in the \hyperlink{bootstrap}{Bootstrap Data Files Section}. Data that are to be included in the calculations of expected values, but excluded from the calculation of negative log likelihood, are flagged by use of a negative value for fleet number. @@ -1224,23 +1237,24 @@ \subsection{Data Super-Periods} Super-periods are started with a negative value for month and then stopped with a negative value for month; observations within the super-period are designated with a negative fleet field.
The standard error or input sample size field is now used for weighting of the expected values. An error message is generated if the super-period does not contain one observation with a positive fleet field. -An expected value for the observation will be computed for each selected time period within the super-period. The expected values are weighted according to the values entered in the standard error (or input sample size) field for all observations except the single observation holding the combined data. The expected value for that year gets a relative weight of 1.0. So in the example below, the relative weights are: 1982, 1.0 (fixed); 1983, 0.85; 1985, 0.4; 1986, 0.4. These weights are summed and rescaled to sum to 1.0, and are output in the echoinput.sso file. +An expected value for the observation will be computed for each selected time period within the super-period. The expected values are weighted according to the values entered in the standard error (or input sample size) field for all observations except the single observation holding the combined data. The expected value for that year gets a relative weight of 1.0. So, in the example below, the relative weights are: 1982, 1.0 (fixed); 1983, 0.85; 1985, 0.4; 1986, 0.4. These weights are summed and rescaled to sum to 1.0, and are output in the echoinput.sso file. Not all time steps within the extent of a super-period need be included. For example, in a three season model, a super-period could be set up to combine information from season 2 across 3 years, e.g., skip over season 1 and season 3 for the purposes of calculating the expected value for the super-period. The key is to create a dummy observation (negative fleet value) for all time steps, except 1, that will be included in the super-period and to include one real observation (positive fleet value; which contains the real combined data from all the specified time steps). \begin{center} + \vspace*{-\baselineskip} \begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{9cm}} - \multicolumn{6}{l}{Super-period example:}\\ + \multicolumn{6}{l}{Super-period example:} \\ \hline Year & Month & Fleet & Obs & SE & Comment \Tstrut\Bstrut\\ \hline - 1982 \Tstrut & \textbf{-2} & 3 & 34.2 & 0.3 & Start super-period. This observation has positive fleet value, so is expected to contain combined data from all identified periods of the super-period. The standard error (SE) entered here is use as the SE of the combined observation. The expected value for the survey in 1982 will have a relative weight of 1.0 (default) in calculating the combined expected value.\Bstrut\\ + 1982 \Tstrut & \textbf{-2} & 3 & 34.2 & 0.3 & Start super-period. This observation has a positive fleet value, so it is expected to contain the combined data from all identified periods of the super-period. The standard error (SE) entered here is used as the SE of the combined observation. The expected value for the survey in 1982 will have a relative weight of 1.0 (default) in calculating the combined expected value. \Bstrut\\ \hline - 1983 \Tstrut & 2 & \textbf{-3} & 55 & 0.3 & In super-period; entered observation is ignored. The expected value for the survey in 1983 will have a relative weight equal to the value in the standard error field (0.3) in calculating the combined expected value.\Bstrut\\ + 1983 \Tstrut & 2 & \textbf{-3} & 55 & 0.3 & In super-period; entered observation is ignored. The expected value for the survey in 1983 will have a relative weight equal to the value in the standard error field (0.3) in calculating the combined expected value. \Bstrut\\ \hline - 1985 \Tstrut & 2 & \textbf{-3}& 88 & 0.40 & Note that 1984 is not included in the super-period Relative weight for 1985 is 0.4\Bstrut\\ + 1985 \Tstrut & 2 & \textbf{-3}& 88 & 0.40 & Note that 1984 is not included in the super-period. Relative weight for 1985 is 0.4 \Bstrut\\ \hline - 1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period\Tstrut\Bstrut\\ + 1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period \Tstrut\Bstrut\\ \hline \end{tabular} \end{center}
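As a worked check of the super-period weighting described above, a minimal base-R sketch that rescales the relative weights from the prose example (1982 fixed at 1.0; 0.85, 0.4, and 0.4 for the remaining years):
\begin{verbatim}
w <- c(1.0, 0.85, 0.4, 0.4)   # relative weights for 1982, 1983, 1985, 1986
round(w / sum(w), 3)          # rescaled to sum to 1.0: 0.377 0.321 0.151 0.151
\end{verbatim}
The rescaled weights are those reported in the echoinput.sso file.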
The expected value for the survey in 1983 will have a relative weight equal to the value in the standard error field (0.3) in calculating the combined expected value. \Bstrut\\
	\hline
-	1985 \Tstrut & 2 & \textbf{-3}& 88 & 0.40 & Note that 1984 is not included in the super-period Relative weight for 1985 is 0.4\Bstrut\\
+	1985 \Tstrut & 2 & \textbf{-3} & 88 & 0.40 & Note that 1984 is not included in the super-period. Relative weight for 1985 is 0.4. \Bstrut\\
	\hline
-	1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period\Tstrut\Bstrut\\
+	1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period \Tstrut\Bstrut\\
	\hline
	\end{tabular}
\end{center}
diff --git a/9control.tex b/9control.tex
index dee35482..cbebea34 100644
--- a/9control.tex
+++ b/9control.tex
@@ -54,23 +54,23 @@ \subsection{Parameter Line Elements}
	\hline
	Column & Element & Description \Tstrut\Bstrut\\
	\hline
-	1 & LO & Minimum value for the parameter\Tstrut\\
-	2 & HI & Maximum value for the parameter\Tstrut\\
+	1 & LO & Minimum value for the parameter \Tstrut\\
+	2 & HI & Maximum value for the parameter \Tstrut\\
	3 \Tstrut & INIT & Initial value for the parameter. If the phase (described below) for the parameter is negative the parameter is fixed at this value. If the ss.par file is read, it overwrites these INIT values.\\
	4 \Tstrut & PRIOR & Expected value for the parameter. This value is ignored if the prior type is 0 (no prior) or 1 (symmetric beta). If the selected prior type (described below) is lognormal, this value is entered in log space. \\
-	5 \Tstrut & PRIOR SD & Standard deviation for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The standard deviation is in regular space regardless of the prior type.\\
-	6 \Tstrut & \hyperlink{PriorDescrip}{PRIOR TYPE} & 0 = none, \\
+	5 \Tstrut & PRIOR SD & Standard deviation for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The standard deviation is in regular space regardless of the prior type. \\
+	6 \Tstrut & \hyperlink{PriorDescrip}{PRIOR TYPE} & 0 = none; \\
	& & 1 = symmetric beta; \\
	& & 2 = full beta; \\
	& & 3 = lognormal without bias adjustment; \\
	& & 4 = lognormal with bias adjustment; \\
	& & 5 = gamma; and \\
	& & 6 = normal. \\
-	7 \Tstrut & PHASE & Phase in which parameter begins to be estimated. A negative value causes the parameter to retain its INIT value (or value read from the ss.par file).\Bstrut\\
-	8 \Tstrut & Env var \& Link & Create a linkage to an input environmental time-series\\
+	7 \Tstrut & PHASE & Phase in which parameter begins to be estimated. A negative value causes the parameter to retain its INIT value (or value read from the ss.par file). \Bstrut\\
+	8 \Tstrut & Env var \& Link & Create a linkage to an input environmental time-series \\
	9 \Tstrut & Dev link & Invokes use of the deviation vector in the linkage function \\
	10 \Tstrut & Dev min yr & Beginning year for the deviation vector \\
-	11 \Tstrut & Dev max yr & Ending year for the deviation vector\\
+	11 \Tstrut & Dev max yr & Ending year for the deviation vector \\
	12 \Tstrut & Dev phase & Phase for estimation of elements in the deviation vector \\
	13 \Tstrut & Block & Time block or trend to be applied \\
	14 \Tstrut & Block function & Functional form for the block offset.
\Bstrut\\
@@ -78,7 +78,7 @@ \subsection{Parameter Line Elements}
	\end{tabular}
\end{center}

-Note that relative to SS3 v.3.24, the order of PRIOR SD and PRIOR TYPE have been switched and the PRIOR TYPE options have been renumbered.
+Note that relative to Stock Synthesis v.3.24, the order of PRIOR SD and PRIOR TYPE has been switched and the PRIOR TYPE options have been renumbered.

The full parameter line (14 in length) syntax for the mortality-growth, spawn-recruitment, catchability, and selectivity sections provides additional controls to give the parameter time-varying properties. If a parameter (a full parameter line of length 14) is set up to be time-varying (i.e., parameter time blocks, annual deviations), short parameter lines (the first 7 elements) are required to be specified immediately after the main parameter block (i.e., mortality-growth parameter section). Additional information regarding time-varying parameters and how to implement them is in the \hyperlink{TVpara}{Using Time-Varying Parameters} section.

@@ -88,7 +88,7 @@ \subsection{Terminology}
\subsection{Beginning of Control File Inputs}

\begin{center}
-	\begin{longtable}{p{0.5cm} p{2cm} p{12cm}}
+	\begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}}
	\hline
	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
@@ -104,7 +104,7 @@ \subsection{Beginning of Control File Inputs}

	\endlastfoot

-	\multicolumn{2}{l}{\#C comment }\Tstrut & Comments beginning with \#C at the top of the file will be retained and included in output. \Bstrut\\
+	\multicolumn{2}{l}{\#C comment} \Tstrut & Comments beginning with \#C at the top of the file will be retained and included in output. \Bstrut\\
	\hline

	0 & & 0 = Do not read the weight-at-age (wtatage.ss) file; \Tstrut\\
@@ -128,6 +128,7 @@ \subsection{Beginning of Control File Inputs}
	& 0.2 0.6 0.2 & Distribution among platoons. Enter either a custom vector or enter a vector of length N with the first value of -1 to get a normal approximation: (0.15, 0.70, 0.15) for 3 platoons or (0.031, 0.237, 0.464, 0.237, 0.031) for 5 platoons. \Bstrut\\
	\hline
	\end{longtable}
+	\vspace*{-\baselineskip}
\end{center}

\subsubsection{Weight-at-Age}

@@ -139,14 +140,14 @@ \subsubsection{Settlement Timing for Recruits and Distribution}
Additional control of the seasonal timing was added in v.3.30 and there now is an explicit elapsed time between spawning and recruitment. Spawning still occurs, just once per year, which defines a single spawning biomass for the stock-recruitment curve but its timing can be at any specified time, not just the beginning of a season. Recruitment of the progeny from an annual spawning can now enter the population in one or more settlement events, at some point after spawning as defined by the user.

-\begin{longtable}{p{1.25cm} p{1.25cm} p{1cm} p{11cm}}
+\begin{longtable}{p{1.25cm} p{1.25cm} p{1cm} p{11.5cm}}
	\hline
-	\multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options}\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options} \Tstrut\Bstrut\\
	\hline
	\endfirsthead
	\hline
-	\multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options}\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options} \Tstrut\Bstrut\\
	\hline
	\endhead

@@ -155,8 +156,10 @@ \subsubsection{Settlement Timing for Recruits and Distribution}

	\endlastfoot

-	1 \Tstrut & &\multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{Recruitment distribution method.
This section controls which combinations of growth pattern x area x settlement will get a portion of the total recruitment coming from each spawning. Options:}} \\ \\ \\
-	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{1 = no longer available (used the SS3 v.3.24 or earlier setup);}} \\
+	1 \Tstrut & &\multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{Recruitment distribution method. This section controls which combinations of growth pattern x area x settlement will get a portion of the total recruitment coming from each spawning. Options:}} \\
+	& & \\
+	& & \\
+	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{1 = no longer available (used the Stock Synthesis v.3.24 or earlier setup);}} \\
	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{2 = main effects for growth pattern, settle timing, and area;}} \\
	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{3 = each settle entity; and}} \\
	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{4 = none, no parameters (only if growth pattern x settlement x area = 1).}} \Bstrut\\
@@ -164,26 +167,28 @@ \subsubsection{Settlement Timing for Recruits and Distribution}
	\hline
	1 & & \multicolumn{2}{l}{Spawner-Recruitment (not implemented yet, but required), options:} \Tstrut\\
	& & \multicolumn{2}{l}{1 = global; and} \\
-	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{2 = by area (by area is not yet implemented; there is a conceptual challenge to doing the equilibrium calculation when there is fishing).}} \Bstrut\\\\
+	& & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{2 = by area (by area is not yet implemented; there is a conceptual challenge to doing the equilibrium calculation when there is fishing).}} \Bstrut\\
	\hline
-	1 \Tstrut & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{Number of recruitment settlement assignments. Must be at least 1 even if only 1 settlement and 1 area because the timing of that settlement must be specified.}} \Bstrut\\\\\\
+	1 \Tstrut & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{Number of recruitment settlement assignments. Must be at least 1 even if only 1 settlement and 1 area because the timing of that settlement must be specified.}} \Bstrut\\ \\
	\hline
	0 \Tstrut & & \multicolumn{2}{l}{Future feature, not implemented yet but required.} \Bstrut\\
	\hline
-	Growth Pattern & Month & Area & Age at settlement \Tstrut \\
+	Growth Pattern & Month & Area & Age at settlement \Tstrut\\
	\hline
	1 & 5.5 & 1 & 0 \Bstrut\\
	\hline
-\end{longtable}
+\end{longtable}
+\vspace*{-\baselineskip}

The above example specifies settlement to mid-May (month 5.5). Note that normally the calendar age at settlement is 0 if settlement happens between the time of spawning and the end of that year, and at age 1 if settlement is in the year after spawning. Below is an example set-up where there are multiple settlement events, with one occurring the following year after spawning:

\begin{center}
+	\vspace*{-\baselineskip}
-	\begin{tabular}{p{3cm} p{2cm} p{2cm} p{7cm}}
+	\begin{tabular}{p{3cm} p{3cm} p{2cm} p{7cm}}
	\hline
	3 & \multicolumn{3}{l}{Number of recruitment settlement events} \Tstrut\\
	0 & \multicolumn{3}{l}{Unused option} \Bstrut\\
@@ -241,7 +246,7 @@ \subsubsection{Settlement Timing for Recruits and Distribution}

\subsubsection{Movement}
Here the movement of fish between areas is defined. This is a box transfer with no explicit adjacency of areas, so fish can move from any area to any other area in each time step.
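To make the box-transfer idea concrete, here is a minimal R sketch (not SS3 code; the 2 x 2 matrix, movement rates, and abundances are invented for illustration) that moves numbers-at-area through one time step:

\begin{verbatim}
# Illustrative box-transfer movement: with no adjacency structure,
# fish can move from any area to any other area in one time step.
move <- matrix(c(0.90, 0.10,   # from area 1: 90% stay, 10% move to area 2
                 0.05, 0.95),  # from area 2: 5% move to area 1, 95% stay
               nrow = 2, byrow = TRUE)
n_at_area <- c(1000, 500)                # numbers in areas 1 and 2
n_next <- as.vector(n_at_area %*% move)  # numbers after one time step
n_next                                   # 925 575
\end{verbatim}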
While not incorporated yet, there is a desire for future versions of SS3 to have the capability to allow sex-specific movement, and also to allow some sort of mirroring so that sexes and growth patterns can share the same movement parameters if desired.

-\begin{longtable}{p{0.5cm} p{2cm} p{12cm}}
+\begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}}
	\hline
	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
@@ -270,6 +275,8 @@ \subsubsection{Movement}
	\\
	\hline
	\end{longtable}
+	\vspace*{-\baselineskip}
+
Two parameters will be entered later for each growth pattern, area pair, and season.

\begin{itemize}
@@ -294,7 +301,7 @@ \subsubsection{Movement}

\subsubsection{Time Blocks}
\hypertarget{timeblocks}{}
-\begin{longtable}{p{0.5cm} p{2cm} p{12cm}}
+\begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}}
	\hline
	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
@@ -317,6 +324,7 @@ \subsubsection{Time Blocks}
	& \multirow{1}{2cm}[-0.1cm]{1999 2002} & \multirow{1}{12cm}[-0.10cm]{Beginning and ending years for blocks in design 3.} \Bstrut\\
	\hline
\end{longtable}
+\vspace*{-\baselineskip}

Blocks and other time-varying parameter controls are operative during forecast years, so care should be taken when setting the end year of the last block in a pattern. If that end year is set to the last year in the time series, then the parameter will revert to the base value for the forecast. If the user wants to continue the last block through the forecast, it is advisable to set the last block's end year value to -2 to cause SS3 to reset it to the last year of the forecast. Using the value -1 will set the block's end year to the last year of the time series and leave the forecast at the base parameter value. Note that additional controls on time-varying parameters in forecast years are in the forecast section.

@@ -325,14 +333,14 @@ \subsubsection{Auto-generation}
Auto-generation is a useful way to automatically create the required short time-varying parameter lines which will be written in the control.ss\_new file. These parameter lines can then be copied into the control file and modified as needed. As an example, if you want to add a block to natural mortality, modify the block and block function entry of the mortality parameter line, ensure that auto-generation is set to 0 (for the biology section at least) and run the model without estimation. The control.ss\_new file will now show the required block parameter line specification for natural mortality and this line can be copied into the main control file. Note that if auto-generation is on (set to 0), the model will not expect to read the time-varying parameters in that section of the control file and will error out if they are present.

-\begin{longtable}{p{0.5cm} p{2cm} p{12cm}}
+\begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}}
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endfirsthead
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endhead

@@ -343,12 +351,12 @@ \subsubsection{Auto-generation}

	1 & & Environmental/Block/Deviation adjust method for all time-varying parameters. \Tstrut\\
	& & 1 = warning relative to base parameter bounds; and \\
-	& & 3 = no bound check.
Logistic bound check form from previous SS3 versions (e.g., SS3 v.3.24) is no longer an option.\Bstrut\\
+	& & 3 = no bound check. Logistic bound check form from previous SS3 versions (e.g., v.3.24) is no longer an option. \Bstrut\\

-	\multicolumn{2}{l}{1 1 1 1 1} & Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity.\\
-	& & The accepted values are:\\
-	& & 0 = auto-generate all time-varying parameters (no time-varying parameters are expected);\\
-	& & 1 = read each time-varying parameter line as exists in the control file; and\\
+	\multicolumn{2}{l}{1 1 1 1 1} & Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity. \\
+	& & The accepted values are: \\
+	& & 0 = auto-generate all time-varying parameters (no time-varying parameters are expected); \\
+	& & 1 = read each time-varying parameter line as exists in the control file; and \\
	& & 2 = read each line and auto-generate if the time-varying parameter value for LO = -12345. Useful to generate reasonable starting values. \Bstrut\\
	\hline
\end{longtable}

@@ -363,7 +371,7 @@ \subsubsection{Natural Mortality}

\myparagraph{Age-specific M Linked to Age-Specific Length and Maturity}
-This is an experimental option available as of 3.30.17.
+This is an experimental option available as of v.3.30.17.

A general model for age- and sex-specific natural mortality expands a model developed by \citet{maunder2010bigeye} and \citet{maunder2011M} and is based on the following assumptions:

@@ -401,12 +409,12 @@ \subsubsection{Natural Mortality}

\myparagraph{Natural Mortality Options}
\begin{longtable}{p{0.5cm} p{2cm} p{12.75cm}}
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endfirsthead
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endhead
	\hline
@@ -415,13 +423,13 @@ \subsubsection{Natural Mortality}

	\endlastfoot

-	1 & & Natural Mortality Options:\Tstrut\\
-	& & 0 = A single parameter;\\
-	& & 1 = N breakpoints;\\
+	1 & & Natural Mortality Options: \Tstrut\\
+	& & 0 = A single parameter; \\
+	& & 1 = N breakpoints; \\
	& & 2 = Lorenzen; \\
-	& & 3 = Read age specific M and do not do seasonal interpolation;\\
-	& & 4 = Read age specific and do seasonal interpolation, if appropriate;\\
-	& & 5 = age-specific M linked to age-specific length and maturity (experimental);\\
+	& & 3 = Read age-specific M and do not do seasonal interpolation; \\
+	& & 4 = Read age-specific M and do seasonal interpolation, if appropriate; \\
+	& & 5 = age-specific M linked to age-specific length and maturity (experimental); \\
	& & 6 = Age-range Lorenzen. \Bstrut\\
	\hline
@@ -429,7 +437,7 @@ \subsubsection{Natural Mortality}
	\hline
	\multicolumn{2}{l}{COND = 1} & \Tstrut\Bstrut\\
-	& 4 & Number of breakpoints. Then read a vector of ages for these breakpoints. Later, per sex x GP, read N parameters for the natural mortality at each breakpoint.\\
+	& 4 & Number of breakpoints. Then read a vector of ages for these breakpoints. Later, per sex x GP, read N parameters for the natural mortality at each breakpoint.
\\
	\multicolumn{2}{r}{2.5 4.5 9.0 15.0} & Vector of age breakpoints. \Bstrut\\
	\hline
@@ -440,19 +448,19 @@ \subsubsection{Natural Mortality}

	\multicolumn{2}{l}{COND = 3 or 4} \Tstrut & Do not read any natural mortality parameters in the mortality growth parameter section. With option 3, these M values are held fixed for the integer age (no seasonality or birth season considerations). With option 4, there is seasonal interpolation based on real age, just as in options 1 and 2.\\
-	& 0.20 0.25 ... 0.20 0.23 ... & Age-specific M values where in a 2 sex model the first row is female and the second row is male. If there are multiple growth patterns female growth pattern 1-N is read first followed by males 1-N growth pattern.\Bstrut\\
+	& 0.20 0.25 ... 0.20 0.23 ... & Age-specific M values where in a 2 sex model the first row is female and the second row is male. If there are multiple growth patterns, female growth patterns 1-N are read first, followed by male growth patterns 1-N. \Bstrut\\
	\hline

	\multicolumn{2}{l}{COND = 5} \Tstrut & age-specific M linked to age-specific length and maturity suboptions. \\
	& & 1 = Requires 4 long parameter lines per sex x growth pattern using maturity. Must be used with maturity option 1; \\
	& & 2 = reserved for future option; \\
-	& & 3 = Requires 6 long parameter lines per sex x growth pattern\Bstrut\\
+	& & 3 = Requires 6 long parameter lines per sex x growth pattern. \Bstrut\\
	\hline

	\multicolumn{2}{l}{COND = 6} \Tstrut & Read two additional integer values that are the age range for average M. Later, read one long parameter line for each sex x growth pattern that will be the average M over the reference age range. \\
-	& 0 \Tstrut & Minimum age of average M range for calculating Lorenzen natural mortality.\\
-	& 10 \Tstrut & Maximum age of average M range for calculating Lorenzen natural mortality.\\
+	& 0 \Tstrut & Minimum age of average M range for calculating Lorenzen natural mortality. \\
+	& 10 \Tstrut & Maximum age of average M range for calculating Lorenzen natural mortality. \\
	\hline
\end{longtable}

@@ -475,13 +483,13 @@ \subsubsection{Growth}

with parameters $L_{1}$, $L_\infty$, and $k$. The $L_\infty$ is calculated as:
\begin{equation}
-	L_\infty = L_{1} + \frac{(L_2 - L_1)}{e^{-k(A2-A1)}}
+	L_\infty = L_{1} + \frac{(L_2 - L_1)}{1-e^{-k(A_2-A_1)}}
\end{equation}
based on the input values of fixed age for first size-at-age ($A_1$) and fixed age for second size-at-age ($A_2$).

\myparagraph{Schnute/Richards growth function}
-The \citet{richards1959growth} growth model as parameterized by \citet{schnute1981growth} provides a flexible growth parameterization that allows for not only asymptotic growth but also linear, quadratic or exponential growth. The Schnute/Richards growth is invoked by entering option 2 in the growth type field. The Schnute/Richards growth function uses the standard growth parameters (e.g, Lmin, Linf, and $k$) and a fourth parameter that is read after reading the von Bertalanffy growth coefficient parameter ($k$). When this fourth parameter has a value of 1.0, it is equivalent to the standard von Bertalanffy growth curve. When this function was first introduced, it was required that A0 parameter be set to 0.0.
+The \citet{richards1959growth} growth model as parameterized by \citet{schnute1981growth} provides a flexible growth parameterization that allows for not only asymptotic growth but also linear, quadratic or exponential growth. The Schnute/Richards growth is invoked by entering option 2 in the growth type field.
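Returning briefly to the standard von Bertalanffy case, the $L_\infty$ conversion in the equation above can be checked numerically; this minimal R sketch uses invented parameter values:

\begin{verbatim}
# Numeric check of the L-infinity conversion (values are illustrative)
L1 <- 20; L2 <- 60   # mean size at reference ages A1 and A2 (cm)
A1 <- 1;  A2 <- 25   # reference ages, matching the A1/A2 inputs
k  <- 0.15           # von Bertalanffy growth coefficient (per year)
Linf <- L1 + (L2 - L1) / (1 - exp(-k * (A2 - A1)))
Linf                 # ~61.1 cm; approaches L2 as A2 becomes large
\end{verbatim}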
The Schnute/Richards growth function uses the standard growth parameters (e.g., Lmin, Linf, and $k$) and a fourth parameter that is read after reading the von Bertalanffy growth coefficient parameter ($k$). When this fourth parameter has a value of 1.0, it is equivalent to the standard von Bertalanffy growth curve. When this function was first introduced, it was required that the A0 parameter be set to 0.0.

The Schnute/Richards growth model is parameterized as:

@@ -501,7 +509,7 @@ \subsubsection{Growth}

\myparagraph{Mean size-at-maximum age}
-The mean size of fish in the max age age bin depends upon how close the growth curve is to Linf by the time it reaches max age and the mortality rate of fish after they reach max age. Users specify the mortality rate to use in this calculation during the initial equilibrium year. This must be specified by the user and should be reasonably close to M plus initial F. In SS3 v.3.30, this uses the von Bertalanffy growth out to 3 times the maximum population age and decays the numbers at age by exp(-value set here). For subsequent years of the time series, the model should update the size-at-maximum age according to the weighted average mean size of fish already at maximum age and the size of fish just graduating into maximum age. Unfortunately, this updating is only happening in years with time-varying growth. This will hopefully be fixed in a the future version.
+The mean size of fish in the maximum age bin depends upon how close the growth curve is to Linf by the time it reaches max age and the mortality rate of fish after they reach max age. Users specify the mortality rate to use in this calculation during the initial equilibrium year. This must be specified by the user and should be reasonably close to M plus initial F. In v.3.30, this uses the von Bertalanffy growth out to 3 times the maximum population age and decays the numbers at age by exp(-value set here). For subsequent years of the time series, the model should update the size-at-maximum age according to the weighted average mean size of fish already at maximum age and the size of fish just graduating into maximum age. Unfortunately, this updating is only happening in years with time-varying growth. This will hopefully be fixed in a future version.

\myparagraph{Age-specific K}
This option creates age-specific K multipliers for each age of a user-specified age range, with independent multiplicative factors for each age in the range and for each growth pattern / sex. The null value is 1.0 and each age's K is set to the next earlier age's K times the value of the current age's multiplier. Each of these multipliers is entered as a full parameter line, so inherits all time-varying capabilities of full parameters. The lower end of this age range cannot extend younger than the specified age for which the first growth parameter applies. This is a beta model feature, so examine output closely to assure you are getting the size-at-age pattern you expect. Beware of using this option in a model with seasons within year because the K deviations are indexed solely by integer age according to birth year. There is no offset for birth season timing effects, nor is there any seasonal interpolation of the age-varying K.

@@ -510,16 +518,16 @@ \subsubsection{Growth}

\myparagraph{Growth cessation}
A growth cessation model was developed for the application to tropical tuna species \citep{maunder-growth-2018}.
Growth cessation assumes a linear relationship between length and age for the youngest individuals, followed by a marked reduction of growth after the onset of sexual maturity, with a logistic function modeling the decreasing growth rate at older ages.
-
-\begin{longtable}{p{0.5cm} p{2cm} p{12cm}}
+\vspace*{-\baselineskip}
+\begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}}
	\multicolumn{3}{l}{Example growth specifications:} \Tstrut\Bstrut\\
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endfirsthead
	\hline
-	\multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\
+	\multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\
	\hline
	\endhead
	\hline
@@ -529,41 +537,49 @@ \subsubsection{Growth}

	\endlastfoot

	1 & & Growth Model: \Tstrut\\
-	& & 1 = von Bertalanffy (3 parameters);\\
+	& & 1 = von Bertalanffy (3 parameters); \\
	& & 2 = Schnute's generalized growth curve (aka Richards curve) with 3 parameters. The third parameter has a null value of 1.0; \\
-	& & 3 = von Bertalanffy with age-specific K multipliers for specified range of ages, requires additional inputs below following the placeholder for future growth feature;\\
+	& & 3 = von Bertalanffy with age-specific K multipliers for specified range of ages, requires additional inputs below following the placeholder for future growth feature; \\
+	& & 4 = age-specific K. Set base K as K for age = nages and working backwards and the age-specific K = K for the next older age * multiplier, requires additional inputs below following the placeholder for future growth feature; \\
	& & 5 = age-specific K. Set base K as K for nages and work backwards and the age-specific K = base K * multiplier, requires additional inputs below following the placeholder for future growth feature; \\
	& & 6 = not implemented; \\
	& & 7 = not implemented; and \\
-	& & 8 = growth cessation. Decreases the K for older fish. If implemented, the Amin and Amax parameters, the next two lines, need to be set at 0 and 999 respectively. The mortality-growth parameter section requires the base K parameter line which is interpreted as the steepness of the logistic function that models the reduction in the growth increment by age followed by a second parameter line which is the parameter related to the maximum growth rate. \Bstrut \\
+	& & 8 = growth cessation. Decreases the K for older fish. If implemented, the Amin and Amax parameters, the next two lines, need to be set at 0 and 999, respectively. The mortality-growth parameter section requires the base K parameter line, which is interpreted as the steepness of the logistic function that models the reduction in the growth increment by age, followed by a second parameter line, which is the parameter related to the maximum growth rate. \Bstrut\\
	\hline

-	\Tstrut 1 & & Growth Amin (A1): Reference age for first size-at-age (post-settlement) parameter. \Bstrut\\
+	\Tstrut 1 & & Growth Amin (A1): Reference age for first size-at-age L1 (post-settlement) parameter. First growth parameter is size at this age; linear growth below this.
\Bstrut\\
	%\hline
-	\Tstrut 25 & & Growth Amax (A2): Reference age for second size-at-age parameter (999 to use as L infinity). \Bstrut\\
+	\Tstrut 25 & & Growth Amax (A2): Reference age for second size-at-age L2 (post-settlement) parameter. Use 999 to treat as L infinity. \Bstrut\\
	\hline
-	\Tstrut 0.20 & & Exponential decay for growth above maximum age (plus group: fixed at 0.20 in SS3 v.3.24; should approximate initial Z). Alternative Options: \\
-	& & -998 = Disable growth above maximum age (plus group) similar to earlier versions of SS3 (prior to SS3 v.3.24); and \\
-	& & -999 = Replicate the simpler calculation done in SS3 v.3.24. \Bstrut\\
+	\Tstrut 0.20 & & Exponential decay for growth above maximum age (plus group: fixed at 0.20 in v.3.24; should approximate initial Z). Alternative Options: \\
+	& & -998 = Disable growth above maximum age (plus group) similar to earlier versions of SS3 (prior to v.3.24); and \\
+	& & -999 = Replicate the simpler calculation done in v.3.24. \Bstrut\\
	\hline

	0 & & Placeholder for future growth feature. \Tstrut\Bstrut\\
	\hline

-	\multicolumn{2}{l}{COND = 3} & Growth model: age-specific K \Tstrut\\
-	2 & & Number of K multipliers to read; \\
+	\multicolumn{2}{l}{COND = 3} & Growth model: age-specific K, where the age-specific K parameter values are multipliers of the age - 1 K parameter value. For example, if the base parameter is 0.20, then based on the example set-up the K parameter for age 5 is equal to 0.20 * the age-5 multiplier. Subsequently, the age 6 K value is equal to the age 5 K (0.20 * the age-5 multiplier) multiplied by the age-6 multiplier. All ages above the maximum age with age-specific K are equal to the maximum age-specific K. The age-specific K values are available in the Report file in the AGE\_SPECIFIC\_K section.\Tstrut\\
+	3 & & Number of K multipliers to read; \\
	& 5 & Minimum age for age-specific K; \\
+	& 6 & Second age for age-specific K; and \\
	& 7 & Maximum age for age-specific K. \Bstrut\\

-	\multicolumn{2}{l}{COND = 4 or 5} & Growth model: age-specific K \Tstrut\\
-	2 & & Number of K multipliers to read; \\
+	\multicolumn{2}{l}{COND = 4} & Growth model: age-specific K, where the age-specific K parameter values are multipliers of the age + 1 K parameter value. For example, if the base parameter is 0.20, then based on the example set-up the K parameter for age 7 is equal to 0.20 * the age-7 multiplier. Subsequently, the age 6 K value is equal to the age 7 K (0.20 * the age-7 multiplier) multiplied by the age-6 multiplier. All ages below the minimum age with age-specific K are equal to the minimum age-specific K. The age-specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. \Tstrut\\
+	3 & & Number of K multipliers to read; \\
	& 7 & Maximum age for age-specific K; \\
	& 6 & Second age for age-specific K; and \\
	& 5 & Minimum age for age-specific K. \Bstrut\\
	\hline
+
+	\multicolumn{2}{l}{COND = 5} & Growth model: age-specific K, where the age-specific K parameter values are multipliers of the base K parameter value. For example, if the base parameter is 0.20, then based on the example set-up the K parameter for age 7 is equal to 0.20 * the age-7 multiplier. Subsequently, the age 6 K value is equal to 0.20 * the age-6 multiplier. The age-specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. \Tstrut\\
+	3 & & Number of K multipliers to read; \\
+	& 7 & Maximum age for age-specific K; \\
+	& 6 & Second age for age-specific K; and \\
+	& 5 & Minimum age for age-specific K.
\Bstrut\\
+	\hline

	\Tstrut 0 & & Standard deviation added to length-at-age: Enter 0.10 to mimic SS2 V1.xx. Recommend using a value of 0.0. \Bstrut\\
	\hline
@@ -624,7 +640,7 @@ \subsubsection{Maturity-Fecundity}
	\hline
\end{longtable}

-\pagebreak
+% \pagebreak

\subsubsection{Hermaphroditism}

@@ -663,6 +679,7 @@ \subsubsection{Hermaphroditism}
	& & 1 = simple addition of males to females. \Bstrut\\
	\hline
\end{longtable}
+\vspace*{-\baselineskip}

The hermaphroditism option requires three full parameter lines in the mortality growth section:
\begin{enumerate}
@@ -717,7 +734,7 @@ \subsubsection{Catch Multiplier}

where $C_{obs}$ is the input catch by fleet (observed catch) within the data file and $c_{mult}$ is the estimated (or fixed) catch multiplier. It has year-specific, not season-specific, time-varying capabilities. In the catch likelihood calculation, expected catch is multiplied by the catch multiplier by year and fishery to get $C_{obs}$ before being compared to the observed retained catch as modified by the $c_{mult}$.

\subsubsection{Ageing Error Parameters}
-These parameters are only included in the control file if one of the ageing error definitions in the data file has requested this feature (by putting a negative value for the ageing error of the age zero fish of one ageing error definition). As of version 3.30.12, these parameters now have time-varying capability. Seven additional full parameter lines are required. The parameter lines specify:
+These parameters are only included in the control file if one of the ageing error definitions in the data file has requested this feature (by putting a negative value for the ageing error of the age zero fish of one ageing error definition). As of v.3.30.12, these parameters now have time-varying capability. Seven additional full parameter lines are required. The parameter lines specify:
\begin{enumerate}
	\item Age at which the estimated pattern begins (just linear below this age); this is the start age.
	\item Bias at start age (as additive offset from unbiased age).
@@ -768,23 +785,23 @@ \subsubsection{Read Biology Parameters}
	\multicolumn{2}{l}{Females}\Tstrut & Female natural mortality and growth parameters in the following order by growth pattern. \\
	& M & Natural mortality for female growth pattern 1, where the number of natural mortality parameters depends on the option selected. \Bstrut\\
	\hline
-	\multicolumn{2}{l}{COND if M option = 1 } & \Tstrut\\
+	\multicolumn{2}{l}{COND if M option = 1} & \Tstrut\\
	& N breakpoints & N-1 parameter lines as exponential offsets from the previous reference age. \Bstrut\\
	\hline

	& Lmin & Length at Amin (units in cm) for female, growth pattern 1. \\
	& Lmax & Length at Amax (units in cm) for female, growth pattern 1. \\
-	& VBK & von Bertanlaffy growth coefficient (units are per year) for females, growth pattern 1. \Bstrut\\
+	& VBK & von Bertalanffy growth coefficient (units are per year) for females, growth pattern 1. \Bstrut\\
	\hline
-	\multicolumn{2}{l}{COND if growth type = 2 } & \Tstrut\\
+	\multicolumn{2}{l}{COND if growth type = 2} & \Tstrut\\
	& Richards Coefficient & Only include this parameter if Richards growth function is used. If included, a parameter value of 1.0 will have a null effect and produce a growth curve identical to von Bertalanffy.
\\
-	\multicolumn{2}{l}{COND if growth type >=3 } & Age-Specific K \\
+	\multicolumn{2}{l}{COND if growth type >=3} & Age-Specific K \\
	& \multicolumn{2}{l}{N parameter lines equal to the number of K deviations for the ages specified above.} \Bstrut\\
	\hline
-	\Tstrut & CV young & Variability for size at age <= Amin for females, growth pattern 1. Note that CV cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as CV or as standard deviation, depending on assigned value of CV pattern.\\
+	\Tstrut & CV young & Variability for size at age <= Amin for females, growth pattern 1. Note that CV cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as CV or as standard deviation, depending on assigned value of CV pattern. \\
	& CV old & Variability for size at age >= Amax for females, growth pattern 1. For intermediate ages, do a linear interpolation of CV on mean size-at-age. Note that the units for CV will depend on the CV pattern and the value of mortality-growth parameter as offset. The CV value cannot vary over time. \Bstrut\\
	\hline
@@ -808,7 +825,7 @@ \subsubsection{Read Biology Parameters}

	& Lmin & Length at Amin (units in cm) for male, growth pattern 1. In a two sex model, fixing the INIT value at 0 will assume the same Lmin as the female parameter value. \\
	& Lmax & Length at Amax (units in cm) for male, growth pattern 1. In a two sex model, fixing the INIT value at 0 will assume the same Lmax as the female parameter value. \\
-	& VBK & von Bertanlaffy growth coefficient (units are per year) for males, growth pattern 1. In a two sex model, fixing the INIT value a 0 will assume the same k as the female parameter value. \Bstrut\\
+	& VBK & von Bertalanffy growth coefficient (units are per year) for males, growth pattern 1. In a two sex model, fixing the INIT value at 0 will assume the same k as the female parameter value. \Bstrut\\
	\hline

	\multicolumn{2}{l}{COND if growth type = 2} & \Tstrut\\
@@ -849,7 +866,7 @@ \subsubsection{Read Biology Parameters}
	\multicolumn{2}{l}{Recruitment Dist. 2} & Recruitment apportionment parameter for the 2nd settlement event. \Bstrut\\
	\hline
-	\multicolumn{2}{l}{Cohort growth deviation} \Tstrut & Set equal to 1.0 and do not estimate; it is deviations from this base that matter.\Bstrut\\
+	\multicolumn{2}{l}{Cohort growth deviation} \Tstrut & Set equal to 1.0 and do not estimate; it is deviations from this base that matter. \Bstrut\\
	\hline

	\multicolumn{2}{l}{2 x N selected movement pairs} & Movement parameters \Tstrut\Bstrut\\
@@ -992,8 +1009,8 @@ \subsection{Spawner-Recruitment}
	& & 6: Beverton-Holt with flat-top beyond Bzero, 2 parameters: ln(R0) and steepness; \\
	& & 7: \hyperlink{Survivorship}{Survivorship function}: 3 parameters: ln(R0), $z_{frac}$, and $\beta$, suitable for sharks and low fecundity stocks to assure recruits are <= population production; \\
	%& & 8: \hyperlink{Shepherd}{Shepherd}: 3 parameters: ln(R0), steepness, and shape parameter, $c$;\\
-	& & 8: \hyperlink{Shepherd}{Shepherd re-parameterization}: 3 parameters: ln(R0), steepness, and shape parameter, $c$ (added to version 3.30.11 and is in beta mode); and \\
-	& & 9: \hyperlink{Ricker2}{Ricker re-parameterization}: 3 parameters: ln(R0), steepness, and Ricker power, $\gamma$ (added to version 3.30.11 and is in beta mode).
\Bstrut\\
+	& & 8: \hyperlink{Shepherd}{Shepherd re-parameterization}: 3 parameters: ln(R0), steepness, and shape parameter, $c$ (added to v.3.30.11 and is in beta mode); and \\
+	& & 9: \hyperlink{Ricker2}{Ricker re-parameterization}: 3 parameters: ln(R0), steepness, and Ricker power, $\gamma$ (added to v.3.30.11 and is in beta mode). \Bstrut\\
	\hline

	1 \Tstrut & Equilibrium recruitment & Use steepness in initial equilibrium recruitment calculation. \\
@@ -1131,7 +1148,7 @@ \subsubsection{Spawner-Recruitment Parameter Setup}

	0.60 \Tstrut & $\sigma_R$ & Standard deviation of natural log recruitment. This parameter has two related roles. It penalizes deviations from the spawner-recruitment curve, and it defines the offset between the arithmetic mean spawner-recruitment curve (as calculated from ln(R0) and steepness) and the expected geometric mean (which is the basis from which the deviations are calculated). Thus the value of $\sigma_R$ must be selected to approximate the true average recruitment deviation. See the \hypertarget{TuneSigmaR}{Tuning $\sigma_R$} section below for additional guidance on how to tune $\sigma_R$. \Bstrut\\
	%\hline
-	0\Tstrut & Regime Parameter & This replaces the R1 offset parameter. It can have a block for the initial equilibrium year, so can fully replicate the functionality of the previous R1 offset approach. The SR regime parameter is intended to have a base value of 0.0 and not be estimated. Similar to cohort-growth deviation, it serves simply as a base for adding time-varying adjustments. This concept is similar to the old environment effect on deviates feature in SS3 v.3.24 and earlier. \Bstrut\\
+	0\Tstrut & Regime Parameter & This replaces the R1 offset parameter. It can have a block for the initial equilibrium year, so can fully replicate the functionality of the previous R1 offset approach. The SR regime parameter is intended to have a base value of 0.0 and not be estimated. Similar to cohort-growth deviation, it serves simply as a base for adding time-varying adjustments. This concept is similar to the old environment effect on deviates feature in v.3.24 and earlier. \Bstrut\\
	\hline

	0 & Autocorrelation & Autocorrelation in recruitment. \Tstrut\Bstrut\\
@@ -1154,6 +1171,7 @@ \subsubsection{Spawner-Recruitment Parameter Setup}
	\hline
	\end{longtable}
\end{center}
+\vspace*{-1.7\baselineskip}

\subsubsection{Spawner-Recruitment Time-Varying Parameters}

@@ -1189,8 +1207,8 @@ \subsubsection{Recruitment Deviation Setup}

	1 \Tstrut & Do Recruitment Deviations & This selects the way in which recruitment deviations are coded: \\
	& & 0: None (so all recruitments come from spawner recruitment curve). \\
-	& & 1: Deviation vector (previously the only option): the deviations are encoded as a deviation vector, so ADMB enforces a sum-to-zero constraint. \\
-	& & 2: Simple deviations: the deviations do not have an explicit constraint to sum to zero, although they still should end up having close to a zero sum. The difference in model performance between options (1) and (2) has not been fully explored to date. \\
+	& & 1: Deviation vector (previously the only option): the deviations during the main period are encoded as a deviation vector that enforces them to sum to zero for this period. \\
+	& & 2: Simple deviations: the deviations do not have an explicit constraint to sum to zero, although they still should end up having close to a zero sum. The difference in model performance between options (1) and (2) has not been fully explored to date.
This is the recommended option if doing MCMC (see the \href{https://github.com/admb-project/admb/issues/107}{issue 107} in the ADMB GitHub Repository for more information on this). \\
	& & 3: Deviation vector (added in v.3.30.13) where the estimated recruitment is equal to the R0 adjusted for blocks multiplied by a simple deviation vector of unconstrained deviations. The negative log likelihood from the deviation vector is equal to the natural log of the estimated recruitment divided by the expected recruitment by year adjusted for the spawner-recruit curve, regimes, environmental parameters, and bias-adjustment. The negative log likelihood between options 2 and 3 is approximately equal. \\
	& & 4: Similar to option 3 but includes a penalty based on the sum of the deviations (added in v.3.30.13). \\
	%& & Note: As of version 3.30.13 there is now an option to retain the last deviation estimated and apply that value into the forecast period. To specify this option add the value 2 before the deviation vector option (i.e., 21, 22, 23, or 24).\Bstrut\\
	\hline

	1971 \Tstrut & Main recruitment deviations begin year & If begin year is less than the model start year, then the early deviations are used to modify the initial age composition. However, if set to be more than the population maximum age before start year, it is changed to equal the maximum age before start year. \Bstrut\\
	\hline
-	2017 \Tstrut & Main recruitment deviations end year & If recruitment deviations end year is later than retro year, it is reset to equal retro year. The final year to estimate main recruitment deviations should be set to a year where information about young fish in the data becomes limited. As example, if the model end year is 2020 and the fleet/survey only starts observing fish of age 2+, the last year to estimate main recruitment deviations could be set to 2018. Years after the main period but before the end model year will be estimated as late deviations \Bstrut\\
+	2017 \Tstrut & \hypertarget{RecDevEndYear}{Main recruitment deviations end year} & The final year to estimate main recruitment deviations should be set to a year where information about young fish in the data becomes limited. For example, if the model end year is 2020 and the fleet/survey only starts observing fish of age 2+, the last year to estimate main recruitment deviations could be set to 2018. Years after the main period but before the end model year will be estimated as late deviations. If recruitment deviations end year is later than retro year, it is reset to equal retro year. \Bstrut\\
	\hline

	3 \Tstrut & Main recruitment deviations phase. & \Bstrut\\
@@ -1210,17 +1228,17 @@ \subsubsection{Recruitment Deviation Setup}
	\hline
	\multicolumn{3}{l}{COND = 1 Beginning of advanced options} \Tstrut\Bstrut\\
-	& 1950 & Early Recruitment Deviation Start Year: \\
+	& 1950 & Early Recruitment Deviations Start Year: \\
	& & 0: skip (default); \\
	& & +year: absolute year (must be less than begin year of main recruitment deviations); and \\
	& & -integer: set relative to main recruitment deviations start year. \\
	& & Note: because this is a deviation vector, it should be long enough so that recruitment deviations for individual years are not unduly constrained. \\
-	\Tstrut & 6 & Early Recruitment Deviation Phase: \\
+	\Tstrut & 6 & Early Recruitment Deviations Phase: \\
	& & Negative value: default value to not estimate early deviations.
\\
	& & Users may want to set this to a late phase if there is not much early data. \\
-	\Tstrut & 0 & Forecast Recruitment Phase: \\
+	\Tstrut & 0 & \hypertarget{FcastRecDevPhase}{Forecast Recruitment Deviations Phase}: \\
	& & 0 = Default value. \\
	& & Forecast recruitment deviations always begin in the first year after the end of the main recruitment deviations. Recruitment in the forecast period is deterministically derived from the specified stock-recruitment relationship. Setting their phase to 0 causes their phase to be set to max lambda phase +1 (so that they become active after the rest of the parameters have converged). However, it is possible here to set an earlier phase for their estimation, or to set a negative phase to keep the forecast recruitment deviations at a constant level. \Bstrut\\
@@ -1313,11 +1331,11 @@ \subsubsection{Recruitment Deviation Setup}

A non-equilibrium initial age composition is achieved by setting the first year of the recruitment deviations before the model start year. These pre-start year recruitment deviations will be applied to the initial equilibrium age composition to adjust this composition before starting the time series. The model first applies the initial F level to an equilibrium age composition to get a preliminary N-at-age vector and the catch that comes from applying the F's to that vector, and then it applies the recruitment deviations for the specified number of younger ages in this vector. If the number of estimated ages in the initial age composition is less than maximum age, then the older ages will retain their equilibrium levels. Because the older ages in the initial age composition will have progressively less information from which to estimate their true deviation, the start of the bias adjustment should be set accordingly.

\subsection{Fishing Mortality Method}
-There are four methods available for calculation of fishing mortality (F): 1) Pope's approximation, 2) Baranov's continuous F with each F as a model parameter, 3) a hybrid F method, and 4) a fleet-specific parameter hybrid F approach (introduced in version 3.30.18).
+There are four methods available for calculation of fishing mortality (F): 1) Pope's approximation, 2) Baranov's continuous F with each F as a model parameter, 3) a hybrid F method, and 4) a fleet-specific parameter hybrid F approach (introduced in v.3.30.18).

-A new fleet-specific parameter hybrid F approach was introduced in version 3.30.18 and is now the recommended approach for most models. With this approach, some fleets can stay in hybrid F mode while others transition to parameters. For example, bycatch fleets must start with parameters in phase 1, while other fishing fleets can use hybrid F or start with hybrid and transition to parameters at a fleet-specific designated phase. We believe this new method 4 is a superior super-set to current methods 2 (all use parameters and all can start hybrid then switch to parameters) and method 3 (all hybrid for all phases). However, during testing specific situations were identified when this approach may not be the best selection. If there is uncertainty around annual input catch values (e.g., se = 0.15) and some fleets have discard data being fit to as well, the treatment of F as parameters (method 2) may allow for better model fits to the data.
+A new fleet-specific parameter hybrid F approach was introduced in v.3.30.18 and is now the recommended approach for most models. With this approach, some fleets can stay in hybrid F mode while others transition to parameters.
For example, bycatch fleets must start with parameters in phase 1, while other fishing fleets can use hybrid F or start with hybrid and transition to parameters at a fleet-specific designated phase. We believe this new method 4 is a superior super-set to current methods 2 (all use parameters and all can start hybrid then switch to parameters) and method 3 (all hybrid for all phases). However, during testing, specific situations were identified in which this approach may not be the best choice. If there is uncertainty around annual input catch values (e.g., se = 0.15) and some fleets have discard data being fit to as well, the treatment of F as parameters (method 2) may allow for better model fits to the data.

-The hybrid F method does a Pope's approximation to provide initial values for iterative adjustment of the Baranov continuous F values to closely approximate the observed catch. Prior to version 3.30.18, the hybrid method (method 3) was recommended in most cases. With the hybrid method, the final values are in terms of continuous F, but do not need to be specified as full parameters. In a 2 fishery model, low F case (e.g., similar to natural mortality or lower), the hybrid method is just as fast as the Pope approximation and produces identical results.
+The hybrid F method does a Pope's approximation to provide initial values for iterative adjustment of the Baranov continuous F values to closely approximate the observed catch. Prior to v.3.30.18, the hybrid method (method 3) was recommended in most cases. With the hybrid method, the final values are in terms of continuous F, but do not need to be specified as full parameters. In a 2-fishery model with low F (e.g., F similar to natural mortality or lower), the hybrid method is just as fast as the Pope approximation and produces identical results.

However, when F is very high, the problem becomes quite computationally stiff for Pope's approximation and the hybrid method, so convergence in ADMB may slow due to more sensitive gradients in the log likelihood. In these high F cases it may be better to use F option 2, continuous F as full parameters. It is also advisable to allow the model to start with good values for the F parameters. This can be done by specifying a later phase (>1) under the conditional input for F method = 2 where early phases will use the hybrid method, then switch to F as parameter in later phases and transfer the hybrid F values to the parameter initial values.

@@ -1373,7 +1391,7 @@ \subsection{Fishing Mortality Method}
	\hline
	\multicolumn{3}{l}{COND: F method = 4} \Tstrut\\
-	& & Read list of fleets needing parameters, starting F values, and phases. To treat a fleet F as hybrid only select a phase of 99. A parameter line is not required for all fleets and if not specified will be treated as hybrid across all phases, except for bycatch fleets which are required to have an input parameter line. Use a negative phase to set F as constant (i.e., not estimated) in v. 3.30.19 and higher. \Tstrut\\
+	& & Read list of fleets needing parameters, starting F values, and phases. To treat a fleet's F as hybrid only, select a phase of 99. A parameter line is not required for all fleets; if not specified, a fleet will be treated as hybrid across all phases, except for bycatch fleets, which are required to have an input parameter line. Use a negative phase to set F as constant (i.e., not estimated) in v.3.30.19 and higher.
\Tstrut\\
	Fleet & Parameter Value & Phase \Tstrut\\
	1 & 0.05 & 1 \\
	2 & 0.01 & 1 \\
@@ -1436,7 +1454,7 @@ \subsection{Catchability}
\begin{enumerate}
	\item 1 = simple Q, proportional assumption about Q: $y=q*x$.
	\item 2 = mirror simple Q - this will mirror the Q value from another fleet. Mirror in Q must refer to a lower number fleet relative to the fleet with the mirrored Q (example: fleet 3 mirror fleet 2). Requires a Q parameter line for the fleet but will not be used.
-	\item 3 = Q with power, 2 parameters establish a parameter for non-linearity in survey-abundance linkage. Assumes proportional with offset and power function: $y=qx^c$ where $q = exp(lnQ_{base}))$ thus the $c$ is not related to expected biomass but vulnerable biomass to Q. Therefore, $c$ $<$ 0 leads to hyper-stability and $c > 0$ leads to hyper-depletion.
+	\item 3 = Q with power, 2 parameters. Establishes a parameter for non-linearity in the survey-abundance linkage. Assumes proportional with offset and power function: $y=qx^c$ where $q = exp(lnQ_{base})$; thus the $c$ is not related to expected biomass but vulnerable biomass to Q. Therefore, $c > 0$ leads to hyper-stability and $c < 0$ leads to hyper-depletion.
	\item 4 = mirror Q with offset (2 parameter lines required). The mirrored Q with offset will be reported as base Q + offset value. Mirror in Q must refer to a lower number fleet relative to the fleet with the mirrored Q. See \hyperlink{MirrorQoffset}{mirrored Q with offset} below for an example set up.
	\item If the parameter is for an index of a deviation vector (index units = 35), use this column to enter the index of the deviation vector to which the index is related.
\end{enumerate}
@@ -1541,9 +1559,9 @@ \subsubsection{Mirrored Q with offset}

\subsubsection{Float Q}
The use and development of float in Q has evolved over time within SS3. The original approach in earlier versions of SS3 (version 3.24 and before) is that with Q ``float'' the units of the survey or fishery CPUE were treated as dimensionless so the Q was adjusted within each model iteration to maintain a mean difference of 0.0 between observed and expected (usually in natural log space). In contrast, with Q as a parameter (float = 0) one had the ability to interpret the absolute scaling of Q and put a prior on it to help guide the model solution. Also, with Q as a parameter the code allowed for Q to be time-varying.

-Then midway through the evolution of the SS3 v.3.24 code lineage a new Q option was introduced based on user recommendations. This option allowed Q to float and to compare the resulting Q value to a prior, hence the information in that prior would pull the model solution in direction of a floated Q that came close to the prior.
+Then midway through the evolution of the v.3.24 code lineage a new Q option was introduced based on user recommendations. This option allowed Q to float and to compare the resulting Q value to a prior; hence, the information in that prior would pull the model solution in the direction of a floated Q that came close to the prior.

-Currently, in 3.30, that float with prior capability is fully embraced. All fleets that have any survey or CPUE options need to have a catchability specification and get a base Q parameter in the list.
+Currently, in v.3.30, that float with prior capability is fully embraced. All fleets that have any survey or CPUE options need to have a catchability specification and get a base Q parameter in the list.
Any of these Q's can be either:

\begin{itemize}
	\item Fixed: by not floating and not estimating.
@@ -1556,13 +1574,13 @@ \subsubsection{Float Q}

Q relates the units of the survey or CPUE to the population abundance, not the population density per unit area. But many surveys and most fishery CPUE are proportional to mean fish density per unit area. This does not have any impact in a one area model because the role of area is absorbed into the value of Q. In a multi-area model, one may want to assert that the relative difference in CPUE between two areas is informative about the relative abundance between the areas. However, CPUE is a measure of fish density per unit area, so one may want to multiply CPUE by area before putting the data into the model so that asserting the same Q for the two areas will be informative about relative abundance.

-In SS3 v.3.30.13, a new catchability option has been added that allows Q to be mirrored and to add an offset to ln(Q) of the primary area when calculating the ln(Q) for the dependent area. The offset is a parameter and, hence, can be estimated and have a prior. This option allows the CPUE data to stay in density units and the effect of relative stock area is contained in the value of the ln(Q) offset.
+In v.3.30.13, a new catchability option has been added that allows Q to be mirrored and to add an offset to ln(Q) of the primary area when calculating the ln(Q) for the dependent area. The offset is a parameter and, hence, can be estimated and have a prior. This option allows the CPUE data to stay in density units, and the effect of relative stock area is contained in the value of the ln(Q) offset.

\subsubsection{Catchability Time-Varying Parameters}
Time-varying catchability can be used. Details on how to specify time-varying parameters can be found in the \hyperlink{tvOrder}{Time-Varying Parameter Specification and Setup} section.

-\subsubsection{Q Conversion Issues Between SS3 v.3.24 and v.3.30}
-In SS3 v.3.24 it was common to use the deviation approach implemented as if it was survey specific blocks to create a time-varying Q for a single survey. In some cases, only one year's deviation was made active in order to implement, in effect, a block for Q. The transition executable (sstrans.exe) cannot convert this, but an analogous approach is available in SS3 v.3.30 because true blocks can now be used, as well as environmental links and annual deviations. Also note that deviations in SS3 v.3.24 were survey specific (so no parameter for years with no survey). In SS3 v.3.30, deviations are always year-specific, so you might have a deviation created for a year with no survey.
+\subsubsection{Q Conversion Issues Between Stock Synthesis v.3.24 and v.3.30}
+In v.3.24 it was common to use the deviation approach implemented as if it were survey-specific blocks to create a time-varying Q for a single survey. In some cases, only one year's deviation was made active in order to implement, in effect, a block for Q. The transition executable (sstrans.exe) cannot convert this, but an analogous approach is available in v.3.30 because true blocks can now be used, as well as environmental links and annual deviations. Also note that deviations in v.3.24 were survey-specific (so no parameter for years with no survey). In v.3.30, deviations are always year-specific, so you might have a deviation created for a year with no survey.

\subsection{Selectivity and Discard}
For each fleet and survey, read a definition line for size selectivity and retention.
@@ -1619,7 +1637,7 @@ \subsection{Selectivity and Discard} \myparagraph{Age Selectivity} For each fleet and survey, read a definition line for age selectivity. The 4 values to be read are the same as for the size-selectivity. -As of SS3 v.3.30.15, for some selectivity patterns the user can specify the minimum age of selected fish. Most selectivity curves by default select age 0 fish (i.e., inherently specify the minimum age of selected fish as 0). However, it is fairly common for the age bins specified in the data file to start at age 1. This means that any age 0 fish selected are pooled up into the age 1' bin, which will have a detrimental effect on fitting age-composition data. In order to prevent the selection of age 0 (or older) fish, the user can specify the minimum selected age for some selectivity patterns (12, 13, 14, 16, 18, 26, or 27) in versions of SS3 v.3.30.15 and later. For example, if the minimum selected age is 1 (so that age 0 fish are not selected), selectivity pattern type can be specified as 1XX, where XX is the selectivity pattern. A more specific example is if selectivity is age-logistic and the minimum selected age desired is 1, the selectivity pattern would be specified as 112 (the regular age-logistic selectivity pattern is option 12). The user can also select higher minimum selected ages, if desired; for example, 212 would be the age-logistic selectivity pattern with a minimum selected age of 2 (so that age 0 and 1 fish are not selected). +As of v.3.30.15, for some selectivity patterns the user can specify the minimum age of selected fish. Most selectivity curves by default select age 0 fish (i.e., inherently specify the minimum age of selected fish as 0). However, it is fairly common for the age bins specified in the data file to start at age 1. This means that any age 0 fish selected are pooled up into the ``age 1'' bin, which will have a detrimental effect on fitting age-composition data. In order to prevent the selection of age 0 (or older) fish, the user can specify the minimum selected age for some selectivity patterns (12, 13, 14, 16, 18, 26, or 27) in v.3.30.15 and later. For example, if the minimum selected age is 1 (so that age 0 fish are not selected), the selectivity pattern type can be specified as 1XX, where XX is the selectivity pattern. A more specific example: if selectivity is age-logistic and the minimum selected age desired is 1, the selectivity pattern would be specified as 112 (the regular age-logistic selectivity pattern is option 12). The user can also select higher minimum selected ages, if desired; for example, 212 would be the age-logistic selectivity pattern with a minimum selected age of 2 (so that age 0 and 1 fish are not selected). \subsubsection{Reading the Selectivity and Retention Parameters} Read the required number of parameter setup lines as specified by the definition lines above. The complete order of the parameter setup lines is: @@ -1767,7 +1785,7 @@ \subsubsection{Selectivity Pattern Details} \end{itemize} \myparagraph{Pattern 2 (size) - Older version of selectivity pattern 24 for backward compatibility} -Pattern 2 differs from pattern 24 only in the treatment of sex-specific offset parameter 5. See note in \hyperlink{MaleSelectivityOffset}{Male Selectivity Estimated as Offsets from Female Selectivity} for more information. Pattern 24 was changed in version 3.30.19 with the old parameterization now provided in Pattern 2. +Pattern 2 differs from pattern 24 only in the treatment of sex-specific offset parameter 5.
See note in \hyperlink{MaleSelectivityOffset}{Male Selectivity Estimated as Offsets from Female Selectivity} for more information. Pattern 24 was changed in v.3.30.19 with the old parameterization now provided in Pattern 2. \myparagraph{Pattern 5 (size) - Mirror Selectivity} Two parameters select the min and max bin number (not min/max size) of the source selectivity pattern. If the first parameter has a value $\leq 0$, it is interpreted as a value of 1 (i.e., the first bin). If the second parameter has a value $\leq 0$, it is interpreted as the maximum length bin (i.e., the last bin specified in the data file). The mirrored selectivity pattern must be from a lower fleet number (i.e., already specified before the mirrored fleet). @@ -1966,9 +1984,9 @@ \subsubsection{Selectivity Pattern Details} For a 3 node setup, the input parameters would be: \begin{itemize} - \item p1 - Code for initial set-up which controls whether or not auto-generation is applied (input options are 0, 1, 2, 10, 11, or 12) as explained below - \item p2 - Gradient at the first node (should be a small positive value, or fixed at 1e30 to implement a ``natural cubic spline'') - \item p3 - Gradient at the last node (should be zero, a small negative value, or fixed at 1e30 to implement a ``natural cubic spline'') + \item p1 - Code for initial set-up which controls whether or not auto-generation is applied (input options are 0, 1, 2, 10, 11, or 12) as explained below + \item p2 - Gradient at the first node (should be a small positive value, or fixed at 1e30 to implement a ``natural cubic spline'') + \item p3 - Gradient at the last node (should be zero, a small negative value, or fixed at 1e30 to implement a ``natural cubic spline'') \item p4-p6 - The nodes in units of cm; must be in rank order and inside the range of the population length bins. These must be held constant (not estimated, e.g., negative phase value) during a model run. \item p7-p9 - The values at the nodes. Units are ln(selectivity) before rescaling. \end{itemize} @@ -2191,7 +2209,7 @@ \subsubsection{Retention} \begin{itemize} \item p1 - ascending inflection, \item p2 - ascending slope, - \item p3 - maximum retention controlling the height of the asymptote (smaller values result in lower asymptotes), often a time-varying quantity to match the observed amount of discard. As of v. 3.30.01, this parameter is now input in logit space ranging between -10 and 10. A fixed value of -999 would assume no retention of fish and a value of 999 would set asymptotic retention equal to 1.0, + \item p3 - maximum retention controlling the height of the asymptote (smaller values result in lower asymptotes), often a time-varying quantity to match the observed amount of discard. As of v.3.30.01, this parameter is now input in logit space ranging between -10 and 10. A fixed value of -999 would assume no retention of fish and a value of 999 would set asymptotic retention equal to 1.0, \item p4 - male offset to ascending inflection (arithmetic, not multiplicative), \end{itemize} \item Dome-shaped (add the following 3 parameters): @@ -2264,7 +2282,7 @@ \subsubsection{Sex-Specific Selectivity} Notes: \begin{itemize} \item Male selectivity offsets currently cannot be time-varying; because they are offsets from female selectivity, they inherit the time-varying characteristics of the female selectivity. - \item Prior to version 3.30.19 male parameter 5 in pattern 24 scaled only the apical selectivity.
This sometimes resulted in strange shapes when the final selectivity, which was shared between females and males in that parameterization, was higher than the estimated apical selectivity. For backwards compatibility to the pattern 24 parameterization prior to 3.30.19, use selectivity pattern 2. + \item Prior to v.3.30.19 male parameter 5 in pattern 24 scaled only the apical selectivity. This sometimes resulted in strange shapes when the final selectivity, which was shared between females and males in that parameterization, was higher than the estimated apical selectivity. For backward compatibility with the pattern 24 parameterization prior to v.3.30.19, use selectivity pattern 2. \end{itemize} \hypertarget{Dirichletparameter}{} @@ -2430,10 +2448,10 @@ \subsection{Tag Recapture Parameters} Currently, tag parameters cannot be time-varying. -A shortcoming was identified in the recapture calculations when using Pope's F Method and multiple seasons in SS3 prior to v.3.30.14. The internal calculations were corrected in version 3.30.14. Now the Z-at-age is applied internally for calculations of fishing pressure on the population when using the Pope calculations. +A shortcoming was identified in the recapture calculations when using Pope's F Method and multiple seasons in SS3 prior to v.3.30.14. The internal calculations were corrected in v.3.30.14. Now the Z-at-age is applied internally for calculations of fishing pressure on the population when using the Pope calculations. \myparagraph{Mirroring of Tagging Parameters} -In version 3.30.14, the ability to mirror the tagging parameters from another tag group or fleet was added. With this approach, the user can have just one parameter value for each of the five tagging parameter types and mirror all other parameters. Note that parameter lines are still required for the mirrored parameters and only lower numbered parameters can be mirrored. Mirroring is evoked through the phase input in the tagging parameter section. The options are: +In v.3.30.14, the ability to mirror the tagging parameters from another tag group or fleet was added. With this approach, the user can have just one parameter value for each of the five tagging parameter types and mirror all other parameters. Note that parameter lines are still required for the mirrored parameters and only lower-numbered parameters can be mirrored. Mirroring is invoked through the phase input in the tagging parameter section. The options are: \begin{itemize} \item No mirroring among tag groups or fleets: phase > -1000, \item Mirror the next lower (i.e., already specified) tag group or fleet: phase = -1000 and set other parameter values the same as the next lower Tag Group or fleet, @@ -2446,17 +2464,16 @@ \subsection{Variance Adjustment Factors} When doing iterative re-weighting of the input variance factors, it is convenient to do this in the control file, rather than the data file. This section creates that capability.
+\begin{longtable}{p{3cm} p{3cm} p{2.5cm} p{6.25cm}} -\begin{longtable}{p{3cm} p{3cm} p{2.5cm} p{6.25cm} } - - \multicolumn{4}{l}{Read variance adjustment factors to be applied:}\\ + \multicolumn{4}{l}{Read variance adjustment factors to be applied:} \\ \hline Factor & Fleet & Value & Description \Tstrut\Bstrut\\ \hline 1 & 2 & 0.5 & \# Survey CV for survey/fleet 2 \Tstrut\\ 4 & 1 & 0.25 & \# Length data for fleet 1 \\ - 4 & 2 & 0.75 & \# Length data for fleet 2\\ - -9999 & 0 & 0 & \# End read\Bstrut\\ + 4 & 2 & 0.75 & \# Length data for fleet 2 \\ + -9999 & 0 & 0 & \# End read \Bstrut\\ \hline \end{longtable} @@ -2511,14 +2528,14 @@ \subsection{Lambdas (Emphasis Factors)} \myparagraph{Lambda Usage Notes} \hypertarget{SaAlambda}{If} the CV for size-at-age is being estimated and the model contains mean size-at-age data, then the flag for inclusion of the + ln(stddev) term in the likelihood must be included. Otherwise, the model will always get a better fit to the mean size-at-age data by increasing the parameter for CV of size-at-age. -The reading of the lambda values has been substantially altered with SS3 v.3.30. Instead of reading a matrix containing all the needed lambda values, the model now just reads those elements that will be given a value other than 1.0. After reading the datafile, the model sets lambda equal to 0.0 if there are no data for a particular fleet/data type, and a value of 1.0 if data exist. So beware if your data files had data but you had set the lambda to 0.0 in a previous version of SS3. First read an integer for the number of changes. +The reading of the lambda values has been substantially altered with v.3.30. Instead of reading a matrix containing all the needed lambda values, the model now just reads those elements that will be given a value other than 1.0. After reading the datafile, the model sets lambda equal to 0.0 if there are no data for a particular fleet/data type, and a value of 1.0 if data exist. So beware if your data files had data but you had set the lambda to 0.0 in a previous version of SS3. First read an integer for the number of changes. \begin{longtable}{p{3cm} p{3cm} p{2cm} p{3cm} p{3cm}} - \multicolumn{5}{l}{Read the lambda adjustments by fleet and data type:}\\ + \multicolumn{5}{l}{Read the lambda adjustments by fleet and data type:} \\ \hline - Likelihood & & & Lambda & SizeFreq\Tstrut\\ + Likelihood & & & Lambda & SizeFreq \Tstrut\\ Component & Fleet & Phase & Value & Method \Bstrut\\ \hline 1 & 2 & 2 & 1.5 & 1 \Tstrut\\ @@ -2534,19 +2551,19 @@ \subsection{Lambdas (Emphasis Factors)} \multicolumn{2}{l}{The codes for component are:}\\ \hline 1 = survey & 10 = recruitment deviations \Tstrut\\ - 2 = discard & 11 = parameter priors\\ - 3 = mean weight & 12 = parameter deviations\\ - 4 = length & 13 = crash penalty\\ - 5 = age & 14 = morph composition\\ - 6 = size frequency & 15 = tag composition\\ - 7 = size-at-age & 16 = tag negative binomial\\ - 8 = catch & 17 = F ballpark\\ + 2 = discard & 11 = parameter priors \\ + 3 = mean weight & 12 = parameter deviations \\ + 4 = length & 13 = crash penalty \\ + 5 = age & 14 = morph composition \\ + 6 = size frequency & 15 = tag composition \\ + 7 = size-at-age & 16 = tag negative binomial \\ + 8 = catch & 17 = F ballpark \\ 9 = initial equilibrium catch (see note below) & 18 = regime shift \Bstrut\\ \hline \end{longtable} \end{center} -Starting in SS3 v.3.30.16, the application of a lambda to initial equilibrium catch is now fleet specific. 
In previous versions, a single lambda was applied in the same manner across all fleets with an initial equilibrium catch specified. +Starting in v.3.30.16, the application of a lambda to initial equilibrium catch is now fleet-specific. In previous versions, a single lambda was applied in the same manner across all fleets with an initial equilibrium catch specified. \pagebreak @@ -2556,15 +2573,15 @@ \subsection{Controls for Variance of Derived Quantities} \begin{longtable}{p{1.1cm} p{1.4cm} p{1.2cm} p{1.2cm} p{1.3cm} p{1.6cm} p{1.4cm} p{1.4cm} p{1.4cm}} \hline - \multicolumn{3}{l}{Typical Value} & \multicolumn{6}{l}{Description and Options}\Tstrut\Bstrut\\ + \multicolumn{3}{l}{Typical Value} & \multicolumn{6}{l}{Description and Options} \Tstrut\Bstrut\\ \hline \endfirsthead \multicolumn{3}{l}{0} & \multicolumn{6}{l}{0 = No additional std dev reporting;} \Tstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{1 = read specification for reporting stdev for selectivity, size, numbers; and}\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{2 = read specification for reporting stdev for selectivity, size, numbers, }\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{natural mortality, dynamic B0, and Summary Bio}\Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{1 = read specification for reporting stdev for selectivity, size, numbers; and} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{2 = read specification for reporting stdev for selectivity, size, numbers,} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{natural mortality, dynamic B0, and Summary Bio} \Bstrut\\ \hline \end{longtable} @@ -2632,13 +2649,13 @@ \subsection{Controls for Variance of Derived Quantities} \begin{longtable}{p{1.1cm} p{1.4cm} p{1.2cm} p{1.2cm} p{1.3cm} p{1.6cm} p{1.4cm} p{1.4cm} p{1.4cm}} \hline - \multicolumn{9}{l}{Example Input:}\Tstrut\Bstrut\\ + \multicolumn{9}{l}{Example Input:} \Tstrut\Bstrut\\ \hline \multicolumn{3}{l}{2} & \multicolumn{6}{l}{\# 0 = No additional std dev reporting;} \Tstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 1 = read values below; and}\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 2 = read specification for reporting stdev for selectivity, size,numbers, and }\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# natural mortality.}\Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 1 = read values below; and} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 2 = read specification for reporting stdev for selectivity, size, numbers, and} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# natural mortality.} \Bstrut\\ \hline \multicolumn{4}{l}{1 1 -1 5} & \multicolumn{5}{l}{\# Selectivity} \Bstrut\\ @@ -2648,12 +2665,12 @@ \subsection{Controls for Variance of Derived Quantities} \multicolumn{4}{l}{1} & \multicolumn{5}{l}{\# Dynamic Bzero} \Bstrut\\ \multicolumn{4}{l}{1} & \multicolumn{5}{l}{\# Summary Biomass} \Bstrut\\ - \multicolumn{4}{l}{5 15 25 35 38} & \multicolumn{5}{l}{\# Vector with selectivity std bins (-1 in first bin to self-generate)}\Bstrut\\ - \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with growth std ages picks (-1 in first bin to self-generate)}\Bstrut\\ - \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with numbers-at-age std ages (-1 in first bin to self-generate)}\Bstrut\\ - \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with M-at-age std ages (-1 in first bin to self-generate)}\Bstrut\\ + \multicolumn{4}{l}{5 15 25 35 38} & \multicolumn{5}{l}{\# Vector
with selectivity std bins (-1 in first bin to self-generate)} \Bstrut\\ + \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with growth std ages picks (-1 in first bin to self-generate)} \Bstrut\\ + \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with numbers-at-age std ages (-1 in first bin to self-generate)} \Bstrut\\ + \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with M-at-age std ages (-1 in first bin to self-generate)} \Bstrut\\ \hline - \bfseries{999} & \multicolumn{8}{l}{\#End of the control file input}\Tstrut\Bstrut\\ + \bfseries{999} & \multicolumn{8}{l}{\# End of the control file input} \Tstrut\Bstrut\\ \hline \end{longtable} diff --git a/README.md b/README.md index f8e791e9..2e6cf24d 100644 --- a/README.md +++ b/README.md @@ -1,18 +1,18 @@ # doc -Source code for the stock synthesis manual and other supplementary documentation +Source code for the Stock Synthesis manual and other supplementary documentation. ## What documentation is available in this repository? The documentation includes: - The Stock Synthesis user manual source code, in .tex files -- Getting started guide and Introduction to building an ss model guide, available in the [User_Guides subdirectory](https://github.com/nmfs-stock-synthesis/doc/tree/main/User_Guides) +- Getting started guide and Introduction to building an SS3 model guide, available in the [User_Guides subdirectory](https://github.com/nmfs-stock-synthesis/doc/tree/main/User_Guides) ## Where can I find compiled versions of the documentation? -See the [documentation index page](https://nmfs-stock-synthesis.github.io/doc/) for links to the latest compiled documentation. +See the [documentation index page](https://nmfs-stock-synthesis.github.io/doc/) for links to PDF and HTML versions of the compiled documentation from the latest Stock Synthesis release. -PDF versions of the user manual are available in the [ss3 manuals folder on the vlab website](https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/4513132). +PDF versions of the user manual are also available in the [ss3 manuals folder on the vlab website](https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/4513132). The [contributing guide](https://github.com/nmfs-stock-synthesis/doc/blob/main/CONTRIBUTING.md) contains information on [how to compile locally or with github actions](https://github.com/nmfs-stock-synthesis/doc/blob/main/CONTRIBUTING.md#compiling-the-stock-synthesis-manual). @@ -22,7 +22,7 @@ Please open an [issue](https://github.com/nmfs-stock-synthesis/doc/issues) or su ## Where should I ask a question about Stock Synthesis? -If you have a question related to Stock Synthesis, please ask it on the [Stock Synthesis forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums), post an issue [in the Stock Synthesis github repository for model questions](https://github.com/nmfs-stock-synthesis/stock-synthesis/issues) and [in the Stock Synthesis document github repository for questions about the manual](https://github.com/nmfs-stock-synthesis/doc/issues), or email it to the Stock Synthesis team at nmfs.stock.synthesis@noaa.gov . +If you have a question related to Stock Synthesis, please post an issue [in the Stock Synthesis GitHub repository for model questions](https://github.com/nmfs-stock-synthesis/stock-synthesis/issues) or ask it on the [Stock Synthesis forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums).
If you have a question about the manual, please post an issue in the [Stock Synthesis documentation GitHub repository](https://github.com/nmfs-stock-synthesis/doc/issues), or email it to the Stock Synthesis team at nmfs.stock.synthesis@noaa.gov. ## How can I contribute to the Stock Synthesis documentation? diff --git a/SS.bib b/SS3.bib similarity index 99% rename from SS.bib rename to SS3.bib index c0a35dc3..333b7789 100644 --- a/SS.bib +++ b/SS3.bib @@ -114,6 +114,7 @@ @article{johnson-can-2016 @article{punt_insights_2016, title = {Some insights into data weighting in integrated stock assessments}, + volume = {192}, issn = {01657836}, url = {http://linkinghub.elsevier.com/retrieve/pii/S0165783615301582}, doi = {10.1016/j.fishres.2015.12.006}, @@ -123,6 +124,7 @@ @article{punt_insights_2016 author = {Punt, André E.}, month = jan, year = {2016}, + pages = {52--65}, file = {Punt - 2016 - Some insights into data weighting in integrated st.pdf:C\:\\Users\\Chantel.Wetzel\\Zotero\\storage\\Q4HVWB5J\\Punt - 2016 - Some insights into data weighting in integrated st.pdf:application/pdf}, } @@ -380,7 +382,7 @@ @incollection{maunder2011M @article{schnute1981growth, title={A versatile growth model with statistically stable parameters}, author={Schnute, Jon}, - journal={Canadian Joural of Fisheries and Aquatic Science}, + journal={Canadian Journal of Fisheries and Aquatic Sciences}, volume={38}, pages={1128--1140}, year={1981} diff --git a/SS330_User_Manual.tex b/SS330_User_Manual.tex index 3cda1ae3..cebc2c2d 100644 --- a/SS330_User_Manual.tex +++ b/SS330_User_Manual.tex @@ -199,7 +199,7 @@ % ======== Section 11: Likelihoods \input{11likelihoods} %========= Section 12: Running SS - \input{12runningSS} + \input{12runningSS3} % ======== Section 13: Output Files \input{13output} %========= Section 14: R4SS @@ -210,7 +210,7 @@ \input{16essays} %========= Reference Section \newpage - \bibliography{SS} + \bibliography{SS3} \bibliographystyle{JournalBiblio/cjfas} \newpage diff --git a/User_Guides/getting_started/Getting_Started_SS.Rmd b/User_Guides/getting_started/Getting_Started_SS3.Rmd similarity index 86% rename from User_Guides/getting_started/Getting_Started_SS.Rmd rename to User_Guides/getting_started/Getting_Started_SS3.Rmd index 1c024968..683181fb 100644 --- a/User_Guides/getting_started/Getting_Started_SS.Rmd +++ b/User_Guides/getting_started/Getting_Started_SS3.Rmd @@ -40,7 +40,7 @@ SS3 uses text input files and produces text output files. In this section, the S ## SS3 files: Required inputs -Four required input files are read by the SS3 executable. Throughout this document, we will refer to the SS3 executable as ss.exe. Keep in mind that the Linux and Mac versions of SS3 have no file extension (e.g., ss), and the executable can be renamed by the user as desired (e.g., ss_win.exe, ss_3.30.18.exe). These input files are: +Four required input files are read by the SS3 executable. Throughout this document, we will refer to the SS3 executable as ss3.exe. Keep in mind that the Linux and Mac versions of SS3 have no file extension (e.g., ss3), and the executable can be renamed by the user as desired (e.g., ss_win.exe, ss_3.30.18.exe). These input files are: 1. **starter.ss:** Required file containing file names of the data file and the control file plus other run controls. Must be named starter.ss. 2. **data file:** File containing model dimensions and the data. The data file can have any name, as specified in the starter file, but typically ends in .ss or .dat.
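+To make the cross-references among these files concrete, here is a minimal sketch of the top of a starter.ss file (hypothetical file names; a real starter file contains many more run controls):
+```{r eval = FALSE}
+data.ss    # name of the data file, as referenced by starter.ss
+control.ss # name of the control file
+```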
@@ -68,11 +68,13 @@ Many output text files are created during a model run. The most useful output fi + **r4ss**: An R package to plot SS3 model results and manipulate SS3 input and output files. Available at: https://github.com/r4ss/r4ss -+ **SSI**: Stock Synthesis Interface (i.e., the SS3 GUI). The [latest version of SSI](https://github.com/nmfs-stock-synthesis/ssi/releases/latest) can be downloaded from GitHub. SSI can be used to edit, save, run, and visualize model inputs and outputs. ++ **SSI**: Stock Synthesis Interface (i.e., the SS3 GUI). The [latest version of SSI](https://github.com/nmfs-stock-synthesis/ssi/releases/latest) can be downloaded from GitHub. SSI can be used to edit, save, run, and visualize model inputs and outputs. Note that SSI is not maintained for Stock Synthesis versions after v.3.30.21. + ++ **Stock Assessment Continuum Tool**: Available on GitHub at https://github.com/shcaba/SS-DL-tool, the Stock Assessment Continuum Tool (previously known as the Stock Synthesis Data-limited Tool) is a Shiny-based application that provides an interface to upload catch time series, age-composition, length-composition, and abundance index data and to define model options within the application; it then writes the Stock Synthesis input files. # Running SS3 -SS3 is typically run through the command line (although it can also be run indirctly via the commandline through an R console). We will introduce the one folder approach, where SS3 is in the same folder as the model files. Other possible approaches to running SS3 include, which are detailed in the ["Running Stock Synthesis" section of the user manual](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS). +SS3 is typically run through the command line (although it can also be run indirectly via the command line from an R console). We will introduce the one folder approach, where SS3 is in the same folder as the model files. Other possible approaches to running SS3 are detailed in the ["Running Stock Synthesis" section of the user manual](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS3). ## The one folder approach and demonstration of an SS3 model run @@ -85,7 +87,7 @@ Create a folder and add: + Control File (Must match name in starter.ss) + Data File (Must match name in starter.ss) + forecast.ss -+ ss.exe ++ ss3.exe + starter.ss + Conditional files: wtatage.ss (if doing empirical wt-at-age approach) and/or ss.par (to continue from a previous run) @@ -97,7 +99,7 @@ For example, here is what should be included for a model with no conditional fil Once all of the model files and the SS3 executable are in the same folder, you can open your command window of choice at the location of the model files. -To do this, you can typically click to highlight the folder the model files are in, then shift + right click on the same folder and select the option from the menu to open the command line of choice (e.g., Windows Powershell). This should bring up a command window. Then, type `ss` (or other name of the ss exe) into the command prompt and hit enter. Note that if you are using Windows Powershell, you will need to type `./ss`. +To do this, you can typically click to highlight the folder the model files are in, then shift + right click on the same folder and select the option from the menu to open the command line of choice (e.g., Windows PowerShell). This should bring up a command window.
Then, type `ss3` (or the name of your ss3 exe) into the command prompt and hit enter. Note that if you are using Windows PowerShell, you will need to type `./ss3`. The exact instructions for running SS3 can differ depending on the command window used. If you have trouble, search for resources that describe running an executable for your specific command line. @@ -134,11 +136,11 @@ Output from SS3 can be read into [r4ss](https://github.com/r4ss/r4ss) or the exc ## Command line options {#options} -ADMB options can be added to the run when calling the SS3 executable from the command line. The most commonly used option is `ss -nohess` to skip standard errors (for quicker results or to get Report.sso if the hessian does not invert). +ADMB options can be added to the run when calling the SS3 executable from the command line. The most commonly used option is `ss3 -nohess` to skip standard errors (for quicker results or to get Report.sso if the hessian does not invert). -To list all command line options, use one of these calls: `SS -?` or `SS -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options). +To list all command line options, use one of these calls: `SS3 -?` or `SS3 -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options). -To run SS3 without estimation use: `ss -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `–nohess` option. +To run SS3 without estimation use: `ss3 -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in v.3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `-nohess` option. ## Using ss.par for initial values @@ -174,10 +176,11 @@ Here are some basic checks for when SS3 does not run: + Check that starter.ss references the correct names of the control and data files. + If SS3 starts to read files and then crashes, check warnings.sso and echoinput.sso. The warnings.sso will reveal potential issues with the model, while echoinput.sso will show how far SS3 was able to run. Work backwards from the bottom of echoinput.sso, looking for where SS3 stopped and if the inputs are being read correctly or not. -For further information on troubleshooting, please refer to the SS3 User Manual [“Running Stock Synthesis” subsections](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS), especially [“Re-Starting a Run”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#re-starting-a-run) and [“Debugging Tips”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#debugging-tips). +For further information on troubleshooting, please refer to the SS3 User Manual [“Running Stock Synthesis” subsections](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS3), especially [“Re-Starting a Run”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#re-starting-a-run) and [“Debugging Tips”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#debugging-tips).
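+Pulling the options above together, a typical quick-check sequence might look like the following (a sketch assuming the executable is named ss3.exe and a PowerShell-style shell is used):
+```
+./ss3 -stopph 0 -nohess   # no estimation, no hessian: fast check that inputs are read as intended
+./ss3 -nohess             # full estimation, but skip standard errors
+```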
# Where to get additional help ++ Post to the Stock Synthesis [discussion boards on GitHub](https://github.com/nmfs-stock-synthesis/stock-synthesis/discussions) + The [SS3 vlab website](https://vlab.noaa.gov/web/stock-synthesis) resources, including the SS3 user manual and the SSI user guide + Post questions to the [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) + Email questions to nmfs.stock.synthesis@noaa.gov diff --git a/User_Guides/model_step_by_step/model_tutorial.Rmd b/User_Guides/model_step_by_step/model_tutorial.Rmd index 53214ec1..f36afebd 100644 --- a/User_Guides/model_step_by_step/model_tutorial.Rmd +++ b/User_Guides/model_step_by_step/model_tutorial.Rmd @@ -1,6 +1,6 @@ --- title: "Model building tutorial" -author: "SS Development Team" +author: "SS3 Development Team" date: "10/23/2019" output: word_document --- @@ -11,9 +11,9 @@ knitr::opts_chunk$set(echo = TRUE) # Scope -This is a tutorial illustrating how different data and parameters familiar to stock assessment scientists can be added to Stock Synthesis input files. We assume that these users have had previous population dynamics modeling experience and already understand how to run an existing SS model. +This is a tutorial illustrating how different data and parameters familiar to stock assessment scientists can be added to Stock Synthesis input files. We assume that users have had previous population dynamics modeling experience and already understand how to run an existing SS3 model. -If you are a new SS user who is not yet comfortable running an SS model, we suggest trying to run a working example model using advice in the **Getting Started** document before attempting to develop and run your own model as outlined here. You can also get more general model building advice in the **Developing your first Stock Synthesis model** guide. +If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run a working example model using advice in the **Getting Started** document before attempting to develop and run your own model as outlined here. You can also get more general model building advice in the **Developing your first Stock Synthesis model** guide. Throughout this example, we use an even simpler version of the Stock Synthesis example model "Simple". To get the most out of this tutorial, it is best to download the model files to look at during the tutorial. It may also be useful to run the model and plot the results using the R package [r4ss](https://github.com/r4ss/r4ss). @@ -35,7 +35,7 @@ No aging error bias is assumed and aging is considered to be very precise. ### Natural mortality, growth, maturity, and sex ratio -All parameters are assumed to be the same over time. Natural mortality is specified at 0.1. A von Bertalanfy growth curve is used with K estimated. Maturity is assumed to be length logistic with some specified parameter values. The sex ratio is assumed 50-50 female and male. +All parameters are assumed to be the same over time. Natural mortality is specified at 0.1. A von Bertalanffy growth curve is used with K estimated. Maturity is assumed to be length logistic with some specified parameter values. The sex ratio is assumed to be 50-50 female and male. ### Spawner-Recruitment @@ -80,7 +80,7 @@ In the case of this example, data.ss is the name of the data file, while control This is where the data inputs are specified.
At the top, general information about the model is specified: the model years, number of seasons, number of sexes, maximum age, number of areas, number of fleets: ```{R eval = FALSE} -#Stock Synthesis (SS) is a work of the U.S. Government and is not subject to copyright protection in the United States. +#Stock Synthesis (SS3) is a work of the U.S. Government and is not subject to copyright protection in the United States. #Foreign copyrights may apply. See copyright.txt for more information. 1971 #_StartYr 2001 #_EndYr @@ -88,7 +88,7 @@ This is where the data inputs are specified. At the top, general information abo 12 #_months/season 2 #_Nsubseasons (even number, minimum is 2) 1 #_spawn_month -2 #_Ngenders: 1, 2, -1 (use -1 for 1 sex setup with SSB multiplied by female_frac parameter) +2 #_Nsexes: 1, 2, -1 (use -1 for 1 sex setup with SSB multiplied by female_frac parameter) 40 #_Nages=accumulator age, first age is always age 0 1 #_Nareas 3 #_Nfleets (including surveys) @@ -129,7 +129,7 @@ Next, the catch is specified: -9999 0 0 0 0 ``` -The first line of the above code chunk shows the column headers for the catch data. Note that all catch comes from the fishery. The line `-999 1 1 0 0.01` specifies equilibirum catch for years before the model starts - in this case, there is no equilibrium catch because the catch column is 0. To terminate this catch data section the line `-9999 0 0 0 0` is needed. This tells SS that it can stop reading catch data. +The first line of the above code chunk shows the column headers for the catch data. Note that all catch comes from the fishery. The line `-999 1 1 0 0.01` specifies equilibrium catch for years before the model starts - in this case, there is no equilibrium catch because the catch column is 0. To terminate this catch data section the line `-9999 0 0 0 0` is needed. This tells SS3 that it can stop reading catch data. Next comes specification for indices of abundance. First is the setup for all of the fleets: @@ -144,7 +144,7 @@ Next comes specification for indices of abundance. First is the setup for all of 3 0 0 0 # SURVEY2 ``` -The column headers for this section are directly above the numbers. Note that all fleets are defined here (i.e., each fleet needs a line), including the fishery and are listed in the same order as when the fleet types were specified. Most importantly in this section, the units and error type that will be used when reading the the indices of abundance are specified. In this case, the fishery and survey 1 have units of biomass, wheras survey 2 is in numbers. Lognormal error is assumed for all 3 of the fleets. +The column headers for this section are directly above the numbers. Note that all fleets are defined here (i.e., each fleet needs a line), including the fishery, and are listed in the same order as when the fleet types were specified. Most importantly in this section, the units and error type that will be used when reading the indices of abundance are specified. In this case, the fishery and survey 1 have units of biomass, whereas survey 2 is in numbers. Lognormal error is assumed for all 3 of the fleets. Directly after its header, the indices of abundance data are included: @@ -161,7 +161,7 @@ Directly after its header, the indices of abundance data is included: -9999 1 1 1 1 # terminator for survey observations ``` -Like the catch data, a terminator line is needed to tell SS when to stop reading the indices. +Like the catch data, a terminator line is needed to tell SS3 when to stop reading the indices.
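+If you would rather inspect these inputs in R than in a text editor, a minimal sketch using r4ss (assuming the data file is named data.ss as above and the list element names used by recent r4ss versions):
+```{r eval = FALSE}
+dat <- r4ss::SS_readdat("data.ss") # read the data file into a named list
+head(dat$catch) # catch data, without the -9999 terminator line
+head(dat$CPUE)  # indices of abundance
+```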
Next, discards and mean body size data could be specified, but they are 0 in this example: ```{r eval = FALSE} @@ -221,7 +221,7 @@ Age composition data follows. First, the age bins and ageerror definitions are e 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5 10.5 11.5 12.5 13.5 14.5 15.5 16.5 17.5 18.5 19.5 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5 29.5 30.5 31.5 32.5 33.5 34.5 35.5 36.5 37.5 38.5 39.5 40.5 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 ``` -For the age bins, SS reads in the number (17 in this case) and then expects that number of inputs for the age bins (the 17 values below it). Next, SS reads the age error definitions. In this case, there is only 1 definition, so SS expects 2 vectors, each which contain the max number of ages + 1 values (41 values per vector in this case). The first line defines the *bias* for the aging error, while the second vector defines the *standard deviation* of the aging error. This example has no aging bias and very high aging precision (low standard deviation), so this is close to assuming no aging error. +For the age bins, SS3 reads in the number (17 in this case) and then expects that number of inputs for the age bins (the 17 values below it). Next, SS3 reads the age error definitions. In this case, there is only 1 definition, so SS3 expects 2 vectors, each of which contains the max number of ages + 1 values (41 values per vector in this case). The first line defines the *bias* for the aging error, while the second vector defines the *standard deviation* of the aging error. This example has no aging bias and very high aging precision (low standard deviation), so this is close to assuming no aging error. Next comes the age composition setup lines: ```{r eval = FALSE} @@ -246,9 +246,9 @@ which includes the length bin method for ages. Finally, the age composition data: -9999 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` -One important note here is the using Lbin_lo and Lbin_hi = -1 selects the entire length bin as being used for the ages. Similar to the length composition data, SS expect 1 value for females in each data bin, followed by values for males in each data bin (in this case, there are 34 values in the data vector) -SS has some additional options that we have not used here and thus set to 0: +One important note here is that using Lbin_lo and Lbin_hi = -1 selects the entire length bin as being used for the ages. Similar to the length composition data, SS3 expects 1 value for females in each data bin, followed by values for males in each data bin (in this case, there are 34 values in the data vector). +SS3 has some additional options that we have not used here and thus are set to 0: ```{r eval = FALSE} 0 #_Use_MeanSize-at-Age_obs (0/1) # @@ -266,7 +266,7 @@ SS has some additional options that we have not used here and thus set to 0: 0 # Do dataread for selectivity priors(0/1) ``` -And finally, the data file must end in `999` to tell SS to stop reading. +And finally, the data file must end in `999` to tell SS3 to stop reading. ```{r eval = FALSE} 999 ``` @@ -277,11 +277,11 @@ The control file contains the setup for model parameter values (both fixed value 0 # 0 means do not read wtatage.ss; 1 means read and use wtatage.ss and also read and use growth parameters ``` -In this case, it is not being used, so is set to 0.
If empirical weight at age were used, SS would ignore all inputs relating to growth, maturity, and fecundity that are specified later in the control file (although it does still expect inputs). +In this case, it is not being used, so it is set to 0. If empirical weight at age were used, SS3 would ignore all inputs relating to growth, maturity, and fecundity that are specified later in the control file (although it does still expect inputs). Next are options for number of growth patterns and platoons. These are set to 1 because we assume the whole population is the same growth pattern, and there are not platoons within the growth patterns. ```{r eval = FALSE} -1 #_N_Growth_Patterns (Growth Patterns, Morphs, Bio Patterns, GP are terms used interchangeably in SS) +1 #_N_Growth_Patterns (Growth Patterns, Morphs, Bio Patterns, GP are terms used interchangeably in SS3) 1 #_N_platoons_Within_GrowthPattern ``` @@ -327,7 +327,7 @@ Option 0 is used for natural mortality because only 1 value is being assumed. Gr 1 # GrowthModel: 1=vonBert with L1&L2; 2=Richards with L1&L2; 3=age_specific_K_incr; 4=age_specific_K_decr; 5=age_specific_K_each; 6=NA; 7=NA; 8=growth cessation 0 #_Age(post-settlement)_for_L1;linear growth below this 25 #_Growth_Age_for_L2 (999 to use as Linf) --999 #_exponential decay for growth above maxage (value should approx initial Z; -999 replicates 3.24; -998 to not allow growth above maxage) +-999 #_exponential decay for growth above maxage (value should approx initial Z; -999 replicates v.3.24; -998 to not allow growth above maxage) 0 #_placeholder for future growth feature # 0 #_SD_add_to_LAA (set to 0.1 for SS2 V1.x compatibility) @@ -342,7 +342,7 @@ Then, the setup lines for maturity, fecundity, and other specialized options: 0 #_hermaphroditism option: 0=none; 1=female-to-male age-specific fxn; -1=male-to-female age-specific fxn 1 #_parameter_offset_approach (1=none, 2= M, G, CV_G as offset from female-GP1, 3=like SS2 V1.x) ``` -The parameter lines resulting from the natural mortality, growth, and maturity (this section is sometimes called MG parms) are specified next. The number of paramter lines depends on the options selected in the setup lines. The parameters must also be specified in a particular order, with female parameters coming before male parameters in a 2-sex model: +The parameter lines resulting from the natural mortality, growth, and maturity setup (this section is sometimes called MG parms) are specified next. The number of parameter lines depends on the options selected in the setup lines. The parameters must also be specified in a particular order, with female parameters coming before male parameters in a 2-sex model: ```{r eval = FALSE} #_ LO HI INIT PRIOR PR_SD PR_type PHASE env_var&link dev_link dev_minyr dev_maxyr dev_PH Block Block_Fxn # Sex: 1 BioPattern: 1 NatMort @@ -383,7 +383,7 @@ The parameter lines resulting from the natural mortality, growth, and maturity ( 1e-006 0.999999 0.5 0.5 0.5 0 -99 0 0 0 0 0 0 0 # FracFemale_GP_1 ``` -Note that the first line in the block of SS input above shows the column headers. All sections with long parameter lines within the control file have these same headings. +Note that the first line in the block of SS3 input above shows the column headers. All sections with long parameter lines within the control file have these same headings.
There are a lot of specifications in these long parameter lines, but a few of particular note are: - Anything with negative phase (7th value in a long parameter line) is not estimated and is set at the initial value (3rd value in the line), while positive phases are estimated. - Natural mortality for both males and females is specified at 0.1. @@ -406,7 +406,7 @@ Next, the Spawner recruitment setup and Spawner recruit long parameter lines are 0 # 0/1 to use steepness in initial equ recruitment calculation 0 # future feature: 0/1 to make realized sigmaR a function of SR curvature ``` -which effects the number of SR parameter lines that follow: +which affects the number of SR parameter lines that follow: ```{r eval = FALSE} #_ LO HI INIT PRIOR PR_SD PR_type PHASE env-var use_dev dev_mnyr dev_mxyr dev_PH Block Blk_Fxn # parm_name 3 31 8.81505 10.3 10 0 1 0 0 0 0 0 0 0 # SR_LN(R0) @@ -433,7 +433,7 @@ These define the main recruitment deviations, which in this case last from the 1900 #_last_yr_nobias_adj_in_MPD; begin of ramp 1900 #_first_yr_fullbias_adj_in_MPD; begin of plateau 2001 #_last_yr_fullbias_adj_in_MPD - 2002 #_end_yr_for_ramp_in_MPD (can be in forecast to shape ramp, but SS sets bias_adj to 0.0 for fcast yrs) + 2002 #_end_yr_for_ramp_in_MPD (can be in forecast to shape ramp, but SS3 sets bias_adj to 0.0 for fcast yrs) 1 #_max_bias_adj_in_MPD (-1 to override ramp and set biasadj=1.0 for all estimated recdevs) 0 #_period of cycles in recruitment (N parms read below) -5 #min rec_dev 5 #max rec_dev 0 #_read_recdevs #_end of advanced SR options ``` -The advanced options allow the user to bias adjust the recruitment deviations. There is more on bias adjustment in the SS user manual, but the general idea is to account for the fact that earlier and later recruitment deviations likely have less information informing them than the ones in the middle. The bias adjustment ramp accounts for this and is typically "tuned" by looking at bias ramp in the model results after it is run, respecifying the bias ramp as needed, and rerunning the model. +The advanced options allow the user to bias adjust the recruitment deviations. There is more on bias adjustment in the SS3 user manual, but the general idea is to account for the fact that earlier and later recruitment deviations likely have less information informing them than the ones in the middle. The bias adjustment ramp accounts for this and is typically "tuned" by looking at the bias ramp in the model results after it is run, respecifying the bias ramp as needed, and rerunning the model. Fishing mortality info is next specified: ```{r eval = FALSE} @@ -521,7 +521,7 @@ A selectivity pattern must be specified for both size and age selectivity for ea #_no timevary selex parameters # ``` -These paramter lines are specified in order, with the size (or length) selectivity lines specified before the age selectivity lines and the fleets in the same order as in the setup lines. Also, the selectivity pattern used determines the number of parameters needed to specify each fleets' size or age selectivity. +These parameter lines are specified in order, with the size (or length) selectivity lines specified before the age selectivity lines and the fleets in the same order as in the setup lines. Also, the selectivity pattern used determines the number of parameters needed to specify each fleet's size or age selectivity.
Some special features (2DAR selectivity, tagging data, variance adjustment, lambdas, and additional standard deviation reporting) in the control file are turned off for this model: ```{r eval = FALSE} @@ -544,14 +544,14 @@ Some special features (2DAR selectivity, tagging data, variance adjusment, lambd # 0 # (0/1) read specs for more stddev reporting ``` -Varaiance adjustment factors and/or lambdas can be used for data weighting, but in this case they have not yet been used. The control file then ends with 999 so that SS knows it can stop reading: +Variance adjustment factors and/or lambdas can be used for data weighting, but in this case they have not yet been used. The control file then ends with 999 so that SS3 knows it can stop reading: ```{r eval = FALSE} 999 ``` # Running the model and afterwards -The model was run using Stock Synthesis 3.30.14 and no additional ADMB command line options. The model should have no issues running, but if you have issues, please see debugging sections in the **Getting Started** and **Developing your first Stock Synthesis model** guides. +The model was run using Stock Synthesis v.3.30.14 and no additional ADMB command line options. The model should have no issues running, but if you have issues, please see the debugging sections in the **Getting Started** and **Developing your first Stock Synthesis model** guides. ## Checks for convergence After running the model, open the warning.sso file to check for any warnings from Stock Synthesis. This file shows no warnings: @@ -559,7 +559,7 @@ After running the model, open the warning.sso file to check for any warnings fro N warnings: 0 Number_of_active_parameters_on_or_near_bounds: 0 ``` -which suggests that the model is not misspecified in a way that SS knows to warn about. +which suggests that the model is not misspecified in a way that SS3 knows to warn about. Next, we want to quickly check for any evidence that the model did not converge. In Report.sso, underneath information about the data file and control file names is information about the convergence level: ```{r eval = FALSE} @@ -603,9 +603,9 @@ SS_writestarter(starter, dir = mydir, overwrite = TRUE) # write modified starter ``` Next, the jitter can be run: ```{r eval = FALSE} -SS_RunJitter(mydir = "simpler", model = "ss", Njitter = 100) +SS_RunJitter(mydir = "simpler", model = "ss3", Njitter = 100) ``` -The previous code assumes that the model directory `mydir` is called "simpler", which is a folder within the working directory. The `model` argument specifies the name of the ss executable, so in this case, it assumes that the SS executable is within the "simpler" folder and called "ss.exe". Finally, `Njitter` tells the function how many times to run the function. For west coast stock assessment jitters, 100 runs is a common value to use, but note that the run time is not trivial (it depends on the model, but may take an hour or more to run). +The previous code assumes that the model directory `mydir` is called "simpler", which is a folder within the working directory. The `model` argument specifies the name of the SS3 executable, so in this case, it assumes that the SS3 executable is within the "simpler" folder and called "ss3.exe". Finally, `Njitter` tells the function how many jitter runs to perform. For west coast stock assessment jitters, 100 runs is a common value to use, but note that the run time is not trivial (it depends on the model, but may take an hour or more to run).
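+As the next paragraph explains, the point of the jitter is the comparison of final likelihoods. A minimal sketch of that comparison (assuming, as an illustration, that `SS_RunJitter()` returns the vector of final negative log-likelihoods; `base_like` is a hypothetical value the user reads from the original run's Report.sso):
+```{r eval = FALSE}
+jitter_likes <- SS_RunJitter(mydir = "simpler", model = "ss3", Njitter = 100)
+base_like <- 1234.56 # hypothetical: total likelihood from the original run
+any(jitter_likes < base_like) # TRUE suggests the original run missed the global minimum
+```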
After the jitter is run, the final likelihood values are the most important part of the results to look at. If the original model run has found a global minimum, you would expect all likelihood values from the jitter to be the same or higher than the original model run. If there are any likelihood values that are lower than the original model run, this indicates that the model run did not find a global minimum. Investigating the run or runs with lower likelihood values would be the next step in figuring out what the "final" model run will be. diff --git a/User_Guides/ss_model_tips/ss_model_tips.Rmd b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd similarity index 90% rename from User_Guides/ss_model_tips/ss_model_tips.Rmd rename to User_Guides/ss3_model_tips/ss3_model_tips.Rmd index 5a120c9a..2d2b98b6 100644 --- a/User_Guides/ss_model_tips/ss_model_tips.Rmd +++ b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd @@ -19,7 +19,7 @@ knitr::opts_chunk$set(echo = FALSE) The Developing your first SS3 model guide teaches users how to develop a basic Stock Synthesis model. We assume that these users have had previous population dynamics modeling experience and already understand how to run an existing SS3 model. -If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run an example working model using advice in the [Getting Started guide](https://nmfs-stock-synthesis.github.io/doc/Getting_Started_SS.html) before attempting to develop and run your own model as outlined here. +If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run a working example model using advice in the [Getting Started guide](https://nmfs-stock-synthesis.github.io/doc/Getting_Started_SS3.html) before attempting to develop and run your own model as outlined here. By the end of using this guide, you should be able to: @@ -37,9 +37,10 @@ There are many potential workflows for developing a new SS3 model, but a common Some commonly used tools for editing the SS3 input files are: -1. [**Stock Synthesis Interface (SSI; the SS3 GUI)**](https://github.com/nmfs-stock-synthesis/ssi). The SSI allows you to read in a model, performs some checks to ensure valid inputs, make modifications to the model, and offers visualizations of inputs. You can also run models from SSI. +1. [**Stock Synthesis Interface (SSI; the SS3 GUI)**](https://github.com/nmfs-stock-synthesis/ssi). The SSI allows you to read in a model, perform some checks to ensure valid inputs, make modifications to the model, and visualize inputs. You can also run models from SSI. Note that SSI is not maintained for Stock Synthesis versions after v.3.30.21. 2. **Your favorite text editor**. 3. **The ```SS_read*``` and ```SS_write*``` functions in the R package [r4ss](https://github.com/r4ss/r4ss)**. These functions allow you to read in SS3 input files to R, manipulate them from within R, then write them out to a file. The [r4ss vignette](https://r4ss.github.io/r4ss/vignettes/r4ss-intro-vignette.html#scripting-stock-synthesis-workflows-with-r4ss) demonstrates how to use these functions. +4. **Stock Assessment Continuum Tool**. Available on GitHub at https://github.com/shcaba/SS-DL-tool. # Guidance on model specification @@ -47,9 +48,9 @@ SS3 has a rich set of features. Some required inputs are conditional on other in The [SS3 user manual](https://github.com/nmfs-stock-synthesis/doc/releases) can be used as a guide to help you edit your model. Conditional inputs are noted in the manual.
The SSI can also help guide you through changes in model inputs required as you select different SS3 model options. -If you are unsure if you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for SS3 3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```, no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup. +If you are unsure whether you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for v.3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```; no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup. -For additional help with model specification, please post your questions on the vlab [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) (for registered SS3 users) or send an email to the SS3 team at NMFS.Stock.Synthesis@noaa.gov. +For additional help with model specification, please post your questions in the GitHub [discussions](https://github.com/nmfs-stock-synthesis/stock-synthesis/discussions) or on the vlab [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) (for registered SS3 users), or send an email to the SS3 team at NMFS.Stock.Synthesis@noaa.gov. # Phases in SS3 and ADMB @@ -154,13 +155,13 @@ The `r4ss::SS_output()` function prints information on tuning to the R console. ## Did I fix variation in recruitment correctly? -Check the the sigmaR (i.e., recruitment devs standard deviation) information. The sigmaR parameter is typically fixed within the model, so if it is not fixed within your model, you should consider whether or not this makes sense for the population and given the quality and quantity of data. +Check the sigmaR (i.e., recruitment devs standard deviation) information. The sigmaR parameter is typically fixed within the model, so if it is not fixed within your model, you should consider whether this makes sense for the population given the quality and quantity of the data. For a fixed value of sigmaR, a section of the output will provide diagnostics to determine if the fixed sigmaR value is capturing the estimated variations in recruitment. The `r4ss::SS_output()` function will print a recommended value based on the variation in the estimated recruitment deviations that you may want to consider using. # Where to get additional help -+ Post questions to the [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) or send an emails to the team at NMFS.Stock.Synthesis@noaa.gov for assistance.
++ Post questions in the GitHub [discussions](https://github.com/nmfs-stock-synthesis/stock-synthesis/discussions) or on the vlab [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums), or send an email to the team at NMFS.Stock.Synthesis@noaa.gov for assistance. + [Carvalho et al. 2021](https://doi.org/10.1016/j.fishres.2021.105959) contains guidance on developing stock assessments. diff --git a/_data_weighting.tex b/_data_weighting.tex index bcb15c82..7bf93b15 100644 --- a/_data_weighting.tex +++ b/_data_weighting.tex @@ -23,7 +23,7 @@ \subsection{Data Weighting} A convenient way to process these values into the format required by the control file is to use the function: -\texttt{ SS\_tune\_comps(replist, option = ``MI'') } +\texttt{SS\_tune\_comps(replist, option = ``MI'')} where the input ``replist'' is the object created by \texttt{SS\_output}. This function will return a table and also write a matching file called ``suggested\_tuning.ss'' to the directory where the model was run. @@ -39,7 +39,7 @@ \subsection{Data Weighting} \includegraphics[scale = 0.65]{appendixB_McAllister_Ianelli}\\ \end{center} - \caption{ The relationship between the observed sample size (the input sample number) versus the effective sample size where the effective sample size is the product of the input sample size and the data weighting applied to the data set. } + \caption{The relationship between the observed sample size (the input sample number) and the effective sample size, where the effective sample size is the product of the input sample size and the data weighting applied to the data set.} \label{(fig:mcallister)} \end{figure} @@ -90,7 +90,7 @@ \subsection{Data Weighting} \item The input-sample-size provided by the user is an upper bound on weighting for those data, such that a dummy value of 1 will cause those -data to never be assigned a weight \>1. +data to never be assigned a weight greater than 1. \item Changes in input sample size are not quite proportionally offset by changes in the estimated weighting parameter, such that @@ -192,7 +192,7 @@ \subsection{Data Weighting} \end{minipage} \end{small} -\item Reset any existing variance adjustments factors that might have been used for the McAllister-Ianelli or Francis tuning methods. In 3.24 this means setting the values to 1, in SS3 v.3.30, you can simply delete or comment-out the rows with the adjustments. +\item Reset any existing variance adjustment factors that might have been used for the McAllister-Ianelli or Francis tuning methods. In v.3.24 this means setting the values to 1; in v.3.30, you can simply delete or comment out the rows with the adjustments. \end{itemize} The \texttt{SS\_output} function in r4ss returns a table like the following: @@ -210,7 +210,7 @@ \subsection{Data Weighting} If the reported $\theta/(1+\theta)$ ratio is close to 1.0, that indicates that the model is trying to tune the sample size as high as possible. In this case, the $ln(\theta)$ parameters should be fixed at a high value, like the upper bound of 20, which will result in 100\% weight being applied to the input sample sizes. An alternative would be to manually change the input sample sizes to a higher value so that the estimated weighting will be less than 100\%. -Note that a constant of integration was added to the Dirichlet-multinomial likelihood equation in SS3 v.3.30.17. This will change the likelihood value, but parameter estimates and expected values should remain the same as in previous versions of SS3.
+Note that a constant of integration was added to the Dirichlet-multinomial likelihood equation in v.3.30.17. This will change the likelihood value, but parameter estimates and expected values should remain the same as in previous versions of SS3. Some challenges posed by the Dirichlet-multinomial data-weighting approach: \begin{enumerate} diff --git a/_f_mortality.tex b/_f_mortality.tex index 63605dc4..a2b6c6b2 100644 --- a/_f_mortality.tex +++ b/_f_mortality.tex @@ -30,13 +30,13 @@ \subsection{Fishing Mortality in Stock Synthesis} $F\text{std}_y$ is a standardized measure of the total fishing intensity for a year and is reported in the derived quantities, so variance is calculated for this quantity. See below for how it relates to $annF$. -Terminology and reporting of $\text{ann}F$ and $F\text{std}$ has been slightly revised for clarity in 3.30.15.00 and the description here follows the new conventions. +Terminology and reporting of $\text{ann}F$ and $F\text{std}$ have been slightly revised for clarity in v.3.30.15.00, and the description here follows the new conventions. \myparagraph{$F$ Calculation} SS3 allows for three approaches to estimate the $F'$ that will match the input values for retained catch. Note that SS3 is calculating the $F'$ to match the retained catch conditional on the fraction of total catch that is retained, e.g., the total catch is partitioned into retained and discarded portions. \begin{enumerate} - \item Pope's method decays the numbers-at-age to the middle of the season, calculates a harvest rate for each fleet, $H_{t,f}$, that is the ratio of $C_{t,f}$ to $B_{t,f}$, then decays the survivors to the end of the season. the total mortality, $Z_{t,a}$, from the ratio of survivors to initial numbers, is then calculated. The $Z$ is subsequently used for in-season interpolation to get expected values for observations. + \item Pope's method decays the numbers-at-age to the middle of the season, calculates a harvest rate for each fleet, $H_{t,f}$, that is the ratio of $C_{t,f}$ to $B_{t,f}$, then decays the survivors to the end of the season. The total mortality, $Z_{t,a}$, from the ratio of survivors to initial numbers, is then calculated. The $Z$ is subsequently used for in-season interpolation to get expected values for observations. \item $F$ as parameters uses the standard Baranov catch equation and lets ADMB find the $F'$ parameter values that produce the lowest negative log-likelihood, which includes fit to the input catch data. The $F$ as parameters method tends to work better than Pope's or the hybrid method in high $F$ situations because it allows for some lack of fit to catch levels in early iterations and can later improve this fit as it closes in on the best solution. @@ -78,7 +78,7 @@ \subsection{Fishing Mortality in Stock Synthesis} For options 4 and 5 of F\_report\_units, the $F$ is calculated as $Z-M$, where $Z$ is calculated as $ln(N_{t,a}/N_{t+1,a+1})$; thus $Z$ subsumes the effect of $F$. -The ann$F$ is calculated for each year of the estimated time series and of the forecast. Additionally, an ann$F$ is calculated in the benchmark calculations to provide equilibrium values that have the same units as ann$F$ from the time series. In versions previous to 3.30.15, it was labeled inaccurately as $F$std in the output, not ann$F$. For example, in the Management Quantities section of derived quantities prior to 3.30.15, there is a quantity labeled Fstd\_Btgt. This is more accurately labeled as the annual $F$ associated with the biomass target, ann\_F\_Btgt, in 3.30.15.
+The ann$F$ is calculated for each year of the estimated time series and of the forecast. Additionally, an ann$F$ is calculated in the benchmark calculations to provide equilibrium values that have the same units as ann$F$ from the time series. In versions previous to v.3.30.15, it was labeled inaccurately as $F$std in the output, not ann$F$. For example, in the Management Quantities section of derived quantities prior to v.3.30.15, there is a quantity labeled Fstd\_Btgt. This is more accurately labeled as the annual $F$ associated with the biomass target, ann\_F\_Btgt, in v.3.30.15. \myparagraph{$F$std} $F$std is a single annual value based on ann$F$, and the relationship to ann$F$ is specified by F\_report\_basis in the starter.ss file. The benchmark ann$F$ may be used to rescale the time series of ann$F$s to become a time series of standardized values representing the intensity of fishing, $F$std. The report basis is selected in the starter file as: diff --git a/_forecast_module.tex b/_forecast_module.tex index 391d8975..91a13a00 100644 --- a/_forecast_module.tex +++ b/_forecast_module.tex @@ -2,7 +2,7 @@ \subsection{Forecast Module: Benchmark and Forecasting Calculations} \label{sec:forecast} -SS3 v.3.20 introduced substantial upgrades to the benchmark and forecast module. The general intent was to make the forecast outputs more consistent with the requirement to set catch limits that have a known probability of exceeding the overfishing limit. In addition, this upgrade addressed several inadequacies with the previous module, including: +Stock Synthesis v.3.20 introduced substantial upgrades to the benchmark and forecast module. The general intent was to make the forecast outputs more consistent with the requirement to set catch limits that have a known probability of exceeding the overfishing limit. In addition, this upgrade addressed several inadequacies with the previous module, including: \begin{itemize} \item The average selectivity and relative F were the same for the benchmark and the forecast calculations; @@ -14,7 +14,7 @@ \subsection{Forecast Module: Benchmark and Forecasting Calculations} \item The forecast allowed for a blend of fixed input catches and catches calculated from target F; this is not optimal for calculation of the variance of F conditioned on a catch policy that sets annual catch limits (ACLs). \end{itemize} -The V3.20 module addressed these issues by: +The v.3.20 module addressed these issues by: \begin{itemize} \item Providing for unique specification of a range of years from which to calculate average selectivity for benchmark, average selectivity for forecast, relative F for benchmark, and relative F for forecast; \item Creating a new specification for the range of years over which to average size-at-age and fecundity-at-age for the benchmark calculation. In a setup with time-varying growth, it may make sense to do this over the entire range of years in the time series. Note that some additional quantities still use their endyr values, notably the migration rates and the allocation of recruitments among areas.
This will be addressed shortly; diff --git a/docs/index.md b/docs/index.md index 1b81fd7b..64a9372f 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,8 +1,8 @@ # Stock Synthesis Documentation ## Links to Documentation -* [Getting Started Tutorial](Getting_Started_SS.html) -* [Building Your First SS3 Model Tutorial](ss_model_tips.html) +* [Getting Started Tutorial](Getting_Started_SS3.html) +* [Building Your First SS3 Model Tutorial](ss3_model_tips.html) * [Current User Manual (html)](SS330_User_Manual_release.html) * [Current User Manual (pdf)](https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/download/v3.30.21/SS330_User_Manual.pdf) diff --git a/technical_description/12init_numbers_recruitment.Rmd b/technical_description/12init_numbers_recruitment.Rmd index 0ceadbc8..f1e68292 100644 --- a/technical_description/12init_numbers_recruitment.Rmd +++ b/technical_description/12init_numbers_recruitment.Rmd @@ -2,7 +2,7 @@ The population in the initial year of an SS application can be simply an unfished equilibrium population, a population in equilibrium with an estimated mortality rate that is influenced by data on historical equilibrium catch, or an equilibrium population that has estimable age-specific deviations from this equilibrium for a user-specified number of the younger ages. -The numbers of animals of gender $\gamma$ in age group a in a virgin state ($y=0$) is: +The number of animals of sex $\gamma$ in age group $a$ in a virgin state ($y=0$) is: \begin{equation} \label{eqn1} diff --git a/technical_description/13biology.Rmd b/technical_description/13biology.Rmd index f55d5216..a05065e3 100644 --- a/technical_description/13biology.Rmd +++ b/technical_description/13biology.Rmd @@ -59,7 +59,7 @@ Equation \ref{eqn8} would logically use natural mortality as the decay factor. H ## Growth -The mean size-at-age for the von Bertanlaffy growth curve by sex at the start of each season for each growth morph is incremented across years as: +The mean size-at-age for the von Bertalanffy growth curve by sex at the start of each season for each growth morph is incremented across years as: \begin{equation} \label{eqn9} diff --git a/technical_description/_main.knit.md b/technical_description/_main.knit.md index fa43e3fe..ed208d9c 100644 --- a/technical_description/_main.knit.md +++ b/technical_description/_main.knit.md @@ -216,7 +216,7 @@ Equation \ref{eqn8} would logically use natural mortality as the decay factor. H ## Growth -The mean size-at-age for the von Bertanlaffy growth curve by sex at the start of each season for each growth morph is incremented across years as: +The mean size-at-age for the von Bertalanffy growth curve by sex at the start of each season for each growth morph is incremented across years as: \begin{equation} \label{eqn9} diff --git a/technical_description/_main.md b/technical_description/_main.md index fa43e3fe..ed208d9c 100644 --- a/technical_description/_main.md +++ b/technical_description/_main.md @@ -216,7 +216,7 @@ Equation \ref{eqn8} would logically use natural mortality as the decay factor.
H ## Growth -The mean size-at-age for the von Bertanlaffy growth curve by sex at the start of each season for each growth morph is incremented across years as: +The mean size-at-age for the von Bertalanffy growth curve by sex at the start of each season for each growth morph is incremented across years as: \begin{equation} \label{eqn9} diff --git a/technical_description/_main.tex b/technical_description/_main.tex index 34d03d7e..ec8d4b66 100644 --- a/technical_description/_main.tex +++ b/technical_description/_main.tex @@ -493,7 +493,7 @@ \subsection{Growth}\label{growth}} \tagstructbegin{tag=P}\tagmcbegin{tag=P} -The mean size-at-age for the von Bertanlaffy growth curve by sex at the start of each season for each growth morph is incremented across years as: +The mean size-at-age for the von Bertalanffy growth curve by sex at the start of each season for each growth morph is incremented across years as: \leavevmode\tagmcend\tagstructend\par diff --git a/tv_parameter_description.tex b/tv_parameter_description.tex index 4e865055..df295b8b 100644 --- a/tv_parameter_description.tex +++ b/tv_parameter_description.tex @@ -49,7 +49,7 @@ \subsubsection{Specification of Time-Varying Parameters: Long Parameter Lines} \item $X_y = \rho*X_{y-1} + \text{dev}_y*\text{dev}_{se}$ \item $P_y = P_{base,y} + X_y$ \end{itemize} - \item 5 = mean reverting random walk with $\rho$ and a logit transformation to stay within the minimum and maximum parameter bounds (approach added in SS3 v.3.30.16) + \item 5 = mean reverting random walk with $\rho$ and a logit transformation to stay within the minimum and maximum parameter bounds (approach added in v.3.30.16) \begin{itemize} \item $X_1 = \text{dev}_1*\text{dev}_{se}$ \item $R = P_{max} - P_{min}$ @@ -60,7 +60,7 @@ \subsubsection{Specification of Time-Varying Parameters: Long Parameter Lines} \item $P_y = P_{min} + \frac{R}{1 + e^{-Y_y - X_y}}$ for years after the first year. \end{itemize} \item 6 = mean reverting random walk with penalty to keep the root mean squared error (RMSE) near 1.0. Same as case 4, but with penalty applied. - \item The option of extending the final model year deviation value subsequent years (i.e., into the forecast period) was added in v. 3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g, 25), which will use the final year deviation value for all forecast years. + \item The option of extending the final model year deviation value to subsequent years (i.e., into the forecast period) was added in v.3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g., 25), which will use the final year deviation value for all forecast years.
\end{itemize} where: \begin{itemize} @@ -127,9 +127,9 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines} For example, if two parameters were specified to have environmental linkages in the MG parameter section, below the MG parameters would be two parameter lines (when not auto-generating these lines), i.e., an environmental linkage parameter for each time-varying base parameter: -\begin{longtable}{ p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm} } +\begin{longtable}{p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endfirsthead @@ -145,19 +145,19 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines} \endlastfoot \multicolumn{7}{l}{COND: Only if MG parameters are time-varying} \Tstrut\\ - -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_1\_Fem\_ENV\_add\Tstrut\\ - -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_2\_Fem\_ENV\_add\Bstrut\\ + -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_1\_Fem\_ENV\_add \Tstrut\\ + -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_2\_Fem\_ENV\_add \Bstrut\\ \hline \end{longtable} -In SS3 v.3.30, the time-varying input short parameter lines are organized such that all parameters that affect a base parameter are clustered together with time blocks (or trend) first, then environmental linkages, then parameter deviations. For example, if the mortality-growth (MG) base parameters 3 and 7 had time varying changes, the order would look like: +In Stock Synthesis v.3.30, the time-varying input short parameter lines are organized such that all parameters that affect a base parameter are clustered together with time blocks (or trend) first, then environmental linkages, then parameter deviations. For example, if the mortality-growth (MG) base parameters 3 and 7 had time-varying changes, the order would look like: \begin{center} \begin{longtable}{p{5cm} p{10cm}} \hline - MG base parameter 3 & Block parameter 3-1\Tstrut\\ - & Block parameter 3-2\\ - & Environmental link parameter 3-1\\ + MG base parameter 3 & Block parameter 3-1 \Tstrut\\ + & Block parameter 3-2 \\ + & Environmental link parameter 3-1 \\ & Deviation se parameter 3 \\ & Deviation $\rho$ parameter 3 \Bstrut\\ MG base parameter 7 & Block parameter 7-1 \\ @@ -195,7 +195,7 @@ \subsubsection{Example Time-varying Parameter Setups} \myparagraph{Time Blocks} \begin{itemize} - \item Offset approach: One or more time blocks are created and cover all or a subset of the years. Each block gets a parameter that is used as an offset from the base parameter (time block functional form 1).
In this situation, typically the base parameter and each of the offset parameters are estimated. In years not covered by blocks, the base parameter alone is used. However, if blocks cover all the years, then the value of the block parameter is completely correlated with the mean of the block offsets, so model convergence and variance estimation could be affected. The recommended approach when using offsets is to not have all years covered by blocks or to fix the base parameter value at a reasonable level when doing offsets for all years. \item Replacement approach, Option A: Time blocks are created which cover a subset of the years. The base parameter is used in the non-block years, and the value of the base parameter is replaced by the block parameter in each respective block (time block functional form 2). In this situation, typically the base parameter and each of the block parameters are estimated. @@ -207,27 +207,27 @@ \subsubsection{Example Time-varying Parameter Setups} \begin{itemize} \item Suppose natural mortality was thought to increase from 0.1 to 0.2 between 2000 and 2010. This could be input as a trend. First, the natural mortality parameter would be fixed at an initial value of 0.1. Then, a value of -2 could be input into the ``use block'' column of the natural mortality long parameter line to indicate that the direct input option for trends should be used. The long parameter line for M could look like: \begin{center} - \begin{longtable}{p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{3cm}} + \begin{longtable}{p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{3cm}} \hline - LO \Tstrut & HI & INIT & & PHASE & & Use\_Block & Block Fxn & Parameter Label\Bstrut\\ + LO \Tstrut & HI & INIT & & PHASE & & Use\_Block & Block Fxn & Parameter Label \Bstrut\\ \hline - 0 & 4 & 0.1 & \multicolumn{1}{c}{...} & -1 & \multicolumn{1}{c}{...} & -2 & 0 & \#M \Bstrut\\ + 0 & 4 & 0.1 & \multicolumn{1}{c}{...} & -1 & \multicolumn{1}{c}{...} & -2 & 0 & \#M \Bstrut\\ \hline \end{longtable} \end{center} \item Three short parameter lines are then expected after the mortality-growth long parameter lines: one for the final value, one for the inflection year, and one for the width. The final value could be fixed by using 0.2 as the final value on the short parameter line and a negative phase value. The inflection year could be fixed at 2005 by inputting 2005 for the inflection year in the short parameter line with a negative phase. Finally, the width value (i.e., the standard deviation of the cumulative normal distribution) could be set at 3 years; a numeric sketch of the resulting trend follows this list.
The short parameter lines could look like: - \begin{longtable}{ p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} + \begin{longtable}{p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endfirsthead \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endhead @@ -236,9 +236,9 @@ \subsubsection{Example Time-varying Parameter Setups} \endlastfoot - 0.001 & 4 & 0.2 & 0 & 0.01 & 0 & -1 &\#M\_TrendFinal\Tstrut\\ - 1999 & 2011 & 2005 & 0 & 0.01 & 0 & -1 &\#M\_TrendInfl\Bstrut\\ - -99 & 99 & 3 & 0 & 0.01 & 0 & -1 &\#M\_TrendWidth\_yrs\Bstrut\\ + 0.001 & 4 & 0.2 & 0 & 0.01 & 0 & -1 & \#M\_TrendFinal \Tstrut\\ + 1999 & 2011 & 2005 & 0 & 0.01 & 0 & -1 & \#M\_TrendInfl \Bstrut\\ + -99 & 99 & 3 & 0 & 0.01 & 0 & -1 & \#M\_TrendWidth\_yrs \Bstrut\\ \hline \end{longtable} \end{itemize}
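As a rough numeric illustration of the trend example above (not SS3 code; it assumes the trend interpolates between the initial and final M along a cumulative normal curve, which is one plausible reading of the inflection/width description, and SS3's internal parameterization may differ in detail):

```r
# Illustrative sketch only: M trending from 0.1 to 0.2 following a cumulative
# normal curve with inflection year 2005 and width (sd) of 3 years.
m_init  <- 0.1   # fixed INIT value on the M long parameter line
m_final <- 0.2   # M_TrendFinal, fixed via a negative phase
infl    <- 2005  # M_TrendInfl, inflection year
width   <- 3     # M_TrendWidth_yrs, sd of the cumulative normal

years   <- 1999:2011
m_trend <- m_init + (m_final - m_init) * pnorm(years, mean = infl, sd = width)
round(setNames(m_trend, years), 3)
# M is ~0.10 in 1999, 0.15 at the 2005 inflection, and ~0.20 by 2011
```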