From dd58cdf23fe589306a10f5022b569d11f1c10424 Mon Sep 17 00:00:00 2001
From: e-gugliotti-NOAA
Date: Thu, 12 Oct 2023 11:34:54 -0400
Subject: [PATCH 1/8] change ss to ss3 in runningSS.tex and r4ss.tex; change ss to ss3 in helper docs; formatting in other .tex files

---
 .github/workflows/deploy-ss3-docs.yml         |   4 +-
 10optional_inputs.tex                         |  16 +-
 12runningSS.tex                               |  96 ++--
 13output.tex                                  | 144 +++---
 14r4ss.tex                                    |  20 +-
 15special.tex                                 |  16 +-
 1_4sections.tex                               |   2 +-
 5converting.tex                               |   4 +-
 6starter.tex                                  |  17 +-
 8data.tex                                     | 413 +++++++++---------
 9control.tex                                  |  84 ++--
 README.md                                     |   2 +-
 SS.bib => SS3.bib                             |   0
 SS330_User_Manual.tex                         |   2 +-
 ...Started_SS.Rmd => Getting_Started_SS3.Rmd} |  12 +-
 .../model_step_by_step/model_tutorial.Rmd     |  38 +-
 .../ss3_model_tips.Rmd}                       |   2 +-
 _data_weighting.tex                           |   4 +-
 docs/index.md                                 |   4 +-
 tv_parameter_description.tex                  |  36 +-
 20 files changed, 456 insertions(+), 460 deletions(-)
 rename SS.bib => SS3.bib (100%)
 rename User_Guides/getting_started/{Getting_Started_SS.Rmd => Getting_Started_SS3.Rmd} (92%)
 rename User_Guides/{ss_model_tips/ss_model_tips.Rmd => ss3_model_tips/ss3_model_tips.Rmd} (99%)

diff --git a/.github/workflows/deploy-ss3-docs.yml b/.github/workflows/deploy-ss3-docs.yml
index 50bf5111..da5e11f2 100644
--- a/.github/workflows/deploy-ss3-docs.yml
+++ b/.github/workflows/deploy-ss3-docs.yml
@@ -37,8 +37,8 @@ jobs:
     - name: render the rmd files
       run: |
-        rmarkdown::render("User_Guides/ss_model_tips/ss_model_tips.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs")
-        rmarkdown::render("User_Guides/getting_started/Getting_Started_SS.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs")
+        rmarkdown::render("User_Guides/ss3_model_tips/ss3_model_tips.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs")
+        rmarkdown::render("User_Guides/getting_started/Getting_Started_SS3.Rmd", output_format = c("html_document", "pdf_document"), output_dir = "docs")
       shell: Rscript {0}

     - name: Deploy to GitHub pages
diff --git a/10optional_inputs.tex b/10optional_inputs.tex
index 8b916e5b..a47a1d90 100644
--- a/10optional_inputs.tex
+++ b/10optional_inputs.tex
@@ -2,17 +2,17 @@ \section{Optional Inputs}
 \hypertarget{WAA}{}
 \subsection{Empirical Weight-at-Age (wtatage.ss)}
-The model has the capability to read empirical body weight at age for the population and each fleet, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. Selection of this option is done by setting an explicit switch near the top of the control file. The values are read from a separate file named, wtatage.ss. This file is only required to exist if this option is selected.
+The model has the capability to read empirical body weight at age for the population and each fleet, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. Selection of this option is done by setting an explicit switch near the top of the control file. The values are read from a separate file named wtatage.ss. This file is only required to exist if this option is selected.

 The first value read is a single integer for the maximum age used in reading this file. So if the maximum age is 40, there will be 41 columns of weight-at-age entries to read, with the first column being for age 0. If the number of ages specified in this table is greater than the maximum age in the model, the extra weight-at-age values are ignored.
If the number of ages in this table is less than the maximum age in the model, the weight-at-age data for the last age in the file is filled in for all unread ages out to the maximum age. The format of this input file is:

-\begin{tabular}{l l l l l l l l l }
+\begin{tabular}{l l l l l l l l l}
 \hline
- 40 & \multicolumn{8}{l}{Maximum Age}\\
+ 40 & \multicolumn{8}{l}{Maximum Age} \\
 \hline
- & & & Growth & Birth & & & & \Tstrut\\
+ & & & Growth & Birth & & & & \Tstrut\\
 Year & Season & Sex & Pattern & Season & Fleet & Age-0 & Age-1 & ... \Tstrut\Bstrut\\
 \hline
 \-1971 & 1 & 1 & 1 & 1 & -2 & 0 & 0 & 0.1003 \Tstrut\\
@@ -59,14 +59,14 @@ \subsection{Empirical Weight-at-Age (wtatage.ss)}

 \subsection{runnumber.ss}
-This file contains a single integer value. It is read when the program starts, incremented by 1, used when processing the profile value inputs (see below), used as an identifier in the batch output, then saved with the incremented value. Note that this incrementation may not occur if a run crashes.
+This file contains a single integer value. It is read when the program starts, incremented by 1, used when processing the profile value inputs (see below), used as an identifier in the batch output, then saved with the incremented value. Note that this increment may not occur if a run crashes.

 \subsection{profilevalues.ss}
-This file contains information for changing the value of selected parameters for each run in a batch. In the control file, each parameter that will be subject to modification by profilevalues.ss is designated by setting its phase to -9999.
+This file contains information for changing the value of selected parameters for each run in a batch. In the control file, each parameter that will be subject to modification by profilevalues.ss is designated by setting its phase to -9999.

-The first value in profilevalues.ss is the number of parameters to be batched. This value MUST match the number of parameters with phase set equal to -9999 in the control file. The program performs no checks for this equality. If the value is zero in the first field, then nothing else will be read. Otherwise, the model will read runnumber * Nparameters values and use the last Nparameters of these to replace the initial values of parameters designated with phase = --9999 in the control file.
+The first value in profilevalues.ss is the number of parameters to be batched. This value MUST match the number of parameters with phase set equal to -9999 in the control file. The program performs no checks for this equality. If the value is zero in the first field, then nothing else will be read. Otherwise, the model will read runnumber * Nparameters values and use the last Nparameters of these to replace the initial values of parameters designated with phase = -9999 in the control file.

-Usage Note: If one of the batch runs crashes before saving the updated value of runnumber.ss, then the processing of the profilevalue.ss will not proceed as expected. Check the output carefully until a more robust procedure is developed. Also, this options was created before use of R became widespread. You probably can create a more flexible approach using R today.
+Usage Note: If one of the batch runs crashes before saving the updated value of runnumber.ss, then the processing of profilevalues.ss will not proceed as expected. Check the output carefully until a more robust procedure is developed. Also, this option was created before use of R became widespread. You probably can create a more flexible approach using R today.
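For illustration, a rough sketch of such an R-based alternative is shown below. It assumes ss3 is on the PATH, and the text-substitution pattern (``0.15'') standing in for the profiled parameter's initial value in control.ss is hypothetical; adjust it to your own control file.

\begin{quote}
\begin{verbatim}
# Sketch: an R loop standing in for profilevalues.ss / runnumber.ss.
m_values <- c(0.16, 0.18, 0.20)
ctl_template <- readLines("control.ss")
for (i in seq_along(m_values)) {
  # replace the (hypothetical) initial value with the profile value
  new_ctl <- sub("0.15", sprintf("%.2f", m_values[i]),
                 ctl_template, fixed = TRUE)
  writeLines(new_ctl, "control.ss")
  system("ss3 -nohess")                 # run without the Hessian
  file.copy("Report.sso", sprintf("Report_run%d.sso", i),
            overwrite = TRUE)           # keep each run's report
}
\end{verbatim}
\end{quote}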
 \pagebreak
\ No newline at end of file
diff --git a/12runningSS.tex b/12runningSS.tex
index d38a71cf..7dac8a64 100644
--- a/12runningSS.tex
+++ b/12runningSS.tex
@@ -1,13 +1,13 @@
 \section{Running Stock Synthesis} \label{sec:RunningSS}

 \subsection{Command Line Interface}
-The name of the SS3 executable files often contains the phrase ``safe'' or ``opt'' (for optimized). The safe version includes checking for out of bounds values and should always be used whenever there is a change to the data file. The optimized version runs slightly faster but can result in data not being included in the model as intended if the safe version has not been run first. A file named ``ss.exe'' is typically the safe version unless the result of renaming by the user.
+The name of the SS3 executable file often contains the phrase ``safe'' or ``opt'' (for optimized). The safe version includes checking for out-of-bounds values and should always be used whenever there is a change to the data file. The optimized version runs slightly faster but can result in data not being included in the model as intended if the safe version has not been run first. A file named ``ss3.exe'' is typically the safe version unless it is the result of renaming by the user.

 On Mac and Linux computers, the executable does not include an extension (like .exe on Windows). Running the executable from the DOS command line in Windows simply requires typing the executable name (without the .exe extension):

 \begin{quote}
 \begin{verbatim}
-      > ss
+      > ss3
 \end{verbatim}
 \end{quote}

@@ -16,8 +16,8 @@ \subsection{Command Line Interface}

 \begin{quote}
 \begin{verbatim}
-      > chmod a+x ss
-      > ./ss
+      > chmod a+x ss3
+      > ./ss3
 \end{verbatim}
 \end{quote}

@@ -30,13 +30,13 @@ \subsection{Command Line Interface}
 As of ADMB 12.3, a new command called ``-hess\_step'' is available and is documented in the \hyperlink{hess-step}{Using -hess\_step to do additional Newton steps using the inverse Hessian} section.

 \subsubsection{Example of DOS batch input file}
-One file management approach is to put ss.exe in its own folder (example: C:\textbackslash SS\_model) and to put your input files in separate folder (example: C:\textbackslash My Documents \textbackslash SS\_runs). Then a DOS batch file in the SS\_runs folder can be run at the command line to start ss.exe. All output will appear in SS\_runs folder.
+One file management approach is to put ss3.exe in its own folder (example: C:\textbackslash SS3\_model) and to put your input files in a separate folder (example: C:\textbackslash My Documents \textbackslash SS3\_runs). Then a DOS batch file in the SS3\_runs folder can be run at the command line to start ss3.exe. All output will appear in the SS3\_runs folder.

-A DOS batch file (e.g., SS.bat) might contain some explicit ADMB commands, some implicit commands, and some DOS commands:
+A DOS batch file (e.g., SS3.bat) might contain some explicit ADMB commands, some implicit commands, and some DOS commands:

 \begin{quote}
 \begin{verbatim}
-   c:\SS_model\ss.exe -cbs 5000000000 -gbs 50000000000 \%1 \%2 \%3 \%4
+   c:\SS3_model\ss3.exe -cbs 5000000000 -gbs 50000000000 \%1 \%2 \%3 \%4
    del ss.r0*
    del ss.p0*
    del ss.b0*
 \end{verbatim}
 \end{quote}

-In this batch file, the -cbs and -gbs arguments allocate a large amount of memory for SS3 to use (you may need to edit these for your computer and SS3 configuration), and the \%1, \%2 etc.
allows passing of command line arguments such as -nox or -nohess. You add more items to the list of \% arguments as needed.
+In this batch file, the -cbs and -gbs arguments allocate a large amount of memory for SS3 to use (you may need to edit these for your computer and SS3 configuration), and the \%1, \%2, etc., allow passing of command line arguments such as -nox or -nohess. You can add more items to the list of \% arguments as needed.

-An easy way to start a command line in your current directory (SS\_runs) is to create a shortcut to the DOS command line prompt. The shortcut's target would be:
+An easy way to start a command line in your current directory (SS3\_runs) is to create a shortcut to the DOS command line prompt. The shortcut's target would be:

 \begin{quote}
 \begin{verbatim}
@@ -62,7 +62,7 @@ \subsubsection{Example of DOS batch input file}
 \end{verbatim}
 \end{quote}

-An alternative shortcut is to have the executable within the model folder then use Ctrl+Shift+Right Click and then select either ``Open powershell window here'' or ``Open command window here'', depending upon your computer. From the command window the executable name can be typed along with additional inputs (e.g., -nohess) and the model run. If using the powershell type cmd and then hit enter prior to calling the model (ss).
+An alternative shortcut is to have the executable within the model folder, then use Ctrl+Shift+Right Click and select either ``Open powershell window here'' or ``Open command window here'', depending upon your computer. From the command window the executable name can be typed along with additional inputs (e.g., -nohess) and the model run. If using PowerShell, type cmd and hit enter prior to calling the model (ss3).

@@ -74,7 +74,7 @@ \subsubsection{Simple Batch}
    del ss.cor
    del ss.std
    copy starter.r01 starter.ss
-   c:\admodel\ss\ss.exe -sdonly
+   c:\admodel\ss3\ss3.exe -sdonly
    copy ss.std ss-std01.txt
 \end{verbatim}
 \end{quote}

@@ -94,7 +94,7 @@ \subsubsection{Complicated Batch}
    del ss.std
    del ss.cor
    del ss.par
-   c:\admodel\ss\ss.exe
+   c:\admodel\ss3\ss3.exe
    copy /Y ss.par A1-D1-A1-%%i.par
    copy /Y ss.std A1-D1-A1-%%i.std
    find ``Number'' A1-D1-A1-%%i.par >> Summary.txt

@@ -111,7 +111,7 @@ \subsubsection{Running Without Estimation}

 \begin{quote}
 \begin{verbatim}
-   ss -nohess
+   ss3 -nohess
 \end{verbatim}
 \end{quote}

@@ -119,24 +119,24 @@ \subsubsection{Running Without Estimation}

 \begin{quote}
 \begin{verbatim}
-   ss -maxfn 0 -phase 50 -nohess
+   ss3 -maxfn 0 -phase 50 -nohess
 \end{verbatim}
 \end{quote}

 where -maxfn specifies the number of function calls and -phase sets the phase in which the model starts estimation; this number should be greater than the maximum phase used for estimating parameters within the model.

-However, the approaches above differ in subtle ways. First, if the maximum phase is set to 0 in the starter file the total likelihood will differ by a small amount (0.25 likelihood units) compared to the second approach which sets the maxfun and phase in the command window. This small difference is due a dummy parameter which is evaluated by the objective function when maximum phase in the starter is set to 0, resulting in a small contribution to the total likelihood of 0.25. However, all other likelihood components should not change.
+However, the approaches above differ in subtle ways.
First, if the maximum phase is set to 0 in the starter file, the total likelihood will differ by a small amount (0.25 likelihood units) compared to the second approach, which sets -maxfn and -phase in the command window. This small difference is due to a dummy parameter which is evaluated by the objective function when the maximum phase in the starter is set to 0, resulting in a small contribution to the total likelihood of 0.25. However, all other likelihood components should not change.

-The second difference between the two no estimation approaches is the reported number of ``Active\_count'' of parameters in the Report file. If the command line approach is used (ss -maxfn 0 -phase 50 -nohess) then the active number of parameters will equal the number of parameters with positive phases, but because the model is started in a phase greater than the maximum phase in the model, these parameters do not move from the initial values in the control file (or the par file). The first approach where the maximum phase is changed in the starter file will report the number of ``Active\_count'' parameters as 0.
+The second difference between the two no-estimation approaches is the reported number of ``Active\_count'' parameters in the Report file. If the command line approach is used (ss3 -maxfn 0 -phase 50 -nohess), then the active number of parameters will equal the number of parameters with positive phases, but because the model is started in a phase greater than the maximum phase in the model, these parameters do not move from the initial values in the control file (or the par file). The first approach, where the maximum phase is changed in the starter file, will report the number of ``Active\_count'' parameters as 0.

 The final thing to consider when running a model without estimation is whether you are starting from the par file or the control file. If you start from the par file (specified in the starter file: 1=use ss.par), then all parameters, including parameter deviations, will be fixed at the estimated values. However, if the model is not run with the par file, any parameter deviations (e.g., recruitment deviations) will not be included in the model run (a user could paste the estimated recruitment deviations into the control file).

 \myparagraph{Generate .ss\_new files}
-There may be times a user would like to generate the .ss\_new files without running the model, with or without estimation. There are two approaches that a user can take. The first is to manually change the maxphase in the starter.ss file to -1 and running the model as normal will generate these files without running through the model dynamics (e.g., no Report file will be created). The maxphase in the starter.ss\_new file will be set to -1 and will need to be manually changed back if the intent is the replace the original (i.e., starter.ss) file with the new files (i.e., starter.ss\_new). The second approach is to modify the maxphase via the command line or power shell input.
Calling the model using the commands:
+There may be times a user would like to generate the .ss\_new files without running the model, with or without estimation. There are two approaches that a user can take. The first is to manually change the maxphase in the starter.ss file to -1; running the model as normal will then generate these files without running through the model dynamics (e.g., no Report file will be created). The maxphase in the starter.ss\_new file will be set to -1 and will need to be manually changed back if the intent is to replace the original file (i.e., starter.ss) with the new file (i.e., starter.ss\_new). The second approach is to modify the maxphase via the command line or PowerShell input. Calling the model using the commands:

 \begin{quote}
 \begin{verbatim}
-   ss -stopph -1
+   ss3 -stopph -1
 \end{verbatim}
 \end{quote}

@@ -161,27 +161,27 @@ \subsubsection{Running Parameter Profiles}
 %\begin{center}
 \begin{longtable}{p{0.5cm} p{16cm}}
-  & Create a profilevalues.ss file\\
-  & 2 \# number of parameters using profile feature\\
-  & 0.16 \# value for first selected parameter when runnumber equals 1\\
-  & 0.35 \# value for second selected parameter when runnumber equals 1\\
-  & 0.16 \# value for first selected parameter when runnumber equals 2\\
-  & 0.40 \# value for second selected parameter when runnumber equals 2\\
-  & 0.18 \# value for first selected parameter when runnumber equals 3\\
-  & 0.40 \# value for second selected parameter when runnumber equals 3\\
-  & etc.; make it as long as you like.\\
+  & Create a profilevalues.ss file \\
+  & 2 \# number of parameters using profile feature \\
+  & 0.16 \# value for first selected parameter when runnumber equals 1 \\
+  & 0.35 \# value for second selected parameter when runnumber equals 1 \\
+  & 0.16 \# value for first selected parameter when runnumber equals 2 \\
+  & 0.40 \# value for second selected parameter when runnumber equals 2 \\
+  & 0.18 \# value for first selected parameter when runnumber equals 3 \\
+  & 0.40 \# value for second selected parameter when runnumber equals 3 \\
+  & etc.; make it as long as you like. \\
 \end{longtable}

-Create a batch file that looks something like this. Or make it more complicated as in the example above.
+Create a batch file that looks something like this, or make it more complicated as in the example above.

 \begin{quote}
 \begin{verbatim}
    del cumreport.sso
    copy /Y runnumber.zero runnumber.ss % so you will start with runnumber=0
-   C:\SS330\ss.exe
-   C:\SS330\ss.exe
-   C:\SS330\ss.exe
+   C:\SS330\ss3.exe
+   C:\SS330\ss3.exe
+   C:\SS330\ss3.exe
 \end{verbatim}
 \end{quote}

@@ -201,24 +201,24 @@ \subsection{Putting Stock Synthesis in your PATH}

 \subsubsection{For Unix (OS X and Linux)}

-To check if SS3 is in your path, assuming the binary is named SS: open a Terminal window and type \texttt{which SS} and hit enter. If you get nothing returned, then SS3 (named SS or SS.exe) is not in your path. The easiest way to fix this is to move the SS3 binary to a folder that's already in your path. To find existing path folders type \texttt{echo \$PATH} in the terminal and hit enter. Now move the SS3 binary to one of these folders.
+To check if SS3 is in your path, assuming the binary is named SS3: open a Terminal window and type \texttt{which SS3} and hit enter. If you get nothing returned, then SS3 (named SS3 or SS3.exe) is not in your path. The easiest way to fix this is to move the SS3 binary to a folder that's already in your path.
To find existing path folders, type \texttt{echo \$PATH} in the terminal and hit enter. Now move the SS3 binary to one of these folders.

 For example, in a Terminal window type:

 \begin{quote}
 \begin{verbatim}
-   sudo cp ~/Downloads/SS /usr/bin/
+   sudo cp ~/Downloads/SS3 /usr/bin/
 \end{verbatim}
 \end{quote}

-to move an binary called SS from the Downloads folder to \texttt{/usr/bin}. You will need to use \texttt{sudo} and enter your password after to have permission to move a file to a folder like \texttt{/usr/bin/}, because doing so edits the system for other users also.
+to move a binary called SS3 from the Downloads folder to \texttt{/usr/bin}. You will need to use \texttt{sudo} and enter your password to have permission to move a file to a folder like \texttt{/usr/bin/}, because doing so affects the system for other users as well.

-Also note that you may need to add executable permissions to the SS binary after downloading it. You can do that by switching to the folder where you placed the binary
+Also note that you may need to add executable permissions to the SS3 binary after downloading it. You can do that by switching to the folder where you placed the binary
 (\texttt{cd /usr/bin/} if you followed the instructions above), and running the command:

 \begin{quote}
 \begin{verbatim}
-   sudo chmod +x SS
+   sudo chmod +x SS3
 \end{verbatim}
 \end{quote}

@@ -226,7 +226,7 @@ \subsubsection{For Unix (OS X and Linux)}

 \begin{quote}
 \begin{verbatim}
-   which SS
+   which SS3
 \end{verbatim}
 \end{quote}

@@ -234,7 +234,7 @@ \subsubsection{For Unix (OS X and Linux)}

 \begin{quote}
 \begin{verbatim}
-   /usr/bin/SS
+   /usr/bin/SS3
 \end{verbatim}
 \end{quote}

@@ -248,21 +248,21 @@ \subsubsection{For Windows}

-To check if SS3 is in your path for Windows, open a DOS prompt (either Command Prompt or Powershell should work) and type \texttt{SS -?} and hit enter. If the prompt returns a message like \texttt{SS is not recognized...}, then SS3 is not in your path (assuming the SS3 executable is called SS.exe).
+To check if SS3 is in your path for Windows, open a DOS prompt (either Command Prompt or PowerShell should work) and type \texttt{SS3 -?} and hit enter. If the prompt returns a message like \texttt{SS3 is not recognized...}, then SS3 is not in your path (assuming the SS3 executable is called SS3.exe).

 To add the SS3 binary file to your path, follow these steps:

 \begin{enumerate}
-  \item Find the correct version of the SS.exe binary on your computer (or download from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases}{SS3 releases}).
-  \item Move to and note the folder location. E.g., \texttt{C:/SS/}
+  \item Find the correct version of the SS3.exe binary on your computer (or download it from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases}{SS3 releases}).
+  \item Move to and note the folder location. E.g., \texttt{C:/SS3/}
  \item Click on the start menu and type \texttt{environment}
  \item Choose \texttt{Edit environment variables for your account} under Control Panel
  \item Click on \texttt{PATH} if it exists; create it if it does not exist
  \item Choose \texttt{PATH} and click edit
  \item In the \texttt{Edit User Variable} window add to the end of the \texttt{Variable value} section a semicolon and the SS3 folder location you recorded earlier.
-  E.g., \texttt{;C:/SS}. Do not overwrite what was previously in the \texttt{PATH} variable.
+  E.g., \texttt{;C:/SS3}. Do not overwrite what was previously in the \texttt{PATH} variable.
 \item Restart your computer
-  \item Go back to the DOS prompt and try typing \texttt{SS -?} and hitting return again.
+  \item Go back to the DOS prompt and try typing \texttt{SS3 -?} and hitting return again.
 \end{enumerate}

@@ -284,8 +284,8 @@ \subsection{Running Stock Synthesis from R}
 Running SS3 from within R may be desirable for setting up simulations where many runs of SS3 models are required (e.g., \href{https://github.com/ss3sim/ss3sim}{ss3sim}) or if \texttt{r4ss} is already used to read model output.
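A minimal sketch of this workflow, assuming the ss3 executable is on the PATH, a hypothetical model directory named my\_model, and an installed copy of \texttt{r4ss}:

\begin{quote}
\begin{verbatim}
# Run SS3 in a model directory from R, then read and plot the output.
library(r4ss)
old_dir <- setwd("my_model")   # hypothetical model directory
system("ss3 -nohess")          # assumes ss3 is on the PATH
setwd(old_dir)
replist <- SS_output(dir = "my_model")  # read Report.sso and friends
SS_plots(replist)                       # standard r4ss plots
\end{verbatim}
\end{quote}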
-\subsection{The Stock Synthesis GUI (SSI)}
-\href{https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/5042951}{Stock Synthesis Interface} (SSI or the SS3 GUI) provides an interface for loading, editing, and running model files, and also can link to r4ss to generate plots.
+% \subsection{The Stock Synthesis GUI (SSI)}
+% \href{https://vlab.noaa.gov/web/stock-synthesis/document-library/-/document_library/0LmuycloZeIt/view/5042951}{Stock Synthesis Interface} (SSI or the SS3 GUI) provides an interface for loading, editing, and running model files, and also can link to r4ss to generate plots.

 \subsection{Debugging Tips}
 When input files are causing the program to crash or fail to produce sensible results, there are a few steps that can be taken to diagnose the problem. Before trying the steps below, examine the echoinput.sso file. It is highly annotated, so you should be able to see if the model is interpreting your input files as you intended. Additionally, users should check the warning.sso file when attempting to debug a non-running model.

 \begin{enumerate}
 \item Set the turn\_off\_phase switch to 0 in the starter.ss file. This will cause the model to not attempt to adjust any parameters and simply converge a dummy parameter. It will still produce a Report.sso file, which can be examined to see what has been calculated from the initial parameter values.
 \item Turn the verbosity level to 2 in the starter.ss file. This will cause the program to display the value of each likelihood component to the screen on each iteration. So if the program is creating an illegal computation (e.g., divide by zero), it may show you which likelihood component contains the problematic calculation. If the program is producing a Report.sso file, you may then see which observation is causing the illegal calculation.
- \item Run the program with the command ss >>SSpipe.txt. This will cause all screen display to go to the specified text file (note, delete this file before running because it will be appended to). Examination of this file will show detailed statements produced during the reading and preprocessing of input files.
+ \item Run the program with the command ss3 >>SSpipe.txt. This will cause all screen display to go to the specified text file (note: delete this file before running because it will be appended to). Examination of this file will show detailed statements produced during the reading and preprocessing of input files.
 \item If the model fails to achieve a proper Hessian it exits without writing the detailed outputs in the FINAL\_SECTION. If this happens, you can do a run with the -nohess option so you can view the Report.sso to attempt to diagnose the problem.
 \item If the problem is with reading one or more of the input files, please note that certain Mac line endings cannot be read by the model (although this is a rare occurrence). Be sure to save the text files with Windows or Linux style line endings so that the executable can parse them.
 \end{enumerate}

@@ -315,7 +315,7 @@ \subsection{Running MCMC}
 \item Recommended: Remove existing .psv files in the run directory to generate a new chain.
 \item Recommended: Before running, set the run detail switch in the starter file to 0 to limit printing to the screen; reporting to the screen will slow MCMC progress.
 \item Optional: Add \texttt{-nohess} to use the existing Hessian file without re-estimating.
- \item Optional: To start the MCMC chain from specific values change the par file: run the model with estimation, adjust the par file to the values that the chain should start from, change within the starter file for the model to begin from the par file, and call the MCMC function using \texttt{ss -mcmc xxxx - mcsave yyyy -nohess -noest}.
+ \item Optional: To start the MCMC chain from specific values, change the par file: run the model with estimation, adjust the par file to the values that the chain should start from, set the starter file so the model begins from the par file, and call the MCMC function using \texttt{ss3 -mcmc xxxx -mcsave yyyy -nohess -noest}.
 \end{itemize}

 \noindent Run SS3 with the argument -mceval to get more summaries
diff --git a/13output.tex b/13output.tex
index e5f46df0..866291c9 100644
--- a/13output.tex
+++ b/13output.tex
@@ -1,42 +1,42 @@
 \section{Output Files}

 \subsection{Custom Reporting}
-\hypertarget{custom}{Additional} user control for what is included in the Report.sso file was added in v.3.30.16. This approach allows for full customizing of what is printed to the Report file by selecting custom reporting (option = 3) in the starter file where specific items now can be included or excluded depending upon a list passed to SS3 from the starter file. The numbering system for each item in the Report file is as follows:
+\hypertarget{custom}{Additional} user control for what is included in the Report.sso file was added in v.3.30.16. This approach allows full customization of what is printed to the Report file by selecting custom reporting (option = 3) in the starter file, where specific items can be included or excluded depending upon a list passed to SS3 from the starter file. The numbering system for each item in the Report file is as follows:

 \begin{center}
 \begin{longtable}{p{1cm} p{6.5cm} p{1cm} p{6cm}}
 \hline
- Num. & Report Item & Num. & Report Item\Tstrut\Bstrut\\
+ Num. & Report Item & Num.
& Report Item \Tstrut\Bstrut\\
 \hline
-1 & DEFINITIONS & 31 & LEN SELEX \\
-2 & LIKELIHOOD & 32 & AGE SELEX \\
-3 & Input Variance Adjustment & 33 & ENVIRONMENTAL DATA \\
-4 & Parm devs detail & 34 & TAG Recapture \\
-5 & PARAMETERS & 35 & NUMBERS-AT-AGE \\
-6 & DERIVED QUANTITIES & 36 & BIOMASS-AT-AGE \\
-7 & MGparm By Year after adjustments & 37 & NUMBERS-AT-LENGTH \\
-8 & selparm(Size) By Year after adjustments & 38 & BIOMASS-AT-LENGTH \\
-9 & selparm(Age) By Year after adjustments & 39 & F-AT-AGE \\
-10 & RECRUITMENT DIST & 40 & CATCH-AT-AGE \\
-11 & MORPH INDEXING & 41 & DISCARD-AT-AGE \\
+1 & DEFINITIONS & 31 & LEN SELEX \\
+2 & LIKELIHOOD & 32 & AGE SELEX \\
+3 & Input Variance Adjustment & 33 & ENVIRONMENTAL DATA \\
+4 & Parm devs detail & 34 & TAG Recapture \\
+5 & PARAMETERS & 35 & NUMBERS-AT-AGE \\
+6 & DERIVED QUANTITIES & 36 & BIOMASS-AT-AGE \\
+7 & MGparm By Year after adjustments & 37 & NUMBERS-AT-LENGTH \\
+8 & selparm(Size) By Year after adjustments & 38 & BIOMASS-AT-LENGTH \\
+9 & selparm(Age) By Year after adjustments & 39 & F-AT-AGE \\
+10 & RECRUITMENT DIST & 40 & CATCH-AT-AGE \\
+11 & MORPH INDEXING & 41 & DISCARD-AT-AGE \\
 12 & SIZEFREQ TRANSLATION & 42 & BIOLOGY \\
-13 & MOVEMENT & 43 & Natural Mortality \\
-14 & EXPLOITATION & 44 & AGE SPECIFIC K \\
-15 & CATCH & 45 & Growth Parameters \\
+13 & MOVEMENT & 43 & Natural Mortality \\
+14 & EXPLOITATION & 44 & AGE SPECIFIC K \\
+15 & CATCH & 45 & Growth Parameters \\
 16 & TIME SERIES & 46 & Seas Effects \\
-17 & SPR SERIES & 47 & Biology at age in endyr \\
-18 & Kobe Plot & 48 & MEAN BODY WT(Begin) \\
-19 & SPAWN RECRUIT & 49 & MEAN SIZE TIMESERIES \\
+17 & SPR SERIES & 47 & Biology at age in endyr \\
+18 & Kobe Plot & 48 & MEAN BODY WT(Begin) \\
+19 & SPAWN RECRUIT & 49 & MEAN SIZE TIMESERIES \\
 20 & SPAWN RECR CURVE & 50 & AGE LENGTH KEY \\
 21 & INDEX 1 & 51 & AGE AGE KEY \\
-22 & INDEX 2 & 52 & COMPOSITION DATABASE \\
-23 & INDEX 3 & 53 & SELEX database \\
+22 & INDEX 2 & 52 & COMPOSITION DATABASE \\
+23 & INDEX 3 & 53 & SELEX database \\
 24 & DISCARD SPECIFICATION & 54 & SPR/YPR Profile \\
 25 & DISCARD OUTPUT & 55 & GLOBAL MSY \\
 26 & MEAN BODY WT OUTPUT & 56 & SS\_summary.sso \\
-27 & FIT LEN COMPS & 57 & rebuilder.sso \\
-28 & FIT AGE COMPS & 58 & SIStable.sso \\
-29 & FIT SIZE COMPS & 59 & Dynamic Bzero \\
+27 & FIT LEN COMPS & 57 & rebuilder.sso \\
+28 & FIT AGE COMPS & 58 & SIStable.sso \\
+29 & FIT SIZE COMPS & 59 & Dynamic Bzero \\
 30 & OVERALL COMPS & 60 & wtatage.ss\_new \\
 \hline
 \end{longtable}
 \end{center}

@@ -45,18 +45,18 @@ \subsection{Custom Reporting}
 \subsection{Standard ADMB output files}
 Standard ADMB files are created by SS3. These are:

-ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS]{Running Stock Synthesis} for more info).
+ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS]{Running Stock Synthesis} for more info).

-ss.std - This file has the parameter values and their estimated standard deviation for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also report in the covar.sso. Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations.
+ss.std - This file has the parameter values and their estimated standard deviations for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also reported in the covar.sso.
Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations.

 ss.rep - This report file is created between phases so, unlike Report.sso, it will be created even if the Hessian fails. It does not contain as much output as shown in Report.sso.

-ss.cor - This is the standard ADMB report for parameter and standard deviation report correlations. It is in matrix form and challenging to interpret. This same information is reported in covar.sso.
+ss.cor - This is the standard ADMB report for parameter and standard deviation report correlations. It is in matrix form and challenging to interpret. This same information is reported in covar.sso.

 \subsection{Stock Synthesis Summary}

-The ss\_summary.sso file (available for versions 3.30.08.03 and later) is designed to put key model outputs all in one concise place. It is organized as a list. At the top of the file are descriptors, followed by the 1) likelihoods for each component, 2) parameters and their standard errors, and 3) derived quantities and their standard errors. Total biomass, summary biomass, and catch were added to the quantities reported in this file in version 3.30.11 and later.
+The ss\_summary.sso file (available for versions 3.30.08.03 and later) is designed to put key model outputs all in one concise place. It is organized as a list. At the top of the file are descriptors, followed by 1) the likelihoods for each component, 2) parameters and their standard errors, and 3) derived quantities and their standard errors. Total biomass, summary biomass, and catch were added to the quantities reported in this file in v.3.30.11 and later.

-Before 3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME\_SERIES table of Report.sso. The report file should be used instead of ss\_summary.sso for correct calculation of these quantities before 3.30.17. Care should be taken when using the TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in report.sso for calculations done at times other than January 1.
+Before v.3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME\_SERIES table of Report.sso. The report file should be used instead of ss\_summary.sso for correct calculation of these quantities before v.3.30.17. Care should be taken when using TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as the TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in Report.sso for calculations done at times other than January 1.
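When many runs need to be compared, this file can be read directly into R. The sketch below assumes the \texttt{r4ss} package; the list element names shown are those used by recent \texttt{r4ss} versions and may differ in older releases.

\begin{quote}
\begin{verbatim}
# Read ss_summary.sso into a list of tables.
library(r4ss)
smry <- SS_read_summary(file = "ss_summary.sso")
smry$likelihoods           # likelihood components
head(smry$derived_quants)  # derived quantities with standard errors
\end{verbatim}
\end{quote}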
\subsection{SIS table}
 The SIS\_table.sso is deprecated as of SS3 v.3.30.17. Please use the \hyperref[sec:r4ss]{r4ss} function \texttt{get\_SIS\_info()} instead.

@@ -68,7 +68,7 @@ \subsection{Derived Quantities}
 \hypertarget{VirginUnfished}{}
 \subsubsection{Virgin Spawning Biomass vs Unfished Spawning Biomass}
-Unfished is the condition for which reference points (benchmark) are calculated. Virgin Spawning Biomass (B0) is the initial condition on which the start of the time-series depends.If biology or spawner-recruitment parameters are time-varying, then the benchmark year input in the forecast file tells the model which years to average in order to calculate ``unfished''. In this case, virgin recruitment and/or the virgin spawning biomass will differ from their unfished counterparts. Virgin recruitment and spawning biomass are reported in the mgmt\_quant portion of the sd\_report and are now labeled as ``unfished'' for clarity. Note that if ln(R0) is time-varying, then this will cause unfished to differ from virgin. However, if regime shift parameter is time-varying, then unfished will remain the same as virgin because the regime shift is treated as a temporary offset from virgin. Virgin spawning biomass is denoted as SPB\_virgin and spawning biomass unfished is denoted as SPB\_unf in the report file.
+Unfished is the condition for which reference points (benchmarks) are calculated. Virgin Spawning Biomass (B0) is the initial condition on which the start of the time series depends. If biology or spawner-recruitment parameters are time-varying, then the benchmark year input in the forecast file tells the model which years to average in order to calculate ``unfished''. In this case, virgin recruitment and/or the virgin spawning biomass will differ from their unfished counterparts. Virgin recruitment and spawning biomass are reported in the mgmt\_quant portion of the sd\_report and are now labeled as ``unfished'' for clarity. Note that if ln(R0) is time-varying, then this will cause unfished to differ from virgin. However, if a regime shift parameter is time-varying, then unfished will remain the same as virgin because the regime shift is treated as a temporary offset from virgin. Virgin spawning biomass is denoted as SPB\_virgin and unfished spawning biomass as SPB\_unf in the report file.

 Virgin Spawning Biomass (B0) is used in four ways within SS3:
 \begin{enumerate}
@@ -83,9 +83,9 @@ \subsubsection{Metric for Fishing Mortality}
 A generic single metric of annual fishing mortality is difficult to define in a generalized model that admits multiple areas, multiple biological cohorts, and dome-shaped selectivity in size and age for each of many fleets. Several separate indices are provided, and others could be calculated by a user from the detailed information in Report.sso.

 \subsubsection{Equilibrium SPR}
-This index focuses on the effect of fishing on the spawning potential of the stock. It is calculated as the ratio of the equilibrium reproductive output per recruit that would occur with the current year's F intensities and biology, to the equilibrium reproductive output per recruit that would occur with the current year's biology and no fishing. Thus it internalizes all seasonality, movement, weird selectivity patterns, and other factors. Because this index moves in the opposite direction than F intensity itself, it is usually reported as 1-SPR. A benefit of this index is that it is a direct measure of common proxies used for F\textsubscript{MSY}, such as F\textsubscript {40\%}. A shortcoming of this index is that it does not directly demonstrate the fraction of the stock that is caught each year. The SPR value is also calculated in the benchmarks (see below).
+This index focuses on the effect of fishing on the spawning potential of the stock. It is calculated as the ratio of the equilibrium reproductive output per recruit that would occur with the current year's F intensities and biology, to the equilibrium reproductive output per recruit that would occur with the current year's biology and no fishing. Thus it internalizes all seasonality, movement, weird selectivity patterns, and other factors. Because this index moves in the opposite direction from F intensity itself, it is usually reported as 1-SPR. A benefit of this index is that it is a direct measure of common proxies used for F\textsubscript{MSY}, such as F\textsubscript{40\%}.
A shortcoming of this index is that it does not directly demonstrate the fraction of the stock that is caught each year. The SPR value is also calculated in the benchmarks (see below).

-The derived quantities report shows an annual SPR statistic. The options, as specified in the starter.ss file, are:
+The derived quantities report shows an annual SPR statistic. The options, as specified in the starter.ss file, are:
 \begin{itemize}
 \item 0 = skip
 \item 1 = (1-SPR)/(1-SPR\textsubscript{TGT})
 \item 2 = (1-SPR)/(1-SPR\textsubscript{MSY})
 \item 3 = (1-SPR)/(1-SPR\textsubscript{Btarget})
 \item 4 = raw SPR
 \end{itemize}

-The SPR approach to measuring fishing intensity was implemented because the concept of a single annual F does not exist in SS3 because F varies by age, sex, and growth morph and season and area. There is no single F value that is applied to all ages unless you create a very simple model setup with knife-edge selectivity. So, what you see in the options are various ways to calculate annual fishing intensity. They can be broken down into three categories. One is exploitation rate calculated simply as total catch divided by biomass from a defined age range. Another is SPR, which is a single measure of the equilibrium effect of fishing according to the F. The third category are various ways to calculate an average F. Some measures of fishing intensity will be misleading if applied inappropriately. For example, the sum of the apical F's will be misleading if different fleets have very different selectivities or, worse, if they occur in different areas. The F=Z-M approach to getting fishing intensity is a way to have a single F that represents a number's weighted value across multiple areas, sexes, morphs, ages. An important distinction is that the exploitation rate and F-based approaches directly relate to the fraction of the population removed each year by fishing; whereas the SPR approach represents the cumulative effect of fishing so it's equivalent in F-space depends on M.
+The SPR approach to measuring fishing intensity was implemented because the concept of a single annual F does not exist in SS3: F varies by age, sex, growth morph, season, and area. There is no single F value that is applied to all ages unless you create a very simple model setup with knife-edge selectivity. So, what you see in the options are various ways to calculate annual fishing intensity. They can be broken down into three categories. One is the exploitation rate, calculated simply as total catch divided by biomass from a defined age range. Another is SPR, which is a single measure of the equilibrium effect of fishing according to the F.
The third category comprises various ways to calculate an average F. Some measures of fishing intensity will be misleading if applied inappropriately. For example, the sum of the apical F's will be misleading if different fleets have very different selectivities or, worse, if they occur in different areas. The F=Z-M approach to getting fishing intensity is a way to have a single F that represents a numbers-weighted value across multiple areas, sexes, morphs, and ages. An important distinction is that the exploitation rate and F-based approaches directly relate to the fraction of the population removed each year by fishing, whereas the SPR approach represents the cumulative effect of fishing, so its equivalent in F-space depends on M.

 \subsubsection{F std}
-This index provides a direct measure of fishing mortality. The options are:
+This index provides a direct measure of fishing mortality. The options are:
 \begin{itemize}
 \item 0 = skip
 \item 1 = exploitation(Bio)
 \item 2 = exploitation(Num)
 \item 3 = sum(Frates)
 \end{itemize}

-The exploitation rates are calculated as the ratio of the total annual catch (in either biomass or numbers as specified) to the summary biomass or summary numbers on January 1. The sum of the F rates is simply the sum of all the apical Fs. This makes sense if the F method is in terms of instantaneous F (not Pope's approximation) and if there are not fleets with widely different size/age at peak selectivity, and if there is no seasonality, and especially if there is only one area. In the derived quantities, there is an annual statistic that is the ratio of the can be annual F\_std value to the corresponding benchmark statistic. The available options for the denominator are:
+The exploitation rates are calculated as the ratio of the total annual catch (in either biomass or numbers, as specified) to the summary biomass or summary numbers on January 1. The sum of the F rates is simply the sum of all the apical Fs. This makes sense if the F method is in terms of instantaneous F (not Pope's approximation), if there are not fleets with widely different size/age at peak selectivity, if there is no seasonality, and especially if there is only one area. In the derived quantities, there is an annual statistic that is the ratio of the annual F\_std value to the corresponding benchmark statistic. The available options for the denominator are:
 \begin{itemize}
 \item 0 = raw
 \item 1 = F/F\textsubscript{SPR}
 \item 2 = F/F\textsubscript{MSY}
 \item 3 = F/F\textsubscript{Btarget}
-	\item >= 11 A new option to allow for the calculation of a multi-year trailing average in F was implemented in v. 3.30.16. This option is triggered by appending the number of years to calculate the average across where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.
+	\item >= 11 A new option to allow for the calculation of a multi-year trailing average in F was implemented in v.3.30.16. This option is triggered by appending the number of years over which to calculate the average, where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively, a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.
\end{itemize}

 \subsubsection{F-at-Age}
 Because the annual F is so difficult to interpret as a sum of individual F components, an indirect calculation of F-at-age is reported at the end of the Report.sso file. This section of the report calculates Z-at-age simply as $-ln(N_{a+1,t+1}/N_{a,t})$. This is done on an annual basis and summed over all areas. It is done once using the fishing intensities as estimated (to get Z), and once with the F intensities set to 0.0 to get M-at-age. This latter sequence also provides a measure of dynamic Bzero. The user can then subtract the table of M-at-age/year from the table of Z-at-age/year to get a table of F-at-age/year. From this, apical F, average F over a range of ages, or other user-desired statistics can be calculated. Further work within SS3 with this table of values is anticipated.
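As a sketch of that subtraction (the file names are hypothetical extracts of the Z-at-age and M-at-age tables from Report.sso, saved as whitespace-delimited year-by-age tables):

\begin{quote}
\begin{verbatim}
# F-at-age = Z-at-age - M-at-age, element by element.
z_at_age <- as.matrix(read.table("Z_at_age.txt", header = TRUE))
m_at_age <- as.matrix(read.table("M_at_age.txt", header = TRUE))
f_at_age <- z_at_age - m_at_age
apical_f <- apply(f_at_age, 1, max)  # apical F by year
# average F over ages 2-6, assuming age 0 is in column 1:
f_bar <- rowMeans(f_at_age[, 3:7])
\end{verbatim}
\end{quote}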
\subsubsection{MSY and other Benchmark Items}
 The following quantities are included in the sdreport vector mgmt\_quantities, so estimates of variance can be obtained for them. Some additional quantities can be found in the benchmarks section of the forecast\_report.sso.

@@ -122,12 +122,12 @@ \subsubsection{MSY and other Benchmark Items}
 \begin{center}
 \begin{longtable}{p{4cm} p{11cm}}
 \hline
- Benchmark Item & Description\Tstrut\Bstrut\\
+ Benchmark Item & Description \Tstrut\Bstrut\\
 \hline
 \endfirsthead

 \hline
- Benchmark Item & Description\Tstrut\Bstrut\\
+ Benchmark Item & Description \Tstrut\Bstrut\\
 \hline
 \endhead

 \hline
 \endlastfoot

- SSB\_Unfished \Tstrut& Unfished reproductive potential (SSB is commonly female mature spawning biomass).\\
- TotBio\_Unfished \Tstrut& Total age 0+ biomass on January 1.\\
- SmryBio\_Unfished \Tstrut& Biomass for ages at or above the summary age on January 1.\\
- Recr\_Unfished \Tstrut& Unfished recruitment.\\
- SSB\_Btgt \Tstrut& SSB at user specified SSB target.\\
- SPR\_Btgt \Tstrut& Spawner potential ratio (SPR) at F intensity that produces user specified SSB target.\\
- Fstd\_Btgt \Tstrut& F statistic at F intensity that produces user specified SSB target.\\
- TotYield\_Btgt \Tstrut& Total yield at F intensity that produces user specified SSB target.\\
- SSB\_SPRtgt \Tstrut& SSB at user specified SPR target (but taking into account the spawner-recruitment relationship).\\
- Fstd\_SPRtgt \Tstrut& F intensity that produces user specified SPR target.\\
- TotYield\_SPRtgt \Tstrut& Total yield at F intensity that produces user specified SPR target.\\
- SSB\_MSY \Tstrut& SSB at F intensity that is associated with MSY; this F intensity may be directly calculated to produce MSY, or can be mapped to F\_SPR or F\_Btgt.\\
- SPR\_MSY \Tstrut& Spawner potential ratio (SPR) at F intensity associated with MSY.\\
- Fstd\_MSY \Tstrut& F statistic at F intensity associated with MSY.\\
- TotYield\_MSY \Tstrut& Total yield (biomass) at MSY.\\
- RetYield\_MSY \Tstrut& Retained yield (biomass) at MSY.\Bstrut\\
+ SSB\_Unfished \Tstrut & Unfished reproductive potential (SSB is commonly female mature spawning biomass). \\
+ TotBio\_Unfished \Tstrut & Total age 0+ biomass on January 1. \\
+ SmryBio\_Unfished \Tstrut & Biomass for ages at or above the summary age on January 1. \\
+ Recr\_Unfished \Tstrut & Unfished recruitment. \\
+ SSB\_Btgt \Tstrut & SSB at user-specified SSB target. \\
+ SPR\_Btgt \Tstrut & Spawner potential ratio (SPR) at F intensity that produces user-specified SSB target. \\
+ Fstd\_Btgt \Tstrut & F statistic at F intensity that produces user-specified SSB target. \\
+ TotYield\_Btgt \Tstrut & Total yield at F intensity that produces user-specified SSB target. \\
+ SSB\_SPRtgt \Tstrut & SSB at user-specified SPR target (but taking into account the spawner-recruitment relationship). \\
+ Fstd\_SPRtgt \Tstrut & F intensity that produces user-specified SPR target. \\
+ TotYield\_SPRtgt \Tstrut & Total yield at F intensity that produces user-specified SPR target. \\
+ SSB\_MSY \Tstrut & SSB at F intensity that is associated with MSY; this F intensity may be directly calculated to produce MSY, or can be mapped to F\_SPR or F\_Btgt. \\
+ SPR\_MSY \Tstrut & Spawner potential ratio (SPR) at F intensity associated with MSY. \\
+ Fstd\_MSY \Tstrut & F statistic at F intensity associated with MSY. \\
+ TotYield\_MSY \Tstrut & Total yield (biomass) at MSY. \\
+ RetYield\_MSY \Tstrut & Retained yield (biomass) at MSY.
\Bstrut\\
 \end{longtable}
 \end{center}

@@ -159,14 +159,14 @@ \subsection{Brief cumulative output}

 \hypertarget{bootstrap}{}
 \subsection{Bootstrap Data Files}
-It is possible to create bootstrap data files for SS3 where an internal parametric bootstrap function generates a simulated data set by parametric bootstrap sampling the expected values given the input observation error. Starting in version 3.30.19, bootstrap data files are output separated in single numbered files (e.g., data\_boot\_001.ss). In version prior to version 3.30.19 a single file called data.ss\_new was output that contained multiple sections: the original data echoed out, the expected data values based on the model fit, and then subsequent bootstrap data files.
+It is possible to create bootstrap data files for SS3, where an internal parametric bootstrap function generates a simulated data set by parametric bootstrap sampling of the expected values given the input observation error. Starting in v.3.30.19, bootstrap data files are output as separate numbered files (e.g., data\_boot\_001.ss). In versions prior to v.3.30.19, a single file called data.ss\_new was output that contained multiple sections: the original data echoed out, the expected data values based on the model fit, and then subsequent bootstrap data files.

-Specifying the number of bootstrap data files has remained the same across model versions. Creating bootstrap data files is specified in the starter file via the ``Number of datafiles to produce'' line where a value of 3 or greater will create three files: the original data file, data\_echo.ss\_new, a data file with the model expected values, data\_expval.ss, and single bootstrap data file, data\_boot\_001.ss. The first output provides the unaltered input data file (with annotations added). The second provides the expected values for only the data elements used in the model run. The third and subsequent outputs provide parametric bootstraps around the expected values.
+Specifying the number of bootstrap data files has remained the same across model versions. Creating bootstrap data files is specified in the starter file via the ``Number of datafiles to produce'' line, where a value of 3 or greater will create three files: the original data file (data\_echo.ss\_new), a data file with the model expected values (data\_expval.ss), and a single bootstrap data file (data\_boot\_001.ss). The first output provides the unaltered input data file (with annotations added). The second provides the expected values for only the data elements used in the model run. The third and subsequent outputs provide parametric bootstraps around the expected values.

 The bootstrapping procedure within SS3 is done via the following steps:
 \begin{itemize}
-	\item Expected values of all input data are calculated (these are also used in the likelihood which compares observed to expected values for all data). The calculation of these expected values is described in detail under the ``Observation Model'' section of the appendix to \citet{methotstock2013}. \
+	\item Expected values of all input data are calculated (these are also used in the likelihood, which compares observed to expected values for all data). The calculation of these expected values is described in detail under the ``Observation Model'' section of the appendix to \citet{methotstock2013}.
 \item Parametric bootstrap data are calculated for each observation by sampling from a probability distribution corresponding to the likelihood for that data type, using the expected values noted above. Examples of how this happens include the following:

@@ -198,30 +198,30 @@ \subsection{Bootstrap Data Files}
 \end{itemize}
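A common workflow is to request the bootstrap files in starter.ss and then fit the model to each simulated data set. The sketch below uses the \texttt{r4ss} starter reader/writer; the \texttt{N\_bootstraps} element name is an assumption about the list returned by \texttt{SS\_readstarter()}, and the folder layout is hypothetical.

\begin{quote}
\begin{verbatim}
# Request 5 data files (original, expected values, 3 bootstraps),
# run the model, then set up one folder per bootstrap data set.
library(r4ss)
starter <- SS_readstarter("starter.ss")
starter$N_bootstraps <- 5              # element name assumed
SS_writestarter(starter, overwrite = TRUE)
system("ss3")                          # writes data_boot_001.ss, ...
for (i in 1:3) {
  boot_dir <- sprintf("boot_%03d", i)
  dir.create(boot_dir)
  file.copy(c("starter.ss", "control.ss", "forecast.ss"), boot_dir)
  file.copy(sprintf("data_boot_%03d.ss", i),
            file.path(boot_dir, "data.ss"))
  # edit starter.ss in boot_dir to point at data.ss, then run ss3 there
}
\end{verbatim}
\end{quote}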
During the modeled time series, the virgin recruitment level and any recruitments prior to the first year of recruitment deviations are set at the level of R, and the lognormal recruitment deviations are centered on the R' level. For the forecast recruitments, the fraction control can be set to 1.0 so that 100\% of the log-bias correction is applied and the forecast recruitment deviations will be based on the R' level. This is certainly the configuration to use when the model is in MCMC mode. Setting the fraction to 0.0 during maximum likelihood forecasts would center the recruitment deviations, which all have a value of 0.0 in maximum likelihood mode, on R. Thus would provide a mean forecast that would be more comparable to the mean of the ensemble of forecasts produced in MCMC mode. Further work on this topic is underway.
+An additional control is provided for the fraction of the log-bias adjustment to apply to the forecast recruitments. Recall that R is the expected mean level of recruitment for a particular year as specified by the spawner-recruitment curve and R' is the geometric mean recruitment level calculated by discounting R with the log-bias correction factor $e^{-0.5s^2}$. Thus a lognormal distribution of recruitment deviations centered on R' will produce a mean level of recruitment equal to R. During the modeled time series, the virgin recruitment level and any recruitments prior to the first year of recruitment deviations are set at the level of R, and the lognormal recruitment deviations are centered on the R' level. For the forecast recruitments, the fraction control can be set to 1.0 so that 100\% of the log-bias correction is applied and the forecast recruitment deviations will be based on the R' level. This is certainly the configuration to use when the model is in MCMC mode. Setting the fraction to 0.0 during maximum likelihood forecasts would center the recruitment deviations, which all have a value of 0.0 in maximum likelihood mode, on R. This would provide a mean forecast that would be more comparable to the mean of the ensemble of forecasts produced in MCMC mode. Further work on this topic is underway.

Note:
\begin{itemize}
 \item Cohorts continue growing according to their specific growth parameters in the forecast period rather than staying static at the end year values.
- \item Environmental data entered for future years can be used to adjust expected recruitment levels. However, environmental data will not affect growth or selectivity parameters in the forecast.
+ \item Environmental data entered for future years can be used to adjust expected recruitment levels. However, environmental data will not affect growth or selectivity parameters in the forecast.
\end{itemize}

The top of the Forecast-report file shows the search for F\textsubscript {SPR} and the search for F\textsubscript {MSY}, allowing the user to verify convergence. Note: if the STD file shows aberrant results, such as all the standard deviations being the same value for all recruitments, then check the F\textsubscript {MSY} search for convergence. The F\textsubscript {MSY} can be calculated, or set equal to one of the other F reference points per the selection made in starter.ss.

\subsection{Main Output File, Report.sso}
-This is the primary output file. Its major sections (as of SS3 v.3.30.16) are listed below.
+This is the primary output file. Its major sections (as of SS3 v.3.30.16) are listed below.

The sections of the output file are:
\begin{itemize}
- \item SS3 version number with date compiled.
Time and date of model run. This info appears at the top of all output files. + \item SS3 version number with date compiled. Time and date of model run. This info appears at the top of all output files. \item Comments \begin{itemize} - \item Input file lines starting with \#C are echoed here. + \item Input file lines starting with \#C are echoed here. \end{itemize} \item Keywords \begin{itemize} @@ -245,7 +245,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Parameters \begin{itemize} - \item The parameters are listed here. For the estimated parameters, the display shows: Num (count of parameters), Label (as internally generated by SS3), Value, Active\_Cnt, Phase, Min, Max, Init, Prior, Prior\_type, Prior\_SD, Prior\_Like, Parm\_StD (standard deviation of parameter as calculated from inverse Hessian), Status (e.g., near bound), and Pr\_atMin (value of prior penalty if parameter was near bound). The Active\_Cnt entry is a count of the parameters in the same order they appear in the ss.cor file. + \item The parameters are listed here. For the estimated parameters, the display shows: Num (count of parameters), Label (as internally generated by SS3), Value, Active\_Cnt, Phase, Min, Max, Init, Prior, Prior\_type, Prior\_SD, Prior\_Like, Parm\_StD (standard deviation of parameter as calculated from inverse Hessian), Status (e.g., near bound), and Pr\_atMin (value of prior penalty if parameter was near bound). The Active\_Cnt entry is a count of the parameters in the same order they appear in the ss.cor file. \end{itemize} \item Derived Quantities \begin{itemize} @@ -258,7 +258,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \end{itemize} -Then the time series of output, with standard deviation of estimates, are produced with internally generated labels. Note that these time series extend through the forecast era. The order of the output is: spawning biomass, recruitment, SPRratio, Fratio, Bratio, management quantities, forecast catch (as a target level), forecast catch as a limit level (OFL), Selex\_std, Grow\_std, NatAge\_std. For the three ``ratio'' quantities, there is an additional column of output showing a Z-score calculation of the probability that the ratio differs from 1.0. The ``management quantities'' section is designed to meet the terms of reference for west coast groundfish assessments; other formats could be made available upon request. The standard deviation quantities at the end are set up according to specifications at the end of the control input file. In some cases, a user may specify that no derived quantity output of a certain type be produced. In those cases, SS3 substitutes a repeat output of the virgin spawning biomass so that vectors of null length are not created. +Then the time series of output, with standard deviation of estimates, are produced with internally generated labels. Note that these time series extend through the forecast era. The order of the output is: spawning biomass, recruitment, SPRratio, Fratio, Bratio, management quantities, forecast catch (as a target level), forecast catch as a limit level (OFL), Selex\_std, Grow\_std, NatAge\_std. For the three ``ratio'' quantities, there is an additional column of output showing a Z-score calculation of the probability that the ratio differs from 1.0. The ``management quantities'' section is designed to meet the terms of reference for west coast groundfish assessments; other formats could be made available upon request. 
The standard deviation quantities at the end are set up according to specifications at the end of the control input file. In some cases, a user may specify that no derived quantity output of a certain type be produced. In those cases, SS3 substitutes a repeat output of the virgin spawning biomass so that vectors of null length are not created. \begin{itemize} \item Mortality and growth parameters by year after adjustments @@ -279,11 +279,11 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Growth Morph Indexing \begin{itemize} - \item This block shows the internal index values for various quantities. It can be a useful reference for complex model setups. The vocabulary is: Bio\_Pattern refers to a collection of cohorts with the same defined growth and natural mortality parameters; sex is the next main index. If recruitment occurs in multiple seasons, then birth season is the index for that factor. The index labeled ``Platoon'' is used as a continuous index across all the other factor-specific indices. If sub-platoons are used, they are nested within the Bio\_Pattern x Sex x Birth Season platoon. However, some of the output tables use the column label ``platoon'' as a continuous index across platoons and sub-platoons. Note that there is no index here for area. Each of the cohorts is distributed across areas and they retain their biological characteristics as they move among areas. + \item This block shows the internal index values for various quantities. It can be a useful reference for complex model setups. The vocabulary is: Bio\_Pattern refers to a collection of cohorts with the same defined growth and natural mortality parameters; sex is the next main index. If recruitment occurs in multiple seasons, then birth season is the index for that factor. The index labeled ``Platoon'' is used as a continuous index across all the other factor-specific indices. If sub-platoons are used, they are nested within the Bio\_Pattern x Sex x Birth Season platoon. However, some of the output tables use the column label ``platoon'' as a continuous index across platoons and sub-platoons. Note that there is no index here for area. Each of the cohorts is distributed across areas and they retain their biological characteristics as they move among areas. \end{itemize} \item Size Frequency Translation \begin{itemize} - \item If the generalized size frequency approach is used, this block shows the translation probabilities between population length bins and the units of the defined size frequency method. If the method uses body weight as the accumulator, then output is in corresponding units. + \item If the generalized size frequency approach is used, this block shows the translation probabilities between population length bins and the units of the defined size frequency method. If the method uses body weight as the accumulator, then output is in corresponding units. \end{itemize} \item Movement \begin{itemize} @@ -365,7 +365,7 @@ \subsection{Main Output File, Report.sso} \item F at Age \item Catch at Age \begin{itemize} - \item The output is shown for each fleet. It is not necessary to show by area because each fleet operates in only one area. + \item The output is shown for each fleet. It is not necessary to show by area because each fleet operates in only one area. 
\end{itemize} \item Discard at Age \item Biology @@ -381,7 +381,7 @@ \subsection{Main Output File, Report.sso} \item Seasonal Effects \item Biology at Age \begin{itemize} - \item This section shows derived size-at-age and other quantities. As of v3.30.21 sex ratio is reported by area in this output table. + \item This section shows derived size-at-age and other quantities. As of v.3.30.21 sex ratio is reported by area in this output table. \end{itemize} \item Mean Body Wt (begin) \begin{itemize} @@ -401,7 +401,7 @@ \subsection{Main Output File, Report.sso} \end{itemize} \item Composition Database \begin{itemize} - \item Contains the length composition, age composition, and mean size-at-age observed and expected values. It is arranged in a database format, rather than an array of vectors. + \item Contains the length composition, age composition, and mean size-at-age observed and expected values. It is arranged in a database format, rather than an array of vectors. \end{itemize} \item Selectivity Database \begin{itemize} diff --git a/14r4ss.tex b/14r4ss.tex index 74d0b390..1c818f09 100644 --- a/14r4ss.tex +++ b/14r4ss.tex @@ -36,15 +36,19 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss} \hline Core Functions & \Tstrut\Bstrut\\ \hline - SS\_output \Tstrut& A function to create a list object for the output from Stock Synthesis\\ - SS\_plots \Tstrut& Plot many quantities related to output from Stock Synthesis\\ + SS\_output \Tstrut & A function to create a list object for the output from Stock Synthesis \\ + SS\_plots \Tstrut & Plot many quantities related to output from Stock Synthesis \\ \hline + \multicolumn{2}{l}{Download the SS3 Executable:} \Tstrut\Bstrut\\ + \hline + get\_ss3\_exe \Tstrut & Download the latest version or a specified version of the SS3 executable \\ + \hline \multicolumn{2}{l}{Model comparisons and other diagnostics:} \Tstrut\Bstrut\\ \hline - SSsummarize \Tstrut & Read output from multiple SS3 models\\ - SStableComparison \Tstrut & Make table comparing quantities across models\\ + SSsummarize \Tstrut & Read output from multiple SS3 models \\ + SStableComparison \Tstrut & Make table comparing quantities across models \\ SSplotComparison \Tstrut & Plot output from multiple SS3 models \\ SSplotPars \Tstrut & Plot distributions of priors, posteriors, and estimates \\ SS\_profile \Tstrut & Run likelihood parameter profiles \\ @@ -52,12 +56,12 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss} PinerPlot \Tstrut & Plot fleet-specific contributions to likelihood profile \\ SS\_RunJitter \Tstrut & Run multiple model jitters to determine best model fit \\ SS\_doRetro \Tstrut & Run retrospective analysis \\ - SSmohnsrho \Tstrut & Calculate Mohn's Rho values\\ + SSmohnsrho \Tstrut & Calculate Mohn's Rho values \\ SSplotRetroRecruits \Tstrut & Make retrospective pattern of recruitment estimates (a.k.a. 
squid plot) as seen in Pacific hake assessments\Bstrut \\
SS\_fitbiasramp \Tstrut& Estimate bias adjustment for recruitment deviates \Bstrut\\
\hline
- \multicolumn{2}{l}{File manipulation for inputs:}\Tstrut\Bstrut\\
+ \multicolumn{2}{l}{File manipulation for inputs:} \Tstrut\Bstrut\\
\hline
SS\_readdat \Tstrut & Read data file \\
SS\_readctl \Tstrut & Read control file \\
@@ -77,9 +81,9 @@ \section{Using R To View Model Output (r4ss)}\label{sec:r4ss}
NegLogInt\_Fn \Tstrut& Calculate variances of time-varying parameters using the SS3 implementation of the Laplace Approximation \Bstrut\\
\hline
- \multicolumn{2}{l}{File manipulations for outputs:}\Tstrut\Bstrut\\
+ \multicolumn{2}{l}{File manipulations for outputs:} \Tstrut\Bstrut\\
\hline
- SS\_recdevs \Tstrut & Insert a vector of recruitment deviations into the control file \\
+ SS\_recdevs \Tstrut & Insert a vector of recruitment deviations into the control file \\
\hline
\end{longtable}

diff --git a/15special.tex b/15special.tex
index f9317792..948de41f 100644
--- a/15special.tex
+++ b/15special.tex
@@ -30,7 +30,7 @@ \subsubsection{Time-Varying Parameters}
\subsubsection{Time-Varying Growth Considerations}
When time-varying growth is used, there are some additional considerations to be aware of:
\begin{itemize}
- \item Growth in the forecast with time blocks: Growth deviations propagate into the forecast because growth is by cohort according to the current year's growth parameters. The user can select which growth parameters get used during the forecast by setting the end year of the last block, if using time blocks. If the last block ends in the model's end year, then the growth parameters in effect during the forecast will be the base parameters. By setting the end year of the last block to one year past the model end year (endyr), the model will continue the last block's growth parameter levels throughout the forecast.
+ \item Growth in the forecast with time blocks: Growth deviations propagate into the forecast because growth is by cohort according to the current year's growth parameters. The user can select which growth parameters get used during the forecast by setting the end year of the last block, if using time blocks. If the last block ends in the model's end year, then the growth parameters in effect during the forecast will be the base parameters. By setting the end year of the last block to one year past the model end year (endyr), the model will continue the last block's growth parameter levels throughout the forecast.
 \item The equilibrium benchmark quantities (MSY, F40\%, etc.) previously used the model end year's (endyr) body size-at-age, which is not in equilibrium. Through the forecast file, it is possible to specify a range of years over which to average the size-at-age used in the benchmark calculations. An option to create equilibrium growth from averaged growth parameters would be a more realistic option and is under consideration, but is not yet available.
% Which input in forecast?? The benchmark years input? I couldn't find this option...
% Details about a potentially better solution.
@@ -116,11 +116,11 @@ \subsection{Parameterizing the Two-Dimensional Autoregressive Selectivity}
Second, fix $\sigma_s$ at the value iteratively tuned in the previous step and estimate $\epsilon_{a,t}$. Plot both Pearson residuals and $\epsilon_{a,t}$ out on the age-year surface to check their 2D dimensions.
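As a quick, informal screen for such autocorrelation before building the stand-alone TMB model described in the next step, the lag-1 correlations of the deviation estimates can be computed directly in R. The sketch below is only a rough diagnostic aid, not the estimation approach recommended here; \texttt{dev\_mat} is a hypothetical ages $\times$ years matrix holding the extracted $\epsilon_{a,t}$ estimates.

\begin{verbatim}
# dev_mat: hypothetical matrix of selectivity deviation estimates
# extracted from the SS3 run (rows = ages, columns = years).
lag1 <- function(x) cor(x[-1], x[-length(x)])  # lag-1 correlation
rho_age  <- mean(apply(dev_mat, 2, lag1))      # across ages within a year
rho_year <- mean(apply(dev_mat, 1, lag1))      # across years within an age
round(c(rho_age = rho_age, rho_year = rho_year), 2)
\end{verbatim}

Values well away from zero in either dimension suggest that estimating $\rho_a$ and/or $\rho_t$ is worthwhile.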
If their distributions seem to be non-random but rather autocorrelated (deviation estimates have the same sign several ages and/or years in a row), users should consider estimating and then including the autocorrelations in $\epsilon_{a,t}$.
-Third, extract the estimated selectivity deviation samples from the previous step for estimating $\rho_a$ and $\rho_t$ externally by fitting the samples to a stand-alone model written in Template-Model Builder (TMB). In this model, both $\rho_a$ and $\rho_t$ are bounded between 0 and 1 via applying a logic transformation. If at least one of the two AR1 coefficients are notably different from 0, the model should be run one more time by fixing the two AR1 coefficients at their values externally estimated from deviation samples. The Pearson residuals and $\epsilon_{a,t}$ from this run are expected to distribute more randomly as the autocorrelations in selectivity deviations can be at least partially included in the 2D AR1 process.
+Third, extract the estimated selectivity deviation samples from the previous step for estimating $\rho_a$ and $\rho_t$ externally by fitting the samples to a stand-alone model written in Template Model Builder (TMB). In this model, both $\rho_a$ and $\rho_t$ are bounded between 0 and 1 via a logit transformation. If at least one of the two AR1 coefficients is notably different from 0, the model should be run one more time by fixing the two AR1 coefficients at their values externally estimated from deviation samples. The Pearson residuals and $\epsilon_{a,t}$ from this run are expected to be distributed more randomly as the autocorrelations in selectivity deviations can be at least partially included in the 2D AR1 process.

\hypertarget{continuous-seasonal-recruitment-sec}{}
\subsection{Continuous seasonal recruitment}
-Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons as years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons as years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years.
+Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons as years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons as years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years.

Other adjustments to make when using seasons as years include:
@@ -144,7 +144,7 @@ \section{Detailed Information on Stock Synthesis Processes}
\subsection{Jitter}
\hypertarget{Jitter}{}
-The jitter function has been updated with SS3.30. The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}):
+The jitter function has been updated with v.3.30. The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}):
\begin{enumerate}
 \item A normal distribution is calculated such that the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%.
\item A jitter shift value, termed ``\textit{K}'', is calculated from the distribution equal to pr(P\textsubscript{CURRENT}).
@@ -174,7 +174,7 @@ \subsection{Jitter}
\hypertarget{PriorDescrip}{}
\subsection{Parameter Priors}
-Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 is receiving additional information about the true value of the parameter. This information works with the information in the data through the overall log likelihood function to arrive at the final parameter estimate. Second, diffuse priors provide only weak information about the value of a prior and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior would then steer them towards a central value and avoid them crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad of a bound, then early in a model run it could drift into this tail and potentially get into a situation where the gradient with respect that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range where it presumably will be more influential and estimable.
+Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 is receiving additional information about the true value of the parameter. This information works with the information in the data through the overall log likelihood function to arrive at the final parameter estimate. Second, diffuse priors provide only weak information about the value of a parameter and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior would then steer them towards a central value and avoid them crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad a bound, then early in a model run it could drift into this tail and potentially get into a situation where the gradient with respect to that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range where it presumably will be more influential and estimable.
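For intuition about these penalties, the constants in the equations below are easy to reproduce outside of SS3. The following minimal R sketch transcribes the constant $\mu$ of the symmetric beta prior (Prior Type 1 below); the function name and example values are illustrative only, and the remaining terms of each negative log likelihood are given with the prior types that follow.

\begin{verbatim}
# mu constant of the symmetric beta prior, transcribed from the
# equation below; prsd = prior SD input, lb/ub = parameter bounds.
mu_symbeta <- function(prsd, lb, ub) {
  -prsd * log((ub + lb) / 2 - lb) - prsd * log(0.5)
}
mu_symbeta(prsd = 0.05, lb = 0, ub = 1)  # a very diffuse setting
\end{verbatim}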
The options for parameter priors are described as a function of $Pval$, the value of the parameter for which a prior is being calculated, as well as the parameter bounds in the case of the beta distribution ($Pmax$ and $Pmin$), and the input values for $Prior$ and $Pr\_SD$, which in some cases are the mean and standard deviation, but interpretation depends on the prior type. The Prior Likelihoods below represent the negative log likelihood in all cases.
@@ -182,7 +182,7 @@ \subsection{Parameter Priors}
Note that the numbering in SS3 v.3.30 is different from that used in SS3 v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior type, as a function of the following inputs:

\begin{tabular}{ll}
- $P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be\\
+ $P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be \\
 & the initial un-estimated value or the estimated value (3rd column in control or \\
 & control.ss\_new file) \\
 $P_\text{LB}$ & The lower bound of the parameter (1st column in control file) \\
@@ -196,7 +196,7 @@ \subsection{Parameter Priors}
 In a Bayesian context this is equivalent to a uniform prior between the parameter bounds.

 \item \textbf{Prior Type = 1 = Symmetric beta prior} \\
- The symmetric beta is scaled between parameter bounds, imposing a larger penalty near the bounds. Prior standard deviation of 0.05 is very diffuse and a value of 5.0 provides a smooth U-shaped prior. The prior input is ignored for this prior type.
+ The symmetric beta is scaled between parameter bounds, imposing a larger penalty near the bounds. A prior standard deviation of 0.05 is very diffuse and a value of 5.0 provides a smooth U-shaped prior. The prior input is ignored for this prior type.
 \begin{equation}
 \mu = -P_\text{PRSD} \cdot ln\left(\frac{P_\text{UB}+P_\text{LB}}{2} - P_\text{LB} \right) - P_\text{PRSD} \cdot ln(0.5)
 \end{equation}
@@ -216,7 +216,7 @@ \subsection{Parameter Priors}
 \end{figure}

- \item \textbf{Prior Type = 2 = Beta prior} \\
+ \item \textbf{Prior Type = 2 = Beta prior} \\
 The definition of $\mu$ is consistent with CASAL's formulation with the $\beta_\text{PR}$ and $\alpha_\text{PR}$ corresponding to the $m$ and $n$ parameters.
 \begin{equation}
 \mu = \frac{P_\text{PR}-P_\text{LB}}{P_\text{UB}-P_\text{LB}}

diff --git a/1_4sections.tex b/1_4sections.tex
index 09368f87..79d4f083 100644
@@ -73,6 +73,6 @@ \section{File Organization}\label{FileOrganization}
\section{Starting Stock Synthesis}
SS3 is typically run through the command line interface, although it can also be called from another program, R, the Stock Synthesis Interface, or a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See \hyperref[sec:RunningSS]{Running Stock Synthesis} for additional notes on methods of running SS3.
-Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run.
The file starter.ss contains required input information plus references to other required input files, as described in the \hyperref[FileOrganization]{File Organization section}. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to setup a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as the SSI, Excel, or R can search for these keywords and parse the specific information located below that keyword in the text file.
+Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run. The file starter.ss contains required input information plus references to other required input files, as described in the \hyperref[FileOrganization]{File Organization section}. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to set up a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as Excel or R, can search for these keywords and parse the specific information located below that keyword in the text file.

\pagebreak

diff --git a/5converting.tex b/5converting.tex
index e5e3d200..0f168e64 100644
--- a/5converting.tex
+++ b/5converting.tex
@@ -1,6 +1,6 @@
\hypertarget{ConvIssues}{}
\section{Converting Files from SS3 v.3.24}
-Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes 3.24 files as input and will output 3.30 input and output files. SS\_trans executables are available for v. 3.30.01 - 3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v. 3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand.
+Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes 3.24 files as input and will output 3.30 input and output files. SS\_trans executables are available for v.3.30.01 - 3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand.
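If a copy of an SS3 executable is needed for the comparison runs described in the steps below, the r4ss helper \texttt{get\_ss3\_exe} (listed in the r4ss function table earlier) can download one. A minimal sketch, assuming the \texttt{version} argument accepts a GitHub release tag; check \texttt{?get\_ss3\_exe} for the exact interface:

\begin{verbatim}
# Download an SS3 executable into the "converted" folder (sketch;
# argument names assumed). Omitting version fetches the latest release.
library(r4ss)
get_ss3_exe(dir = "converted", version = "v3.30.17")
\end{verbatim}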
The following file structure and steps are recommended for converting model files:
\begin{enumerate}
@@ -10,7 +10,7 @@ \section{Converting Files from SS3 v.3.24}
 \item Review the control (control.ss\_new) file to determine that all model functions converted correctly. The structural changes and assumptions for a couple of the advanced model features are too complicated to convert automatically. See below for some known features that may not convert. When needed, it is recommended to modify the control.ss\_new file, the converted control file, for only the features that failed to convert properly.
- \item Change the max phase to a value greater than the last phase in which the a parameter is set to estimated within the control file. Run the new SS3 v.3.30 executable (ss.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable.
+ \item Change the max phase to a value greater than the last phase in which a parameter is set to be estimated within the control file. Run the new SS3 v.3.30 executable (ss3.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable.
 \item Compare likelihood and model estimates between the SS3 v.3.24 and SS3 v.3.30 model versions.

diff --git a/6starter.tex b/6starter.tex
index e5ad5c59..65055dd4 100644
--- a/6starter.tex
+++ b/6starter.tex
@@ -14,12 +14,12 @@ \subsection{Starter File Options (starter.ss)}
\begin{longtable}{p{1.5cm} p{7.2cm} p{12.3cm}}
\hline
- \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut \\
+ \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut\\
\hline
\endfirsthead
\hline
- \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut \\
+ \textbf{Value} & \textbf{Options} & \textbf{Description} \TBstrut\\
\hline
\endhead
\endfoot
\hline
- \multicolumn{3}{ c }{ \textbf{End of Starter File}}\Tstrut\Bstrut\\
+ \multicolumn{3}{c}{\textbf{End of Starter File}} \Tstrut\Bstrut\\
\hline
\endlastfoot
@@ -58,7 +58,7 @@ \subsection{Starter File Options (starter.ss)}
 & 3 = custom output & \\
\pagebreak
- \multicolumn{2}{l}{COND: Detailed age-structure report = 3 } & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Custom report options: First value: -100 start with minimal items or -101 start with all items; Next Values: A list of items to add or remove where negative number items are removed and positive number items added, -999 to end. The \hyperlink{custom}{reporting numbers} for each item that can be selected or omitted are shown in the Report file next to each section key word.}} \Tstrut\\
+ \multicolumn{2}{l}{COND: Detailed age-structure report = 3} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Custom report options: First value: -100 start with minimal items or -101 start with all items; Next Values: A list of items to add or remove where negative number items are removed and positive number items added, -999 to end. The \hyperlink{custom}{reporting numbers} for each item that can be selected or omitted are shown in the Report file next to each section key word.}} \Tstrut\\
 \multicolumn{1}{r}{-100} & & \\
 \multicolumn{1}{r}{ -5} & & \\
 \multicolumn{1}{r}{ 9} & & \\
@@ -101,7 +101,7 @@ \subsection{Starter File Options (starter.ss)}
%\pagebreak
\hline
- 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and need to be parsed by the user into separate data files.
The output of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19, the output file names have changed; now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstrap data files (data\_boot\_x.ss where x is the bootstrap number). In versions before 3.30.19, each of these outputs was printed to a single file called data.ss\_new.}}\Tstrut\Bstrut\\
+ 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and need to be parsed by the user into separate data files. The output of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19, the output file names have changed; now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstrap data files (data\_boot\_x.ss where x is the bootstrap number). In versions before 3.30.19, each of these outputs was printed to a single file called data.ss\_new.}} \Tstrut\Bstrut\\
 & 0 = none; As of 3.30.16, none of the .ss\_new files will be produced;& \Bstrut\\
 & 1 = output an annotated replicate of the input data file; & \Tstrut\Bstrut\\
 & 2 = add a second data file containing the model's expected values with no added error; and & \Tstrut\Bstrut\\
@@ -124,7 +124,7 @@ \subsection{Starter File Options (starter.ss)}
 200 & MCMC thin interval & Number of iterations to remove between the main period of the MCMC run. \Tstrut\\
\hline
- 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with SS3 v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values and particularly -999 or 999 should not be used to define bounds for estimated parameters.}}\Tstrut\\
+ 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with SS3 v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values and particularly -999 or 999 should not be used to define bounds for estimated parameters.}} \Tstrut\\
 & 0 = no jitter done to starting values; and & \\
 & >0 starting values will vary with larger jitter values resulting in larger changes from the parameter values in the control or par file.
& \\ & & \\
@@ -189,7 +189,7 @@ \subsection{Starter File Options (starter.ss)}
\hline
%\pagebreak
- 1 & SPR report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{SPR is the equilibrium SB per recruit that would result from the current year's pattern and intensity of F's. The quantities identified by 1, 2, and 3 here are all calculated in the benchmarks section. Then the one specified here is used as the selected }} \Tstrut\\
+ 1 & SPR report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{SPR is the equilibrium SB per recruit that would result from the current year's pattern and intensity of F's. The quantities identified by 1, 2, and 3 here are all calculated in the benchmarks section. Then the one specified here is used as the selected basis.}} \Tstrut\\
 & 0 = skip; & \\
 & 1 = use 1-SPR\textsubscript{target}; & \\
 & 2 = use 1-SPR at MSY; & \Tstrut\\
@@ -214,7 +214,7 @@ \subsection{Starter File Options (starter.ss)}
\hline
%\pagebreak
- \multicolumn{2}{l}{COND: If F std reporting $\geq$ 4 } & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify range of ages. Upper age must be less than max age because of incomplete handling of the accumulator age for this calculation.}} \Tstrut\\
+ \multicolumn{2}{l}{COND: If F std reporting $\geq$ 4} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify range of ages. Upper age must be less than max age because of incomplete handling of the accumulator age for this calculation.}} \Tstrut\\
 \multicolumn{1}{r}{3 7} & Age range if F std reporting = 4. & \Tstrut\Bstrut\\
\hline
@@ -249,7 +249,6 @@ \subsection{Starter File Options (starter.ss)}
 \multicolumn{2}{l}{COND: Seed Value (i.e., 1234)}& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify a seed for data generation. This feature is not available in versions prior to 3.30.15. This is an optional input value allowing for the specification of a random number seed value. If you do not want to specify a seed, skip this input line and end the starter file with the check value (3.30).}} \Tstrut\Bstrut\\
 & & \Bstrut\\
 & & \Bstrut\\
- % & & \\
% \pagebreak
\hline

diff --git a/8data.tex b/8data.tex
index ef4ccaf6..65d7f44d 100644
--- a/8data.tex
+++ b/8data.tex
@@ -77,7 +77,7 @@ \subsubsection{Subseasons and Timing of Events}
Time steps can be broken into subseasons and the ALK can be calculated multiple times over the course of a year:
\begin{center}
- \begin{tabular}{| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| p{2.37cm}| }
+ \begin{tabular}{|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|}
\hline
ALK & ALK* & ALK* & ALK & ALK* & ALK \Tstrut\Bstrut\\
\hline
@@ -119,7 +119,7 @@ \subsection{Model Dimensions}
 & \Bstrut\\
\hline
- \#C data using new survey & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Data file comment. Must start with \#C to be retained then written to top of various output files. These comments can occur anywhere in the data file, but must have \#C in columns 1-2.}} \Tstrut\\
+ \#C data using new survey & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Data file comment. Must start with \#C to be retained then written to top of various output files. These comments can occur anywhere in the data file, but must have \#C in columns 1-2.}} \Tstrut\\
 & \Bstrut\\
\hline
@@ -132,7 +132,7 @@ \subsection{Model Dimensions}
 1 & Number of seasons per year \Tstrut\Bstrut\\
\hline
- 12 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Vector with the number of months in each season. These do not need to be integers.
Note: If the sum of this vector is close to 12.0, then it is rescaled to sum to 1.0 so that season duration is a fraction of a year. If the sum is not equal to 12.0, then the entered values are summed and rescaled to 1. So, with one season per year and 3 months per season, the calculated season duration will be 0.25, which allows a quarterly model to be run as if quarters are years. All rates in SS3 are calculated by season (growth, mortality, etc.) using annual rates and season duration.}} \Tstrut\\ + 12 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Vector with the number of months in each season. These do not need to be integers. Note: If the sum of this vector is close to 12.0, then it is rescaled to sum to 1.0 so that season duration is a fraction of a year. If the sum is not equal to 12.0, then the entered values are summed and rescaled to 1. So, with one season per year and 3 months per season, the calculated season duration will be 0.25, which allows a quarterly model to be run as if quarters are years. All rates in SS3 are calculated by season (growth, mortality, etc.) using annual rates and season duration.}} \Tstrut\\ & \\ & \\ & \\ @@ -142,12 +142,12 @@ \subsection{Model Dimensions} & \Bstrut\\ \hline - 2 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{The number of subseasons. Entry must be even and the minimum value is 2. This is for the purpose of finer temporal granularity in calculating growth and the associated age-length key.}}\Tstrut\\ + 2 & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{The number of subseasons. Entry must be even and the minimum value is 2. This is for the purpose of finer temporal granularity in calculating growth and the associated age-length key.}} \Tstrut\\ & \\ & \Bstrut\\ \hline - \hypertarget{RecrTiminig}{1.5} & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Spawning month; spawning biomass is calculated at this time of year (1.5 means January 15) and used as basis for the total recruitment of all settlement events resulting from this spawning.}}\Tstrut\\ + \hypertarget{RecrTiminig}{1.5} & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Spawning month; spawning biomass is calculated at this time of year (1.5 means January 15) and used as basis for the total recruitment of all settlement events resulting from this spawning.}} \Tstrut\\ & \\ & \Bstrut\\ @@ -158,7 +158,7 @@ \subsection{Model Dimensions} & -1 = one sex and multiply the spawning biomass by the fraction female in the control file. \Bstrut\\ \hline - 20 \Tstrut & Number of ages. The value here will be the plus-group age. SS3 starts at age 0. \\ + 20 \Tstrut & Number of ages. The value here will be the plus-group age. SS3 starts at age 0. \\ \hline 1 & Number of areas \Tstrut\Bstrut\\ @@ -170,12 +170,12 @@ \subsection{Model Dimensions} \end{center} -\subsection{Fleet Definitions } -\hypertarget{GenericFleets}{The} catch data input has been modified to improve the user flexibility to add/subtract fishing and survey fleets to a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (SS3 v.3.24 and earlier) required that fishing fleets be listed first followed by survey only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data, this will be corrected in future versions). Available types are; catch fleet, bycatch only fleet, or survey. 
+\subsection{Fleet Definitions}
+\hypertarget{GenericFleets}{The} catch data input has been modified to improve the user's flexibility to add/subtract fishing and survey fleets to a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (SS3 v.3.24 and earlier) required that fishing fleets be listed first followed by survey only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data, this will be corrected in future versions). Available types are: catch fleet, bycatch only fleet, or survey.

\begin{center}
- \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{4cm} }
- \multicolumn{6}{l}{Inputs that define the fishing and survey fleets:}\\
+ \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{4cm}}
+ \multicolumn{6}{l}{Inputs that define the fishing and survey fleets:} \\
\hline
 2 & \multicolumn{5}{l}{Number of fleets which includes survey in any order} \Tstrut\Bstrut\\
@@ -224,33 +224,33 @@ \subsection{Fleet Definitions }
\hypertarget{CatchMult}{}
\myparagraph{Catch Multiplier}
-Invokes use of a catch multiplier, which is then entered as a parameter in the mortality-growth parameter section. The estimated value or fixed value of the catch multiplier is used to adjust the observed catch:
+Invokes use of a catch multiplier, which is then entered as a parameter in the mortality-growth parameter section. The estimated value or fixed value of the catch multiplier is used to adjust the observed catch:
\begin{itemize}
 \item 0 = No catch multiplier used; and
 \item 1 = Apply a catch multiplier which is defined as an estimable parameter in the control file after the cohort growth deviation in the biology parameter section. The model's estimated retained catch will be multiplied by this factor before being compared to the observed retained catch.
\end{itemize}
-A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and has the ability to be time-varying.
+A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and has the ability to be time-varying.

\subsection{Bycatch Fleets}
-The option to include bycatch fleets was introduced in SS3 v.3.30.10. This is an optional input and if no bycatch is to be included in to the catches this section can be ignored.
+The option to include bycatch fleets was introduced in SS3 v.3.30.10. This is an optional input and, if no bycatch is to be included in the catches, this section can be ignored.
-A fishing fleet is designated as a bycatch fleet by indicating that its fleet type is 2. A bycatch fleet creates a fishing mortality, same as a fleet of type 1, but a bycatch fleet has all catch discarded so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because catch multiplier works on retained catch. SS3 will expect that the retention function for this fleet will be set in the selectivity section to type 3, indicating that all selected catch is discarded dead.
It is necessary to specify a selectivity pattern for the bycatch fleet and, due to generally lack of data, to externally derive values for the parameters of this selectivity.
+A fishing fleet is designated as a bycatch fleet by indicating that its fleet type is 2. A bycatch fleet creates a fishing mortality, same as a fleet of type 1, but a bycatch fleet has all catch discarded so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because catch multiplier works on retained catch. SS3 will expect that the retention function for this fleet will be set in the selectivity section to type 3, indicating that all selected catch is discarded dead. It is necessary to specify a selectivity pattern for the bycatch fleet and, due to a general lack of data, to externally derive values for the parameters of this selectivity.
-All catch from a bycatch fleet is discarded, so one option to use a discard fleet is to enter annual values for the amount (not proportion) that is discarded in each time step. However, it is uncommon to have such data for all years. An alternative approach that has been used principally in the U.S. Gulf of Mexico is to input a time series of effort data for this fleet in the survey section (e.g., effort is a ``survey'' of F, for example, the shrimp trawl fleet in the Gulf of Mexico catches and discards small finfish and an effort time series is available for this fleet) and to input in the discard data section an observation for the average discard over time using the super year approach. Another use of bycatch fleet is to use it to estimate effect of an external source of mortality, such as a red tide event. In this usage there may be no data on the magnitude of the discards and SS3 will then rely solely on the contrast in other data to attempt to estimate the magnitude of the red tide kill that occurred. The benefit of doing this as a bycatch fleet, and not a block on natural mortality, is that the selectivity of the effect can be specified.
+All catch from a bycatch fleet is discarded, so one option to use a discard fleet is to enter annual values for the amount (not proportion) that is discarded in each time step. However, it is uncommon to have such data for all years. An alternative approach that has been used principally in the U.S. Gulf of Mexico is to input a time series of effort data for this fleet in the survey section (e.g., effort is a ``survey'' of F, for example, the shrimp trawl fleet in the Gulf of Mexico catches and discards small finfish and an effort time series is available for this fleet) and to input in the discard data section an observation for the average discard over time using the super year approach. Another use of a bycatch fleet is to estimate the effect of an external source of mortality, such as a red tide event. In this usage there may be no data on the magnitude of the discards and SS3 will then rely solely on the contrast in other data to attempt to estimate the magnitude of the red tide kill that occurred. The benefit of doing this as a bycatch fleet, and not a block on natural mortality, is that the selectivity of the effect can be specified.
-Bycatch fleets are not expected to be under the same type of fishery management controls as the retained catch fleets included in the model.
This means that when SS3 enters into the reference point equilibrium calculations, it would be incorrect to have SS3 re-scale the magnitude of the F for the bycatch fleet as it searches for the F that produces, for example, F35\%. Related issues apply to the forecast. Consequently, a separate set of controls is provided for bycatch fleets (defined below). Input is required for each fleet designated as fleet type = 2.
+Bycatch fleets are not expected to be under the same type of fishery management controls as the retained catch fleets included in the model. This means that when SS3 enters into the reference point equilibrium calculations, it would be incorrect to have SS3 re-scale the magnitude of the F for the bycatch fleet as it searches for the F that produces, for example, F35\%. Related issues apply to the forecast. Consequently, a separate set of controls is provided for bycatch fleets (defined below). Input is required for each fleet designated as fleet type = 2.

\noindent If a fleet above was set as a bycatch fleet (fleet type = 2), the following line is required:
\begin{center}
- \begin{tabular}{p{2.25cm} p{2.65cm} p{2.25cm} p{2.5cm} p{2.5cm} p{2cm} }
- \multicolumn{6}{l}{Bycatch fleet input controls:}\\
+ \begin{tabular}{p{2.25cm} p{2.65cm} p{2.25cm} p{2.5cm} p{2.5cm} p{2cm}}
+ \multicolumn{6}{l}{Bycatch fleet input controls:} \\
\hline
- a: & b: & c: & d: & e: & f: \Tstrut\\
- Fleet Index & Include in MSY & Fmult & F or First Year & Last Year & Not used \Bstrut\\
+ a: & b: & c: & d: & e: & f: \Tstrut\\
+ Fleet Index & Include in MSY & Fmult & F or First Year & Last Year & Not used \Bstrut\\
\hline
- 2 & 2 & 3 & 1982 & 2010 & 0 \Tstrut\Bstrut\\
+ 2 & 2 & 3 & 1982 & 2010 & 0 \Tstrut\Bstrut\\
\hline
\end{tabular}
\end{center}
@@ -339,13 +339,13 @@ \subsection{Catch}
\hypertarget{ListBased}{There} is no longer a need to specify the number of records to be read; instead the list is terminated by entering a record with the value of -9999 in the year field. The updated list based approach extends throughout the data file (e.g., catch, length- and age-composition data), the control file (e.g., lambdas), and the forecast file (e.g., total catch by fleet, total catch by area, allocation groups, forecasted catch).
-In addition, it is possible to collapse the number of seasons. So, if a season value is greater than the number of seasons for a particular model, that catch is added to the catch for the final season. This is one way to easily collapse a seasonal model into an annual model. The alternative option is to the use of season = 0. This will cause SS3 to distribute the input value of catch equally among the number of seasons. SS3 assumes that catch occurs continuously over seasons and hence is not specified as month in the catch data section. However, all other data types will need to be specified by month.
+In addition, it is possible to collapse the number of seasons. So, if a season value is greater than the number of seasons for a particular model, that catch is added to the catch for the final season. This is one way to easily collapse a seasonal model into an annual model. The alternative option is the use of season = 0. This will cause SS3 to distribute the input value of catch equally among the number of seasons. SS3 assumes that catch occurs continuously over seasons and hence is not specified as month in the catch data section. However, all other data types will need to be specified by month.
-The format for a 2 season model with 2 fisheries looks like the table below.
Example is sorted by fleet, but the sort order does not matter. In data.ss\_new, the sort order is fleet, year, season.
+The format for a 2 season model with 2 fisheries looks like the table below. The example is sorted by fleet, but the sort order does not matter. In data.ss\_new, the sort order is fleet, year, season.

\begin{center}
 \begin{tabular}{p{3cm} p{3cm} p{3cm} p{3cm} p{4cm}}
- \multicolumn{5}{l}{Catches by year, season for every fleet:}\\
+ \multicolumn{5}{l}{Catches by year, season for every fleet:} \\
\hline
 Year & Season & Fleet & Catch & Catch SE \Tstrut\Bstrut\\
\hline
 -999 & 2 & 1 & 62 & 0.05 \\
 1975 & 1 & 1 & 876 & 0.05 \\
 1975 & 2 & 1 & 343 & 0.05 \\
 ... & ... & ... & ... & ... \\
 -999 & 1 & 2 & 55 & 0.05 \\
 -999 & 2 & 2 & 22 & 0.05 \\
 1975 & 1 & 2 & 555 & 0.05 \\
 1975 & 2 & 2 & 873 & 0.05 \\
 ... & ... & ... & ... & ... \\
 -9999 & 0 & 0 & 0 & 0 \Bstrut\\
\hline
 \end{tabular}
\end{center}

\begin{itemize}
 \item Catch can be in terms of biomass or numbers for each fleet, but cannot be mixed within a fleet.
- \item Catch is retained catch (aka landings). If there is discard also, then it is handled in the discard section below. This is the recommended setup which results in a model estimated retention curve based upon the discard data (specifically discard composition data). However, there may be instances where the data do not support estimation of retention curves. In these instances catches can be specified as all dead (retained + discard estimates).
+ \item Catch is retained catch (aka landings). If there is discard also, then it is handled in the discard section below. This is the recommended setup which results in a model estimated retention curve based upon the discard data (specifically discard composition data). However, there may be instances where the data do not support estimation of retention curves. In these instances catches can be specified as all dead (retained + discard estimates).
 \item If there are challenges to estimating discards within the model, catches can be input as total dead without the use of discard data and retention curves.
- \item If there is reason to believe that the retained catch values underestimate the true catch, then it is possible in the retention parameter set up to create the ability for the model to estimate the degree of unrecorded catch. However, this is better handled with the new catch multiplier option.
+ \item If there is reason to believe that the retained catch values underestimate the true catch, then it is possible in the retention parameter setup to allow the model to estimate the degree of unrecorded catch. However, this is better handled with the new catch multiplier option.
\end{itemize}

\subsection{Indices}
-Indices are data that are compared to aggregate quantities in the model.
Typically the index is a measure of selected fish abundance, but this data section also allows for the index to be related to a fishing fleet's F, or to another quantity estimated by the model. The first section of the ``Indices'' setup contains the fleet number, units, error distribution, and whether additional output (SD Report) will be written to the Report file for each fleet that has index data.
 
 \begin{center}
 \begin{tabular}{p{3cm} p{3cm} p{3cm} p{7cm}}
-  \multicolumn{4}{l}{Catch-per-unit-effort (CPUE) and Survey Abundance Observations:}\\
+  \multicolumn{4}{l}{Catch-per-unit-effort (CPUE) and Survey Abundance Observations:} \\
   \hline
-  Fleet/ & & Error & \Tstrut\\
-  Survey & Units & Distribution & SD Report \Bstrut\\
+  Fleet/ & & Error & \Tstrut\\
+  Survey & Units & Distribution & SD Report \Bstrut\\
   \hline
   1 & 1 & 0 & 0 \Tstrut\\
   2 & 1 & 0 & 0 \\
@@ -465,10 +465,10 @@ \subsection{Indices}
  \item If the fishery or survey has time-varying selectivity, then this changing selectivity will be taken into account when calculating expected values for the CPUE or survey index.
  \item Year values that are before the start year or after the end year are excluded from the model, so the easiest way to include provisional data in a data file is to put a negative sign on its year value.
  \item Duplicate survey observations for the same year are not allowed.
-  \item Observations that are to be included in the model but not included in the negative log likelihood need to have a negative sign on their fleet ID. Previously the code for not using observations was to enter the observation itself as a negative value. However, that old approach prevented use of a Z-score environmental index as a ``survey''. This approach is best for single or select years from an index rather than an approach to remove a whole index. Removing a whole index from the model should be done through the use of lambdas at the bottom of the control file which will eliminate the index from model fitting.
+  \item Observations that are to be included in the model but not included in the negative log likelihood need to have a negative sign on their fleet ID. Previously, the code for not using observations was to enter the observation itself as a negative value; however, that old approach prevented use of a Z-score environmental index as a ``survey''. This approach is best for excluding single or select years from an index rather than for removing a whole index. Removing a whole index from the model should be done through the use of lambdas at the bottom of the control file, which will eliminate the index from model fitting.
  \item Observations can be entered in any order, except if the super-year feature is used.
-  \item Super-periods are turned on and then turned back off again by putting a negative sign on the season. Previously, super-periods were started and stopped by entering -9999 and the -9998 in the SE field. See the \hyperlink{SuperPeriod}{Data Super-Period} section of this manual for more information.
-  \item If the statistical analysis used to create the CPUE index of a fishery has been conducted in such a way that its inherent size/age selectivity differs from the size/age selectivity estimated from the fishery's size and age composition, then you may want to enter the CPUE as if it was a separate survey and with a selectivity that differs from the fishery's estimated selectivity.
The need for this split arises because the fishery size and age composition should be derived through a catch-weighted approach (to appropriately represent the removals by the fishery) and the CPUE should be derived through an area-weighted approach to better serve as a survey of stock abundance.
+  \item Super-periods are turned on and then turned back off again by putting a negative sign on the season. Previously, super-periods were started and stopped by entering -9999 and then -9998 in the SE field. See the \hyperlink{SuperPeriod}{Data Super-Period} section of this manual for more information.
+  \item If the statistical analysis used to create the CPUE index of a fishery has been conducted in such a way that its inherent size/age selectivity differs from the size/age selectivity estimated from the fishery's size and age composition, then you may want to enter the CPUE as if it were a separate survey and with a selectivity that differs from the fishery's estimated selectivity. The need for this split arises because the fishery size and age composition should be derived through a catch-weighted approach (to appropriately represent the removals by the fishery) and the CPUE should be derived through an area-weighted approach to better serve as a survey of stock abundance.
 \end{itemize}
 
 \subsection{Discard}
@@ -487,13 +487,13 @@ \subsection{Discard}
 \begin{center}
 \begin{tabular}{p{2cm} p{3cm} p{3cm} p{3cm} p{3cm}}
   \hline
-  1 & \multicolumn{4}{l}{Number of fleets with discard observations}\Tstrut\Bstrut\\
+  1 & \multicolumn{4}{l}{Number of fleets with discard observations} \Tstrut\Bstrut\\
   \hline
-  Fleet & Units & \multicolumn{3}{l}{Error Distribution}\Tstrut\Bstrut\\
+  Fleet & Units & \multicolumn{3}{l}{Error Distribution} \Tstrut\Bstrut\\
   \hline
-  1 & 2 & \multicolumn{3}{l}{-1}\Tstrut\Bstrut\\
+  1 & 2 & \multicolumn{3}{l}{-1} \Tstrut\Bstrut\\
   \hline
-  Year & Month & Fleet & Observation & Standard Error \Tstrut\Bstrut\\
+  Year & Month & Fleet & Observation & Standard Error \Tstrut\Bstrut\\
   \hline
   1980 & 7 & 1 & 0.05 & 0.25 \Tstrut\\
   1991 & 7 & 1 & 0.10 & 0.25 \\
@@ -502,7 +502,7 @@ \subsection{Discard}
 \end{tabular}
 \end{center}
 
-Note that although the user must specify a month for the observed discard data, the unit for discard data is in terms of a season rather than a specific month. So, if using a seasonal model, the input month values must corresponding to some time during the correct season. The actual value will not matter because the discard amount is calculated for the entirety of the season. However, discard length or age observations will be treated by entered observation month.
+Note that although the user must specify a month for the observed discard data, the unit for discard data is in terms of a season rather than a specific month. So, if using a seasonal model, the input month values must correspond to some time during the correct season. The actual value will not matter because the discard amount is calculated for the entirety of the season. However, discard length or age observations will be treated according to the entered observation month.
 
 \myparagraph{Discard Units}
 The options are:
@@ -515,11 +515,11 @@ \subsection{Discard}
 \myparagraph{Discard Error Distribution}
 The options for discard error are:
 \begin{itemize}
-  \item >0 = degrees of freedom for Student's t-distribution used to scale mean body weight deviations.
Value of error in data file is interpreted as CV of the observation; + \item >0 = degrees of freedom for Student's t-distribution used to scale mean body weight deviations. Value of error in data file is interpreted as CV of the observation; \item 0 = normal distribution, value of error in data file is interpreted as CV of the observation; \item -1 = normal distribution, value of error in data file is interpreted as standard error of the observation; \item -2 = lognormal distribution, value of error in data file is interpreted as standard error of the observation in log space; and - \item -3 = truncated normal distribution (new with SS3 v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates. + \item -3 = truncated normal distribution (new with SS3 v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates. \end{itemize} \myparagraph{Discard Notes} @@ -529,27 +529,27 @@ \subsection{Discard} \item Zero (0.0) is a legitimate discard observation, unless lognormal error structure is used. \item Duplicate discard observations from a fleet for the same year are not allowed. \item Observations can be entered in any order, except if the super-period feature is used. - \item Note that in the control file you will enter information for retention such that 1-retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with SS3 v.3.30). + \item Note that in the control file you will enter information for retention such that 1-retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with SS3 v.3.30). \end{itemize} \myparagraph{Cautionary Note} -The use of CV as the measure of variance can cause a small discard value to appear to be overly precise, even with the minimum standard error of the discard observation set to 0.001. In the control file, there is an option to add an extra amount of variance. This amount is added to the standard error, not to the CV, to help correct this problem of underestimated variance. +The use of CV as the measure of variance can cause a small discard value to appear to be overly precise, even with the minimum standard error of the discard observation set to 0.001. In the control file, there is an option to add an extra amount of variance. This amount is added to the standard error, not to the CV, to help correct this problem of underestimated variance. \subsection{Mean Body Weight or Length} -This is the overall mean body weight or length across all selected sizes and ages. This may be useful in situations where individual fish are not measured but mean weight is obtained by counting the number of fish in a specified sample, e.g., a 25 kg basket. +This is the overall mean body weight or length across all selected sizes and ages. This may be useful in situations where individual fish are not measured but mean weight is obtained by counting the number of fish in a specified sample (e.g., a 25 kg basket). 
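+For example (an illustrative calculation only, with hypothetical numbers): if a standardized 25 kg basket is counted and found to contain 31 fish, the value entered as the observation would be the mean body weight
+\[ \bar{w} = \frac{25\text{ kg}}{31\text{ fish}} \approx 0.81\text{ kg}. \]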
\begin{center}
 \begin{tabular}{p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{2cm} p{2.8cm}}
-  \multicolumn{7}{l}{Mean Body Weight Data Section:}\\
+  \multicolumn{7}{l}{Mean Body Weight Data Section:} \\
   \hline
-  1 & \multicolumn{6}{l}{Use mean body size data (0/1) } \Tstrut\Bstrut\\
+  1 & \multicolumn{6}{l}{Use mean body size data (0/1)} \Tstrut\Bstrut\\
   \hline
   \multicolumn{7}{l}{COND > 0:}\Tstrut\\
-  30 & \multicolumn{6}{l}{Degrees of freedom for Student's t-distribution used to evaluate mean body weight } \\
-  & \multicolumn{6}{l}{deviation.}\Bstrut\\
+  30 & \multicolumn{6}{l}{Degrees of freedom for Student's t-distribution used to evaluate mean body weight} \\
+  & \multicolumn{6}{l}{deviation.} \Bstrut\\
   \hline
-  Year & Month & Fleet & Partition & Type & Observation & CV\Tstrut\Bstrut\\
+  Year & Month & Fleet & Partition & Type & Observation & CV \Tstrut\Bstrut\\
   \hline
   1990 & 7 & 1 & 0 & 1 & 4.0 & 0.95 \Tstrut\\
   1990 & 7 & 1 & 0 & 1 & 1.0 & 0.95 \\
@@ -575,29 +575,29 @@ \subsection{Mean Body Weight or Length}
 \end{itemize}
 
 \myparagraph{Observation - Units}
-Units must correspond to the units of body weight, normally in kilograms, (or mean length in cm). The expected value of mean body weight (or mean length) is calculated in a way that incorporates effect of selectivity and retention.
+Units must correspond to the units of body weight, normally in kilograms (or mean length in cm). The expected value of mean body weight (or mean length) is calculated in a way that incorporates the effects of selectivity and retention.
 
 \myparagraph{Error}
 Error is entered as the CV of the observed mean body weight (or mean length).
 
 \subsection{Population Length Bins}
-The first part of the length composition section sets up the bin structure for the population. These bins define the granularity of the age-length key and the coarseness of the length selectivity. Fine bins create smoother distributions, but a larger and slower running model.
+The first part of the length composition section sets up the bin structure for the population. These bins define the granularity of the age-length key and the coarseness of the length selectivity. Fine bins create smoother distributions, but a larger and slower-running model.
 
 First read a single value to select one of three population length bin methods, then any conditional input for options 2 and 3:
 \begin{center}
 \begin{tabular}{p{2cm} p{5cm} p{8cm}}
   \hline
-  1 & \multicolumn{2}{l}{Use data bins to be read later. No additional input here.} \Tstrut\Bstrut\\
+  1 & \multicolumn{2}{l}{Use data bins to be read later. No additional input here.} \Tstrut\Bstrut\\
   \hline
   2 & \multicolumn{2}{l}{generate from bin width min max, read next:} \Tstrut\\
   \multirow{4}{2cm}[-0.1cm]{} & 2 & Bin width \\
-  & 10 & Lower size of first bin\\
-  & 82 & Lower size of largest bin\\
+  & 10 & Lower size of first bin \\
+  & 82 & Lower size of largest bin \\
   \multicolumn{3}{l}{The number of bins is then calculated from: (max Lread - min Lread)/(bin width) + 1}\Bstrut\\
   \hline
   3 & \multicolumn{2}{l}{Read 1 value for number of bins, and then read vector of bin boundaries} \Tstrut\\
-  \multirow{2}{2cm}[-0.1cm]{} & 37 & Number of population length bins to be read\\
-  & 10 12 14 ... 82 & Vector containing lower edge of each population size bin \Bstrut\\
+  \multirow{2}{2cm}[-0.1cm]{} & 37 & Number of population length bins to be read \\
+  & 10 12 14 ...
82 & Vector containing lower edge of each population size bin \Bstrut\\ \hline \end{tabular} @@ -606,42 +606,42 @@ \subsection{Population Length Bins} \myparagraph{Notes} There are some items for users to consider when setting up population length bins: \begin{itemize} - \item For option 2, bin width should be a factor of min size and max size. For options 2 and 3, the data length bins must not be wider than the population length bins and the boundaries of the bins do not have to align. The transition matrix between population and data length bins is output to echoinput.sso. + \item For option 2, bin width should be a factor of min size and max size. For options 2 and 3, the data length bins must not be wider than the population length bins and the boundaries of the bins do not have to align. The transition matrix between population and data length bins is output to echoinput.sso. \item The mean size at settlement (virtual recruitment age) is set equal to the min size of the first population length bin. \item When using more, finer population length bins, the model will create smoother length selectivity curves and smoother length distributions in the age-length key, but run more slowly (more calculations to do). - \item The mean weight-at-length, maturity-at-length and size-selectivity are based on the mid-length of the population bins. So these quantities will be rougher approximations if broad bins are defined. + \item The mean weight-at-length, maturity-at-length and size-selectivity are based on the mid-length of the population bins. So these quantities will be rougher approximations if broad bins are defined. - \item Provide a wide enough range of population size bins so that the mean body weight-at-age will be calculated correctly for the youngest and oldest fish. If the growth curve extends beyond the largest size bin, then these fish will be assigned a length equal to the mid-bin size for the purpose of calculating their body weight. + \item Provide a wide enough range of population size bins so that the mean body weight-at-age will be calculated correctly for the youngest and oldest fish. If the growth curve extends beyond the largest size bin, then these fish will be assigned a length equal to the mid-bin size for the purpose of calculating their body weight. - \item While exploring the performance of models with finer bin structure, a potentially pathological situation has been identified. When the bin structure is coarse (note that some applications have used 10 cm bin widths for the largest fish), it is possible for a selectivity slope parameter or a retention parameter to become so steep that all of the action occurs within the range of a single size bin. In this case, the model will see zero gradient of the log likelihood with respect to that parameter and convergence will be hampered. + \item While exploring the performance of models with finer bin structure, a potentially pathological situation has been identified. When the bin structure is coarse (note that some applications have used 10 cm bin widths for the largest fish), it is possible for a selectivity slope parameter or a retention parameter to become so steep that all of the action occurs within the range of a single size bin. In this case, the model will see zero gradient of the log likelihood with respect to that parameter and convergence will be hampered. - \item A value read near the end of the starter.ss file defines the degree of tail compression used for the age-length key, called ALK tolerance. 
If this is set to 0.0, then no compression is used and all cells of the age-length key are processed, even though they may contain trivial (e.g., 1 e-13) fraction of the fish at a given age. With tail compression of, say 0.0001, the model, at the beginning of each phase, will calculate the min and max length bin to process for each age of each morphs ALK and compress accordingly. Depending on how many extra bins are outside this range, you may see speed increases near 10-20\%. Large values of ALK tolerance, say 0.1, will create a sharp end to each distribution and likely will impede convergence. It is recommended to start with a value of 0 and if model speed is an issue, explore values greater than 0 and evaluate the trade-off between model estimates and run time. The user is encouraged to explore this feature.
+  \item A value read near the end of the starter.ss file defines the degree of tail compression used for the age-length key, called ALK tolerance. If this is set to 0.0, then no compression is used and all cells of the age-length key are processed, even though they may contain a trivial (e.g., 1e-13) fraction of the fish at a given age. With tail compression of, say, 0.0001, the model, at the beginning of each phase, will calculate the min and max length bin to process for each age of each morph's ALK and compress accordingly. Depending on how many extra bins are outside this range, you may see speed increases near 10-20\%. Large values of ALK tolerance, say 0.1, will create a sharp end to each distribution and likely will impede convergence. It is recommended to start with a value of 0 and, if model speed is an issue, explore values greater than 0 and evaluate the trade-off between model estimates and run time. The user is encouraged to explore this feature.
 \end{itemize}
 
 \subsection{Length Composition Data Structure}
 
 \begin{tabular}{p{2cm} p{14cm}}
-  \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used:\Tstrut\Bstrut}\\
+  \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used: \Tstrut\Bstrut}\\
   \hline
-  1 & Use length composition data (0/1/2)\Tstrut\Bstrut\\
+  1 & Use length composition data (0/1/2) \Tstrut\Bstrut\\
   \hline
 \end{tabular}
 
-If the value 0 is entered, then skip all length related inputs below and skip to the age data setup section. If value 1 is entered, all data weighting options for composition data apply equally to all partitions within a fleet. If value 2 is entered, then the data weighting options are applied by the partition specified. Note that the partitions must be entered in numerical order within each fleet.
+If the value 0 is entered, then skip all length-related inputs below and skip to the age data setup section. If value 1 is entered, all data weighting options for composition data apply equally to all partitions within a fleet. If value 2 is entered, then the data weighting options are applied by the partition specified. Note that the partitions must be entered in numerical order within each fleet.
 
 If the value for fleet is negative, then the vector of inputs is copied to all partitions (0 = combined, 1 = discard, and 2 = retained) for that fleet and all higher numbered fleets. This is a good practice so that the user controls the values used for all fleets.
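+As a rough sketch, the settings in the first example table below would appear as raw rows in a data file as follows (the header comment and spacing are illustrative only):
+\begin{verbatim}
+# mintailcomp addtocomp combM+F CompressBins CompError ParmSelect minsamplesize
+0 0.0001 0 0 0 0 0.1
+0 0.0001 0 0 1 1 0.1
+\end{verbatim}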
\begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{1.5cm} p{1.7cm}}
-  \multicolumn{7}{l}{Example table of length composition settings when ``Use length composition data'' = 1 (where here }\\
-  \multicolumn{7}{l}{the first fleet has multinomial error structure with no associated parameter, and the second fleet}\\
-  \multicolumn{7}{l}{uses Dirichlet-multinomial structure):}\\
+  \multicolumn{7}{l}{Example table of length composition settings when ``Use length composition data'' = 1 (where here} \\
+  \multicolumn{7}{l}{the first fleet has multinomial error structure with no associated parameter, and the second fleet} \\
+  \multicolumn{7}{l}{uses Dirichlet-multinomial structure):} \\
   \hline
-  Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\
-  Tail & added & males \& & Compress. & Error & Param. & Sample\\
-  Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\
+  Min. & Constant & Combine & & Comp. & & Min. \Tstrut\\
+  Tail & added & males \& & Compress. & Error & Param. & Sample \\
+  Compress. & to prop. & females & Bins & Dist. & Select & Size \Bstrut\\
   \hline
   0 & 0.0001 & 0 & 0 & 0 & 0 & 0.1 \Tstrut\\
   0 & 0.0001 & 0 & 0 & 1 & 1 & 0.1 \Bstrut\\
@@ -650,13 +650,13 @@ \subsection{Length Composition Data Structure}
 
 \begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}}
-  \multicolumn{9}{l}{Example table of length composition settings when ``Use length composition data'' = 2 (where here}\\
-  \multicolumn{9}{l}{the -1 in the fleet column applies the first parameter to all partitions for fleet 1 while fleet 2 has}\\
-  \multicolumn{9}{l}{separate parameters for discards and retained fish):}\\
+  \multicolumn{9}{l}{Example table of length composition settings when ``Use length composition data'' = 2 (where here} \\
+  \multicolumn{9}{l}{the -1 in the fleet column applies the first parameter to all partitions for fleet 1 while fleet 2 has} \\
+  \multicolumn{9}{l}{separate parameters for discards and retained fish):} \\
   \hline
-  & & Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\
-  & & Tail & added & males \& & Compress. & Error & Param. & Sample\\
-  Fleet & Partition & Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\
+  & & Min. & Constant & Combine & & Comp. & & Min. \Tstrut\\
+  & & Tail & added & males \& & Compress. & Error & Param. & Sample \\
+  Fleet & Partition & Compress. & to prop. & females & Bins & Dist. & Select & Size \Bstrut\\
   \hline
   -1 & 0 & 0 & 0.0001 & 0 & 0 & 1 & 1 & 0.1 \Tstrut\\
   2 & 1 & 0 & 0.0001 & 0 & 0 & 1 & 2 & 0.1 \\
@@ -672,13 +672,13 @@ \subsection{Length Composition Data Structure}
 
 Compress tails of the composition until the observed proportion is greater than this value; a negative value causes no compression. Using no compression is advised if data are very sparse, especially if the set-up uses age composition within length bins, because of the sparseness of these data.
 
 \myparagraph{Added Constant to Proportions}
-Constant added to observed and expected proportions at length and age to make logL calculations more robust. Tail compression occurs before adding this constant. Proportions are renormalized to sum to 1.0 after constant is added.
+Constant added to observed and expected proportions at length and age to make logL calculations more robust. Tail compression occurs before adding this constant. Proportions are renormalized to sum to 1.0 after the constant is added.
 
 \myparagraph{Combine Males \& Females}
-Combine males into females at or below this bin number.
This is useful if the sex determination of very small fish is doubtful so allows the small fish to be treated as combined sex. If Combine Males \& Females > 0, then add males into females for bins 1 through this number, zero out the males, set male data to start at the first bin above this bin. Note that Combine Males \& Females > 0 is entered as a bin index, not as the size associated with that bin. Comparable option is available for age composition data.
+Combine males into females at or below this bin number. This is useful if the sex determination of very small fish is doubtful, as it allows the small fish to be treated as combined sex. If Combine Males \& Females > 0, then males are added into females for bins 1 through this number, the male bins are zeroed out, and the male data start at the first bin above this bin. Note that Combine Males \& Females > 0 is entered as a bin index, not as the size associated with that bin. A comparable option is available for age composition data.
 
 \myparagraph{Compress Bins}
-This option allows for the compression of length or age bins beyond a specific length or age by each data source. As an example, a value of 5 in the compress bins column would condense the final five length bins for the specified data source.
+This option allows for the compression of length or age bins beyond a specific length or age by each data source. As an example, a value of 5 in the compress bins column would condense the final five length bins for the specified data source.
 
 \myparagraph{Composition Error Distribution}
 The options are:
@@ -710,11 +710,11 @@ \subsection{Length Composition Data Structure}
 The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. SS3 v.3.24 had an implicit minimum sample size value of 1.
 
 \myparagraph{Additional information on Dirichlet Parameter Number and Effective Sample Sizes}
-If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. So each fleet could use a unique Dirichlet-multinomial parameter, or all could share the same, or any combination of unique and shared. The requested number of Dirichlet-multinomial parameters are specified as parameter lines in the control file immediately after the selectivity parameter section. Please note that age-compositions Dirichlet-multinomial parameters are continued after length-compositions, so a model with one fleet and both data types would presumably require two new Dirichlet-multinomial parameters.
+If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. So each fleet could use a unique Dirichlet-multinomial parameter, or all could share the same one, or any combination of unique and shared. The requested Dirichlet-multinomial parameters are specified as parameter lines in the control file immediately after the selectivity parameter section. Please note that age-composition Dirichlet-multinomial parameters are numbered continuing after the length-composition parameters, so a model with one fleet and both data types would presumably require two new Dirichlet-multinomial parameters.
 
-The Dirichlet estimates the effective sample size as $N_{eff}=\frac{1}{1+\theta}+\frac{N\theta}{1+\theta}$ where $\theta$ is the estimated parameter and $N$ is the input sample size.
Stock Synthesis estimates the log of the Dirichlet-multinomial parameter such that $\hat{\theta}_{\text{fishery}} = e^{-0.6072} = 0.54$ where assuming $N=100$ for the fishery would result in an effective sample size equal to 35.7.
+The Dirichlet estimates the effective sample size as $N_{eff}=\frac{1}{1+\theta}+\frac{N\theta}{1+\theta}$ where $\theta$ is the estimated parameter and $N$ is the input sample size. Stock Synthesis estimates the log of the Dirichlet-multinomial parameter; for example, $\hat{\theta}_{\text{fishery}} = e^{-0.6072} = 0.54$, which, assuming $N=100$ for the fishery, would result in an effective sample size equal to 35.7.
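+As a quick check of that arithmetic with the values above ($\theta = 0.54$, $N = 100$):
+\[ N_{eff}=\frac{1}{1+0.54}+\frac{100 \times 0.54}{1+0.54} \approx 0.65 + 35.06 \approx 35.7 \]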
 
-This formula for effective sample size implies that, as the Stock Synthesis parameter ln(DM\_theta) goes to large values (i.e., 20), then the adjusted sample size will converge to the input sample size. In this case, small changes in the value of the ln(DM\_theta) parameter has no action, and the derivative of the negative log-likelihood is zero with respect to the parameter, which means the Hessian will be singular and cannot be inverted. To avoid this non-invertible Hessian when the ln(DM\_theta) parameter becomes large, turn it off while fixing it at the high value. This is equivalent to turning off down-weighting of fleets where evidence suggests that the input sample sizes are reasonable.
+This formula for effective sample size implies that, as the Stock Synthesis parameter ln(DM\_theta) goes to large values (e.g., 20), the adjusted sample size will converge to the input sample size. In this case, small changes in the value of the ln(DM\_theta) parameter have no effect, and the derivative of the negative log-likelihood is zero with respect to the parameter, which means the Hessian will be singular and cannot be inverted. To avoid this non-invertible Hessian when the ln(DM\_theta) parameter becomes large, turn it off while fixing it at the high value. This is equivalent to turning off down-weighting of fleets where evidence suggests that the input sample sizes are reasonable.
 
 For additional information about the Dirichlet-multinomial please see \citet{thorson-model-based-2017} and the detailed \hyperlink{DataWeight}{Data Weighting} section.
 
@@ -722,15 +722,15 @@ \subsection{Length Composition Data Structure}
 \subsection{Length Composition Data}
 
 Composition data can be entered as proportions, numbers, or values of observations by length bin based on data expansions.
-The data bins do not need to cover all observed lengths. The selection of data bin structure should be based on the observed distribution of lengths and the assumed growth curve. If growth asymptotes at larger lengths, having additional length bins across these sizes may not contribute information to the model and may slow model run time. Additionally, the lower length bin selection should be selected such that, depending on the size selection, to allow for information on smaller fish and possible patterns in recruitment. While set separately users should ensure that the length and age bins align. It is recommended to explore multiple configurations of length and age bins to determine the impact of this choice on model estimation.
+The data bins do not need to cover all observed lengths. The selection of data bin structure should be based on the observed distribution of lengths and the assumed growth curve. If growth asymptotes at larger lengths, having additional length bins across these sizes may not contribute information to the model and may slow model run time. Additionally, the lower length bins should be chosen, depending on the size selection, to allow for information on smaller fish and possible patterns in recruitment. While set separately, users should ensure that the length and age bins align. It is recommended to explore multiple configurations of length and age bins to determine the impact of this choice on model estimation.
 
 Specify the length composition data as:
 \begin{center}
 \begin{tabular}{p{4cm} p{10cm}}
   \hline
-  28 & Number of length bins for data \\
+  28 & Number of length bins for data \\
   \hline
-  26 28 30 ... 80 & Vector of length bins associated with the length data\\
+  26 28 30 ... 80 & Vector of length bins associated with the length data \\
   \hline
 \end{tabular}
 \end{center}
@@ -752,10 +752,10 @@ \subsection{Length Composition Data}
 \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{5cm}}
  \multicolumn{7}{l}{} \\
  \hline
-  Year & Month & Fleet & Sex & Partition & Nsamp & data vector\Tstrut\Bstrut\\
+  Year & Month & Fleet & Sex & Partition & Nsamp & data vector \Tstrut\Bstrut\\
  \hline
  1986 & 1 & 1 & 3 & 0 & 20 & \Tstrut\\
-  ... & ...& ... & ... & ...& ... & ... \\
+  ... & ... & ... & ... & ... & ... & ... \\
  -9999 & 0 & 0 & 0 & 0 & 0 & <0 repeated for each element of the data vector above> \Bstrut\\
  \hline
\end{tabular}
@@ -790,18 +790,18 @@ \subsection{Length Composition Data}
 \myparagraph{Note}
 When processing data to be input into SS3, all observed fish of sizes smaller than the first bin should be added to the first bin and all observed fish larger than the last bin should be condensed into the last bin.
 
-The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in SS3 v.3.30, the model will continue to read length composition data until an pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 indicates to the model the end of length composition lines to be read.
+The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in SS3 v.3.30, the model will continue to read length composition data until a pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 indicates to the model the end of the length composition lines to be read.
 
-Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector and the male vector could each be on a separate line because ADMB reads values consecutively from the input file and will move to the next line as necessary to read additional values.
+Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector, and the male vector could each be on a separate line because ADMB reads values consecutively from the input file and will move to the next line as necessary to read additional values.
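+For instance, the 1986 observation in the example table above could equivalently be entered across several lines like this (a sketch only; the data vector values are hypothetical):
+\begin{verbatim}
+1986 1 1 3 0 20   # year month fleet sex partition Nsamp
+0 0 1 2 5 8 ...   # female vector, one value per data length bin
+0 1 1 3 4 6 ...   # male vector, one value per data length bin
+\end{verbatim}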
-The composition observations can be in any order and replicate observations by a year for a fleet are allowed (unlike survey and discard data). However, if the super-period approach is used, then each super-periods' observations must be contiguous in the data file.
+The composition observations can be in any order and replicate observations within a year for a fleet are allowed (unlike survey and discard data). However, if the super-period approach is used, then each super-period's observations must be contiguous in the data file.
 
 \subsection{Age Composition Option}
-The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, then skips reading the bin structure and all reading of other age composition data inputs.
+The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, then the model skips reading the bin structure and skips all other age composition data inputs.
 \begin{center}
 \begin{tabular}{p{3cm} p{13cm}}
   \hline
-  17 \Tstrut & Number of age bins; can be equal to 0 if age data are not used; do not include a vector of agebins if the number of age bins is set equal to 0.\Bstrut\\
+  17 \Tstrut & Number of age bins; can be equal to 0 if age data are not used; do not include a vector of agebins if the number of age bins is set equal to 0. \Bstrut\\
   \hline
 \end{tabular}
 \end{center}
@@ -812,43 +812,43 @@ \subsubsection{Age Composition Bins}
 \begin{center}
 \begin{tabular}{p{3cm} p{13cm}}
   \hline
-  1 2 3 ... 20 25 & Vector of ages\Tstrut\Bstrut\\
+  1 2 3 ... 20 25 & Vector of ages \Tstrut\Bstrut\\
   \hline
 \end{tabular}
 \end{center}
 
-The bins are in terms of observed age (here age) and entered as the lower edge of each bin. Each ageing imprecision definition is used to create a matrix that translates true age structure into age structure. The first and last age' bins work as accumulators. So in the example any age 0 fish that are caught would be assigned to the age = 1 bin.
+The bins are in terms of observed age (here age') and entered as the lower edge of each bin. Each ageing imprecision definition is used to create a matrix that translates true age structure into age' structure. The first and last age' bins work as accumulators. So in the example, any age 0 fish that are caught would be assigned to the age' = 1 bin.
 
 \subsubsection{Ageing Error}
 Here, the capability to create a distribution of age (e.g., age with possible bias and imprecision) from true age is created. One or many ageing error definitions can be created. For each, the model will expect an input vector of mean age and a vector of standard deviations associated with the mean age.
 
 \begin{center}
-  \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{3.5cm} p{2.5cm} }
+  \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{3.5cm} p{2.5cm}}
  \hline
-  \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate}\Tstrut\Bstrut\\
+  \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate} \Tstrut\Bstrut\\
-  \hline\\
+  \hline \\
-  \multicolumn{6}{l}{Example with no bias and very little uncertainty at age:}\Tstrut\Bstrut\\
+  \multicolumn{6}{l}{Example with no bias and very little uncertainty at age:} \Tstrut\Bstrut\\
  \hline
-  Age-0 & Age-1 & Age-2 & ... & Max Age & \Tstrut\Bstrut\\
+  Age-0 & Age-1 & Age-2 & ... & Max Age & \Tstrut\Bstrut\\
  \hline
-  -1 & -1 & -1 & ... & -1 & \#Mean Age\Tstrut\\
-  0.001 & 0.001 & 0.001 & ... & 0.001 & \#SD\Bstrut\\
+  -1 & -1 & -1 & ... & -1 & \#Mean Age \Tstrut\\
+  0.001 & 0.001 & 0.001 & ...
& 0.001 & \#SD \Bstrut\\
-  \hline\\
+  \hline \\
-  \multicolumn{6}{l}{Example with no bias and some uncertainty at age:}\Tstrut\Bstrut\\
+  \multicolumn{6}{l}{Example with no bias and some uncertainty at age:} \Tstrut\Bstrut\\
  \hline
-  0.5 & 1.5 & 2.5 & ... & Max Age + 0.5 & \#Mean Age\Tstrut\\
-  0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age\Bstrut\\
+  0.5 & 1.5 & 2.5 & ... & Max Age + 0.5 & \#Mean Age \Tstrut\\
+  0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age \Bstrut\\
-  \hline\\
+  \hline \\
-  \multicolumn{6}{l}{Example with bias and uncertainty at age:}\Tstrut\Bstrut\\
+  \multicolumn{6}{l}{Example with bias and uncertainty at age:} \Tstrut\Bstrut\\
  \hline
-  0.5 & 1.4 & 2.3 & ... & Max Age + Age Bias & \#Mean Age\Tstrut\\
-  0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age\Bstrut\\
+  0.5 & 1.4 & 2.3 & ... & Max Age + Age Bias & \#Mean Age \Tstrut\\
+  0.5 & 0.65 & 0.67 & ... & 4.3 & \#SD Age \Bstrut\\
  \hline
 \end{tabular}
\end{center}
 
-In principle, one could have year or laboratory specific matrices for ageing error. For each matrix, enter a vector with mean age for each true age; if there is no ageing bias, then set age equal to true age + 0.5. Alternatively, -1 value for mean age means to set it equal to true age plus 0.5. The addition of +0.5 is needed so that fish will get assigned to the intended integer age. The length of the input vector is equal to the population maximum age plus one (0-max age), with the first entry being for age 0 fish and the last for fish of population maximum age even if the maximum age bin for the data is lower than the population maximum age. The following line is a a vector with the standard deviation of age for each true age with a normal distribution assumption.
+In principle, one could have year- or laboratory-specific matrices for ageing error. For each matrix, enter a vector with mean age for each true age; if there is no ageing bias, then set age equal to true age + 0.5. Alternatively, a -1 value for mean age means to set it equal to true age plus 0.5. The addition of +0.5 is needed so that fish will get assigned to the intended integer age. The length of the input vector is equal to the population maximum age plus one (0-max age), with the first entry being for age 0 fish and the last for fish of population maximum age, even if the maximum age bin for the data is lower than the population maximum age. The following line is a vector with the standard deviation of age for each true age with a normal distribution assumption.
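+As a compact sketch, one ageing error definition for a model with maximum age 3 would occupy two rows of the data file like this (hypothetical values; the -1 mean ages request the unbiased true age + 0.5 interpretation):
+\begin{verbatim}
+-1   -1    -1    -1     # mean observed age for true ages 0-3 (no bias)
+0.5  0.65  0.67  0.70   # SD of observed age for true ages 0-3
+\end{verbatim}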
 
-The model is able to create one ageing error matrix from parameters, rather than from an input vector. The range of conditions in which this new feature will perform well has not been evaluated, so it should be considered as a preliminary implementation and subject to modification. To invoke this option, for the selected ageing error vector, set the standard deviation of ageing error to a negative value for age 0. This will cause creation of an ageing error matrix from parameters and any age or size-at-age data that specify use of this age error pattern will use this matrix. Then in the control file, add a full parameter line below the cohort growth deviation parameter (or the movement parameter lines if used) in the mortality growth parameter section. These parameters are described in the control file section of this manual.
+The model is able to create one ageing error matrix from parameters, rather than from an input vector. The range of conditions in which this new feature will perform well has not been evaluated, so it should be considered as a preliminary implementation and subject to modification. To invoke this option, for the selected ageing error vector, set the standard deviation of ageing error to a negative value for age 0. This will cause creation of an ageing error matrix from parameters, and any age or size-at-age data that specify use of this ageing error pattern will use this matrix. Then, in the control file, add a full parameter line below the cohort growth deviation parameter (or the movement parameter lines if used) in the mortality growth parameter section. These parameters are described in the control file section of this manual.
 
 Code for ageing error calculation can be found in \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/blob/main/SS_miscfxn.tpl}{SS\_miscfxn.tpl}, search for function ``get\_age\_age'' or ``SS\_Label\_Function 45''.
 
@@ -856,11 +856,11 @@ \subsubsection{Age Composition Specification}
 If age data are included in the model, the following set-up is required, similar to the length data section.
 
 \begin{tabular}{p{2cm} p{2cm} p{2cm} p{1.5cm} p{1.5cm} p{2cm} p{2cm}}
-  \multicolumn{7}{l}{Specify bin compression and error structure for age composition data for each fleet:}\\
+  \multicolumn{7}{l}{Specify bin compression and error structure for age composition data for each fleet:} \\
  \hline
-  Min. & Constant & Combine & & Comp. & & Min.\Tstrut\\
-  Tail & added & males \& & Compress. & Error & Param. & Sample\\
-  Compress. & to prop. & females & Bins & Dist. & Select & Size\Bstrut\\
+  Min. & Constant & Combine & & Comp. & & Min. \Tstrut\\
+  Tail & added & males \& & Compress. & Error & Param. & Sample \\
+  Compress. & to prop. & females & Bins & Dist. & Select & Size \Bstrut\\
  \hline
  0 & 0.0001 & 1 & 0 & 0 & 0 & 1 \Tstrut\\
  0 & 0.0001 & 1 & 0 & 0 & 0 & 1 \Bstrut\\
@@ -870,25 +870,25 @@ \subsubsection{Age Composition Specification}
 
 \begin{tabular}{p{1cm} p{14cm}}
  & \\
-  \multicolumn{2}{l}{Specify method by which length bin range for age obs will be interpreted:}\\
+  \multicolumn{2}{l}{Specify method by which length bin range for age obs will be interpreted:} \\
  \hline
  1 & Bin method for age data \Tstrut\\
-  & 1 = value refers to population bin index\\
-  & 2 = value refers to data bin index\\
-  & 3 = value is actual length (which must correspond to population length bin \\
-  & boundary)\Bstrut\\
+  & 1 = value refers to population bin index \\
+  & 2 = value refers to data bin index \\
+  & 3 = value is actual length (which must correspond to population length bin \\
+  & boundary) \Bstrut\\
  \hline
\end{tabular}
 
 \begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{2.1cm}}
-  \multicolumn{10}{l}{ }\\
-  \multicolumn{10}{l}{An example age composition observation:}\\
+  \multicolumn{10}{l}{} \\
+  \multicolumn{10}{l}{An example age composition observation:} \\
  \hline
  Year & Month & Fleet & Sex & Partition & Age Err & Lbin lo & Lbin hi & Nsamp & Data Vector \Tstrut\\
  \hline
-  1987 & 1 & 1 & 3 & 0 & 2 & -1 & -1 & 79 & \Tstrut\\
-  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\Bstrut\\
+  1987 & 1 & 1 & 3 & 0 & 2 & -1 & -1 & 79 & \Tstrut\\
+  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \Bstrut\\
  \hline
\end{tabular}
 
@@ -900,12 +900,12 @@ \subsubsection{Age Composition Specification}
 Age error (Age Err) identifies which ageing error matrix to use to generate expected value for this observation.
 
 \myparagraph{Lbin Low and Lbin High}
-Lbin lo and Lbin hi are the range of length bins that this age composition observation refers to. Normally these are entered with a value of -1 and -1 to select the full size range.
Whether these are entered as population bin number, length data bin number, or actual length is controlled by the value of the length bin range method above.
+Lbin lo and Lbin hi are the range of length bins that this age composition observation refers to. Normally these are entered with a value of -1 and -1 to select the full size range. Whether these are entered as population bin number, length data bin number, or actual length is controlled by the value of the length bin range method above.
 
 \begin{itemize}
  \item Entering value of 0 or -1 for Lbin lo converts Lbin lo to 1;
  \item Entering value of 0 or -1 for Lbin hi converts Lbin hi to Maxbin;
-  \item It is strongly advised to use the -1 codes to select the full size range. If you use explicit values, then the model could unintentionally exclude information from some size range if the population bin structure is changed.
+  \item It is strongly advised to use the -1 codes to select the full size range. If you use explicit values, then the model could unintentionally exclude information from some size range if the population bin structure is changed.
  \item In reporting to the comp\_report.sso, the reported Lbin\_lo and Lbin\_hi values are always converted to actual length.
 \end{itemize}
 
@@ -914,47 +914,45 @@ \subsubsection{Age Composition Specification}
 
 \subsection{Conditional Age-at-Length}
 
-Use of conditional age-at-length will greatly increase the total number of age composition observations and associated model run time but there can be several advantages to inputting ages in this fashion. First, it avoids double use of fish for both age and size information because the age information is considered conditional on the length information. Second, it contains more detailed information about the relationship between size and age so provides stronger ability to estimate growth parameters, especially the variance of size-at-age. Lastly, where age data are collected in a length-stratified program, the conditional age-at-length approach can directly match the protocols of the sampling program.
+Use of conditional age-at-length will greatly increase the total number of age composition observations and associated model run time, but there can be several advantages to inputting ages in this fashion. First, it avoids double use of fish for both age and size information because the age information is considered conditional on the length information. Second, it contains more detailed information about the relationship between size and age and so provides a stronger ability to estimate growth parameters, especially the variance of size-at-age. Lastly, where age data are collected in a length-stratified program, the conditional age-at-length approach can directly match the protocols of the sampling program.
 
-However, simulation research has shown that the use of conditional age-at-length data can result in biased growth estimates in the presence of unaccounted for age-based movement when length-based selectivity is assumed \citep{lee-effects-2017}, when other age-based processes (e.g., mortality) are not accounted for \citep{lee-use-2019}, or based on the age sampling protocol \citep{piner-evaluation-2016}. Understanding how data are collected (e.g., random, length-conditioned samples) and the biology of the stock is important when using conditional age-at-length data for a fleet.
+However, simulation research has shown that the use of conditional age-at-length data can result in biased growth estimates in the presence of unaccounted for age-based movement when length-based selectivity is assumed \citep{lee-effects-2017}, when other age-based processes (e.g., mortality) are not accounted for \citep{lee-use-2019}, or based on the age sampling protocol \citep{piner-evaluation-2016}. Understanding how data are collected (e.g., random, length-conditioned samples) and the biology of the stock is important when using conditional age-at-length data for a fleet.
 
-In a two sex model, it is best to enter these conditional age-at-length data as single sex observations (sex = 1 for females and = 2 for males), rather than as joint sex observations (sex = 3). Inputting joint sex observations comes with a more rigid assumption about sex ratios within each length bin. Using separate vectors for each sex allows 100\% of the expected composition to be fit to 100\% observations within each sex, whereas with the sex = 3 option, you would have a bad fit if the sex ratio were out of balance with the model expectation, even if the observed proportion at age within each sex exactly matched the model expectation for that age. Additionally, inputting the conditional age-at-length data as single sex observations isolates the age composition data from any sex selectivity as well.
+In a two-sex model, it is best to enter these conditional age-at-length data as single-sex observations (sex = 1 for females and = 2 for males), rather than as joint-sex observations (sex = 3). Inputting joint-sex observations comes with a more rigid assumption about sex ratios within each length bin. Using separate vectors for each sex allows 100\% of the expected composition to be fit to 100\% observations within each sex, whereas with the sex = 3 option, you would have a bad fit if the sex ratio were out of balance with the model expectation, even if the observed proportion at age within each sex exactly matched the model expectation for that age. Additionally, inputting the conditional age-at-length data as single-sex observations isolates the age composition data from any sex selectivity as well.
 
-Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets of other years within a fleet. To treat age data as conditional on length, Lbin\_lo and Lbin\_hi are used to select a subset of the total size range. This is different than setting Lbin\_lo and Lbin\_hi both to -1 to select the entire size
-range, which treats the data entered on this line within the age composition data section as marginal age
-composition data.
+Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets or for other years within a fleet. To treat age data as conditional on length, Lbin\_lo and Lbin\_hi are used to select a subset of the total size range. This is different from setting Lbin\_lo and Lbin\_hi both to -1 to select the entire size range, which treats the data entered on this line within the age composition data section as marginal age composition data.
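+The contrast with the marginal setup can be sketched as follows (rows taken from the examples in this section; the age data vectors are omitted):
+\begin{verbatim}
+# Yr Month Flt Sex Part AgeErr Lbin_lo Lbin_hi Nsamp
+1987  1  1  3  0  2  -1  -1  79   # marginal ages: full size range
+1987  1  1  1  0  2  10  10  18   # conditional ages within one length bin
+\end{verbatim}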
\begin{tabular}{p{0.9cm} p{1cm} p{0.9cm} p{0.9cm} p{1.5cm} p{0.9cm} p{0.9cm} p{0.9cm} p{1cm} p{2.4cm}}
-  \multicolumn{10}{l}{ }\\
-  \multicolumn{10}{l}{An example conditional age-at-length composition observations:}\\
+  \multicolumn{10}{l}{} \\
+  \multicolumn{10}{l}{Example conditional age-at-length composition observations:} \\
  \hline
  Year & Month & Fleet & Sex & Partition & Age Err & Lbin lo & Lbin hi & Nsamp & Data Vector \Tstrut\\
  \hline
-  1987 & 1 & 1 & 1 & 0 & 2 & 10 & 10 & 18 & \Tstrut\\
-  1987 & 1 & 1 & 1 & 0 & 2 & 12 & 12 & 24 & \Tstrut\\
-  1987 & 1 & 1 & 1 & 0 & 2 & 14 & 14 & 16 & \Tstrut\\
-  1987 & 1 & 1 & 1 & 0 & 2 & 16 & 16 & 30 & \Tstrut\\
-  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\Bstrut\\
+  1987 & 1 & 1 & 1 & 0 & 2 & 10 & 10 & 18 & \Tstrut\\
+  1987 & 1 & 1 & 1 & 0 & 2 & 12 & 12 & 24 & \Tstrut\\
+  1987 & 1 & 1 & 1 & 0 & 2 & 14 & 14 & 16 & \Tstrut\\
+  1987 & 1 & 1 & 1 & 0 & 2 & 16 & 16 & 30 & \Tstrut\\
+  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \Bstrut\\
  \hline
\end{tabular}
 
In this example observation, the age data are treated as being conditional on the 2 cm length bins of 10--11.99, 12--13.99, 14--15.99, and 16--17.99 cm. If there are no observations of ages for a specific sex within a length bin for a specific year, that entry may be omitted.
 
\subsection{Mean Length or Body Weight-at-Age}
-The model also accepts input of mean length-at-age or mean body weight-at-age. This is done in terms of observed age, not true age, to take into account the effects of ageing imprecision on expected mean size-at-age. If the value of the Age Error column is positive, then the observation is interpreted as mean length-at-age. If the value of the Age Error column is negative, then the observation is interpreted as mean body weight-at-age and the abs(Age Error) is used as Age Error.
+The model also accepts input of mean length-at-age or mean body weight-at-age. This is done in terms of observed age, not true age, to take into account the effects of ageing imprecision on expected mean size-at-age. If the value of the Age Error column is positive, then the observation is interpreted as mean length-at-age. If the value of the Age Error column is negative, then the observation is interpreted as mean body weight-at-age and the abs(Age Error) is used as Age Error.
 
 \begin{center}
-  \begin{tabular}{p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{1cm} p{3.2cm} p{3.2cm} }
+  \begin{tabular}{p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{0.75cm} p{1cm} p{1cm} p{3.2cm} p{3.2cm}}
  \hline
  1 & \multicolumn{8}{l}{Use mean size-at-age observation (0 = none, 1 = read data matrix)} \Tstrut\\
-  \multicolumn{9}{l}{An example observation:}\Bstrut\\
+  \multicolumn{9}{l}{An example observation:} \Bstrut\\
  \hline
  & & & & & Age & & Data Vector & Sample Size \Tstrut\\
  Yr & Month & Fleet & Sex & Part. & Err. & Ignore & (Female - Male) & (Female - Male) \Bstrut\\
  \hline
  1989 & 7 & 1 & 3 & 0 & 1 & 999 & & \Tstrut\\
  ... & & & & & & & & \\
-  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 \Bstrut\\
+  -9999 & 0 & 0 & 0 & 0 & 0 & 0 & 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 \Bstrut\\
  \hline
 \end{tabular}
\end{center}
 
@@ -962,13 +960,9 @@ \subsection{Mean Length or Body Weight-at-Age}
 
 \myparagraph{Note}
 \begin{itemize}
-  \item Negatively valued mean size entries with be ignored in fitting. This
-  feature allows the user to see the fit to a provisional observation without having that
-  observation affect the model.
-  \item A number of fish value of 0 will cause mean size value to be ignored in fitting.
If the number of fish is zero, a non-zero mean size or body weight-at-age value, such as 0.01 or -999, still needs to be added. This feature allows the user to see the fit to a provisional observation without having that
-  observation affect the model.
-  \item Negative value for year causes observation to not be included in the working matrix. This feature is the easiest way to include observations in a data file but not to use them in a
-  particular model scenario.
+  \item Negatively valued mean size entries will be ignored in fitting. This feature allows the user to see the fit to a provisional observation without having that observation affect the model.
+  \item A number of fish value of 0 will cause the mean size value to be ignored in fitting. If the number of fish is zero, a non-zero mean size or body weight-at-age value, such as 0.01 or -999, still needs to be added. This feature allows the user to see the fit to a provisional observation without having that observation affect the model.
+  \item A negative value for year causes the observation to not be included in the working matrix. This feature is the easiest way to include observations in a data file but not to use them in a particular model scenario.
  \item Each sex's data vector and N fish vector has length equal to the number of age bins.
  \item The ``Ignore'' column is not used (set aside for future options) but still needs to have default values in that column (any value).
  \item Where age data are being entered as conditional age-at-length and growth parameters are being estimated, it may be useful to include a mean length-at-age vector with nil emphasis to provide another view on the model's estimates.
 \end{itemize}
 
 \hypertarget{env-dat}{}
 \subsection{Environmental Data}
-The model accepts input of time series of environmental data. Parameters can be made to be time-varying by making them a function of one of these environmental time series. In v.3.30.16 the option to specify the centering of environmental data by either using the mean of the by mean and the z-score.
+The model accepts input of time series of environmental data. Parameters can be made to be time-varying by making them a function of one of these environmental time series. In v.3.30.16, an option was added to specify the centering of environmental data, either by subtracting the mean or by the z-score (subtracting the mean and dividing by the standard deviation).
 
 \begin{center}
 \begin{tabular}{p{1cm} p{3cm} p{3cm} p{7.5cm}}
-  \multicolumn{4}{l}{Parameter values can be a function of an environmental data series: }\\
+  \multicolumn{4}{l}{Parameter values can be a function of an environmental data series:} \\
  \hline
-  1 & \multicolumn{3}{l}{Number of environmental variables}\Tstrut\Bstrut\\
-  \multicolumn{4}{l}{ The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or }\\
-  \multicolumn{4}{l}{ by subtracting the mean of the environmental variable (-2) based on the year column value.
}\\ + 1 & \multicolumn{3}{l}{Number of environmental variables} \Tstrut\Bstrut\\ + \multicolumn{4}{l}{ The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or} \\ + \multicolumn{4}{l}{by subtracting the mean of the environmental variable (-2) based on the year column value.} \\ \hline \multicolumn{4}{l}{COND > 0 Example of 2 environmental observations:} \Tstrut\\ & Year & Variable & Value \Bstrut\\ @@ -1000,12 +994,12 @@ \subsection{Environmental Data} \end{tabular} \end{center} -The final two lines in the example above indicate in that variable series 1 will be centered by subtracting the mean and dividing by the standard deviation (indicated by the -1 value in the year column). The environmental variable series 2 will be centered by subtracting the mean of the time series (indicated by the -2 value in the year column). The input in the ``value'' column for both of the final two lines specifying the centering of the time series is ignored by the model. The control file also will need to be modified to in the long parameter line column ``env-var'' for the selected parameter. This feature was added in v.3.30.16. +The final two lines in the example above indicate in that variable series 1 will be centered by subtracting the mean and dividing by the standard deviation (indicated by the -1 value in the year column). The environmental variable series 2 will be centered by subtracting the mean of the time series (indicated by the -2 value in the year column). The input in the ``value'' column for both of the final two lines specifying the centering of the time series is ignored by the model. The control file also will need to be modified to in the long parameter line column ``env-var'' for the selected parameter. This feature was added in v.3.30.16. \myparagraph{Note} \begin{itemize} - \item Any years for which environmental data are not read are assigned a value of 0.0. None of the current link functions contain a link parameter that acts as an offset. Therefore, you should subtract the mean from your data. This lessens the problem with missing observations, but does not eliminate it. A better approach for dealing with missing observations is to use a different approach for the environmental effect on the parameter. Set up the parameter to have random deviations for all years, then enter the zero-centered environmental information as a \hyperlink{SpecialSurvey}{special survey of type 35} and set up the catchability of that survey to be a link to the deviation vector. This is a more complex approach, but it is superior in treatment of missing values and superior in allowing for error in the environmental relationship. + \item Any years for which environmental data are not read are assigned a value of 0.0. None of the current link functions contain a link parameter that acts as an offset. Therefore, you should subtract the mean from your data. This lessens the problem with missing observations, but does not eliminate it. A better approach for dealing with missing observations is to use a different approach for the environmental effect on the parameter. Set up the parameter to have random deviations for all years, then enter the zero-centered environmental information as a \hyperlink{SpecialSurvey}{special survey of type 35} and set up the catchability of that survey to be a link to the deviation vector. 
This is a more complex approach, but it is superior in treatment of missing values and superior in allowing for error in the environmental relationship. \item Users can assign environmental conditions for the initial equilibrium year by including environmental data for one year before the start year. However, this works only for recruitment parameters, not biology or selectivity parameters. \item Environmental data can be read for up to 100 years after the end year of the model. Then, if the recruitment-environment link has been activated, the future recruitments will be influenced by any future environmental data. This could be used to create a future ``regime shift'' by setting historical values of the relevant environmental variable equal to zero and future values equal to 1, in which case the magnitude of the regime shift would be dictated by the value of the environmental linkage parameter. Note that only future recruitment and growth can be modified by the environmental inputs; there are no options to allow environmentally-linked selectivity in the forecast years. \end{itemize} @@ -1021,43 +1015,42 @@ \subsection{Generalized Size Composition Data} \item The generalized size composition data can be from the combined discard and retained, discard only, or retained only. \item There are two options for treating fish that in population size bins are smaller than the smallest size frequency bin. \begin{itemize} - \item Option 1: By default, these fish are excluded (unlike length composition data where the small fish are automatically accumulated up into the first bin.) + \item Option 1: By default, these fish are excluded (unlike length composition data where the small fish are automatically accumulated up into the first bin). \item Option 2: If the first size bin is given as a negative value, then accumulation is turned on and the absolute value of the entered value is used as the lower edge of the first size bin. \end{itemize} \end{itemize} \begin{center} \begin{tabular}{p{1.4cm} p{0.7cm} p{12.8 cm}} - \multicolumn{3}{l}{Example entry:}\\ + \multicolumn{3}{l}{Example entry:} \\ \hline 2 & & Number (N) of size frequency methods to be read. If this value is 0, then omit all entries below. A value of -1 (or any negative value) triggers expanded optional inputs below that allow for either Dirichlet of two parameter Multivariate (MV) Tweedie likelihood for fitting these data. 
\Tstrut\Bstrut\\ \hline - \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\ + \multicolumn{3}{l}{COND < 0 - Number of size frequency} \Tstrut\\ \multicolumn{2}{l}{2} & Number of size frequency methods to read \Tstrut\\ - \multicolumn{3}{l}{END COND < 0} \Bstrut\\ + \multicolumn{3}{l}{END COND < 0} \Bstrut\\ \hline - \multicolumn{2}{r}{25 15} & Number of bins per method\Tstrut\\ - \multicolumn{2}{r}{2 2} & Units per each method (1 = biomass, 2 = numbers)\\ - \multicolumn{2}{r}{3 3} & Scale per each method (1 = kg, 2 = lbs, 3 = cm, 4 = inches)\\ - \multicolumn{2}{r}{1e-9 1e-9} & Min compression to add to each observation (entry for each method)\\ + \multicolumn{2}{r}{25 15} & Number of bins per method \Tstrut\\ + \multicolumn{2}{r}{2 2} & Units per each method (1 = biomass, 2 = numbers) \\ + \multicolumn{2}{r}{3 3} & Scale per each method (1 = kg, 2 = lbs, 3 = cm, 4 = inches) \\ + \multicolumn{2}{r}{1e-9 1e-9} & Min compression to add to each observation (entry for each method) \\ \multicolumn{2}{r}{2 2} & Number of observations per weight frequency method \Bstrut\\ \hline - \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\ - \multicolumn{2}{r}{1 1} & Composition error structure (0 = multinomial, 1 = Dirichlet using Theta*n, 2 = Dirichlet using beta, 3 = MV Tweedie)\Tstrut\\ - \multicolumn{2}{r}{1 1} & Parameter select consecutive index for Dirichlet or MV Tweedie composition error\Bstrut\\ - \multicolumn{3}{l}{END COND < 0} \Tstrut\\ + \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\ + \multicolumn{2}{r}{1 1} & Composition error structure (0 = multinomial, 1 = Dirichlet using Theta*n, 2 = Dirichlet using beta, 3 = MV Tweedie) \Tstrut\\ + \multicolumn{2}{r}{1 1} & Parameter select consecutive index for Dirichlet or MV Tweedie composition error \Bstrut\\ + \multicolumn{3}{l}{END COND < 0} \Tstrut\\ \hline \end{tabular} \end{center} \begin{center} \begin{tabular}{p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.5cm} p{0.25cm}} - \multicolumn{18}{l}{Then enter the lower edge of the bins for each method. The two row vectors shown}\\ - \multicolumn{18}{l}{below contain the bin definitions for methods 1 and 2 respectively:}\\ + \multicolumn{18}{l}{Then enter the lower edge of the bins for each method. The two row vectors shown} \\ + \multicolumn{18}{l}{below contain the bin definitions for methods 1 and 2 respectively:} \\ \hline - -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & ... & 60 & 62 & 64 & 68 & 72 & 76 & 80 & 90\Tstrut\\ - -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & 44 & 46 & 48 & 50 & 52 & \multicolumn{4}{l}{54} \ - \Bstrut\\ + -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & ... 
& 60 & 62 & 64 & 68 & 72 & 76 & 80 & 90 \Tstrut\\ + -26 & 28 & 30 & 32 & 34 & 36 & 38 & 40 & 42 & 44 & 46 & 48 & 50 & 52 & \multicolumn{4}{l}{54} \Bstrut\\ \hline \end{tabular} \end{center} @@ -1068,7 +1061,7 @@ \subsection{Generalized Size Composition Data} \begin{tabular}{p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1.5cm} p{5cm}} \hline & & & & & & Sample & \Bstrut\\ + Method & Year & Month & Fleet & Sex & Part & Size & females then males> \Bstrut\\ \hline 1 & 1975 & 1 & 1 & 3 & 0 & 43 & \Tstrut\\ 1 & 1977 & 1 & 1 & 3 & 0 & 43 & \\ @@ -1097,38 +1090,38 @@ \subsection{Tag-Recapture Data} \begin{center} \begin{tabular}{p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{3cm}} - \multicolumn{9}{l}{Example set-up for tagging data:}\\ + \multicolumn{9}{l}{Example set-up for tagging data:} \\ \hline - 1 & & \multicolumn{7}{l}{Do tags - 0/1/2. If this value is 0, then omit all entries below.}\\ - & & \multicolumn{7}{l}{If value is 2, read 1 additional input.}\Tstrut\Bstrut\\ + 1 & & \multicolumn{7}{l}{Do tags - 0/1/2. If this value is 0, then omit all entries below.} \\ + & & \multicolumn{7}{l}{If value is 2, read 1 additional input.} \Tstrut\Bstrut\\ \hline \multicolumn{9}{l}{COND > 0 All subsequent tag-recapture entries must be omitted if ``Do Tags'' = 0} \Tstrut\\ - & 3 & \multicolumn{7}{l}{Number of tag groups}\Bstrut\\ + & 3 & \multicolumn{7}{l}{Number of tag groups} \Bstrut\\ \hline - & 7 & \multicolumn{7}{l}{Number of recapture events}\Tstrut\Bstrut\\ + & 7 & \multicolumn{7}{l}{Number of recapture events} \Tstrut\Bstrut\\ \hline - & 2 & \multicolumn{7}{l}{Mixing latency period: N periods to delay before comparing observed}\Tstrut\\ - & & \multicolumn{7}{l}{to expected recoveries (0 = release period). }\Bstrut\\ + & 2 & \multicolumn{7}{l}{Mixing latency period: N periods to delay before comparing observed} \Tstrut\\ + & & \multicolumn{7}{l}{to expected recoveries (0 = release period).} \Bstrut\\ \hline - & 10 & \multicolumn{7}{l}{Max periods (seasons) to track recoveries, after which tags enter}\Tstrut\\ - & & \multicolumn{7}{l}{ accumulator}\Bstrut\\ + & 10 & \multicolumn{7}{l}{Max periods (seasons) to track recoveries, after which tags enter} \Tstrut\\ + & & \multicolumn{7}{l}{ accumulator} \Bstrut\\ \hline \multicolumn{9}{l}{COND = 2} \Tstrut\\ - & 2 & \multicolumn{7}{l}{Minimum recaptures. The number of recaptures >= mixperiod must be}\\ + & 2 & \multicolumn{7}{l}{Minimum recaptures. The number of recaptures >= mixperiod must be} \\ & & \multicolumn{7}{l}{>= min tags recaptured specified to include tag group in log likelihood}\Bstrut\\ \hline & \multicolumn{8}{l}{Release Data} \Tstrut\\ - & TG & Area & Year & Season & & Sex & Age & N Release\Bstrut\\ + & TG & Area & Year & Season & & Sex & Age & N Release \Bstrut\\ \hline & 1 & 1 & 1980 & 1 & 999 & 0 & 24 & 2000 \Tstrut\\ & 2 & 1 & 1995 & 1 & 999 & 1 & 24 & 1000 \\ & 3 & 1 & 1985 & 1 & 999 & 2 & 24 & 10 \Bstrut\\ \hline - & \multicolumn{8}{l}{Recapture Data}\Tstrut\\ - & TG & & Year& & Season & & Fleet & Number\Bstrut\\ + & \multicolumn{8}{l}{Recapture Data} \Tstrut\\ + & TG & & Year & & Season & & Fleet & Number \Bstrut\\ \hline & 1 & & 1982 & & 1 & & 1 & 7 \Tstrut\\ & 1 & & 1982 & & 1 & & 2 & 5 \\ @@ -1151,20 +1144,20 @@ \subsection{Tag-Recapture Data} \end{itemize} \subsection{Stock (Morph) Composition Data} -It is sometimes possible to observe the fraction of a sample that is composed of fish from different stocks. These data could come from genetics, otolith microchemistry, tags, or other means. 
The growth pattern feature allows definition of cohorts of fish that have different biological characteristics and which are independently tracked as they move among areas. SS3 now incorporates the capability to calculate the expected proportion of a sample of fish that come from different growth patterns, ``morphs''. In the inaugural application of this feature, there was a 3 area model with one stock spawning and recruiting in area 1, the other stock in area 3, then seasonally the stocks would move into area 2 where stock composition observations were collected, then they moved back to their natal area later in the year. +It is sometimes possible to observe the fraction of a sample that is composed of fish from different stocks. These data could come from genetics, otolith microchemistry, tags, or other means. The growth pattern feature allows definition of cohorts of fish that have different biological characteristics and which are independently tracked as they move among areas. SS3 now incorporates the capability to calculate the expected proportion of a sample of fish that come from different growth patterns, ``morphs''. In the inaugural application of this feature, there was a 3 area model with one stock spawning and recruiting in area 1, the other stock in area 3, then seasonally the stocks would move into area 2 where stock composition observations were collected, then they moved back to their natal area later in the year. \begin{center} \begin{tabular}{p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{3.5cm}} - \multicolumn{8}{l}{Stock composition by growth pattern (morph) data can be entered in as follows:}\\ + \multicolumn{8}{l}{Stock composition by growth pattern (morph) data can be entered in as follows:} \\ \hline 1 & \multicolumn{7}{l}{Do morph composition, if zero, then do not enter any further input below.}\Tstrut\Bstrut\\ \hline - \multicolumn{8}{l}{COND = 1}\Tstrut\\ - & 3 & \multicolumn{6}{l}{Number of observations}\Bstrut\\ + \multicolumn{8}{l}{COND = 1} \Tstrut\\ + & 3 & \multicolumn{6}{l}{Number of observations} \Bstrut\\ \hline - & 2 & \multicolumn{6}{l}{Number of morphs}\Tstrut\Bstrut\\ + & 2 & \multicolumn{6}{l}{Number of morphs} \Tstrut\Bstrut\\ \hline - & 0.0001 & \multicolumn{6}{l}{Minimum Compression}\Tstrut\Bstrut\\ + & 0.0001 & \multicolumn{6}{l}{Minimum Compression} \Tstrut\Bstrut\\ \hline & Year & Month & Fleet & Null & Nsamp & \multicolumn{2}{l}{Data by N Morphs} \Tstrut\Bstrut\\ \hline @@ -1182,18 +1175,18 @@ \subsection{Stock (Morph) Composition Data} \item The expected value is combined across sexes. The entered data values will be normalized to sum to one within SS3. \item The ``null'' flag is included here in the data input section and is a reserved spot for future features. \item Note that there is a specific value of minimum compression to add to all values of observed and expected. - \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in version 3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length which already was accounting for the accumulation by morph. + \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in version 3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length which already was accounting for the accumulation by morph. 
\end{itemize} \subsection{Selectivity Empirical Data (future feature)} -It is sometimes possible to conduct field experiments or other studies to provide direct information about the selectivity of a particular length or age relative to the length or age that has peak selectivity, or to have a prior for selectivity that is more easily stated than a prior on a highly transformed selectivity parameter. This section provides a way to input data that would be compared to the specified derived value for selectivity. This is a placeholder at this time, required to include in the data file and will be fully implemented soon. +It is sometimes possible to conduct field experiments or other studies to provide direct information about the selectivity of a particular length or age relative to the length or age that has peak selectivity, or to have a prior for selectivity that is more easily stated than a prior on a highly transformed selectivity parameter. This section provides a way to input data that would be compared to the specified derived value for selectivity. This is a placeholder at this time, required to include in the data file and will be fully implemented soon. \begin{center} \begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{2.5cm} p{2.5cm} p{2.5cm}} - \multicolumn{9}{l}{Selectivity data feature is under development for a future option and is not yet implemented. }\\ - \multicolumn{9}{l}{The input line still must be specified in as follows:}\\ + \multicolumn{9}{l}{Selectivity data feature is under development for a future option and is not yet implemented.} \\ + \multicolumn{9}{l}{The input line still must be specified in as follows:} \\ \hline - 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)}\Tstrut\Bstrut\\ + 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)} \Tstrut\Bstrut\\ \hline %& Year & Month & Fleet & Age/Size & Bin \# & Datum & Datum SE\Tstrut\Bstrut\\ %\hline @@ -1201,17 +1194,17 @@ \subsection{Selectivity Empirical Data (future feature)} \end{center} \begin{center} - \begin{tabular}{p{2cm} p{14cm}}\\ - \multicolumn{2}{l}{End of Data File}\\ + \begin{tabular}{p{2cm} p{14cm}} \\ + \multicolumn{2}{l}{End of Data File} \\ \hline - 999 & \#End of data file marker\Tstrut\Bstrut\\ + 999 & \#End of data file marker \Tstrut\Bstrut\\ \hline \end{tabular} \end{center} \subsection{Excluding Data} -Data that are before the model start year or greater than the retrospective year are not moved into the internal working arrays at all. So if you have any alternative observations that are used in some model runs and not in others, you can simply give them a negative year value rather than having to comment them out. The first output to data.ss\_new has the unaltered and complete input data. Subsequent reports to data.ss\_new produce expected values or bootstraps only for the data that are being used. Additional information on bootstrapping is available in \hyperlink{bootstrap}{Bootstrap Data Files Section}. +Data that are before the model start year or greater than the retrospective year are not moved into the internal working arrays at all. So if you have any alternative observations that are used in some model runs and not in others, you can simply give them a negative year value rather than having to comment them out. The first output to data.ss\_new has the unaltered and complete input data. Subsequent reports to data.ss\_new produce expected values or bootstraps only for the data that are being used. 
Additional information on bootstrapping is available in \hyperlink{bootstrap}{Bootstrap Data Files Section}. Data that are to be included in the calculations of expected values, but excluded from the calculation of negative log likelihood, are flagged by use of a negative value for fleet number. @@ -1223,23 +1216,23 @@ \subsection{Data Super-Periods} Super-periods are started with a negative value for month, and then stopped with a negative value for month, observations within the super-period are designated with a negative fleet field. The standard error or input sample size field is now used for weighting of the expected values. An error message is generated if the super-period does not contain one observation with a positive fleet field. -An expected value for the observation will be computed for each selected time period within the super-period. The expected values are weighted according to the values entered in the standard error (or input sample size) field for all observations except the single observation holding the combined data. The expected value for that year gets a relative weight of 1.0. So in the example below, the relative weights are: 1982, 1.0 (fixed); 1983, 0.85; 1985, 0.4; 1986, 0.4. These weights are summed and rescaled to sum to 1.0, and are output in the echoinput.sso file. +An expected value for the observation will be computed for each selected time period within the super-period. The expected values are weighted according to the values entered in the standard error (or input sample size) field for all observations except the single observation holding the combined data. The expected value for that year gets a relative weight of 1.0. So in the example below, the relative weights are: 1982, 1.0 (fixed); 1983, 0.85; 1985, 0.4; 1986, 0.4. These weights are summed and rescaled to sum to 1.0, and are output in the echoinput.sso file. Not all time steps within the extent of a super-period need be included. For example, in a three season model, a super-period could be set up to combine information from season 2 across 3 years, e.g., skip over the season 1 and season 3 for the purposes of calculating the expected value for the super-period. The key is to create a dummy observation (negative fleet value) for all time steps, except 1, that will be included in the super-period and to include one real observation (positive fleet value; which contains the real combined data from all the specified time steps). \begin{center} \begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{9cm}} - \multicolumn{6}{l}{Super-period example:}\\ + \multicolumn{6}{l}{Super-period example:} \\ \hline Year & Month & Fleet & Obs & SE & Comment \Tstrut\Bstrut\\ \hline - 1982 \Tstrut & \textbf{-2} & 3 & 34.2 & 0.3 & Start super-period. This observation has positive fleet value, so is expected to contain combined data from all identified periods of the super-period. The standard error (SE) entered here is use as the SE of the combined observation. The expected value for the survey in 1982 will have a relative weight of 1.0 (default) in calculating the combined expected value.\Bstrut\\ + 1982 \Tstrut & \textbf{-2} & 3 & 34.2 & 0.3 & Start super-period. This observation has positive fleet value, so is expected to contain combined data from all identified periods of the super-period. The standard error (SE) entered here is use as the SE of the combined observation. 
The expected value for the survey in 1982 will have a relative weight of 1.0 (default) in calculating the combined expected value.\Bstrut\\ \hline - 1983 \Tstrut & 2 & \textbf{-3} & 55 & 0.3 & In super-period; entered observation is ignored. The expected value for the survey in 1983 will have a relative weight equal to the value in the standard error field (0.3) in calculating the combined expected value.\Bstrut\\ + 1983 \Tstrut & 2 & \textbf{-3} & 55 & 0.3 & In super-period; entered observation is ignored. The expected value for the survey in 1983 will have a relative weight equal to the value in the standard error field (0.3) in calculating the combined expected value. \Bstrut\\ \hline - 1985 \Tstrut & 2 & \textbf{-3}& 88 & 0.40 & Note that 1984 is not included in the super-period Relative weight for 1985 is 0.4\Bstrut\\ + 1985 \Tstrut & 2 & \textbf{-3}& 88 & 0.40 & Note that 1984 is not included in the super-period. Relative weight for 1985 is 0.4 \Bstrut\\ \hline - 1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period\Tstrut\Bstrut\\ + 1986 & \textbf{-2} & \textbf{-3} & 88 & 0.40 & End super-period \Tstrut\Bstrut\\ \hline \end{tabular} \end{center} diff --git a/9control.tex b/9control.tex index ed0ce645..b71011df 100644 --- a/9control.tex +++ b/9control.tex @@ -59,18 +59,18 @@ \subsection{Parameter Line Elements} 3 \Tstrut & INIT & Initial value for the parameter. If the phase (described below) for the parameter is negative the parameter is fixed at this value. If the ss.par file is read, it overwrites these INIT values.\\ 4 \Tstrut & PRIOR & Expected value for the parameter. This value is ignored if the prior type is 0 (no prior) or 1 (symmetric beta). If the selected prior type (described below) is lognormal, this value is entered in log space. \\ 5 \Tstrut & PRIOR SD & Standard deviation for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The standard deviation is in regular space regardless of the prior type.\\ - 6 \Tstrut & \hyperlink{PriorDescrip}{PRIOR TYPE} & 0 = none, \\ + 6 \Tstrut & \hyperlink{PriorDescrip}{PRIOR TYPE} & 0 = none; \\ & & 1 = symmetric beta; \\ & & 2 = full beta; \\ & & 3 = lognormal without bias adjustment; \\ & & 4 = lognormal with bias adjustment; \\ & & 5 = gamma; and \\ & & 6 = normal. \\ - 7 \Tstrut & PHASE & Phase in which parameter begins to be estimated. A negative value causes the parameter to retain its INIT value (or value read from the ss.par file).\Bstrut\\ - 8 \Tstrut & Env var \& Link & Create a linkage to an input environmental time-series\\ + 7 \Tstrut & PHASE & Phase in which parameter begins to be estimated. A negative value causes the parameter to retain its INIT value (or value read from the ss.par file). \Bstrut\\ + 8 \Tstrut & Env var \& Link & Create a linkage to an input environmental time-series \\ 9 \Tstrut & Dev link & Invokes use of the deviation vector in the linkage function \\ 10 \Tstrut & Dev min yr & Beginning year for the deviation vector \\ - 11 \Tstrut & Dev max yr & Ending year for the deviation vector\\ + 11 \Tstrut & Dev max yr & Ending year for the deviation vector \\ 12 \Tstrut & Dev phase & Phase for estimation for elements in the deviation vector \\ 13 \Tstrut & Block & Time block or trend to be applied \\ 14 \Tstrut & Block function & Functional form for the block offset. 
\Bstrut\\ @@ -104,7 +104,7 @@ \subsection{Beginning of Control File Inputs} \endlastfoot - \multicolumn{2}{l}{\#C comment }\Tstrut & Comments beginning with \#C at the top of the file will be retained and included in output. \Bstrut\\ + \multicolumn{2}{l}{\#C comment} \Tstrut & Comments beginning with \#C at the top of the file will be retained and included in output. \Bstrut\\ \hline 0 & & 0 = Do not read the weight-at-age (wtatage.ss) file; \Tstrut\\ @@ -141,12 +141,12 @@ \subsubsection{Settlement Timing for Recruits and Distribution} \begin{longtable}{p{1.25cm} p{1.25cm} p{1cm} p{11.5cm}} \hline - \multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options}\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options} \Tstrut\Bstrut\\ \hline \endfirsthead \hline - \multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options}\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & \multicolumn{2}{l}{Description and Options} \Tstrut\Bstrut\\ \hline \endhead @@ -173,7 +173,7 @@ \subsubsection{Settlement Timing for Recruits and Distribution} 0 \Tstrut & & \multicolumn{2}{l}{Future feature, not implement yet but required.} \Bstrut\\ \hline - Growth Pattern & Month & Area & Age at settlement \Tstrut \\ + Growth Pattern & Month & Area & Age at settlement \Tstrut\\ \hline 1 & 5.5 & 1 & 0 \Bstrut\\ \hline @@ -327,12 +327,12 @@ \subsubsection{Auto-generation} \begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}} \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endfirsthead \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endhead @@ -343,12 +343,12 @@ \subsubsection{Auto-generation} 1 & & Environmental/Block/Deviation adjust method for all time-varying parameters. \Tstrut\\ & & 1 = warning relative to base parameter bounds; and \\ - & & 3 = no bound check. Logistic bound check form from previous SS3 versions (e.g., SS3 v.3.24) is no longer an option.\Bstrut\\ + & & 3 = no bound check. Logistic bound check form from previous SS3 versions (e.g., SS3 v.3.24) is no longer an option. \Bstrut\\ - \multicolumn{2}{l}{1 1 1 1 1} & Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity.\\ - & & The accepted values are:\\ - & & 0 = auto-generate all time-varying parameters (no time-varying parameters are expected);\\ - & & 1 = read each time-varying parameter line as exists in the control file; and\\ + \multicolumn{2}{l}{1 1 1 1 1} & Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity. \\ + & & The accepted values are: \\ + & & 0 = auto-generate all time-varying parameters (no time-varying parameters are expected); \\ + & & 1 = read each time-varying parameter line as exists in the control file; and \\ & & 2 = read each line and auto-generate if read if the time-varying parameter value for LO = -12345. Useful to generate reasonable starting values. 
\Bstrut\\ \hline \end{longtable} @@ -401,12 +401,12 @@ \subsubsection{Natural Mortality} \myparagraph{Natural Mortality Options} \begin{longtable}{p{0.5cm} p{2cm} p{12.75cm}} \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endfirsthead \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endhead \hline @@ -415,13 +415,13 @@ \subsubsection{Natural Mortality} \endlastfoot - 1 & & Natural Mortality Options:\Tstrut\\ - & & 0 = A single parameter;\\ - & & 1 = N breakpoints;\\ + 1 & & Natural Mortality Options: \Tstrut\\ + & & 0 = A single parameter; \\ + & & 1 = N breakpoints; \\ & & 2 = Lorenzen; \\ - & & 3 = Read age specific M and do not do seasonal interpolation;\\ - & & 4 = Read age specific and do seasonal interpolation, if appropriate;\\ - & & 5 = age-specific M linked to age-specific length and maturity (experimental);\\ + & & 3 = Read age specific M and do not do seasonal interpolation; \\ + & & 4 = Read age specific and do seasonal interpolation, if appropriate; \\ + & & 5 = age-specific M linked to age-specific length and maturity (experimental); \\ & & 6 = Age-range Lorenzen. \Bstrut\\ \hline @@ -429,7 +429,7 @@ \subsubsection{Natural Mortality} \hline \multicolumn{2}{l}{COND = 1} & \Tstrut\Bstrut\\ - & 4 & Number of breakpoints. Then read a vector of ages for these breakpoints. Later, per sex x GP, read N parameters for the natural mortality at each breakpoint.\\ + & 4 & Number of breakpoints. Then read a vector of ages for these breakpoints. Later, per sex x GP, read N parameters for the natural mortality at each breakpoint. \\ \multicolumn{2}{r}{2.5 4.5 9.0 15.0} & Vector of age breakpoints. \Bstrut\\ \hline @@ -440,19 +440,19 @@ \subsubsection{Natural Mortality} \multicolumn{2}{l}{COND = 3 or 4} \Tstrut & Do not read any natural mortality parameters in the mortality growth parameter section. With option 3, these M values are held fixed for the integer age (no seasonality or birth season considerations). With option 4, there is seasonal interpolation based on real age, just as in options 1 and 2.\\ - & 0.20 0.25 ... 0.20 0.23 ... & Age-specific M values where in a 2 sex model the first row is female and the second row is male. If there are multiple growth patterns female growth pattern 1-N is read first followed by males 1-N growth pattern.\Bstrut\\ + & 0.20 0.25 ... 0.20 0.23 ... & Age-specific M values where in a 2 sex model the first row is female and the second row is male. If there are multiple growth patterns female growth pattern 1-N is read first followed by males 1-N growth pattern. \Bstrut\\ \hline \multicolumn{2}{l}{COND = 5} \Tstrut & age-specific M linked to age-specific length and maturity suboptions. \\ & & 1 = Requires 4 long parameter lines per sex x growth pattern using maturity. Must be used with maturity option 1; \\ & & 2 = reserved for future option; \\ - & & 3 = Requires 6 long parameter lines per sex x growth pattern\Bstrut\\ + & & 3 = Requires 6 long parameter lines per sex x growth pattern \Bstrut\\ \hline \multicolumn{2}{l}{COND = 6} \Tstrut & Read two additional integer values that are the age range for average M. Later, read one long parameter line for each sex x growth pattern that will be the average M over the reference age range. 
\\ - & 0 \Tstrut & Minimum age of average M range for calculating Lorenzen natural mortality.\\ - & 10 \Tstrut & Maximum age of average M range for calculating Lorenzen natural mortality.\\ + & 0 \Tstrut & Minimum age of average M range for calculating Lorenzen natural mortality. \\ + & 10 \Tstrut & Maximum age of average M range for calculating Lorenzen natural mortality. \\ \hline \end{longtable} @@ -514,12 +514,12 @@ \subsubsection{Growth} \begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}} \multicolumn{3}{l}{Example growth specifications:} \Tstrut\Bstrut\\ \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endfirsthead \hline - \multicolumn{2}{l}{Typical Value} & Description and Options\Tstrut\Bstrut\\ + \multicolumn{2}{l}{Typical Value} & Description and Options \Tstrut\Bstrut\\ \hline \endhead \hline @@ -529,14 +529,14 @@ \subsubsection{Growth} \endlastfoot 1 & & Growth Model: \Tstrut\\ - & & 1 = von Bertalanffy (3 parameters);\\ + & & 1 = von Bertalanffy (3 parameters); \\ & & 2 = Schnute's generalized growth curve (aka Richards curve) with 3 parameters. Third parameter has null value of 1.0; \\ - & & 3 = von Bertalanffy with age-specific K multipliers for specified range of ages, requires additional inputs below following the placeholder for future growth feature;\\ + & & 3 = von Bertalanffy with age-specific K multipliers for specified range of ages, requires additional inputs below following the placeholder for future growth feature; \\ & & 4 = age-specific K. Set base K as K for age = nages and working backwards and the age-specific K = K for the next older age * multiplier, requires additional inputs below following the placeholder for future growth feature; \\ & & 5 = age specific K. Set base K as K for nages and work backwards and the age-specific K = base K * multiplier, requires additional inputs below following the placeholder for future growth feature; \\ & & 6 = not implemented; \\ & & 7 = not implemented; and \\ - & & 8 = growth cessation. Decreases the K for older fish. If implemented, the Amin and Amax parameters, the next two lines, need to be set at 0 and 999 respectively. The mortality-growth parameter section requires the base K parameter line which is interpreted as the steepness of the logistic function that models the reduction in the growth increment by age followed by a second parameter line which is the parameter related to the maximum growth rate. \Bstrut \\ + & & 8 = growth cessation. Decreases the K for older fish. If implemented, the Amin and Amax parameters, the next two lines, need to be set at 0 and 999 respectively. The mortality-growth parameter section requires the base K parameter line which is interpreted as the steepness of the logistic function that models the reduction in the growth increment by age followed by a second parameter line which is the parameter related to the maximum growth rate. \Bstrut\\ \hline \Tstrut 1 & & Growth Amin (A1): Reference age for first size-at-age L1 (post-settlement) parameter. First growth parameter is size at this age; linear growth below this. \Bstrut\\ @@ -553,20 +553,20 @@ \subsubsection{Growth} 0 & & Placeholder for future growth feature. \Tstrut\Bstrut\\ \hline - \multicolumn{2}{l}{COND = 3} & Growth model: age-specific K age-specific K where the age-specific K parameter values are multipliers of the age - 1 K parameter value. 
For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 5 is equal to 0.20 * age-5 multiplier. Subsequently, age 6 K value is equal to age 5 K (0.20 * age-5 multiplier) multiplied by the age-6 multiplier. All ages above the maximum age with age-specific K are equal to the maximum age-specific K. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section.\Tstrut\\ + \multicolumn{2}{l}{COND = 3} & Growth model: age-specific K age-specific K where the age-specific K parameter values are multipliers of the age - 1 K parameter value. For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 5 is equal to 0.20 * age-5 multiplier. Subsequently, age 6 K value is equal to age 5 K (0.20 * age-5 multiplier) multiplied by the age-6 multiplier. All ages above the maximum age with age-specific K are equal to the maximum age-specific K. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section.\Tstrut\\ 3 & & Number of K multipliers to read; \\ & 5 & Minimum age for age-specific K; and \\ & 6 & Second age for age-specific K; and \\ & 7 & Maximum age for age-specific K. \Bstrut\\ - \multicolumn{2}{l}{COND = 4} & Growth model: age-specific K where the age-specific K parameter values are multipliers of the age + 1 K parameter value. For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 7 is equal to 0.20 * age-7 multiplier. Subsequently, age 6 K value is equal to age 7 K (0.20 * age-7 multiplier) multiplied by the age-6 multiplier. All ages below the minimum age with age-specific K are equal to the minimum age-specific K. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. \Tstrut\\ + \multicolumn{2}{l}{COND = 4} & Growth model: age-specific K where the age-specific K parameter values are multipliers of the age + 1 K parameter value. For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 7 is equal to 0.20 * age-7 multiplier. Subsequently, age 6 K value is equal to age 7 K (0.20 * age-7 multiplier) multiplied by the age-6 multiplier. All ages below the minimum age with age-specific K are equal to the minimum age-specific K. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. \Tstrut\\ 3 & & Number of K multipliers to read; \\ & 7 & Maximum age for age-specific K; \\ & 6 & Second age for age-specific K; and \\ & 5 & Minimum age for age-specific K. \Bstrut\\ \hline - \multicolumn{2}{l}{COND = 5} & Growth model: age-specific K where the age-specific K parameter values are multipliers of the base K parameter value. For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 7 is equal to 0.20 * age-7 multiplier. Subsequently, age 6 K value is equal 0.20 * age-6 multiplier. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. \Tstrut\\ + \multicolumn{2}{l}{COND = 5} & Growth model: age-specific K where the age-specific K parameter values are multipliers of the base K parameter value. For example, if the base parameter is 0.20 based on the example set-up the K parameter for age 7 is equal to 0.20 * age-7 multiplier. Subsequently, age 6 K value is equal 0.20 * age-6 multiplier. The age specific K values are available in the Report file in the AGE\_SPECIFIC\_K section. 
\Tstrut\\ 3 & & Number of K multipliers to read; \\ & 7 & Maximum age for age-specific K; \\ & 6 & Second age for age-specific K; and \\ @@ -776,7 +776,7 @@ \subsubsection{Read Biology Parameters} \multicolumn{2}{l}{Females}\Tstrut & Female natural mortality and growth parameters in the following order by growth pattern. \\ & M & Natural mortality for female growth pattern 1, where the number of natural mortality parameters depends on the option selected. \Bstrut\\ \hline - \multicolumn{2}{l}{COND if M option = 1 } & \Tstrut\\ + \multicolumn{2}{l}{COND if M option = 1} & \Tstrut\\ & N breakpoints & N-1 parameter lines as an exponential offsets from the previous reference age. \Bstrut\\ \hline @@ -785,14 +785,14 @@ \subsubsection{Read Biology Parameters} & VBK & von Bertalanffy growth coefficient (units are per year) for females, growth pattern 1. \Bstrut\\ \hline - \multicolumn{2}{l}{COND if growth type = 2 } & \Tstrut\\ + \multicolumn{2}{l}{COND if growth type = 2} & \Tstrut\\ & Richards Coefficient & Only include this parameter if Richards growth function is used. If included, a parameter value of 1.0 will have a null effect and produce a growth curve identical to von Bertalanffy. \\ - \multicolumn{2}{l}{COND if growth type >=3 } & Age-Specific K \\ + \multicolumn{2}{l}{COND if growth type >=3} & Age-Specific K \\ & \multicolumn{2}{l}{N parameter lines equal to the number K deviations for the ages specified above.} \Bstrut\\ \hline - \Tstrut & CV young & Variability for size at age <= Amin for females, growth pattern 1. Note that CV cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as CV or as standard deviation, depending on assigned value of CV pattern.\\ + \Tstrut & CV young & Variability for size at age <= Amin for females, growth pattern 1. Note that CV cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as CV or as standard deviation, depending on assigned value of CV pattern. \\ & CV old & Variability for size at age >= Amax for females, growth pattern 1. For intermediate ages, do a linear interpolation of CV on means size-at-age. Note that the units for CV will depend on the CV pattern and the value of mortality-growth parameter as offset. The CV value cannot vary over time. \Bstrut\\ \hline @@ -857,7 +857,7 @@ \subsubsection{Read Biology Parameters} \multicolumn{2}{l}{Recruitment Dist. 2} & Recruitment apportionment parameter for the 2nd settlement event. \Bstrut\\ \hline - \multicolumn{2}{l}{Cohort growth deviation} \Tstrut & Set equal to 1.0 and do not estimate; it is deviations from this base that matter.\Bstrut\\ + \multicolumn{2}{l}{Cohort growth deviation} \Tstrut & Set equal to 1.0 and do not estimate; it is deviations from this base that matter. \Bstrut\\ \hline \multicolumn{2}{l}{2 x N selected movement pairs} & Movement parameters \Tstrut\Bstrut\\ @@ -1381,7 +1381,7 @@ \subsection{Fishing Mortality Method} \hline \multicolumn{3}{l}{COND: F method = 4} \Tstrut\\ - & & Read list of fleets needing parameters, starting F values, and phases. To treat a fleet F as hybrid only select a phase of 99. A parameter line is not required for all fleets and if not specified will be treated as hybrid across all phases, except for bycatch fleets which are required to have an input parameter line. Use a negative phase to set F as constant (i.e., not estimated) in v. 3.30.19 and higher. \Tstrut\\ + & & Read list of fleets needing parameters, starting F values, and phases. 
To treat a fleet F as hybrid only select a phase of 99. A parameter line is not required for all fleets and if not specified will be treated as hybrid across all phases, except for bycatch fleets which are required to have an input parameter line. Use a negative phase to set F as constant (i.e., not estimated) in v.3.30.19 and higher. \Tstrut\\ Fleet & Parameter Value & Phase \Tstrut\\ 1 & 0.05 & 1 \\ 2 & 0.01 & 1 \\ diff --git a/README.md b/README.md index 142fbff2..2e6cf24d 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ Source code for the stock synthesis manual and other supplementary documentation The documentation includes: - The Stock Synthesis user manual source code, in .tex files -- Getting started guide and Introduction to building an ss model guide, available in the [User_Guides subdirectory](https://github.com/nmfs-stock-synthesis/doc/tree/main/User_Guides) +- Getting started guide and Introduction to building an ss3 model guide, available in the [User_Guides subdirectory](https://github.com/nmfs-stock-synthesis/doc/tree/main/User_Guides) ## Where can I find compiled versions of the documentation? diff --git a/SS.bib b/SS3.bib similarity index 100% rename from SS.bib rename to SS3.bib diff --git a/SS330_User_Manual.tex b/SS330_User_Manual.tex index 3cda1ae3..c99d79ce 100644 --- a/SS330_User_Manual.tex +++ b/SS330_User_Manual.tex @@ -210,7 +210,7 @@ \input{16essays} %========= Reference Section \newpage - \bibliography{SS} + \bibliography{SS3} \bibliographystyle{JournalBiblio/cjfas} \newpage diff --git a/User_Guides/getting_started/Getting_Started_SS.Rmd b/User_Guides/getting_started/Getting_Started_SS3.Rmd similarity index 92% rename from User_Guides/getting_started/Getting_Started_SS.Rmd rename to User_Guides/getting_started/Getting_Started_SS3.Rmd index 1c024968..c464fa37 100644 --- a/User_Guides/getting_started/Getting_Started_SS.Rmd +++ b/User_Guides/getting_started/Getting_Started_SS3.Rmd @@ -40,7 +40,7 @@ SS3 uses text input files and produces text output files. In this section, the S ## SS3 files: Required inputs -Four required input files are read by the SS3 executable. Throughout this document, we will refer to the SS3 executable as ss.exe. Keep in mind that the Linux and Mac versions of SS3 have no file extension (e.g., ss), and the executable can be renamed by the user as desired (e.g., ss_win.exe, ss_3.30.18.exe). These input files are: +Four required input files are read by the SS3 executable. Throughout this document, we will refer to the SS3 executable as ss3.exe. Keep in mind that the Linux and Mac versions of SS3 have no file extension (e.g., ss), and the executable can be renamed by the user as desired (e.g.,ss3.exe, ss_win.exe, ss_3.30.18.exe). These input files are: 1. **starter.ss:** Required file containing file names of the data file and the control file plus other run controls. Must be named starter.ss. 2. **data file:** File containing model dimensions and the data. The data file can have any name, as specified in the starter file, but typically ends in .ss or .dat. 
@@ -85,7 +85,7 @@ Create a folder and add: + Control File (Must match name in starter.ss) + Data File (Must match name in starter.ss) + forecast.ss -+ ss.exe ++ ss3.exe + starter.ss + Conditional files: wtatage.ss (if doing empirical wt-at-age approach) and/or ss.par (to continue from a previous run) @@ -97,7 +97,7 @@ For example, here is what should be included for a model with no conditional fil Once all of the model files and the SS3 executable are in the same folder, you can open your command window of choice at the location of the model files. -To do this, you can typically click to highlight the folder the model files are in, then shift + right click on the same folder and select the option from the menu to open the command line of choice (e.g., Windows Powershell). This should bring up a command window. Then, type `ss` (or other name of the ss exe) into the command prompt and hit enter. Note that if you are using Windows Powershell, you will need to type `./ss`. +To do this, you can typically click to highlight the folder the model files are in, then shift + right click on the same folder and select the option from the menu to open the command line of choice (e.g., Windows Powershell). This should bring up a command window. Then, type `ss3` (or other name of the ss3 exe) into the command prompt and hit enter. Note that if you are using Windows Powershell, you will need to type `./ss3`. The exact instructions for running SS3 can differ depending on the command window used. If you have trouble, search for resources that describe running an executable for your specific command line. @@ -134,11 +134,11 @@ Output from SS3 can be read into [r4ss](https://github.com/r4ss/r4ss) or the exc ## Command line options {#options} -ADMB options can be added to the run when calling the SS3 executable from the command line. The most commonly used option is `ss -nohess` to skip standard errors (for quicker results or to get Report.sso if the hessian does not invert). +ADMB options can be added to the run when calling the SS3 executable from the command line. The most commonly used option is `ss3 -nohess` to skip standard errors (for quicker results or to get Report.sso if the hessian does not invert). -To list all command line options, use one of these calls: `SS -?` or `SS -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options). +To list all command line options, use one of these calls: `SS3 -?` or `SS3 -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options). -To run SS3 without estimation use: `ss -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `–nohess` option. +To run SS3 without estimation use: `ss3 -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `–nohess` option. 
## Using ss.par for initial values diff --git a/User_Guides/model_step_by_step/model_tutorial.Rmd b/User_Guides/model_step_by_step/model_tutorial.Rmd index 6e6d7344..8eee8c19 100644 --- a/User_Guides/model_step_by_step/model_tutorial.Rmd +++ b/User_Guides/model_step_by_step/model_tutorial.Rmd @@ -1,6 +1,6 @@ --- title: "Model building tutorial" -author: "SS Development Team" +author: "SS3 Development Team" date: "10/23/2019" output: word_document --- @@ -11,9 +11,9 @@ knitr::opts_chunk$set(echo = TRUE) # Scope -This is a tutorial illustrating how different data and parameters familiar to stock assessment scientists can be added to Stock Synthesis input files. We assume that these users have had previous population dynamics modeling experience and already understand how to run an existing SS model. +This is a tutorial illustrating how different data and parameters familiar to stock assessment scientists can be added to Stock Synthesis input files. We assume that these users have had previous population dynamics modeling experience and already understand how to run an existing SS3 model. -If you are a new SS user who is not yet comfortable running an SS model, we suggest trying to run a working example model using advice in the **Getting Started** document before attempting to develop and run your own model as outlined here. You can also get more general model building advice in the **Developing your first Stock Synthesis model** guide. +If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run a working example model using advice in the **Getting Started** document before attempting to develop and run your own model as outlined here. You can also get more general model building advice in the **Developing your first Stock Synthesis model** guide. Throughout this example, we use an even simpler version of the Stock Synthesis example model "Simple". To get the most out of this tutorial, it is best to download the model files to look at during the tutorial. It may also be useful to run the model and plot the results using the R package [r4ss](github.com/r4ss/r4ss). @@ -80,7 +80,7 @@ In the case of this example, data.ss is the name of the data file, while control This is where the data inputs are specified. At the top, general information about the model is specified: the model years, number of seasons, number of sexes, maximum age, number of areas, number of fleets: ```{R eval = FALSE} -#Stock Synthesis (SS) is a work of the U.S. Government and is not subject to copyright protection in the United States. +#Stock Synthesis (SS3) is a work of the U.S. Government and is not subject to copyright protection in the United States. #Foreign copyrights may apply. See copyright.txt for more information. 1971 #_StartYr 2001 #_EndYr @@ -129,7 +129,7 @@ Next, the catch is specified: -9999 0 0 0 0 ``` -The first line of the above code chunk shows the column headers for the catch data. Note that all catch comes from the fishery. The line `-999 1 1 0 0.01` specifies equilibirum catch for years before the model starts - in this case, there is no equilibrium catch because the catch column is 0. To terminate this catch data section the line `-9999 0 0 0 0` is needed. This tells SS that it can stop reading catch data. +The first line of the above code chunk shows the column headers for the catch data. Note that all catch comes from the fishery. 
The line `-999 1 1 0 0.01` specifies equilibirum catch for years before the model starts - in this case, there is no equilibrium catch because the catch column is 0. To terminate this catch data section the line `-9999 0 0 0 0` is needed. This tells SS3 that it can stop reading catch data. Next comes specification for indices of abundance. First is the setup for all of the fleets: @@ -161,7 +161,7 @@ Directly after its header, the indices of abundance data is included: -9999 1 1 1 1 # terminator for survey observations ``` -Like the catch data, a terminator line is needed to tell SS when to stop reading the indices. +Like the catch data, a terminator line is needed to tell SS3 when to stop reading the indices. Next, discards and mean body size data could be specified, but they are 0 in this example: ```{r eval = FALSE} @@ -221,7 +221,7 @@ Age composition data follows. First, the age bins and ageerror definitions are e 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5 10.5 11.5 12.5 13.5 14.5 15.5 16.5 17.5 18.5 19.5 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5 28.5 29.5 30.5 31.5 32.5 33.5 34.5 35.5 36.5 37.5 38.5 39.5 40.5 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 ``` -For the age bins, SS reads in the number (17 in this case) and then expects that number of inputs for the age bins (the 17 values below it). Next, SS reads the age error definitions. In this case, there is only 1 definition, so SS expects 2 vectors, each which contain the max number of ages + 1 values (41 values per vector in this case). The first line defines the *bias* for the aging error, while the second vector defines the *standard deviation* of the aging error. This example has no aging bias and very high aging precision (low standard deviation), so this is close to assuming no aging error. +For the age bins, SS3 reads in the number (17 in this case) and then expects that number of inputs for the age bins (the 17 values below it). Next, SS3 reads the age error definitions. In this case, there is only 1 definition, so SS3 expects 2 vectors, each which contain the max number of ages + 1 values (41 values per vector in this case). The first line defines the *bias* for the aging error, while the second vector defines the *standard deviation* of the aging error. This example has no aging bias and very high aging precision (low standard deviation), so this is close to assuming no aging error. Next comes the age composition setup lines: ```{r eval = FALSE} @@ -246,9 +246,9 @@ which includes the length bin method for ages. Finally, the age composition data -9999 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ``` -One important note here is the using Lbin_lo and Lbin_hi = -1 selects the entire length bin as being used for the ages. Similar to the length composition data, SS expect 1 value for females in each data bin, followed by values for males in each data bin (in this case, there are 34 values in the data vector) +One important note here is the using Lbin_lo and Lbin_hi = -1 selects the entire length bin as being used for the ages. 
Similar to the length composition data, SS3 expect 1 value for females in each data bin, followed by values for males in each data bin (in this case, there are 34 values in the data vector) -SS has some additional options that we have not used here and thus set to 0: +SS3 has some additional options that we have not used here and thus set to 0: ```{r eval = FALSE} 0 #_Use_MeanSize-at-Age_obs (0/1) # @@ -266,7 +266,7 @@ SS has some additional options that we have not used here and thus set to 0: 0 # Do dataread for selectivity priors(0/1) ``` -And finally, the data file must end in `999` to tell SS to stop reading. +And finally, the data file must end in `999` to tell SS3 to stop reading. ```{r eval = FALSE} 999 ``` @@ -277,11 +277,11 @@ The control file contains the setup for model parameter values (both fixed value 0 # 0 means do not read wtatage.ss; 1 means read and use wtatage.ss and also read and use growth parameters ``` -In this case, it is not being used, so is set to 0. If empirical weight at age were used, SS would ignore all inputs relating to growth, maturity, and fecundity that are specified later in the control file (although it does still expect inputs). +In this case, it is not being used, so is set to 0. If empirical weight at age were used, SS3 would ignore all inputs relating to growth, maturity, and fecundity that are specified later in the control file (although it does still expect inputs). Next are options for number of growth patterns and platoons. These are set to 1 because we assume the whole population is the same growth pattern, and there are not platoons within the growth patterns. ```{r eval = FALSE} -1 #_N_Growth_Patterns (Growth Patterns, Morphs, Bio Patterns, GP are terms used interchangeably in SS) +1 #_N_Growth_Patterns (Growth Patterns, Morphs, Bio Patterns, GP are terms used interchangeably in SS3) 1 #_N_platoons_Within_GrowthPattern ``` @@ -383,7 +383,7 @@ The parameter lines resulting from the natural mortality, growth, and maturity ( 1e-006 0.999999 0.5 0.5 0.5 0 -99 0 0 0 0 0 0 0 # FracFemale_GP_1 ``` -Note that the first line in the block of SS input above shows the column headers. All sections with long parameter lines within the control file have these same headings. There are a lot of specifications in these long parameter lines, but a few of particular note are: +Note that the first line in the block of SS3 input above shows the column headers. All sections with long parameter lines within the control file have these same headings. There are a lot of specifications in these long parameter lines, but a few of particular note are: - Anything with negative phase (7th value in a long parameter line) is not estimated and is set at the initial value (3rd value in the line), while positivie phases are estimated. - Natural mortality for both males and females is specified at 0.1. 
@@ -433,7 +433,7 @@ These define the main recruitment devitations, which in this case last from the
 1900 #_last_yr_nobias_adj_in_MPD; begin of ramp
 1900 #_first_yr_fullbias_adj_in_MPD; begin of plateau
 2001 #_last_yr_fullbias_adj_in_MPD
- 2002 #_end_yr_for_ramp_in_MPD (can be in forecast to shape ramp, but SS sets bias_adj to 0.0 for fcast yrs)
+ 2002 #_end_yr_for_ramp_in_MPD (can be in forecast to shape ramp, but SS3 sets bias_adj to 0.0 for fcast yrs)
 1 #_max_bias_adj_in_MPD (-1 to override ramp and set biasadj=1.0 for all estimated recdevs)
 0 #_period of cycles in recruitment (N parms read below)
 -5 #min rec_dev
@@ -441,7 +441,7 @@ These define the main recruitment devitations, which in this case last from the
 0 #_read_recdevs
 #_end of advanced SR options
 ```
-The advanced options allow the user to bias adjust the recruitment deviations. There is more on bias adjustment in the SS user manual, but the general idea is to account for the fact that earlier and later recruitment deviations likely have less information informing them than the ones in the middle. The bias adjustment ramp accounts for this and is typically "tuned" by looking at bias ramp in the model results after it is run, respecifying the bias ramp as needed, and rerunning the model.
+The advanced options allow the user to bias adjust the recruitment deviations. There is more on bias adjustment in the SS3 user manual, but the general idea is to account for the fact that earlier and later recruitment deviations likely have less information informing them than the ones in the middle. The bias adjustment ramp accounts for this and is typically "tuned" by looking at the bias ramp in the model results after it is run, respecifying the bias ramp as needed, and rerunning the model.
 
 Fishing mortality info is next specified:
 ```{r eval = FALSE}
@@ -544,7 +544,7 @@ Some special features (2DAR selectivity, tagging data, variance adjusment, lambd
 # 0 # (0/1) read specs for more stddev reporting
 ```
 
-Varaiance adjustment factors and/or lambdas can be used for data weighting, but in this case they have not yet been used. The control file then ends with 999 so that SS knows it can stop reading:
+Variance adjustment factors and/or lambdas can be used for data weighting, but in this case they have not yet been used. The control file then ends with 999 so that SS3 knows it can stop reading:
 ```{r eval = FALSE}
 999
 ```
@@ -559,7 +559,7 @@ After running the model, open the warning.sso file to check for any warnings fro
 N warnings: 0
 Number_of_active_parameters_on_or_near_bounds: 0
 ```
-which suggests that the model is not misspecified in a way that SS knows to warn about.
+which suggests that the model is not misspecified in a way that SS3 knows to warn about.
 
 Next, we want to quickly check for any evidence that the model did not converge. In Report.sso, underneath information about the data file and control file names is information about the convergence level:
 ```{r eval = FALSE}
@@ -603,9 +603,9 @@ SS_writestarter(starter, dir = mydir, overwrite = TRUE) # write modified starter
 ```
 Next, the jitter can be run:
 ```{r eval = FALSE}
-SS_RunJitter(mydir = "simpler", model = "ss", Njitter = 100)
+SS_RunJitter(mydir = "simpler", model = "ss3", Njitter = 100)
 ```
-The previous code assumes that the model directory `mydir` is called "simpler", which is a folder within the working directory. The `model` argument specifies the name of the ss executable, so in this case, it assumes that the SS executable is within the "simpler" folder and called "ss.exe".
Finally, `Njitter` tells the function how many times to run the function. For west coast stock assessment jitters, 100 runs is a common value to use, but note that the run time is not trivial (it depends on the model, but may take an hour or more to run).
+The previous code assumes that the model directory `mydir` is called "simpler", which is a folder within the working directory. The `model` argument specifies the name of the SS3 executable, so in this case, it assumes that the SS3 executable is within the "simpler" folder and called "ss3.exe". Finally, `Njitter` tells the function how many times to run the model. For west coast stock assessment jitters, 100 runs is a common value to use, but note that the run time is not trivial (it depends on the model, but may take an hour or more to run).
 
 After the jitter is run, the final likelihood values are the most important part of the results to look at. If the original model run has found a global minimum, you would expect all likelihood values from the jitter to be the same or higher than the original model run. If there are any likelihood values that are lower than the original model run, this indicates that the model run did not find a global minimum. Investigating the run or runs with lower likelihood values would be the next step in figuring out what the "final" model run will be.
 
diff --git a/User_Guides/ss_model_tips/ss_model_tips.Rmd b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd
similarity index 99%
rename from User_Guides/ss_model_tips/ss_model_tips.Rmd
rename to User_Guides/ss3_model_tips/ss3_model_tips.Rmd
index c50086b9..b986e98c 100644
--- a/User_Guides/ss_model_tips/ss_model_tips.Rmd
+++ b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd
@@ -19,7 +19,7 @@ knitr::opts_chunk$set(echo = FALSE)
 
 The Developing Your First SS3 Model guide teaches users how to develop a basic Stock Synthesis model. We assume that these users have had previous population dynamics modeling experience and already understand how to run an existing SS3 model.
 
-If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run an example working model using advice in the [Getting Started guide](https://nmfs-stock-synthesis.github.io/doc/Getting_Started_SS.html) before attempting to develop and run your own model as outlined here.
+If you are a new SS3 user who is not yet comfortable running an SS3 model, we suggest trying to run an example working model using advice in the [Getting Started guide](https://nmfs-stock-synthesis.github.io/doc/Getting_Started_SS3.html) before attempting to develop and run your own model as outlined here.
 
 By the end of using this guide, you should be able to:
 
diff --git a/_data_weighting.tex b/_data_weighting.tex
index 63733203..0129e29c 100644
--- a/_data_weighting.tex
+++ b/_data_weighting.tex
@@ -23,7 +23,7 @@ \subsection{Data Weighting}
 
 A convenient way to process these values into the format required by the control file is to use the function:
 
-\texttt{ SS\_tune\_comps(replist, option = ``MI'') }
+\texttt{SS\_tune\_comps(replist, option = ``MI'')}
 
 where the input ``replist'' is the object created by \texttt{SS\_output}. This function will return a table and also write a matching file called ``suggested\_tuning.ss'' to the directory where the model was run.
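For context, that workflow can be sketched end to end in a few lines of R. This is a minimal, hypothetical example, not part of the original manual text: it assumes the model has already been run in a folder named "simpler" and that a recent version of r4ss is installed.
```{r eval = FALSE}
library(r4ss)
# read output from a completed model run ("simpler" is a hypothetical folder)
replist <- SS_output(dir = "simpler")
# McAllister-Ianelli ("MI") tuning: returns a table of suggested variance
# adjustment factors and writes suggested_tuning.ss to the model directory
tuning <- SS_tune_comps(replist, option = "MI")
tuning
```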
@@ -39,7 +39,7 @@ \subsection{Data Weighting}
 
 \includegraphics[scale = 0.65]{appendixB_McAllister_Ianelli}\\
 \end{center}
- \caption{ The relationship between the observed sample size (the input sample number) versus the effective sample size where the effective sample size is the product of the input sample size and the data weighting applied to the data set. }
+ \caption{The relationship between the observed sample size (the input sample number) versus the effective sample size where the effective sample size is the product of the input sample size and the data weighting applied to the data set.}
 \label{(fig:mcallister)}
 \end{figure}
 
diff --git a/docs/index.md b/docs/index.md
index 1b81fd7b..64a9372f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,8 +1,8 @@
 # Stock Synthesis Documentation
 ## Links to Documentation
-* [Getting Started Tutorial](Getting_Started_SS.html)
-* [Building Your First SS3 Model Tutorial](ss_model_tips.html)
+* [Getting Started Tutorial](Getting_Started_SS3.html)
+* [Building Your First SS3 Model Tutorial](ss3_model_tips.html)
 * [Current User Manual (html)](SS330_User_Manual_release.html)
 * [Current User Manual (pdf)](https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/download/v3.30.21/SS330_User_Manual.pdf)
 
diff --git a/tv_parameter_description.tex b/tv_parameter_description.tex
index 4e865055..b4f7591b 100644
--- a/tv_parameter_description.tex
+++ b/tv_parameter_description.tex
@@ -60,7 +60,7 @@ \subsubsection{Specification of Time-Varying Parameters: Long Parameter Lines}
 \item $P_1 = P_{min} + \frac{R}{1 + e^{-Y_y - X_y }}$. For years after the first year.
 \end{itemize}
 \item 6 = mean reverting random walk with penalty to keep the root mean squared error (RMSE) near 1.0. Same as case 4, but with penalty applied.
- \item The option of extending the final model year deviation value subsequent years (i.e., into the forecast period) was added in v. 3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g, 25), which will use the final year deviation value for all forecast years.
+ \item The option of extending the final model year deviation value to subsequent years (i.e., into the forecast period) was added in v.3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g., 25), which will use the final year deviation value for all forecast years.
\end{itemize} where: \begin{itemize} @@ -127,9 +127,9 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines} For example, if two parameters were specified to have environmental linkages in the MG parameter section, below the MG parameters would be two parameter lines (when not auto-generating these lines), which is an environmental linkage parameter for each time-varying base parameter: -\begin{longtable}{ p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm} } +\begin{longtable}{p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endfirsthead @@ -145,8 +145,8 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines} \endlastfoot \multicolumn{7}{l}{COND: Only if MG parameters are time-varying} \Tstrut\\ - -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_1\_Fem\_ENV\_add\Tstrut\\ - -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_2\_Fem\_ENV\_add\Bstrut\\ + -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_1\_Fem\_ENV\_add \Tstrut\\ + -99 & 99 & 1 & 0 & 0.01 & 0 & -1 &\#Wtlen\_2\_Fem\_ENV\_add \Bstrut\\ \hline \end{longtable} @@ -155,9 +155,9 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines} \begin{center} \begin{longtable}{p{5cm} p{10cm}} \hline - MG base parameter 3 & Block parameter 3-1\Tstrut\\ - & Block parameter 3-2\\ - & Environmental link parameter 3-1\\ + MG base parameter 3 & Block parameter 3-1 \Tstrut\\ + & Block parameter 3-2 \\ + & Environmental link parameter 3-1 \\ & Deviation se parameter 3 \\ & Deviation $\rho$ parameter 3 \Bstrut\\ MG base parameter 7 & Block parameter 7-1 \\ @@ -195,7 +195,7 @@ \subsubsection{Example Time-varying Parameter Setups} \myparagraph{Time Blocks} \begin{itemize} - \item Offset approach: One or more time blocks are created and cover all or a subset of the years. Each block gets a parameter that is used as an offset from the base parameter (time block functional form 1). In this situation, typically the base parameter and each of the offset parameters are estimated. In years not covered by blocks, the base parameter alone is used. However, if blocks cover all the years, then the value of the block parameter is completely correlated with the mean of the block offsets, so model convergence and variance estimation could be affected. The recommended approach when using offsets is to not have all years covered by blocks or to fix the base parameter value at a reasonable level when doing offsets for all years. + \item Offset approach: One or more time blocks are created and cover all or a subset of the years. Each block gets a parameter that is used as an offset from the base parameter (time block functional form 1). In this situation, typically the base parameter and each of the offset parameters are estimated. In years not covered by blocks, the base parameter alone is used. However, if blocks cover all the years, then the value of the block parameter is completely correlated with the mean of the block offsets, so model convergence and variance estimation could be affected. The recommended approach when using offsets is to not have all years covered by blocks or to fix the base parameter value at a reasonable level when doing offsets for all years. \item Replacement approach, Option A: Time blocks are created which cover a subset of the years. 
The base parameter is used in the non-block years and the value of the base parameter is replaced by the block parameter in each respective block (time block functional form 2). In this situation, typically the base parameter and each of the block parameters are estimated. @@ -207,27 +207,27 @@ \subsubsection{Example Time-varying Parameter Setups} \begin{itemize} \item Suppose natural mortality was thought to increase from 0.1 to 0.2 during 2000 to 2010. This could be input as a trend. First, the natural mortality parameter would be fixed at an initial value of 0.1. Then, a value of -2 could be input into the ``use block'' column of the natural mortality long parameter line to indicate that the direct input option for trends should be used. The long parameter line for M could look like: \begin{center} - \begin{longtable}{p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{3cm}} + \begin{longtable}{p{1cm} p{1cm} p{1cm} p{1.5cm} p{1cm} p{1.5cm} p{1.5cm} p{1.5cm} p{3cm}} \hline - LO \Tstrut & HI & INIT & & PHASE & & Use\_Block & Block Fxn & Parameter Label\Bstrut\\ + LO \Tstrut & HI & INIT & & PHASE & & Use\_Block & Block Fxn & Parameter Label \Bstrut\\ \hline - 0 & 4 & 0.1 & \multicolumn{1}{c}{...} & -1 & \multicolumn{1}{c}{...} & -2 & 0 & \#M \Bstrut\\ + 0 & 4 & 0.1 & \multicolumn{1}{c}{...} & -1 & \multicolumn{1}{c}{...} & -2 & 0 & \#M \Bstrut\\ \hline \end{longtable} \end{center} \item Three short parameter lines are then expected after the mortality-growth long parameter lines, one for the final value, one for the inflection year and one for the width. The final value could be fixed by using 0.2 as the final value on the short parameter line and a negative phase value. The inflection year could be fixed at 2005 by inputting 2005 for the inflection year in the short parameter line with a negative phase. Finally, the width value (i.e., standard deviation of the cumulative normal distribution) could be set at 3 years. 
The short parameter lines could look like: - \begin{longtable}{ p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} + \begin{longtable}{p{0.7cm} p{0.7cm} p{0.7cm} p{1cm} p{1.4cm} p{1cm} p{1cm} p{6.7cm}} \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endfirsthead \hline - & & & Prior & Prior & Prior & & \Tstrut\\ + & & & Prior & Prior & Prior & & \Tstrut\\ LO & HI & INIT & Value & SD & Type & Phase & Parameter Label \Bstrut\\ \hline \endhead @@ -236,9 +236,9 @@ \subsubsection{Example Time-varying Parameter Setups} \endlastfoot - 0.001 & 4 & 0.2 & 0 & 0.01 & 0 & -1 &\#M\_TrendFinal\Tstrut\\ - 1999 & 2011 & 2005 & 0 & 0.01 & 0 & -1 &\#M\_TrendInfl\Bstrut\\ - -99 & 99 & 3 & 0 & 0.01 & 0 & -1 &\#M\_TrendWidth\_yrs\Bstrut\\ + 0.001 & 4 & 0.2 & 0 & 0.01 & 0 & -1 & \#M\_TrendFinal \Tstrut\\ + 1999 & 2011 & 2005 & 0 & 0.01 & 0 & -1 & \#M\_TrendInfl \Bstrut\\ + -99 & 99 & 3 & 0 & 0.01 & 0 & -1 & \#M\_TrendWidth\_yrs \Bstrut\\ \hline \end{longtable} \end{itemize} From 0ccca46b0084883d7a20a6461b0756d85bb66995 Mon Sep 17 00:00:00 2001 From: e-gugliotti-NOAA Date: Thu, 12 Oct 2023 12:06:58 -0400 Subject: [PATCH 2/8] fix multicolumn issue --- 8data.tex | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/8data.tex b/8data.tex index 65d7f44d..f849b628 100644 --- a/8data.tex +++ b/8data.tex @@ -196,7 +196,7 @@ \subsection{Fleet Definitions} \item 1 = fleet with input catches; \item 2 = bycatch fleet (all catch discarded) and invoke extra input for treatment in equilibrium and forecast; \item 3 = survey: assumes no catch removals even if associated catches are specified below. If you would like to remove survey catch set fleet type to option = 1 with specific month timing for removals (defined below in the ``Timing'' section); and - \item 4 = predator (M2) fleet that adds additional mortality without a fleet F (added in version 3.30.18). Ideal for modeling large mortality events such as fish kills or red tide. Requires additional long parameter lines for a second mortality component (M2) in the control file after the natural mortality/growth parameter lines (entered immediately after the fraction female parameter line). + \item 4 = predator (M2) fleet that adds additional mortality without a fleet F (added in v.3.30.18). Ideal for modeling large mortality events such as fish kills or red tide. Requires additional long parameter lines for a second mortality component (M2) in the control file after the natural mortality/growth parameter lines (entered immediately after the fraction female parameter line). 
\end{itemize}
 
 \hypertarget{ObsTiming}{}
@@ -624,7 +624,7 @@ \subsection{Population Length Bins}
 \subsection{Length Composition Data Structure}
 
 \begin{tabular}{p{2cm} p{14cm}}
- \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used: \Tstrut\Bstrut}\\
+ \multicolumn{2}{l}{Enter a code to indicate whether or not length composition data will be used:} \Tstrut\Bstrut\\
 \hline
 1 & Use length composition data (0/1/2) \Tstrut\Bstrut\\
 \hline
@@ -826,7 +826,7 @@ \subsubsection{Ageing Error}
 \hline
 \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate} \Tstrut\Bstrut\\
 \hline \\
- \multicolumn{6}{l}{Example with no bias and very little uncertainty at age Tstrut\Bstrut\\
+ \multicolumn{6}{l}{Example with no bias and very little uncertainty at age} \Tstrut\Bstrut\\
 \hline
 Age-0 & Age-1 & Age-2 & ... & Max Age & \Tstrut\Bstrut\\
 \hline
@@ -979,7 +979,7 @@ \subsection{Environmental Data}
 \multicolumn{4}{l}{Parameter values can be a function of an environmental data series:} \\
 \hline
 1 & \multicolumn{3}{l}{Number of environmental variables} \Tstrut\Bstrut\\
- \multicolumn{4}{l}{ The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or} \\
+ \multicolumn{4}{l}{The environmental data can be centered by subtracting the mean and dividing by stdev (z-score, -1) or} \\
 \multicolumn{4}{l}{by subtracting the mean of the environmental variable (-2) based on the year column value.} \\
 \hline
 \multicolumn{4}{l}{COND > 0 Example of 2 environmental observations:} \Tstrut\\
@@ -1036,7 +1036,7 @@ \subsection{Generalized Size Composition Data}
 \multicolumn{2}{r}{1e-9 1e-9} & Min compression to add to each observation (entry for each method) \\
 \multicolumn{2}{r}{2 2} & Number of observations per weight frequency method \Bstrut\\
 \hline
- \multicolumn{3}{l}{COND < 0 - Number of size frequency } \Tstrut\\
+ \multicolumn{3}{l}{COND < 0 - Number of size frequency} \Tstrut\\
 \multicolumn{2}{r}{1 1} & Composition error structure (0 = multinomial, 1 = Dirichlet using Theta*n, 2 = Dirichlet using beta, 3 = MV Tweedie) \Tstrut\\
 \multicolumn{2}{r}{1 1} & Parameter select consecutive index for Dirichlet or MV Tweedie composition error \Bstrut\\
 \multicolumn{3}{l}{END COND < 0} \Tstrut\\
@@ -1106,7 +1106,7 @@ \subsection{Tag-Recapture Data}
 & & \multicolumn{7}{l}{to expected recoveries (0 = release period).} \Bstrut\\
 \hline
 & 10 & \multicolumn{7}{l}{Max periods (seasons) to track recoveries, after which tags enter} \Tstrut\\
- & & \multicolumn{7}{l}{ accumulator} \Bstrut\\
+ & & \multicolumn{7}{l}{accumulator} \Bstrut\\
 \hline
 \multicolumn{9}{l}{COND = 2} \Tstrut\\
 & 2 & \multicolumn{7}{l}{Minimum recaptures.
The number of recaptures >= mixperiod must be} \\
@@ -1150,7 +1150,7 @@ \subsection{Stock (Morph) Composition Data}
 \begin{tabular}{p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{1.1cm} p{3.5cm}}
 \multicolumn{8}{l}{Stock composition by growth pattern (morph) data can be entered as follows:} \\
 \hline
- 1 & \multicolumn{7}{l}{Do morph composition, if zero, then do not enter any further input below.}\Tstrut\Bstrut\\
+ 1 & \multicolumn{7}{l}{Do morph composition, if zero, then do not enter any further input below.} \Tstrut\Bstrut\\
 \hline
 \multicolumn{8}{l}{COND = 1} \Tstrut\\
 & 3 & \multicolumn{6}{l}{Number of observations} \Bstrut\\

From f0064b96fa4cd7f754a5fce031d91d3bcd86dd31 Mon Sep 17 00:00:00 2001
From: e-gugliotti-NOAA
Date: Thu, 12 Oct 2023 17:29:27 -0400
Subject: [PATCH 3/8] fix table placement

---
 6starter.tex | 9 +++++----
 7forecast.tex | 1 -
 8data.tex | 21 +++++++++++++++------
 3 files changed, 20 insertions(+), 11 deletions(-)

diff --git a/6starter.tex b/6starter.tex
index 65055dd4..82439cc4 100644
--- a/6starter.tex
+++ b/6starter.tex
@@ -239,20 +239,20 @@ \subsection{Starter File Options (starter.ss)}
 % & & \\
 \hline
- \hypertarget{ALK}{0} & Age-length-key (ALK) tolerance level, enter 0; & effect is disabled in code. \Tstrut\Bstrut\\
+ \hypertarget{ALK}{0} & Age-length-key (ALK) tolerance level & This effect is disabled in code, enter 0. \Tstrut\Bstrut\\
 % \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Value of 0 will not apply any compression. Values > 0 (e.g., 0.0001) will apply compression to the ALK which will increase the speed of calculations. The size of this value will impact the run time of your model, but one should be careful to ensure that the value used does not appreciably impact the estimated quantities relative to no compression of the ALK. The suggested value if applied is 0.0001.}} \Tstrut\Bstrut\\
 % & & \\
 % & & \Tstrut\\
 % & & \Tstrut\Bstrut\\
 \hline
- \multicolumn{2}{l}{COND: Seed Value (i.e., 1234)}& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify a seed for data generation. This feature is not available in versions prior to 3.30.15 This is an optional input value allowing for the specification of a random number seed value. If you do not want to specify a seed, skip this input line and end the starter file with the check value (3.30).}} \Tstrut\Bstrut\\
+ \multicolumn{2}{l}{COND: Seed Value (i.e., 1234)}& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Specify a seed for data generation. This feature is not available in versions prior to v.3.30.15. This is an optional input value allowing for the specification of a random number seed value. If you do not want to specify a seed, skip this input line and end the starter file with the check value (3.30).}} \Tstrut\Bstrut\\
 & & \Bstrut\\
 & & \Bstrut\\
 % \pagebreak
 \hline
- \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v3.30 format and a value of 999 indicates that the control and data files are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the 3.24 version files to the new format for the control.ss\_new and data\_echo.ss\_new files. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line. The mortality-growth parameter section has a new sequence and SS3 v.3.30 cannot read a ss.par file produced by SS3 v.3.24 and earlier, so please ensure that read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\
+ \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v.3.30 format. A value of 999 indicates that the control and data files are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the v.3.24 files to the new format in the control.ss\_new and data\_echo.ss\_new files. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line.
The mortality-growth parameter section has a new sequence and SS3 v.3.30 cannot read a ss.par file produced by SS3 v.3.24 and earlier, so ensure that the read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\
 & & \\
 & & \\
 & & \\
@@ -260,7 +260,8 @@ \subsection{Starter File Options (starter.ss)}
 & & \\
 & & \\
 & & \\
- & & \\
+ % & & \\
+ 
 \end{longtable}
 \end{landscape}
 }
diff --git a/7forecast.tex b/7forecast.tex
index e536dd0b..83beac0d 100644
--- a/7forecast.tex
+++ b/7forecast.tex
@@ -348,7 +348,6 @@ \subsection{Benchmark Calculations}
 \myparagraph{Calculations}
 The calculation of equilibrium biomass and catch uses the same code that is used to calculate the virgin conditions and the initial equilibrium conditions. This equilibrium calculation code takes into account all morph, timing, biology, selectivity, and movement conditions as they apply while doing the time series calculations. You can verify this by running SS3 to calculate F\textsubscript{MSY}, then hardwiring initial F to equal this value, using F\_method approach 2 so each annual F is equal to F\textsubscript{MSY}, and then setting forecast F to the same F\textsubscript{MSY}. Then run SS3 without estimation and no recruitment deviations. You should see that the population has an initial equilibrium abundance equal to B\textsubscript{MSY} and stays at this level during the time series and forecast.
-\pagebreak
 
 \myparagraph{Catch Units}
 For each fleet, SS3 always calculates catch in terms of biomass (mt) and numbers (1000s) for encountered (selected) catch, dead catch, and retained catch. These three categories differ only when some fleets have discarding or are designated as a bycatch fleet. SS3 uses total dead catch biomass as the quantity that is principally reported and the quantity that is optimized when searching for F\textsubscript{MSY}. The quantity ``dead catch'' may occasionally be referred to as ``yield''.
 
diff --git a/8data.tex b/8data.tex
index f849b628..74af0186 100644
--- a/8data.tex
+++ b/8data.tex
@@ -75,7 +75,7 @@ \subsubsection{Subseasons and Timing of Events}
 The treatment of subseasons in SS3 provides more precision in the timing of events compared to earlier model versions. In early versions, v.3.24 and before, there were effectively only two subseasons per season because the age-length-key (ALK) for each observation used the mid-season mean length-at-age and spawning occurred at the beginning of a specified season.
Time steps can be broken into subseasons and the ALK can be calculated multiple times over the course of a year:
-
+\vspace*{-\baselineskip}
 \begin{center}
 \begin{tabular}{|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|p{2.37cm}|}
 \hline
@@ -83,7 +83,7 @@ \subsubsection{Subseasons and Timing of Events}
 \hline
 Subseason 1 & Subseason 2 & Subseason 3 & Subseason 4 & Subseason 5 & Subseason 6 \Tstrut\Bstrut\\
 \hline
- \multicolumn{6}{l}{ALK* only re-calculated when there is a survey that subseason }\Tstrut\Bstrut\\
+ \multicolumn{6}{l}{ALK* only re-calculated when there is a survey in that subseason} \Tstrut\Bstrut\\
 \end{tabular}
 \end{center}
 
@@ -167,6 +167,7 @@ \subsection{Model Dimensions}
 2 \Tstrut & Total number of fishing and survey fleets (which now can be in any order).\\
 \hline
 \end{longtable}
+ \vspace*{-1.7\baselineskip}
 \end{center}
 
@@ -243,6 +244,7 @@ \subsection{Bycatch Fleets}
 \noindent If a fleet above was set as a bycatch fleet (fleet type = 2), the following line is required:
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{2.25cm} p{2.65cm} p{2.25cm} p{2.5cm} p{2.5cm} p{2cm}}
 \multicolumn{6}{l}{Bycatch fleet input controls:} \\
 
@@ -344,6 +346,7 @@ \subsection{Catch}
 The format for a 2 season model with 2 fisheries looks like the table below. The example is sorted by fleet, but the sort order does not matter. In data.ss\_new, the sort order is fleet, year, season.
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{3cm} p{3cm} p{3cm} p{3cm} p{4cm}}
 \multicolumn{5}{l}{Catches by year, season for every fleet:} \\
 \hline
 
@@ -376,6 +379,7 @@ \subsection{Indices}
 Indices are data that are compared to aggregate quantities in the model. Typically the index is a measure of selected fish abundance, but this data section also allows for the index to be related to a fishing fleet's F, or to another quantity estimated by the model. The first section of the ``Indices'' setup contains the fleet number, units, error distribution, and whether additional output (SD Report) will be written to the Report file for each fleet that has index data.
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{3cm} p{3cm} p{3cm} p{7cm}}
 \multicolumn{4}{l}{Catch-per-unit-effort (CPUE) and Survey Abundance Observations:} \\
 \hline
 
@@ -749,6 +753,7 @@ \subsection{Length Composition Data}
 Example of a single length composition observation:
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{5cm}}
 \multicolumn{7}{l}{} \\
 \hline
 
@@ -799,6 +804,7 @@ \subsection{Age Composition Option}
 The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, the model skips reading the bin structure and all other age composition data inputs.
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{3cm} p{13cm}}
 \hline
 17 \Tstrut & Number of age bins; can be equal to 0 if age data are not used; do not include a vector of agebins if the number of age bins is set equal to 0. \Bstrut\\
 
@@ -810,6 +816,7 @@ \subsubsection{Age Composition Bins}
 If a positive number of age bins is read, the bin definition is read next.
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{3cm} p{13cm}}
 \hline
 1 2 3 ...
20 25 & Vector of ages \Tstrut\Bstrut\\
@@ -822,6 +829,7 @@ \subsubsection{Ageing Error}
 Here, a distribution of age (e.g., age with possible bias and imprecision) is created from true age. One or many ageing error definitions can be created. For each, the model will expect an input vector of mean age and a vector of standard deviations associated with the mean age.
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{3.5cm} p{2.5cm}}
 \hline
 \multicolumn{1}{l}{2} & \multicolumn{5}{l}{Number of ageing error matrices to generate} \Tstrut\Bstrut\\
@@ -894,8 +902,6 @@ \subsubsection{Age Composition Specification}
 Syntax for Sex, Partition, and the data vector are the same as for length. The data vector has female values then male values, just as for the length composition data.
 
-\pagebreak
-
 \myparagraph{Age Error}
 Age error (Age Err) identifies which ageing error matrix to use to generate the expected value for this observation.
 
@@ -922,6 +928,7 @@ \subsection{Conditional Age-at-Length}
 Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets of other years within a fleet. To treat age data as conditional on length, Lbin\_lo and Lbin\_hi are used to select a subset of the total size range. This is different from setting Lbin\_lo and Lbin\_hi both to -1 to select the entire size range, which treats the data entered on this line within the age composition data section as marginal age composition data.
 
+\vspace*{-\baselineskip}
 \begin{tabular}{p{0.9cm} p{1cm} p{0.9cm} p{0.9cm} p{1.5cm} p{0.9cm} p{0.9cm} p{0.9cm} p{1cm} p{2.4cm}}
 \multicolumn{10}{l}{} \\
 \multicolumn{10}{l}{An example of conditional age-at-length composition observations:} \\
@@ -975,6 +982,7 @@ \subsection{Environmental Data}
 The model accepts input of time series of environmental data. Parameters can be made to be time-varying by making them a function of one of these environmental time series. In v.3.30.16, an option was added to specify the centering of environmental data, either by z-score or by subtracting the mean of the environmental variable.
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{1cm} p{3cm} p{3cm} p{7.5cm}}
 \multicolumn{4}{l}{Parameter values can be a function of an environmental data series:} \\
 \hline
@@ -1140,7 +1148,7 @@ \subsection{Tag-Recapture Data}
 \item values are placeholders and are replaced by program generated values for model time.
 \item Analysis of the tag-recapture data has one negative log likelihood component for the distribution of recaptures across areas and another negative log likelihood component for the decay of tag recaptures from a group over time. Note the decay of tag recaptures from a group over time suggests information about mortality is available in the tag-recapture data. More on this is in the \hyperlink{tagrecapture}{control file documentation}.
 \item Do tags option 2 adds an additional input compared to do tags option 1, minimum recaptures. Minimum recaptures allows the user to exclude tag groups that have few recaptures after the mixing period from the likelihood. This may be useful when few tags from a group have been recaptured as an alternative to manually removing the groups with these low numbers of recaptured tags from the tagging data.
- \item Warning for earlier versions of SS3: A shortcoming in the recapture calculations when also using Pope's F approach was identified and corrected in version 3.30.14.
+ \item Warning for earlier versions of SS3: A shortcoming in the recapture calculations when also using Pope's F approach was identified and corrected in v.3.30.14.
 \end{itemize}
 
 \subsection{Stock (Morph) Composition Data}
@@ -1186,7 +1194,7 @@ \subsection{Selectivity Empirical Data (future feature)}
 \multicolumn{9}{l}{Selectivity data feature is under development for a future option and is not yet implemented.} \\
 \multicolumn{9}{l}{The input line still must be specified as follows:} \\
 \hline
- 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)}  \Tstrut\Bstrut\\
+ 0 & \multicolumn{8}{l}{Do data read for selectivity (future option)} \Tstrut\Bstrut\\
 \hline
 %& Year & Month & Fleet & Age/Size & Bin \# & Datum & Datum SE\Tstrut\Bstrut\\
 %\hline
@@ -1221,6 +1229,7 @@ \subsection{Data Super-Periods}
 Not all time steps within the extent of a super-period need be included. For example, in a three season model, a super-period could be set up to combine information from season 2 across 3 years, e.g., skip over season 1 and season 3 for the purposes of calculating the expected value for the super-period. The key is to create a dummy observation (negative fleet value) for all time steps, except 1, that will be included in the super-period and to include one real observation (positive fleet value; which contains the real combined data from all the specified time steps).
 
 \begin{center}
+ \vspace*{-\baselineskip}
 \begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{9cm}}
 \multicolumn{6}{l}{Super-period example:} \\
 \hline

From 9a58a7e00ad8652b1550fb6a78a90a9095987ae7 Mon Sep 17 00:00:00 2001
From: e-gugliotti-NOAA
Date: Fri, 13 Oct 2023 11:22:27 -0400
Subject: [PATCH 4/8] more table adjustments

---
 9control.tex | 78 ++++++++++++++++++++++++++++------------------------
 1 file changed, 42 insertions(+), 36 deletions(-)

diff --git a/9control.tex b/9control.tex
index b71011df..6dd8f13f 100644
--- a/9control.tex
+++ b/9control.tex
@@ -54,11 +54,11 @@ \subsection{Parameter Line Elements}
 \hline
 Column & Element & Description \Tstrut\Bstrut\\
 \hline
- 1 & LO & Minimum value for the parameter\Tstrut\\
- 2 & HI & Maximum value for the parameter\Tstrut\\
+ 1 & LO & Minimum value for the parameter \Tstrut\\
+ 2 & HI & Maximum value for the parameter \Tstrut\\
 3 \Tstrut & INIT & Initial value for the parameter. If the phase (described below) for the parameter is negative the parameter is fixed at this value. If the ss.par file is read, it overwrites these INIT values.\\
 4 \Tstrut & PRIOR & Expected value for the parameter. This value is ignored if the prior type is 0 (no prior) or 1 (symmetric beta). If the selected prior type (described below) is lognormal, this value is entered in log space. \\
- 5 \Tstrut & PRIOR SD & Standard deviation for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The standard deviation is in regular space regardless of the prior type.\\
+ 5 \Tstrut & PRIOR SD & Standard deviation for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The standard deviation is in regular space regardless of the prior type.
\\ 6 \Tstrut & \hyperlink{PriorDescrip}{PRIOR TYPE} & 0 = none; \\ & & 1 = symmetric beta; \\ & & 2 = full beta; \\ @@ -177,12 +177,14 @@ \subsubsection{Settlement Timing for Recruits and Distribution} \hline 1 & 5.5 & 1 & 0 \Bstrut\\ \hline -\end{longtable} +\end{longtable} +\vspace*{-\baselineskip} The above example specifies settlement to mid-May (month 5.5). Note that normally the calendar age at settlement is 0 if settlement happens between the time of spawning and the end of that year, and at age 1 if settlement is in the year after spawning. Below is an example set-up where there are multiple settlement events, with one occurring the following year after spawning: \begin{center} + \vspace*{-\baselineskip} \begin{tabular}{p{3cm} p{3cm} p{2cm} p{7cm}} \hline 3 & \multicolumn{3}{l}{Number of recruitment settlement events} \Tstrut\\ @@ -270,6 +272,8 @@ \subsubsection{Movement} \\ \hline \end{longtable} + \vspace*{-\baselineskip} + Two parameters will be entered later for each growth pattern, area pair, and season. \begin{itemize} @@ -317,6 +321,7 @@ \subsubsection{Time Blocks} & \multirow{1}{2cm}[-0.1cm]{1999 2002} & \multirow{1}{12cm}[-0.10cm]{Beginning and ending years for blocks in design 3.} \Bstrut\\ \hline \end{longtable} +\vspace*{-\baselineskip} Blocks and other time-vary parameter controls are operative during forecast years, so care should be taken when setting the end year of the last block in a pattern. If that end year is set to the last year in the time series, then the parameter will revert to the base value for the forecast. If the user wants to continue the last block through the forecast, it is advisable to set the last block's end year value to -2 to cause SS3 to reset it to the last year of the forecast. Using the value -1 will set the block's end year to the last year of the time series and leave the forecast at the base parameter value. Note that additional controls on time-varying parameters in forecast years are in the forecast section. @@ -510,7 +515,7 @@ \subsubsection{Growth} \myparagraph{Growth cessation} A growth cessation model was developed for the application to tropical tuna species \citep{maunder-growth-2018}. Growth cessation allows for a linear relationship between length and age, followed by a marked reduction of growth after the onset of sexual maturity by assuming linear growth for the youngest individuals and then a logistic function to model the decreasing growth rate at older ages. - +\vspace*{-\baselineskip} \begin{longtable}{p{0.5cm} p{2cm} p{12.5cm}} \multicolumn{3}{l}{Example growth specifications:} \Tstrut\Bstrut\\ \hline @@ -632,7 +637,7 @@ \subsubsection{Maturity-Fecundity} \hline \end{longtable} -\pagebreak +% \pagebreak \subsubsection{Hermaphroditism} @@ -671,6 +676,7 @@ \subsubsection{Hermaphroditism} & & 1 = simple addition of males to females. 
\Bstrut\\ \hline \end{longtable} +\vspace*{-\baselineskip} The hermaphroditism option requires three full parameter lines in the mortality growth section: \begin{enumerate} @@ -1162,6 +1168,7 @@ \subsubsection{Spawner-Recruitment Parameter Setup} \hline \end{longtable} \end{center} +\vspace*{-1.7\baselineskip} \subsubsection{Spawner-Recruitment Time-Varying Parameters} @@ -1974,9 +1981,9 @@ \subsubsection{Selectivity Pattern Details} For a 3 node setup, the input parameters would be: \begin{itemize} - \item p1 - Code for initial set-up which controls whether or not auto-generation is applied (input options are 0, 1, 2, 10, 11, or 12) as explained below - \item p2 - Gradient at the first node (should be a small positive value, or fixed at 1e30 to implement a ``natural cubic spline'') - \item p3 - Gradient at the last node (should be zero, a small negative value, or fixed at 1e30 to implement a ``natural cubic spline'') + \item p1 - Code for initial set-up which controls whether or not auto-generation is applied (input options are 0, 1, 2, 10, 11, or 12) as explained below + \item p2 - Gradient at the first node (should be a small positive value, or fixed at 1e30 to implement a ``natural cubic spline'') + \item p3 - Gradient at the last node (should be zero, a small negative value, or fixed at 1e30 to implement a ``natural cubic spline'') \item p4-p6 - The nodes in units of cm; must be in rank order and inside of the range of the population length bins. These must be held constant (not estimated, e.g., negative phase value) during a model run. \item p7-p9 - The values at the nodes. Units are ln(selectivity) before rescaling. \end{itemize} @@ -2454,17 +2461,16 @@ \subsection{Tag Recapture Parameters} \subsection{Variance Adjustment Factors} When doing iterative re-weighting of the input variance factors, it is convenient to do this in the control file, rather than the data file. This section creates that capability. 
+\begin{longtable}{p{3cm} p{3cm} p{2.5cm} p{6.25cm}} -\begin{longtable}{p{3cm} p{3cm} p{2.5cm} p{6.25cm} } - - \multicolumn{4}{l}{Read variance adjustment factors to be applied:}\\ + \multicolumn{4}{l}{Read variance adjustment factors to be applied:} \\ \hline Factor & Fleet & Value & Description \Tstrut\Bstrut\\ \hline 1 & 2 & 0.5 & \# Survey CV for survey/fleet 2 \Tstrut\\ 4 & 1 & 0.25 & \# Length data for fleet 1 \\ - 4 & 2 & 0.75 & \# Length data for fleet 2\\ - -9999 & 0 & 0 & \# End read\Bstrut\\ + 4 & 2 & 0.75 & \# Length data for fleet 2 \\ + -9999 & 0 & 0 & \# End read \Bstrut\\ \hline \end{longtable} @@ -2524,9 +2530,9 @@ \subsection{Lambdas (Emphasis Factors)} \begin{longtable}{p{3cm} p{3cm} p{2cm} p{3cm} p{3cm}} - \multicolumn{5}{l}{Read the lambda adjustments by fleet and data type:}\\ + \multicolumn{5}{l}{Read the lambda adjustments by fleet and data type:} \\ \hline - Likelihood & & & Lambda & SizeFreq\Tstrut\\ + Likelihood & & & Lambda & SizeFreq \Tstrut\\ Component & Fleet & Phase & Value & Method \Bstrut\\ \hline 1 & 2 & 2 & 1.5 & 1 \Tstrut\\ @@ -2542,13 +2548,13 @@ \subsection{Lambdas (Emphasis Factors)} \multicolumn{2}{l}{The codes for component are:}\\ \hline 1 = survey & 10 = recruitment deviations \Tstrut\\ - 2 = discard & 11 = parameter priors\\ - 3 = mean weight & 12 = parameter deviations\\ - 4 = length & 13 = crash penalty\\ - 5 = age & 14 = morph composition\\ - 6 = size frequency & 15 = tag composition\\ - 7 = size-at-age & 16 = tag negative binomial\\ - 8 = catch & 17 = F ballpark\\ + 2 = discard & 11 = parameter priors \\ + 3 = mean weight & 12 = parameter deviations \\ + 4 = length & 13 = crash penalty \\ + 5 = age & 14 = morph composition \\ + 6 = size frequency & 15 = tag composition \\ + 7 = size-at-age & 16 = tag negative binomial \\ + 8 = catch & 17 = F ballpark \\ 9 = initial equilibrium catch (see note below) & 18 = regime shift \Bstrut\\ \hline \end{longtable} @@ -2564,15 +2570,15 @@ \subsection{Controls for Variance of Derived Quantities} \begin{longtable}{p{1.1cm} p{1.4cm} p{1.2cm} p{1.2cm} p{1.3cm} p{1.6cm} p{1.4cm} p{1.4cm} p{1.4cm}} \hline - \multicolumn{3}{l}{Typical Value} & \multicolumn{6}{l}{Description and Options}\Tstrut\Bstrut\\ + \multicolumn{3}{l}{Typical Value} & \multicolumn{6}{l}{Description and Options} \Tstrut\Bstrut\\ \hline \endfirsthead \multicolumn{3}{l}{0} & \multicolumn{6}{l}{0 = No additional std dev reporting;} \Tstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{1 = read specification for reporting stdev for selectivity, size, numbers; and}\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{2 = read specification for reporting stdev for selectivity, size, numbers, }\Bstrut\\ - \multicolumn{3}{l}{ } & \multicolumn{6}{l}{natural mortality, dynamic B0, and Summary Bio}\Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{1 = read specification for reporting stdev for selectivity, size, numbers; and} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{2 = read specification for reporting stdev for selectivity, size, numbers,} \Bstrut\\ + \multicolumn{3}{l}{ } & \multicolumn{6}{l}{natural mortality, dynamic B0, and Summary Bio} \Bstrut\\ \hline \end{longtable} @@ -2640,13 +2646,13 @@ \subsection{Controls for Variance of Derived Quantities} \begin{longtable}{p{1.1cm} p{1.4cm} p{1.2cm} p{1.2cm} p{1.3cm} p{1.6cm} p{1.4cm} p{1.4cm} p{1.4cm}} \hline - \multicolumn{9}{l}{Example Input:}\Tstrut\Bstrut\\ + \multicolumn{9}{l}{Example Input:} \Tstrut\Bstrut\\ \hline \multicolumn{3}{l}{2} & \multicolumn{6}{l}{\# 0 = No additional 
std dev reporting;} \Tstrut\\
 
- \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 1 = read values below; and}\Bstrut\\
- \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 2 = read specification for reporting stdev for selectivity, size,numbers, and }\Bstrut\\
- \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# natural mortality.}\Bstrut\\
+ \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 1 = read values below; and} \Bstrut\\
+ \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# 2 = read specification for reporting stdev for selectivity, size, numbers, and} \Bstrut\\
+ \multicolumn{3}{l}{ } & \multicolumn{6}{l}{\# natural mortality.} \Bstrut\\
 \hline
 
 \multicolumn{4}{l}{1 1 -1 5} & \multicolumn{5}{l}{\# Selectivity} \Bstrut\\
@@ -2656,12 +2662,12 @@ \subsection{Controls for Variance of Derived Quantities}
 \multicolumn{4}{l}{1} & \multicolumn{5}{l}{\# Dynamic Bzero} \Bstrut\\
 \multicolumn{4}{l}{1} & \multicolumn{5}{l}{\# Summary Biomass} \Bstrut\\
 
- \multicolumn{4}{l}{5 15 25 35 38} & \multicolumn{5}{l}{\# Vector with selectivity std bins (-1 in first bin to self-generate)}\Bstrut\\
- \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with growth std ages picks (-1 in first bin to self-generate)}\Bstrut\\
- \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with numbers-at-age std ages (-1 in first bin to self-generate)}\Bstrut\\
- \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with M-at-age std ages (-1 in first bin to self-generate)}\Bstrut\\
+ \multicolumn{4}{l}{5 15 25 35 38} & \multicolumn{5}{l}{\# Vector with selectivity std bins (-1 in first bin to self-generate)} \Bstrut\\
+ \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with growth std ages picks (-1 in first bin to self-generate)} \Bstrut\\
+ \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with numbers-at-age std ages (-1 in first bin to self-generate)} \Bstrut\\
+ \multicolumn{4}{l}{1 2 5 10 15} & \multicolumn{5}{l}{\# Vector with M-at-age std ages (-1 in first bin to self-generate)} \Bstrut\\
 \hline
- \bfseries{999} & \multicolumn{8}{l}{\#End of the control file input}\Tstrut\Bstrut\\
+ \bfseries{999} & \multicolumn{8}{l}{\# End of the control file input} \Tstrut\Bstrut\\
 \hline
 \end{longtable}

From a443ce136b094a389161bcad54c76a1d95294227 Mon Sep 17 00:00:00 2001
From: e-gugliotti-NOAA
Date: Mon, 16 Oct 2023 11:17:27 -0400
Subject: [PATCH 5/8] change gender to sex - stock-synthesis issue 516

---
 User_Guides/model_step_by_step/model_tutorial.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/User_Guides/model_step_by_step/model_tutorial.Rmd b/User_Guides/model_step_by_step/model_tutorial.Rmd
index 8eee8c19..c1e926bb 100644
--- a/User_Guides/model_step_by_step/model_tutorial.Rmd
+++ b/User_Guides/model_step_by_step/model_tutorial.Rmd
@@ -88,7 +88,7 @@ This is where the data inputs are specified.
At the top, general information abo
 12 #_months/season
 2 #_Nsubseasons (even number, minimum is 2)
 1 #_spawn_month
-2 #_Ngenders: 1, 2, -1 (use -1 for 1 sex setup with SSB multiplied by female_frac parameter)
+2 #_Nsexes: 1, 2, -1 (use -1 for 1 sex setup with SSB multiplied by female_frac parameter)
 40 #_Nages=accumulator age, first age is always age 0
 1 #_Nareas
 3 #_Nfleets (including surveys)

From fcff1dd3f618fed586c60711579116c4d29a Mon Sep 17 00:00:00 2001
From: e-gugliotti-NOAA
Date: Mon, 16 Oct 2023 11:21:43 -0400
Subject: [PATCH 6/8] runningSS to runningSS3

---
 12runningSS.tex => 12runningSS3.tex | 0
 13output.tex | 2 +-
 1_4sections.tex | 2 +-
 SS330_User_Manual.tex | 2 +-
 User_Guides/getting_started/Getting_Started_SS3.Rmd | 4 ++--
 5 files changed, 5 insertions(+), 5 deletions(-)
 rename 12runningSS.tex => 12runningSS3.tex (100%)

diff --git a/12runningSS.tex b/12runningSS3.tex
similarity index 100%
rename from 12runningSS.tex
rename to 12runningSS3.tex
diff --git a/13output.tex b/13output.tex
index 866291c9..833d852f 100644
--- a/13output.tex
+++ b/13output.tex
@@ -45,7 +45,7 @@ \subsection{Custom Reporting}
 \subsection{Standard ADMB output files}
 Standard ADMB files are created by SS3. These are:
 
-ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS]{Running Stock Synthesis} for more info).
+ss.par - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see \hyperref[sec:RunningSS3]{Running Stock Synthesis} for more info).
 
 ss.std - This file has the parameter values and their estimated standard deviation for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also reported in the covar.sso. Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations.
 
diff --git a/1_4sections.tex b/1_4sections.tex
index 79d4f083..75e820fc 100644
--- a/1_4sections.tex
+++ b/1_4sections.tex
@@ -71,7 +71,7 @@ \section{File Organization}\label{FileOrganization}
 \pagebreak
 \section{Starting Stock Synthesis}
 
-SS3 is typically run through the command line interface, although it can also be called from another program, R, the Stock Synthesis Interface, or a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See \hyperref[sec:RunningSS]{Running Stock Synthesis} for additional notes on methods of running SS3.
+SS3 is typically run through the command line interface, although it can also be called from another program, R, the Stock Synthesis Interface, or a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See \hyperref[sec:RunningSS3]{Running Stock Synthesis} for additional notes on methods of running SS3.
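As a concrete illustration of calling SS3 from R (one of the options mentioned above), here is a minimal sketch, not part of the original manual text. It assumes the executable sits in the model folder alongside starter.ss and is named "ss3" (a hypothetical name; substitute whatever your executable is called):
```{r eval = FALSE}
# run SS3 from R by shelling out to the operating system;
# assumes the executable (named "ss3" here) is in the model folder
model_dir <- "simpler"   # hypothetical model folder
old_wd <- setwd(model_dir)
system("ss3")            # use "./ss3" on Linux/Mac if it is not in the PATH
setwd(old_wd)
```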
Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run. The file starter.ss contains required input information plus references to other required input files, as described in the \hyperref[FileOrganization]{File Organization section}. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to set up a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as Excel or R, can search for these keywords and parse the specific information located below that keyword in the text file.
 
diff --git a/SS330_User_Manual.tex b/SS330_User_Manual.tex
index c99d79ce..cebc2c2d 100644
--- a/SS330_User_Manual.tex
+++ b/SS330_User_Manual.tex
@@ -199,7 +199,7 @@
 % ======== Section 11: Likelihoods
 \input{11likelihoods}
 %========= Section 12: Running SS
- \input{12runningSS}
+ \input{12runningSS3}
 % ======== Section 13: Output Files
 \input{13output}
 %========= Section 14: R4SS
diff --git a/User_Guides/getting_started/Getting_Started_SS3.Rmd b/User_Guides/getting_started/Getting_Started_SS3.Rmd
index c464fa37..6f11145b 100644
--- a/User_Guides/getting_started/Getting_Started_SS3.Rmd
+++ b/User_Guides/getting_started/Getting_Started_SS3.Rmd
@@ -72,7 +72,7 @@ Many output text files are created during a model run. The most useful output fi
 
 # Running SS3
 
-SS3 is typically run through the command line (although it can also be run indirctly via the commandline through an R console). We will introduce the one folder approach, where SS3 is in the same folder as the model files. Other possible approaches to running SS3 include, which are detailed in the ["Running Stock Synthesis" section of the user manual](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS).
+SS3 is typically run through the command line (although it can also be run indirectly via the command line through an R console). We will introduce the one folder approach, where SS3 is in the same folder as the model files. Other possible approaches to running SS3 are detailed in the ["Running Stock Synthesis" section of the user manual](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS3).
 
 ## The one folder approach and demonstration of an SS3 model run
 
@@ -174,7 +174,7 @@ Here are some basic checks for when SS3 does not run:
 + Check that starter.ss references the correct names of the control and data files.
 + If SS3 starts to read files and then crashes, check warnings.sso and echoinput.sso. The warnings.sso will reveal potential issues with the model, while echoinput.sso will show how far SS3 was able to run. Work backwards from the bottom of echoinput.sso, looking for where SS3 stopped and whether the inputs are being read correctly or not.
-For further information on troubleshooting, please refer to the SS3 User Manual [“Running Stock Synthesis” subsections](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS), especially [“Re-Starting a Run”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#re-starting-a-run) and [“Debugging Tips”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#debugging-tips). +For further information on troubleshooting, please refer to the SS3 User Manual [“Running Stock Synthesis” subsections](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#sec:RunningSS3), especially [“Re-Starting a Run”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#re-starting-a-run) and [“Debugging Tips”](https://nmfs-stock-synthesis.github.io/doc/SS330_User_Manual.html#debugging-tips). # Where to get additional help From 6716bd7892a79d41374ebdc4b166489689ed6993 Mon Sep 17 00:00:00 2001 From: e-gugliotti-NOAA Date: Tue, 17 Oct 2023 10:37:47 -0400 Subject: [PATCH 7/8] make all references to versions consistent --- 12runningSS3.tex | 2 +- 15special.tex | 2 +- 1_4sections.tex | 2 +- 5converting.tex | 2 +- 6starter.tex | 8 +++--- 7forecast.tex | 2 +- 8data.tex | 4 +-- 9control.tex | 26 +++++++++---------- .../getting_started/Getting_Started_SS3.Rmd | 2 +- .../model_step_by_step/model_tutorial.Rmd | 2 +- User_Guides/ss3_model_tips/ss3_model_tips.Rmd | 2 +- _f_mortality.tex | 4 +-- tv_parameter_description.tex | 2 +- 13 files changed, 30 insertions(+), 30 deletions(-) diff --git a/12runningSS3.tex b/12runningSS3.tex index 7dac8a64..2ccbbc89 100644 --- a/12runningSS3.tex +++ b/12runningSS3.tex @@ -193,7 +193,7 @@ \subsubsection{Re-Starting a Run} Model runs can be restarted from a previously estimated set of parameter values. In the starter.ss file, enter a value of 1 on the first numeric input line. This will cause the model to read the file ss.par and use these parameter values in place of the initial values in the control file. This option only works if the number of parameters to be estimated in the new run is the same as the number of parameters in the previous run because only actively estimated parameters are saved to the file ss.par. The file ss.par can be edited with a text editor, so values can be changed and rows can be added or deleted. However, if the resulting number of elements does not match the setup in the control file, then unpredictable results will occur. Because ss.par is a text file, the values stored in it will not give exactly the same initial results as the run just completed. To achieve greater numerical accuracy, the model can also restart from ss.bar which is the binary version of ss.par. In order to do this, the user must make the change described above to the starter.ss file and must also enter -binp ss.bar as one of the command line options. \subsubsection{Optional Output Subfolders} -As of 3.30.19, users can optionally send .sso and .ss\_new extension files to subfolders. To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model. To send files with a .ss\_new extension to a separate subfolder, create a folder called ssnew before running the model. +As of v.3.30.19, users can optionally send .sso and .ss\_new extension files to subfolders. To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model. 
To send files with a .ss\_new extension to a separate subfolder, create a folder called ssnew before running the model. \subsection{Putting Stock Synthesis in your PATH} diff --git a/15special.tex b/15special.tex index 948de41f..4e072da6 100644 --- a/15special.tex +++ b/15special.tex @@ -6,7 +6,7 @@ \subsection{Using Time-Varying Parameters} \hypertarget{tvOrder}{} \subsubsection{Time-Varying Parameters} -Starting in SS3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of SS3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: +Starting in SS3 v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: \begin{enumerate} \item Environmental or Density dependent Linkages: Links the base parameter with environmental data or a model derived quantity. \item Parameter deviations: Creates annual deviations from the base parameter during a user-specified range of years. diff --git a/1_4sections.tex b/1_4sections.tex index 75e820fc..c558fcc4 100644 --- a/1_4sections.tex +++ b/1_4sections.tex @@ -44,7 +44,7 @@ \section{File Organization}\label{FileOrganization} \subsection{Output Files} \begin{enumerate} - \item data\_echo.ss\_new: Contains the input data as read by the model. In model versions prior to 3.30.19 a single data.ss\_new file was created that included the echoed data, the expected data values (data\_expval.ss), and any bootstrap data files selected (data\_boot\_x.ss). + \item data\_echo.ss\_new: Contains the input data as read by the model. In model versions prior to v.3.30.19 a single data.ss\_new file was created that included the echoed data, the expected data values (data\_expval.ss), and any bootstrap data files selected (data\_boot\_x.ss). \item data\_expval.ss: Contains the expected data values given the model fit. This file is only created if the value for ``Number of datafiles to produce'' in the starter file is set to 2 or greater. \item data\_boot\_x.ss: A new data file filled with bootstrap data based on the original input data and variances. This file is only created if the value in the ``Number of datafiles to produce'' in the starter file is set to 3 or greater. A separate bootstrap data file will be written for the number of bootstrap data file requests where x in the file name indicates the bootstrap simulation number (e.g., data\_boot\_001.ss, data\_boot\_002.ss,...). \item control.ss\_new: Updated version of the control file with final parameter values replacing the initial parameter values. diff --git a/5converting.tex b/5converting.tex index 0f168e64..ca08e7f7 100644 --- a/5converting.tex +++ b/5converting.tex @@ -1,6 +1,6 @@ \hypertarget{ConvIssues}{} \section{Converting Files from SS3 v.3.24} -Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes 3.24 files as input and will output 3.30 input and output files. SS\_trans executables are available for v. 3.30.01 - 3.30.17. The transitional executable was phased out with v.3.30.18.
If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. +Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes v.3.24 files as input and will output v.3.30 input and output files. SS\_trans executables are available for v.3.30.01 - v.3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{v.3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. The following file structure and steps are recommended for converting model files: \begin{enumerate} diff --git a/6starter.tex b/6starter.tex index 82439cc4..10278ea3 100644 --- a/6starter.tex +++ b/6starter.tex @@ -101,8 +101,8 @@ \subsection{Starter File Options (starter.ss)} %\pagebreak \hline - 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and need to be parsed by the user into separate data files. The output of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19, the output file names have changed; now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstrap data files (data\_boot\_x.ss where x is the bootstrap number). In versions before 3.30.19, each of these outputs was printed to a single file called data.ss\_new.}} \Tstrut\Bstrut\\ - & 0 = none; As of 3.30.16, none of the .ss\_new files will be produced;& \Bstrut\\ + 1 & Number of Data Files to Output: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{All output files are sequentially output to data\_echo.ss\_new and need to be parsed by the user into separate data files. The output of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, the tag recapture data is not output to data\_echo.ss\_new. As of v.3.30.19, the output file names have changed; now a separate file is created for the echoed data (data\_echo.ss\_new), the expected data values given the model fit (data\_expval.ss), and any requested bootstrap data files (data\_boot\_x.ss where x is the bootstrap number).
In versions before v.3.30.19, each of these outputs was printed to a single file called data.ss\_new.}} \Tstrut\Bstrut\\ + & 0 = none; As of v.3.30.16, none of the .ss\_new files will be produced;& \Bstrut\\ & 1 = output an annotated replicate of the input data file; & \Tstrut\Bstrut\\ & 2 = add a second data file containing the model's expected values with no added error; and & \Tstrut\Bstrut\\ & 3+ = add N-2 parametric bootstrap data files. & \Tstrut\Bstrut\\ @@ -219,7 +219,7 @@ \subsection{Starter File Options (starter.ss)} \hline %\pagebreak - 1 & F report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the denominator to use when reporting the F std report values. A new option to allow for the calculation of a multi-year trailing average in F was implemented in v. 3.30.16. This option is triggered by appending the number of years to calculate the average across where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.}} \Tstrut\\ + 1 & F report basis: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Selects the denominator to use when reporting the F std report values. A new option to allow for the calculation of a multi-year trailing average in F was implemented in v.3.30.16. This option is triggered by appending the number of years to calculate the average across, where an input of 1 or 11 would result in the SPR\textsubscript{target} with no changes. Alternatively, a value of 21 would calculate F as SPR\textsubscript{target} with a 2-year trailing average.}} \Tstrut\\ & 0 = not relative, report raw values; & \\ & 1 = use F std value relative to SPR\textsubscript{target}; & \\ & 2 = use F std value relative to F\textsubscript{MSY}; and & \\ @@ -252,7 +252,7 @@ \subsection{Starter File Options (starter.ss)} % \pagebreak \hline - \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v3.30 format. A value of 999 indicates that the control and data files are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the v.3.24 files the control.ss\_new and data\_echo.ss\_new files to the new format. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line. The mortality-growth parameter section has a new sequence and SS3 v.3.30 cannot read a ss.par file produced by SS3 v.3.24 and earlier, so ensure that read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\ + \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v.3.30 format. A value of 999 indicates that the control and data files are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the v.3.24 files, writing the control.ss\_new and data\_echo.ss\_new files in the new format. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line.
The mortality-growth parameter section has a new sequence and SS3 v.3.30 cannot read an ss.par file produced by SS3 v.3.24 and earlier, so ensure that the read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\ & & \\ & & \\ & & \\ diff --git a/7forecast.tex b/7forecast.tex index 83beac0d..5bce7d56 100644 --- a/7forecast.tex +++ b/7forecast.tex @@ -187,7 +187,7 @@ \subsection{Forecast File Options (forecast.ss)} \hline % \pagebreak - 0.75 \Tstrut & Control Rule Buffer (multiplier between 0-1.0 or -1) & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Control rule catch or F\textsubscript{target} as a fraction of selected catch or F\textsubscript{MSY} proxy. The buffer will be applied to reduce catch from the estimated overfishing limit. The buffer value is a value between 0-1.0 where a value of 1.0 would set catch equal to the overfishing limit. As example if the buffer is applied to catch (Control Rule option 3 or 4 above) the catch will equal the buffer times the overfishing limit. Alternatively a value of -1 will allow the user to input a forecast year specific control rule fraction (added in v. 3.30.13).}} \Bstrut\\ + 0.75 \Tstrut & Control Rule Buffer (multiplier between 0-1.0 or -1) & \multirow{1}{1cm}[-0.25cm]{\parbox{12cm}{Control rule catch or F\textsubscript{target} as a fraction of selected catch or F\textsubscript{MSY} proxy. The buffer will be applied to reduce catch from the estimated overfishing limit. The buffer value is a value between 0-1.0 where a value of 1.0 would set catch equal to the overfishing limit. As an example, if the buffer is applied to catch (Control Rule option 3 or 4 above), the catch will equal the buffer times the overfishing limit. Alternatively, a value of -1 will allow the user to input a forecast year specific control rule fraction (added in v.3.30.13).}} \Bstrut\\ & & \Bstrut\\ & & \Bstrut\\ & & \Bstrut\\ diff --git a/8data.tex b/8data.tex index 74af0186..0ce2dc81 100644 --- a/8data.tex +++ b/8data.tex @@ -315,7 +315,7 @@ \subsection{Bycatch Fleets} \end{enumerate} \end{enumerate} -In version 3.30.14 it was identified that there can be an interaction between the use of bycatch fleets and the search for the $F_{0.1}$ reference point which may results in the search failing. Changes to the search feature were implemented to make the search more robust, however, issue may still be encountered. In these instances it is recommended to not select the $F_{0.1}$ reference point calculation in the forecast file. +In v.3.30.14 it was identified that there can be an interaction between the use of bycatch fleets and the search for the $F_{0.1}$ reference point which may result in the search failing. Changes to the search feature were implemented to make the search more robust; however, issues may still be encountered. In these instances, it is recommended not to select the $F_{0.1}$ reference point calculation in the forecast file. \subsection{Predator Fleets} @@ -1183,7 +1183,7 @@ \subsection{Stock (Morph) Composition Data} \item The expected value is combined across sexes. The entered data values will be normalized to sum to one within SS3. \item The ``null'' flag is included here in the data input section and is a reserved spot for future features. \item Note that there is a specific value of minimum compression to add to all values of observed and expected.
- \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in version 3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length which already was accounting for the accumulation by morph. + \item Warning for earlier versions of SS3: A flaw was identified in the calculation of accumulation by morph. This has been corrected in v.3.30.14. Older versions were incorrectly calculating the catch by morph using the expectation around age-at-length, which was already accounting for the accumulation by morph. \end{itemize} \subsection{Selectivity Empirical Data (future feature)} diff --git a/9control.tex b/9control.tex index 6dd8f13f..887b65cf 100644 --- a/9control.tex +++ b/9control.tex @@ -368,7 +368,7 @@ \subsubsection{Natural Mortality} \myparagraph{Age-specific M Linked to Age-Specific Length and Maturity} -This is an experimental option available as of 3.30.17. +This is an experimental option available as of v.3.30.17. A general model for age- and sex-specific natural mortality expands a model developed by \citet{maunder2010bigeye} and \citet{maunder2011M} and is based on the following assumptions: @@ -731,7 +731,7 @@ \subsubsection{Catch Multiplier} where $C_{obs}$ is the input catch by fleet (observed catch) within the data file and $c_{mult}$ is the estimated (or fixed) catch multiplier. It has year-specific, not season-specific, time-varying capabilities. In the catch likelihood calculation, expected catch is multiplied by the catch multiplier by year and fishery to get $C_{obs}$ before being compared to the observed retained catch as modified by the $c_{mult}$. \subsubsection{Ageing Error Parameters} -These parameters are only included in the control file if one of the ageing error definitions in the data file has requested this feature (by putting a negative value for the ageing error of the age zero fish of one ageing error definition). As of version 3.30.12, these parameters now have time-varying capability. Seven additional full parameter lines are required. The parameter lines specify: +These parameters are only included in the control file if one of the ageing error definitions in the data file has requested this feature (by putting a negative value for the ageing error of the age zero fish of one ageing error definition). As of v.3.30.12, these parameters now have time-varying capability. Seven additional full parameter lines are required. The parameter lines specify: \begin{enumerate} \item Age at which the estimated pattern begins (just linear below this age); this is the start age. \item Bias at start age (as additive offset from unbiased age). @@ -1006,8 +1006,8 @@ \subsection{Spawner-Recruitment} & & 6: Beverton-Holt with flat-top beyond Bzero, 2 parameters: ln(R0) and steepness; \\ & & 7: \hyperlink{Survivorship}{Survivorship function}: 3 parameters: ln(R0), $z_{frac}$, and $\beta$, suitable for sharks and low fecundity stocks to assure recruits are <= population production; \\ %& & 8: \hyperlink{Shepherd}{Shepherd}: 3 parameters: ln(R0), steepness, and shape parameter, $c$;\\ - & & 8: \hyperlink{Shepherd}{Shepherd re-parameterization}: 3 parameters: ln(R0), steepness, and shape parameter, $c$ (added to version 3.30.11 and is in beta mode); and \\ - & & 9: \hyperlink{Ricker2}{Ricker re-parameterization}: 3 parameters: ln(R0), steepness, and Ricker power, $\gamma$ (added to version 3.30.11 and is in beta mode).
\Bstrut\\ + & & 8: \hyperlink{Shepherd}{Shepherd re-parameterization}: 3 parameters: ln(R0), steepness, and shape parameter, $c$ (added in v.3.30.11 and is in beta mode); and \\ + & & 9: \hyperlink{Ricker2}{Ricker re-parameterization}: 3 parameters: ln(R0), steepness, and Ricker power, $\gamma$ (added in v.3.30.11 and is in beta mode). \Bstrut\\ \hline 1 \Tstrut & Equilibrium recruitment & Use steepness in initial equilibrium recruitment calculation \\ @@ -1328,11 +1328,11 @@ \subsubsection{Recruitment Deviation Setup} A non-equilibrium initial age composition is achieved by setting the first year of the recruitment deviations before the model start year. These pre-start year recruitment deviations will be applied to the initial equilibrium age composition to adjust this composition before starting the time series. The model first applies the initial F level to an equilibrium age composition to get a preliminary N-at-age vector and the catch that comes from applying the F's to that vector; then it applies the recruitment deviations for the specified number of younger ages in this vector. If the number of estimated ages in the initial age composition is less than maximum age, then the older ages will retain their equilibrium levels. Because the older ages in the initial age composition will have progressively less information from which to estimate their true deviation, the start of the bias adjustment should be set accordingly. \subsection{Fishing Mortality Method} -There are four methods available for calculation of fishing mortality (F): 1) Pope's approximation, 2) Baranov's continuous F with each F as a model parameter, 3) a hybrid F method, and 4) a fleet-specific parameter hybrid F approach (introduced in version 3.30.18). +There are four methods available for calculation of fishing mortality (F): 1) Pope's approximation, 2) Baranov's continuous F with each F as a model parameter, 3) a hybrid F method, and 4) a fleet-specific parameter hybrid F approach (introduced in v.3.30.18). -A new fleet-specific parameter hybrid F approach was introduced in version 3.30.18 and is now the recommended approach for most models. With this approach, some fleets can stay in hybrid F mode while others transition to parameters. For example, bycatch fleets must start with parameters in phase 1, while other fishing fleets can use hybrid F or start with hybrid and transition to parameters at a fleet-specific designated phase. We believe this new method 4 is a superior super-set to current methods 2 (all use parameters and all can start hybrid then switch to parameters) and method 3 (all hybrid for all phases). However, during testing specific situations were identified when this approach may not be the best selection. If there is uncertainty around annual input catch values (e.g., se = 0.15) and some fleets have discard data being fit to as well, the treatment of F as parameters (method 2) may allow for better model fits to the data. +A new fleet-specific parameter hybrid F approach was introduced in v.3.30.18 and is now the recommended approach for most models. With this approach, some fleets can stay in hybrid F mode while others transition to parameters. For example, bycatch fleets must start with parameters in phase 1, while other fishing fleets can use hybrid F or start with hybrid and transition to parameters at a fleet-specific designated phase.
We believe this new method 4 is a superior superset of current method 2 (all use parameters and all can start hybrid then switch to parameters) and method 3 (all hybrid for all phases). However, during testing, specific situations were identified in which this approach may not be the best selection. If there is uncertainty around annual input catch values (e.g., se = 0.15) and some fleets have discard data being fit to as well, the treatment of F as parameters (method 2) may allow for better model fits to the data. -The hybrid F method does a Pope's approximation to provide initial values for iterative adjustment of the Baranov continuous F values to closely approximate the observed catch. Prior to version 3.30.18, the hybrid method (method 3) was recommended in most cases. With the hybrid method, the final values are in terms of continuous F, but do not need to be specified as full parameters. In a 2 fishery model, low F case (e.g., similar to natural mortality or lower), the hybrid method is just as fast as the Pope approximation and produces identical results. +The hybrid F method does a Pope's approximation to provide initial values for iterative adjustment of the Baranov continuous F values to closely approximate the observed catch. Prior to v.3.30.18, the hybrid method (method 3) was recommended in most cases. With the hybrid method, the final values are in terms of continuous F, but do not need to be specified as full parameters. In a 2 fishery model, low F case (e.g., similar to natural mortality or lower), the hybrid method is just as fast as the Pope approximation and produces identical results. However, when F is very high, the problem becomes quite computationally stiff for Pope's approximation and the hybrid method, so convergence in ADMB may slow due to more sensitive gradients in the log likelihood. In these high F cases it may be better to use F option 2, continuous F as full parameters. It is also advisable to allow the model to start with good values for the F parameters. This can be done by specifying a later phase (>1) under the conditional input for F method = 2 where early phases will use the hybrid method, then switch to F as parameter in later phases and transfer the hybrid F values to the parameter initial values. @@ -1558,7 +1558,7 @@ \subsubsection{Float Q} Then midway through the evolution of the SS3 v.3.24 code lineage a new Q option was introduced based on user recommendations. This option allowed Q to float and to compare the resulting Q value to a prior, hence the information in that prior would pull the model solution in the direction of a floated Q that came close to the prior. -Currently, in 3.30, that float with prior capability is fully embraced. All fleets that have any survey or CPUE options need to have a catchability specification and get a base Q parameter in the list. Any of these Q's can be either: +Currently, in v.3.30, that float with prior capability is fully embraced. All fleets that have any survey or CPUE options need to have a catchability specification and get a base Q parameter in the list. Any of these Q's can be either: \begin{itemize} \item Fixed: by not floating and not estimating. @@ -1782,7 +1782,7 @@ \subsubsection{Selectivity Pattern Details} \end{itemize} \myparagraph{Pattern 2 (size) - Older version of selectivity pattern 24 for backward compatibility} -Pattern 2 differs from pattern 24 only in the treatment of sex-specific offset parameter 5.
See note in \hyperlink{MaleSelectivityOffset}{Male Selectivity Estimated as Offsets from Female Selectivity} for more information. Pattern 24 was changed in version 3.30.19 with the old parameterization now provided in Pattern 2. +Pattern 2 differs from pattern 24 only in the treatment of sex-specific offset parameter 5. See note in \hyperlink{MaleSelectivityOffset}{Male Selectivity Estimated as Offsets from Female Selectivity} for more information. Pattern 24 was changed in v.3.30.19 with the old parameterization now provided in Pattern 2. \myparagraph{Pattern 5 (size) - Mirror Selectivity} Two parameters select the min and max bin number (not min max size) of the source selectivity pattern. If first parameter has value <=0, then interpreted as a value of 1 (e.g., first bin). If second parameter has value <=0, then interpreted as maximum length bin (e.g., last bin specified in the data file). The mirrored selectivity pattern must be from a lower fleet number (e.g., already specified before the mirrored fleet). @@ -2206,7 +2206,7 @@ \subsubsection{Retention} \begin{itemize} \item p1 - ascending inflection, \item p2 - ascending slope, - \item p3 - maximum retention controlling the height of the asymptote (smaller values result in lower asymptotes), often a time-varying quantity to match the observed amount of discard. As of v. 3.30.01, this parameter is now input in logit space ranging between -10 and 10. A fixed value of -999 would assume no retention of fish and a value of 999 would set asymptotic retention equal to 1.0, + \item p3 - maximum retention controlling the height of the asymptote (smaller values result in lower asymptotes), often a time-varying quantity to match the observed amount of discard. As of v.3.30.01, this parameter is now input in logit space ranging between -10 and 10. A fixed value of -999 would assume no retention of fish and a value of 999 would set asymptotic retention equal to 1.0, \item p4 - male offset to ascending inflection (arithmetic, not multiplicative), \end{itemize} \item Dome-shaped (add the following 3 parameters): @@ -2279,7 +2279,7 @@ \subsubsection{Sex-Specific Selectivity} Notes: \begin{itemize} \item Male selectivity offsets currently cannot be time-varying; because they are offsets from female selectivity, they inherit the time-varying characteristics of the female selectivity. - \item Prior to version 3.30.19 male parameter 5 in pattern 24 scaled only the apical selectivity. This sometimes resulted in strange shapes when the final selectivity, which was shared between females and males in that parameterization, was higher than the estimated apical selectivity. For backwards compatibility to the pattern 24 parameterization prior to 3.30.19, use selectivity pattern 2. + \item Prior to v.3.30.19 male parameter 5 in pattern 24 scaled only the apical selectivity. This sometimes resulted in strange shapes when the final selectivity, which was shared between females and males in that parameterization, was higher than the estimated apical selectivity. For backwards compatibility to the pattern 24 parameterization prior to v.3.30.19, use selectivity pattern 2. \end{itemize} \hypertarget{Dirichletparameter}{} @@ -2445,10 +2445,10 @@ \subsection{Tag Recapture Parameters} Currently, tag parameters cannot be time-varying. -A shortcoming was identified in the recapture calculations when using Pope's F Method and multiple seasons in SS3 prior to v.3.30.14. The internal calculations were corrected in version 3.30.14.
Now the Z-at-age is applied internally for calculations of fishing pressure on the population when using the Pope calculations. +A shortcoming was identified in the recapture calculations when using Pope's F Method and multiple seasons in SS3 prior to v.3.30.14. The internal calculations were corrected in v.3.30.14. Now the Z-at-age is applied internally for calculations of fishing pressure on the population when using the Pope calculations. \myparagraph{Mirroring of Tagging Parameters} -In version 3.30.14, the ability to mirror the tagging parameters from another tag group or fleet was added. With this approach, the user can have just one parameter value for each of the five tagging parameter types and mirror all other parameters. Note that parameter lines are still required for the mirrored parameters and only lower numbered parameters can be mirrored. Mirroring is evoked through the phase input in the tagging parameter section. The options are: +In v.3.30.14, the ability to mirror the tagging parameters from another tag group or fleet was added. With this approach, the user can have just one parameter value for each of the five tagging parameter types and mirror all other parameters. Note that parameter lines are still required for the mirrored parameters and only lower numbered parameters can be mirrored. Mirroring is invoked through the phase input in the tagging parameter section. The options are: \begin{itemize} \item No mirroring among tag groups or fleets: phase > -1000, \item Mirror the next lower (i.e., already specified) tag group or fleet: phase = -1000 and set other parameter values the same as next lower Tag Group or fleet, diff --git a/User_Guides/getting_started/Getting_Started_SS3.Rmd b/User_Guides/getting_started/Getting_Started_SS3.Rmd index 6f11145b..116347e8 100644 --- a/User_Guides/getting_started/Getting_Started_SS3.Rmd +++ b/User_Guides/getting_started/Getting_Started_SS3.Rmd @@ -138,7 +138,7 @@ ADMB options can be added to the run when calling the SS3 executable from the co To list all command line options, use one of these calls: `SS3 -?` or `SS3 -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options). -To run SS3 without estimation use: `ss3 -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `–nohess` option. +To run SS3 without estimation use: `ss3 -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3 v.3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `-nohess` option. ## Using ss.par for initial values diff --git a/User_Guides/model_step_by_step/model_tutorial.Rmd b/User_Guides/model_step_by_step/model_tutorial.Rmd index c1e926bb..23054074 100644 --- a/User_Guides/model_step_by_step/model_tutorial.Rmd +++ b/User_Guides/model_step_by_step/model_tutorial.Rmd @@ -551,7 +551,7 @@ Variance adjustment factors and/or lambdas can be used for data weighting, but # Running the model and afterwards -The model was run using Stock Synthesis 3.30.14 and no additional ADMB command line options.
The model should have no issues running, but if you have issues, please see debugging sections in the **Getting Started** and **Developing your first Stock Synthesis model** guides. +The model was run using Stock Synthesis v.3.30.14 with no additional ADMB command line options. The model should have no issues running, but if you have issues, please see the debugging sections in the **Getting Started** and **Developing your first Stock Synthesis model** guides. ## Checks for convergence After running the model, open the warning.sso file to check for any warnings from Stock Synthesis. This file shows no warnings: diff --git a/User_Guides/ss3_model_tips/ss3_model_tips.Rmd b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd index b986e98c..79dcc1f3 100644 --- a/User_Guides/ss3_model_tips/ss3_model_tips.Rmd +++ b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd @@ -47,7 +47,7 @@ SS3 has a rich set of features. Some required inputs are conditional on other in The [SS3 user manual](https://github.com/nmfs-stock-synthesis/doc/releases) can be used as a guide to help you edit your model. Conditional inputs are noted in the manual. The SSI can also help guide you through changes in model inputs required as you select different SS3 model options. -If you are unsure if you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for SS3 3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```, no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup. +If you are unsure if you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for SS3 v.3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```, no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup. For additional help with model specification, please post your questions on the vlab [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) (for registered SS3 users) or send an email to the SS3 team at NMFS.Stock.Synthesis@noaa.gov. diff --git a/_f_mortality.tex b/_f_mortality.tex index 3b15a908..a2b6c6b2 100644 --- a/_f_mortality.tex +++ b/_f_mortality.tex @@ -30,7 +30,7 @@ \subsection{Fishing Mortality in Stock Synthesis} $F\text{std}_y$ is a standardized measure of the total fishing intensity for a year and is reported in the derived quantities, so variance is calculated for this quantity. See below for how it relates to $annF$. -Terminology and reporting of $\text{ann}F$ and $F\text{std}$ has been slightly revised for clarity in 3.30.15.00 and the description here follows the new conventions.
+Terminology and reporting of $\text{ann}F$ and $F\text{std}$ have been slightly revised for clarity in v.3.30.15.00 and the description here follows the new conventions. \myparagraph{$F$ Calculation} SS3 allows for three approaches to estimate the $F'$ that will match the input values for retained catch. Note that SS3 is calculating the $F'$ to match the retained catch conditional on the fraction of total catch that is retained, e.g., the total catch is partitioned into retained and discarded portions. @@ -78,7 +78,7 @@ \subsection{Fishing Mortality in Stock Synthesis} For options 4 and 5 of F\_report\_units, the $F$ is calculated as $Z-M$ where $Z$ is calculated as $ln(N_{t,a}/N_{t+1,a+1})$, thus $Z$ subsumes the effect of $F$. -The ann$F$ is calculated for each year of the estimated time series and of the forecast. Additionally, an ann$F$ is calculated in the benchmark calculations to provide equilibrium values that have the same units as ann$F$ from the time series. In versions previous to 3.30.15, it was labeled inaccurately as $F$std in the output, not ann$F$. For example, in the Management Quantities section of derived quantities prior to 3.30.15, there is a quantity labeled Fstd\_Btgt. This is more accurately labeled as the annual $F$ associated with the biomass target, ann\_F\_Btgt, in 3.30.15. +The ann$F$ is calculated for each year of the estimated time series and of the forecast. Additionally, an ann$F$ is calculated in the benchmark calculations to provide equilibrium values that have the same units as ann$F$ from the time series. In versions previous to v.3.30.15, it was labeled inaccurately as $F$std in the output, not ann$F$. For example, in the Management Quantities section of derived quantities prior to v.3.30.15, there is a quantity labeled Fstd\_Btgt. This is more accurately labeled as the annual $F$ associated with the biomass target, ann\_F\_Btgt, in v.3.30.15. \myparagraph{$F$std} $F$std is a single annual value based on ann$F$ and the relationship to ann$F$ is specified by F\_report\_basis in the starter.ss file. The benchmark ann$F$ may be used to rescale the time series of ann$F$s to become a time series of standardized values representing the intensity of fishing, $F$std. The report basis is selected in the starter file as: diff --git a/tv_parameter_description.tex b/tv_parameter_description.tex index b4f7591b..74b4991e 100644 --- a/tv_parameter_description.tex +++ b/tv_parameter_description.tex @@ -60,7 +60,7 @@ \subsubsection{Specification of Time-Varying Parameters: Long Parameter Lines} \item $P_1 = P_{min} + \frac{R}{1 + e^{-Y_y - X_y }}$. For years after the first year. \end{itemize} \item 6 = mean reverting random walk with penalty to keep the root mean squared error (RMSE) near 1.0. Same as case 4, but with penalty applied. - \item The option of extending the final model year deviation value subsequent years (i.e., into the forecast period) was added in v. 3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g, 25), which will use the final year deviation value for all forecast years. + \item The option of extending the final model year deviation value to subsequent years (i.e., into the forecast period) was added in v.3.30.13. This new option is specified by selecting the appropriate deviation link option and appending a 2 at the front (e.g., 25), which will use the final year deviation value for all forecast years.
\end{itemize} where: \begin{itemize} From 4dc7becf6b8a9f21b029ec6689cbee2971bb62b4 Mon Sep 17 00:00:00 2001 From: e-gugliotti-NOAA Date: Tue, 17 Oct 2023 10:59:27 -0400 Subject: [PATCH 8/8] more version standardizing --- 13output.tex | 6 ++-- 15special.tex | 6 ++-- 1_4sections.tex | 8 ++--- 5converting.tex | 12 ++++---- 6starter.tex | 6 ++-- 7forecast.tex | 4 +-- 8data.tex | 20 ++++++------- 9control.tex | 30 +++++++++---------- .../getting_started/Getting_Started_SS3.Rmd | 2 +- .../model_step_by_step/model_tutorial.Rmd | 2 +- User_Guides/ss3_model_tips/ss3_model_tips.Rmd | 2 +- _data_weighting.tex | 4 +-- _forecast_module.tex | 4 +-- tv_parameter_description.tex | 4 +-- 14 files changed, 55 insertions(+), 55 deletions(-) diff --git a/13output.tex b/13output.tex index 833d852f..79f085cb 100644 --- a/13output.tex +++ b/13output.tex @@ -59,7 +59,7 @@ \subsection{Stock Synthesis Summary} Before v.3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME\_SERIES table of Report.sso. The report file should be used instead of ss\_summary.sso for correct calculation of these quantities before v.3.30.17. Care should be taken when using the TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in report.sso for calculations done at times other than January 1. \subsection{SIS table} -The SIS\_table.sso is deprecated as of SS3 v.3.30.17. Please use the \hyperref[sec:r4ss]{r4ss} function \texttt{get\_SIS\_info()} instead. +The SIS\_table.sso is deprecated as of v.3.30.17. Please use the \hyperref[sec:r4ss]{r4ss} function \texttt{get\_SIS\_info()} instead. The SIS\_table.sso file contains model output formatted for reading into the NMFS Species Information System (\href{https://www.st.nmfs.noaa.gov/sis/}{SIS}). This file includes an assessment summary for categories of information (abundance, recruitment, spawners, catch estimates) that are input into the SIS database. A time-series of estimated quantities which aggregates estimates across multiple areas and seasons is provided to summarize model results. Access to the SIS database is granted to all NOAA employees. @@ -193,7 +193,7 @@ \subsection{Bootstrap Data Files} \item Often there is a need to explore the removal (i.e., not including them in the model fitting) of specific years in a data set, which can be done by specifying a negative fleet number. If bootstrapping a data file, note that specifying a negative fleet in the data inputs for indices, length composition, or age composition will include the ``observation'' in the model (hence generating predicted values and bootstrap data sets for the data), but not in the negative log likelihood. The ``observation values'' used with negative fleet do not influence the predicted values, except when using tail compression with length or age composition. Non-zero values greater than the minimum tail compression should be used for the observation values when tail compression is being used, as using zeros or values smaller than the minimum tail compression can cause the predicted values to be reported as zero and shift predictions to other bins. - \item As of SS3 v.3.30.15, age and length composition data that use the Dirichlet-Multinomial distribution in the model are generated using the Dirichlet-Multinomial in bootstrap data sets.
+ \item As of v.3.30.15, age and length composition data that use the Dirichlet-Multinomial distribution in the model are generated using the Dirichlet-Multinomial in bootstrap data sets. \end{itemize} @@ -214,7 +214,7 @@ \subsection{Forecast and Reference Points (Forecast-report.sso)} \subsection{Main Output File, Report.sso} -This is the primary output file. Its major sections (as of SS3 v.3.30.16) are listed below. +This is the primary output file. Its major sections (as of v.3.30.16) are listed below. The sections of the output file are: \begin{itemize} diff --git a/15special.tex b/15special.tex index 4e072da6..fd8b85d6 100644 --- a/15special.tex +++ b/15special.tex @@ -6,7 +6,7 @@ \subsection{Using Time-Varying Parameters} \hypertarget{tvOrder}{} \subsubsection{Time-Varying Parameters} -Starting in SS3 v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: +Starting in v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3: \begin{enumerate} \item Environmental or Density dependent Linkages: Links the base parameter with environmental data or a model derived quantity. \item Parameter deviations: Creates annual deviations from the base parameter during a user-specified range of years. @@ -144,7 +144,7 @@ \section{Detailed Information on Stock Synthesis Processes} \subsection{Jitter} \hypertarget{Jitter}{} -The jitter function has been updated with v.3.30. The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}): +The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure \ref{fig:jitter}): \begin{enumerate} \item A normal distribution is calculated such that the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. \item A jitter shift value, termed ``\textit{K}'', is calculated from the distribution equal to pr(P\textsubscript{CURRENT}). @@ -179,7 +179,7 @@ \subsection{Parameter Priors} The options for parameter priors are described as a function of $Pval$, the value of the parameter for which a prior is being calculated, as well as the parameter bounds in the case of the beta distribution ($Pmax$ and $Pmin$), and the input values for $Prior$ and $Pr\_SD$, which in some cases are the mean and standard deviation, but interpretation depends on the prior type. The Prior Likelihoods below represent the negative log likelihood in all cases. \myparagraph{Prior Types} -Note that the numbering in SS3 v.3.30 is different from that used in SS3 v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior types, as a function of the following inputs: +Note that the numbering in v.3.30 is different from that used in v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior). 
The calculation of the negative log likelihood is provided below for each prior types, as a function of the following inputs: +Note that the numbering in v.3.30 is different from that used in v.3.24 (where confusingly -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior type, as a function of the following inputs: \begin{tabular}{ll} $P_\text{init}$ & The value of the parameter for which a prior is being calculated where init can either be \\ diff --git a/1_4sections.tex b/1_4sections.tex index c558fcc4..3ab3e21b 100644 --- a/1_4sections.tex +++ b/1_4sections.tex @@ -7,7 +7,7 @@ \section{Introduction}\label{sec:intro} Assessment models are loosely coupled to other models. For example, an ocean-temperature or circulation model or benthic-habitat map may be directly included in the pre-processing of the fish abundance survey. A time series of a derived ocean factor, like the North Atlantic Oscillation, can be included as an indicator of a change in a population process. Output of a multi-decadal time series of derived fish abundance can be an input to ecosystem and economic models to better understand cumulative impacts and benefits. -Stock Synthesis is an age- and size-structured assessment model in the class of models termed integrated analysis models. Stock Synthesis has evolved since its initial inception in order to model a wide range of fish populations and dynamics. The most recent major revision to Stock Synthesis occurred in 2016, when version 3.30 was introduced. This new version of Stock Synthesis required major revisions to the input files relative to earlier versions (see the \hypertarget{ConvIssues}{Converting Files} section for more information). The acronym for Stock Synthesis has evolved over time with earlier versions being referred to as SS2 (Stock Synthesis v.2.xx) and older versions as SS3 (Stock Synthesis v.3.xx). +Stock Synthesis is an age- and size-structured assessment model in the class of models termed integrated analysis models. Stock Synthesis has evolved since its inception in order to model a wide range of fish populations and dynamics. The most recent major revision to Stock Synthesis occurred in 2016, when v.3.30 was introduced. This new version of Stock Synthesis required major revisions to the input files relative to earlier versions (see the \hypertarget{ConvIssues}{Converting Files} section for more information). The acronym for Stock Synthesis has evolved over time with earlier versions being referred to as SS2 (Stock Synthesis v.2.xx) and newer versions as SS3 (Stock Synthesis v.3.xx). SS3 has a population sub-model that simulates a stock's growth, maturity, fecundity, recruitment, movement, and mortality processes; an observation sub-model that estimates expected values for various types of data; a statistical sub-model that characterizes the data's goodness of fit and obtains best-fitting parameters with associated variance; and a forecast sub-model that projects needed management quantities. SS3 outputs the quantities, with confidence intervals, needed to implement risk-averse fishery control rules. The model is coded in C++ with parameter estimation enabled by automatic differentiation (\href{http://www.admb-project.org}{admb}). Windows, Linux, and macOS versions are available. Output processing and associated tools are in R, and a graphical interface is in QT. SS3 executables and support material are available on \href{https://github.com/nmfs-stock-synthesis}{GitHub}. The rich feature set in SS3 allows it to be configured for a wide range of situations. SS3 has become the basis for a large fraction of U.S. assessments and many other assessments around the world.
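To make the jitter calculation in the 15special.tex hunk above concrete, here is a minimal R sketch of the first two steps (the bounds and current value are illustrative only; the remaining steps that back-transform the shifted value are outside this excerpt and are not shown):

```r
# Minimal sketch of the jitter shift value K described above.
# A normal distribution is defined so that pr(P_MIN) = 0.1% and
# pr(P_MAX) = 99.9%, then K = pr(P_CURRENT) under that distribution.
p_min <- 0.01      # illustrative lower bound
p_max <- 2.0       # illustrative upper bound
p_current <- 0.2   # illustrative current parameter value
sd_jit <- (p_max - p_min) / (qnorm(0.999) - qnorm(0.001))
mu_jit <- p_min - qnorm(0.001) * sd_jit
K <- pnorm(p_current, mean = mu_jit, sd = sd_jit)
K
```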
@@ -50,10 +50,10 @@ \section{File Organization}\label{FileOrganization} \item control.ss\_new: Updated version of the control file with final parameter values replacing the initial parameter values. \item starter.ss\_new: New version of the starter file with annotations. \item Forecast.ss\_new: New version of the forecast file with annotations. - \item warning.sso: This file contains a list of warnings generated during program execution. Starting in SS3 v.3.30.20 warnings are categorized into either Note or Warning. An item marked as a not denotes settings that the user may want to revise but do not require any additional changes for the model to run. Items marked with Warning are items that may or may not have allowed the model to finish running. Items with a fatal warning caused the model to fail during either reading input files or calculations. Warnings classified as error or adjustment may be causing calculation issues, even if the model was able to finish reading file and running, and should be addressed the user. + \item warning.sso: This file contains a list of warnings generated during program execution. Starting in v.3.30.20 warnings are categorized into either Note or Warning. An item marked as a note denotes settings that the user may want to revise but that do not require any additional changes for the model to run. Items marked with Warning are items that may or may not have allowed the model to finish running. Items with a fatal warning caused the model to fail during either reading input files or calculations. Warnings classified as error or adjustment may be causing calculation issues, even if the model was able to finish reading files and running, and should be addressed by the user. \item echoinput.sso: This file is produced while reading the input files and includes an annotated echo of the input. The sole purpose of this output file is debugging input errors. \item Report.sso: This file is the primary report file. - \item ss\_summary.sso: Output file that contains all the likelihood components, parameters, derived quantities, total biomass, summary biomass, and catch. This file offers an abridged version of the report file that is useful for quick model evaluation. This file is only available in SS3 v.3.30.08.03 and greater. + \item ss\_summary.sso: Output file that contains all the likelihood components, parameters, derived quantities, total biomass, summary biomass, and catch. This file offers an abridged version of the report file that is useful for quick model evaluation. This file is only available in v.3.30.08.03 and greater. \item CompReport.sso: Observed and expected composition data in a list-based format. \item Forecast-report.sso: Output of management quantities and forecasts. \item CumReport.sso: This file contains a brief version of the run output; output is appended to the current content of the file so results of several runs can be collected together. This is useful when a batch of runs is being processed. \item ss.par: This file contains all estimated and fixed parameters from the model run. \item ss.std, ss.rep, ss.cor etc.: Standard ADMB output files. \item checkup.sso: Contains details of selectivity parameters and resulting vectors. This is written during the first call of the objective function. - \item Gradient.dat: New for SS3 v.3.30, this file shows parameter gradients at the end of the run.
\item rebuild.dat: Output formatted for direct input to Andre Punt's rebuilding analysis package. Cumulative output is output to REBUILD.SS (useful when doing MCMC or profiles). \item SIS\_table.sso: Output formatted for reading into the NMFS Species Information System. \item Parmtrace.sso: Parameter values at each iteration. diff --git a/5converting.tex b/5converting.tex index ca08e7f7..00d62a98 100644 --- a/5converting.tex +++ b/5converting.tex @@ -1,6 +1,6 @@ \hypertarget{ConvIssues}{} -\section{Converting Files from SS3 v.3.24} -Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes 3.24 files as input and will output version 3.30 input and output files. SS\_trans executables are available for v.3.30.01 - v.3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{v.3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. +\section{Converting Files from Stock Synthesis v.3.24} +Converting files from version 3.24 to version 3.30 can be performed by using the program ss\_trans.exe. This executable takes v.3.24 files as input and will output v.3.30 input and output files. SS\_trans executables are available for v.3.30.01 - v.3.30.17. The transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss\_trans.exe available from the \href{https://github.com/nmfs-stock-synthesis/stock-synthesis/releases/tag/v3.30.17}{v.3.30.17 release page on GitHub} to convert the files and then any additional adjustments needed between v.3.30.17 and newer versions should be done by hand. The following file structure and steps are recommended for converting model files: \begin{enumerate} @@ -10,14 +10,14 @@ \section{Converting Files from SS3 v.3.24} \item Review the control (control.ss\_new) file to determine that all model functions converted correctly. The structural changes and assumptions for a couple of the advanced model features are too complicated to convert automatically. See below for some known features that may not convert. When needed, it is recommended to modify the control.ss\_new file, the converted control file, for only the features that failed to convert properly. - \item Change the max phase to a value greater than the last phase in which the a parameter is set to estimated within the control file. Run the new SS3 v.3.30 executable (ss3.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable. + \item Change the max phase to a value greater than the last phase in which the a parameter is set to estimated within the control file. Run the new v.3.30 executable (ss3.exe) within the ``converted'' folder using the renamed ss\_new files created from the transition executable. - \item Compare likelihood and model estimates between the SS3 v.3.24 and SS3 v.3.30 model versions. + \item Compare likelihood and model estimates between the v.3.24 and v.3.30 model versions. - \item If desired, update to versions of SS3 > v.3.30.17 by running the new v.3.30 input files with the higher executable. 
+ \item If desired, update to versions of Stock Synthesis > v.3.30.17 by running the new v.3.30 input files with the higher executable. \end{enumerate} -\noindent There are some options that have been substantially changed in SS3 v.3.30, which impedes the automatic converting of SS3 v.3.24 model files. Known examples of SS3 v.3.24 options that cannot be converted, but for which better alternatives are available in SS3 v.3.30 are: +\noindent There are some options that have been substantially changed in v.3.30, which impedes the automatic conversion of v.3.24 model files. Known examples of v.3.24 options that cannot be converted, but for which better alternatives are available in v.3.30 are: \begin{enumerate} \item The use of Q deviations, \item Complex birth seasons, diff --git a/6starter.tex b/6starter.tex index 10278ea3..4142c5de 100644 --- a/6starter.tex +++ b/6starter.tex @@ -40,7 +40,7 @@ \subsection{Starter File Options (starter.ss)} control\_file.ctl & & File name of the control file \Tstrut\\ \hline - 0 & Initial Parameter Values: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Do not set equal to 1 if there have been any changes to the control file that would alter the number or order of parameters stored in the ss.par file. Values in ss.par can be edited, carefully. Do not run ss\_trans.exe from a ss.par from SS3 v.3.24.}}\Tstrut\\ + 0 & Initial Parameter Values: & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{Do not set equal to 1 if there have been any changes to the control file that would alter the number or order of parameters stored in the ss.par file. Values in ss.par can be edited, carefully. Do not run ss\_trans.exe from an ss.par file from v.3.24.}}\Tstrut\\ & 0 = use values in control file; and& \\ & 1 = use ss.par after reading setup in the control file. & \\ @@ -124,7 +124,7 @@ \subsection{Starter File Options (starter.ss)} 200 & MCMC thin interval & Number of iterations to remove between the main period of the MCMC run. \Tstrut\\ \hline - 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with SS3 v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values and particularly -999 or 999 should not be used to define bounds for estimated parameters.}} \Tstrut\\ + 0.0 & \hyperlink{Jitter}{Jitter:} & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{The jitter function has been revised with v.3.30. Starting values are now jittered based on a normal distribution with the pr(P\textsubscript{MIN}) = 0.1\% and the pr(P\textsubscript{MAX}) = 99.9\%. A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, care should be given when defining the low and high bounds for parameter values; in particular, -999 or 999 should not be used to define bounds for estimated parameters.}} \Tstrut\\ & 0 = no jitter done to starting values; and & \\ & >0 starting values will vary with larger jitter values resulting in larger changes from the parameter values in the control or par file. & \\ & & \\ @@ -252,7 +252,7 @@ \subsection{Starter File Options (starter.ss)} % \pagebreak \hline - \hypertarget{Convert}{3.30} & Model version check value.
& \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in SS3 v.3.30 format. A value of 999 indicates that the control and data files are in a previous SS3 v.3.24 version. The ss\_trans.exe executable should be used and will convert the v.3.24 files the control.ss\_new and data\_echo.ss\_new files to the new format. All ss\_new files are in the SS3 v.3.30 format, so starter.ss\_new has SS3 v.3.30 on the last line. The mortality-growth parameter section has a new sequence and SS3 v.3.30 cannot read a ss.par file produced by SS3 v.3.24 and earlier, so ensure that read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from SS3 v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\
+ \hypertarget{Convert}{3.30} & Model version check value. & \multirow{1}{1cm}[-0.25cm]{\parbox{12.5cm}{A value of 3.30 indicates that the control and data files are currently in v.3.30 format. A value of 999 indicates that the control and data files are in the previous v.3.24 format. The ss\_trans.exe executable should be used and will convert the v.3.24 files, writing the converted control.ss\_new and data\_echo.ss\_new files in the new format. All ss\_new files are in the v.3.30 format, so starter.ss\_new has v.3.30 on the last line. The mortality-growth parameter section has a new sequence and v.3.30 cannot read an ss.par file produced by v.3.24 and earlier, so ensure that the read par file option at the top of the starter file is set to 0. The \hyperlink{ConvIssues}{Converting Files from Stock Synthesis v.3.24} section has additional information on model features that may impede file conversion.}} \Tstrut\Bstrut\\
 & & \\
 & & \\
 & & \\
diff --git a/7forecast.tex b/7forecast.tex
index 5bce7d56..df269521 100644
--- a/7forecast.tex
+++ b/7forecast.tex
@@ -100,7 +100,7 @@ \subsection{Forecast File Options (forecast.ss)}
 & 5 = input annual F scalar. & \Bstrut\\
 \hline

- 10 & N forecast years (must be >= 1) & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{At least one forecast year now required if the Forecast option above is >=0 (Note: SS3 v.3.24 allowed zero forecast years).}} \Tstrut\\
+ 10 & N forecast years (must be >= 1) & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{At least one forecast year is now required if the Forecast option above is >=0 (Note: v.3.24 allowed zero forecast years).}} \Tstrut\\
 & & \\
 \hline
@@ -114,7 +114,7 @@ \subsection{Forecast File Options (forecast.ss)}
 Option 1: & \multicolumn{2}{l}{\multirow{1}{1cm}[-0.15cm]{\parbox{18.5cm}{This approach for forecast year ranges is no longer recommended because blocks, random effects, and other time-varying parameter changes can now operate on forecast years and the new approach provides better control averaging.}}} \Tstrut\Bstrut\\
 & & \Tstrut\Bstrut\\

- 0 0 0 0 0 0 & Enter 6 Forecast Year Values & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{To continue to use this pre-SS3.20.22 approach, enter 6 values: beginning and ending years for selectivity, relative Fs, and recruitment distribution. These are used to create means over the specified range of years. Values can be entered as the actual year, -999 for start year, or values of 0 or -integer to be relative endyr.
It is important to note:}} \Tstrut\Bstrut\\
+ 0 0 0 0 0 0 & Enter 6 Forecast Year Values & \multirow{1}{1cm}[-0.15cm]{\parbox{12cm}{To continue to use this pre-v.3.20.22 approach, enter 6 values: beginning and ending years for selectivity, relative Fs, and recruitment distribution. These are used to create means over the specified range of years. Values can be entered as the actual year, -999 for start year, or values of 0 or -integer to be relative to endyr. It is important to note:}} \Tstrut\Bstrut\\
 & & \Tstrut\Bstrut\\
 & & \Tstrut\Bstrut\\
 \pagebreak
diff --git a/8data.tex b/8data.tex
index 0ce2dc81..944b7187 100644
--- a/8data.tex
+++ b/8data.tex
@@ -103,7 +103,7 @@ \subsubsection{Subseasons and Timing of Events}
 \item Survey body weight and size composition is calculated using the nearest subseason.
 \item Reproductive output now has specified spawn timing (in months fraction) and interpolates growth to that timing.
 \item Survey numbers calculated at cruise survey timing using $e^{-z}$.
- \item Continuous Z for entire season. Same as applied in version v.3.24.
+ \item Continuous Z for entire season. Same as applied in v.3.24.
 \end{itemize}

 \subsection{Terminology}
@@ -115,8 +115,8 @@ \subsection{Model Dimensions}
 \hline
 \textbf{Value} & \textbf{Description} \Tstrut\Bstrut\\
 \hline
- \#V3.30.XX.XX & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Model version number. This is written by SS3 in the new files and a good idea to keep updated in the input files.}} \Tstrut\\
- & \Bstrut\\
+ \#V3.30.XX.XX & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Model version number. This is written by SS3 in the new files, and it is a good idea to keep it updated in the input files.}} \Tstrut\\
+ & \Bstrut\\
 \hline

 \#C data using new survey & \multirow{1}{1cm}[-0.1cm]{\parbox{12cm}{Data file comment. Must start with \#C to be retained then written to top of various output files. These comments can occur anywhere in the data file, but must have \#C in columns 1-2.}} \Tstrut\\
@@ -172,7 +172,7 @@ \subsection{Model Dimensions}

 \subsection{Fleet Definitions}

-\hypertarget{GenericFleets}{The} catch data input has been modified to improve the user flexibility to add/subtract fishing and survey fleets to a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (SS3 v.3.24 and earlier) required that fishing fleets be listed first followed by survey only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data, this will be corrected in future versions). Available types are; catch fleet, bycatch only fleet, or survey.
+\hypertarget{GenericFleets}{The} catch data input has been modified to improve the user's flexibility to add or subtract fishing and survey fleets in a model set-up. The fleet setup input is transposed so each fleet is now a row. Previous versions (v.3.24 and earlier) required that fishing fleets be listed first, followed by survey-only fleets. In SS3 all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag recapture data; this will be corrected in future versions). Available types are: catch fleet, bycatch-only fleet, or survey.
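+
+The fleet setup can also be inspected and edited programmatically. Below is a minimal R sketch using the r4ss package; the file name data.ss is illustrative, and the \texttt{fleetinfo} element name and its columns may vary among r4ss versions:
+\begin{verbatim}
+library(r4ss)
+# read the data file into a named list
+dat <- SS_readdat(file = "data.ss", version = "3.30")
+# one row per fleet: type, timing, area, units, catch multiplier flag, name
+dat$fleetinfo
+# e.g., declare the second fleet to be a survey (fleet type 3),
+# then write the modified data file back out
+dat$fleetinfo$type[2] <- 3
+SS_writedat(dat, outfile = "data.ss", overwrite = TRUE)
+\end{verbatim}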
\begin{center}
\begin{tabular}{p{2cm} p{2cm} p{2cm} p{2cm} p{2cm} p{4cm}}
@@ -196,7 +196,7 @@ \subsection{Fleet Definitions}
 \begin{itemize}
 \item 1 = fleet with input catches;
 \item 2 = bycatch fleet (all catch discarded) and invoke extra input for treatment in equilibrium and forecast;
- \item 3 = survey: assumes no catch removals even if associated catches are specified below. If you would like to remove survey catch set fleet type to option = 1 with specific month timing for removals (defined below in the ``Timing'' section); and
+ \item 3 = survey: assumes no catch removals even if associated catches are specified below. If you would like to remove survey catch, set the fleet type to option = 1 with specific month timing for removals (defined below in the ``Timing'' section); and
 \item 4 = predator (M2) fleet that adds additional mortality without a fleet F (added in v.3.30.18). Ideal for modeling large mortality events such as fish kills or red tide. Requires additional long parameter lines for a second mortality component (M2) in the control file after the natural mortality/growth parameter lines (entered immediately after the fraction female parameter line).
 \end{itemize}
@@ -234,7 +234,7 @@ \subsection{Fleet Definitions}
 A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and has the ability to be time-varying.

 \subsection{Bycatch Fleets}
-The option to include bycatch fleets was introduced in SS3 v.3.30.10. This is an optional input and if no bycatch is to be included in to the catches this section can be ignored.
+The option to include bycatch fleets was introduced in v.3.30.10. This is an optional input; if no bycatch is to be included in the catches, this section can be ignored.

 A fishing fleet is designated as a bycatch fleet by indicating that its fleet type is 2. A bycatch fleet creates a fishing mortality, same as a fleet of type 1, but a bycatch fleet has all catch discarded so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because catch multiplier works on retained catch. SS3 will expect that the retention function for this fleet will be set in the selectivity section to type 3, indicating that all selected catch is discarded dead. It is necessary to specify a selectivity pattern for the bycatch fleet and, due to generally lack of data, to externally derive values for the parameters of this selectivity.
@@ -523,7 +523,7 @@ \subsection{Discard}
 \item 0 = normal distribution, value of error in data file is interpreted as CV of the observation;
 \item -1 = normal distribution, value of error in data file is interpreted as standard error of the observation;
 \item -2 = lognormal distribution, value of error in data file is interpreted as standard error of the observation in log space; and
- \item -3 = truncated normal distribution (new with SS3 v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates.
+ \item -3 = truncated normal distribution (new with v.3.30, needs further testing), value of error in data file is interpreted as standard error of the observation. This is a good option for low observed discard rates.
\end{itemize}

 \myparagraph{Discard Notes}
@@ -533,7 +533,7 @@ \subsection{Discard}
 \item Zero (0.0) is a legitimate discard observation, unless lognormal error structure is used.
 \item Duplicate discard observations from a fleet for the same year are not allowed.
 \item Observations can be entered in any order, except if the super-period feature is used.
- \item Note that in the control file you will enter information for retention such that 1-retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with SS3 v.3.30).
+ \item Note that in the control file you will enter information for retention such that 1-retention is the amount discarded. All discard is assumed dead, unless you enter information for discard mortality. Retention and discard mortality can be either size-based or age-based (new with v.3.30).
 \end{itemize}

 \myparagraph{Cautionary Note}
@@ -711,7 +711,7 @@ \subsection{Length Composition Data Structure}
 \end{itemize}

 \myparagraph{Minimum Sample Size}
-The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. SS3 v.3.24 had an implicit minimum sample size value of 1.
+The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. Version 3.24 had an implicit minimum sample size value of 1.

 \myparagraph{Additional information on Dirichlet Parameter Number and Effective Sample Sizes}
 If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. So each fleet could use a unique Dirichlet-multinomial parameter, or all could share the same, or any combination of unique and shared. The requested number of Dirichlet-multinomial parameters are specified as parameter lines in the control file immediately after the selectivity parameter section. Please note that age-compositions Dirichlet-multinomial parameters are continued after length-compositions, so a model with one fleet and both data types would presumably require two new Dirichlet-multinomial parameters.
@@ -795,7 +795,7 @@ \subsection{Length Composition Data}
 \myparagraph{Note}
 When processing data to be input into SS3, all observed fish of sizes smaller than the first bin should be added to the first bin and all observed fish larger than the last bin should be condensed into the last bin.

-The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in SS3 v.3.30, the model will continue to read length composition data until an pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 indicates to the model the end of length composition lines to be read.
+The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in v.3.30, the model will continue to read length composition data until a pre-specified exit line is read. The exit line is specified by entering -9999 at the end of the data matrix. The -9999 signals to the model the end of the length composition lines to be read.
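+
+A minimal sketch of round-tripping these data through the r4ss package (the file names are illustrative, and lencomp column names vary somewhat among r4ss versions):
+\begin{verbatim}
+library(r4ss)
+dat <- SS_readdat(file = "data.ss", version = "3.30")
+# one row per observation: year, month, fleet, sex, partition,
+# input sample size, then the female and male length-bin vectors
+head(dat$lencomp)
+# writing the list back out should append the -9999 exit line
+# that ends the length composition section
+SS_writedat(dat, outfile = "data_new.ss", overwrite = TRUE)
+\end{verbatim}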
Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector and the male vector could each be on a separate line because ADMB reads values consecutively from the input file and will move to the next line as necessary to read additional values.
diff --git a/9control.tex b/9control.tex
index 887b65cf..1482e4cb 100644
--- a/9control.tex
+++ b/9control.tex
@@ -78,7 +78,7 @@ \subsection{Parameter Line Elements}
 \end{tabular}
 \end{center}

-Note that relative to SS3 v.3.24, the order of PRIOR SD and PRIOR TYPE have been switched and the PRIOR TYPE options have been renumbered.
+Note that relative to Stock Synthesis v.3.24, the order of PRIOR SD and PRIOR TYPE has been switched and the PRIOR TYPE options have been renumbered.

 The full parameter line (14 in length) syntax for the mortality-growth, spawn-recruitment, catchability, and selectivity sections provides additional controls to give the parameter time-varying properties. If a parameter (a full parameter line of length 14) is set up to be time-varying (i.e., parameter time blocks, annual deviations), short parameter lines, the first 7 elements, are required to be specified immediately after the main parameter block (i.e., mortality-growth parameter section). Additional information regard time-varying parameters and how to implement them is in the \hyperlink{TVpara}{Using Time-Varying Parameters} section.
@@ -156,7 +156,7 @@ \subsubsection{Settlement Timing for Recruits and Distribution}
 \endlastfoot

 1 \Tstrut & &\multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{Recruitment distribution method. This section controls which combinations of growth pattern x area x settlement will get a portion of the total recruitment coming from each spawning. Options:}} \\ \\
- & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{1 = no longer available (used the SS3 v.3.24 or earlier setup);}} \\
+ & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{1 = no longer available (used the Stock Synthesis v.3.24 or earlier setup);}} \\
 & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{2 = main effects for growth pattern, settle timing, and area;}} \\
 & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{3 = each settle entity; and}} \\
 & & \multirow{2}{4cm}[-0.1cm]{\parbox{12cm}{4 = none, no parameters (only if growth pattern x settlement x area = 1).}} \Bstrut\\
@@ -348,7 +348,7 @@ \subsubsection{Auto-generation}
 1 & & Environmental/Block/Deviation adjust method for all time-varying parameters. \Tstrut\\
 & & 1 = warning relative to base parameter bounds; and \\
- & & 3 = no bound check. Logistic bound check form from previous SS3 versions (e.g., SS3 v.3.24) is no longer an option. \Bstrut\\
+ & & 3 = no bound check. The logistic bound check form from previous SS3 versions (e.g., v.3.24) is no longer an option. \Bstrut\\

 \multicolumn{2}{l}{1 1 1 1 1} & Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity. \\
 & & The accepted values are: \\
@@ -506,7 +506,7 @@ \subsubsection{Growth}

 \myparagraph{Mean size-at-maximum age}
-The mean size of fish in the max age age bin depends upon how close the growth curve is to Linf by the time it reaches max age and the mortality rate of fish after they reach max age. Users specify the mortality rate to use in this calculation during the initial equilibrium year.
This must be specified by the user and should be reasonably close to M plus initial F. In SS3 v.3.30, this uses the von Bertalanffy growth out to 3 times the maximum population age and decays the numbers at age by exp(-value set here). For subsequent years of the time series, the model should update the size-at-maximum age according to the weighted average mean size of fish already at maximum age and the size of fish just graduating into maximum age. Unfortunately, this updating is only happening in years with time-varying growth. This will hopefully be fixed in a the future version.
+The mean size of fish in the maximum age bin depends upon how close the growth curve is to Linf by the time it reaches max age and the mortality rate of fish after they reach max age. Users specify the mortality rate to use in this calculation during the initial equilibrium year. This must be specified by the user and should be reasonably close to M plus initial F. In v.3.30, this uses the von Bertalanffy growth out to 3 times the maximum population age and decays the numbers at age by exp(-value set here). For subsequent years of the time series, the model should update the size-at-maximum age according to the weighted average mean size of fish already at maximum age and the size of fish just graduating into maximum age. Unfortunately, this updating only happens in years with time-varying growth. This will hopefully be fixed in a future version.

 \myparagraph{Age-specific K}
 This option creates age-specific K multipliers for each age of a user-specified age range, with independent multiplicative factors for each age in the range and for each growth pattern / sex. The null value is 1.0 and each age's K is set to the next earlier age's K times the value of the current age's multiplier. Each of these multipliers is entered as a full parameter line, so inherits all time-varying capabilities of full parameters. The lower end of this age range cannot extend younger than the specified age for which the first growth parameter applies. This is a beta model feature, so examine output closely to assure you are getting the size-at-age pattern you expect. Beware of using this option in a model with seasons within year because the K deviations are indexed solely by integer age according to birth year. There is no offset for birth season timing effects, nor is there any seasonal interpolation of the age-varying K.
@@ -550,9 +550,9 @@ \subsubsection{Growth}
 \Tstrut 25 & & Growth Amax (A2): Reference age for second size-at-age L2 (post-settlement) parameter. Use 999 to treat as L infinity. \Bstrut\\
 \hline
- \Tstrut 0.20 & & Exponential decay for growth above maximum age (plus group: fixed at 0.20 in SS3 v.3.24; should approximate initial Z). Alternative Options: \\
- & & -998 = Disable growth above maximum age (plus group) similar to earlier versions of SS3 (prior to SS3 v.3.24); and \\
- & & -999 = Replicate the simpler calculation done in SS3 v.3.24. \Bstrut\\
+ \Tstrut 0.20 & & Exponential decay for growth above maximum age (plus group: fixed at 0.20 in v.3.24; should approximate initial Z). Alternative Options: \\
+ & & -998 = Disable growth above maximum age (plus group) similar to earlier versions of SS3 (prior to v.3.24); and \\
+ & & -999 = Replicate the simpler calculation done in v.3.24. \Bstrut\\
 \hline

 0 & & Placeholder for future growth feature.
\Tstrut\Bstrut\\
@@ -1145,7 +1145,7 @@ \subsubsection{Spawner-Recruitment Parameter Setup}
 0.60 \Tstrut & $\sigma_R$ & Standard deviation of natural log recruitment. This parameter has two related roles. It penalizes deviations from the spawner-recruitment curve, and it defines the offset between the arithmetic mean spawner-recruitment curve (as calculated from ln(R0) and steepness) and the expected geometric mean (which is the basis from which the deviations are calculated. Thus the value of $\sigma_R$ must be selected to approximate the true average recruitment deviation. See \hypertarget{TuneSigmaR}{Tuning $\sigma_R$} section below for additional guidance on how to tune $\sigma_R$. \Bstrut\\
 %\hline

- 0\Tstrut & Regime Parameter & This replaces the R1 offset parameter. It can have a block for the initial equilibrium year, so can fully replicate the functionality of the previous R1 offset approach. The SR regime parameter is intended to have a base value of 0.0 and not be estimated. Similar to cohort-growth deviation, it serves simply as a base for adding time-varying adjustments. This concept is similar to the old environment effect on deviates feature in SS3 v.3.24 and earlier. \Bstrut\\
+ 0\Tstrut & Regime Parameter & This replaces the R1 offset parameter. It can have a block for the initial equilibrium year, so can fully replicate the functionality of the previous R1 offset approach. The SR regime parameter is intended to have a base value of 0.0 and not be estimated. Similar to cohort-growth deviation, it serves simply as a base for adding time-varying adjustments. This concept is similar to the old environment effect on deviates feature in v.3.24 and earlier. \Bstrut\\
 \hline

 0 & Autocorrelation & Autocorrelation in recruitment. \Tstrut\Bstrut\\
@@ -1556,7 +1556,7 @@ \subsubsection{Mirrored Q with offset}

 \subsubsection{Float Q}
 The use and development of float in Q has evolved over time within SS3. The original approach in earlier versions of SS3 (version 3.24 and before) is that with Q ``float'' the units of the survey or fishery CPUE were treated as dimensionless so the Q was adjusted within each model iteration to maintain a mean difference of 0.0 between observed and expected (usually in natural log space). In contrast, Q as a parameter (float = 0) one had the ability to interpret the absolute scaling of Q and put a prior on it to help guide the model solution. Also, with Q as a parameter the code allowed for Q to be time-varying.

-Then midway through the evolution of the SS3 v.3.24 code lineage a new Q option was introduced based on user recommendations. This option allowed Q to float and to compare the resulting Q value to a prior, hence the information in that prior would pull the model solution in direction of a floated Q that came close to the prior.
+Then midway through the evolution of the v.3.24 code lineage a new Q option was introduced based on user recommendations. This option allowed Q to float and to compare the resulting Q value to a prior; hence, the information in that prior would pull the model solution in the direction of a floated Q close to the prior.

 Currently, in v.3.30, that float with prior capability is fully embraced. All fleets that have any survey or CPUE options need to have a catchability specification and get a base Q parameter in the list. Any of these Q's can be either:
@@ -1571,13 +1571,13 @@ \subsubsection{Float Q}
 Q relates the units of the survey or CPUE to the population abundance, not the population density per unit area.
But many surveys and most fishery CPUE is a proportional to mean fish density per unit area. This does not have any impact in a one area model because the role of area is absorbed into the value of Q. In a multi-area model, one may want to assert that the relative difference in CPUE between two areas is informative about the relative abundance between the areas. However, CPUE is a measure of fish density per unit area, so one may want to multiply CPUE by area before putting the data into the model so that asserting the same Q for the two areas will be informative about relative abundance.

-In SS3 v.3.30.13, a new catchability option has been added that allows Q to be mirrored and to add an offset to ln(Q) of the primary area when calculating the ln(Q) for the dependent area. The offset is a parameter and, hence, can be estimated and have a prior. This option allows the CPUE data to stay in density units and the effect of relative stock area is contained in the value of the ln(Q) offset.
+In v.3.30.13, a new catchability option has been added that allows Q to be mirrored and to add an offset to ln(Q) of the primary area when calculating the ln(Q) for the dependent area. The offset is a parameter and, hence, can be estimated and have a prior. This option allows the CPUE data to stay in density units, and the effect of relative stock area is contained in the value of the ln(Q) offset.

 \subsubsection{Catchabilty Time-Varying Parameters}
 Time-Varying catchability can be used. Details on how to specify time-varying parameters can be found in the \hyperlink{tvOrder}{Time-Varying Parameter Specification and Setup} section.

-\subsubsection{Q Conversion Issues Between SS3 v.3.24 and v.3.30}
-In SS3 v.3.24 it was common to use the deviation approach implemented as if it was survey specific blocks to create a time-varying Q for a single survey. In some cases, only one year's deviation was made active in order to implement, in effect, a block for Q. The transition executable (sstrans.exe) cannot convert this, but an analogous approach is available in SS3 v.3.30 because true blocks can now be used, as well as environmental links and annual deviations. Also note that deviations in SS3 v.3.24 were survey specific (so no parameter for years with no survey). In SS3 v.3.30, deviations are always year-specific, so you might have a deviation created for a year with no survey.
+\subsubsection{Q Conversion Issues Between Stock Synthesis v.3.24 and v.3.30}
+In v.3.24 it was common to use the deviation approach, implemented as if it were survey-specific blocks, to create a time-varying Q for a single survey. In some cases, only one year's deviation was made active in order to implement, in effect, a block for Q. The transition executable (ss\_trans.exe) cannot convert this, but an analogous approach is available in v.3.30 because true blocks can now be used, as well as environmental links and annual deviations. Also note that deviations in v.3.24 were survey-specific (so no parameter for years with no survey). In v.3.30, deviations are always year-specific, so you might have a deviation created for a year with no survey.

 \subsection{Selectivity and Discard}
 For each fleet and survey, read a definition line for size selectivity and retention.
@@ -1634,7 +1634,7 @@ \subsection{Selectivity and Discard}

 \myparagraph{Age Selectivity}
 For each fleet and survey, read a definition line for age selectivity. The 4 values to be read are the same as for the size-selectivity.
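+
+These definition lines can also be edited from R before re-running the model. A minimal sketch with the r4ss package follows; the element name \texttt{age\_selex\_types} is illustrative of r4ss control-file lists, and reading the control file generally requires the matching data file so that dimensions are known:
+\begin{verbatim}
+library(r4ss)
+dat <- SS_readdat(file = "data.ss", version = "3.30")
+ctl <- SS_readctl(file = "control.ss", use_datlist = TRUE, datlist = dat)
+# one row per fleet: Pattern, Discard, Male, Special
+ctl$age_selex_types
+# e.g., use the age-logistic pattern (option 12) for fleet 1
+ctl$age_selex_types$Pattern[1] <- 12
+SS_writectl(ctl, outfile = "control.ss", overwrite = TRUE)
+\end{verbatim}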
-As of SS3 v.3.30.15, for some selectivity patterns the user can specify the minimum age of selected fish. Most selectivity curves by default select age 0 fish (i.e., inherently specify the minimum age of selected fish as 0). However, it is fairly common for the age bins specified in the data file to start at age 1. This means that any age 0 fish selected are pooled up into the age 1' bin, which will have a detrimental effect on fitting age-composition data. In order to prevent the selection of age 0 (or older) fish, the user can specify the minimum selected age for some selectivity patterns (12, 13, 14, 16, 18, 26, or 27) in versions of SS3 v.3.30.15 and later. For example, if the minimum selected age is 1 (so that age 0 fish are not selected), selectivity pattern type can be specified as 1XX, where XX is the selectivity pattern. A more specific example is if selectivity is age-logistic and the minimum selected age desired is 1, the selectivity pattern would be specified as 112 (the regular age-logistic selectivity pattern is option 12). The user can also select higher minimum selected ages, if desired; for example, 212 would be the age-logistic selectivity pattern with a minimum selected age of 2 (so that age 0 and 1 fish are not selected).
+As of v.3.30.15, for some selectivity patterns the user can specify the minimum age of selected fish. Most selectivity curves by default select age 0 fish (i.e., inherently specify the minimum age of selected fish as 0). However, it is fairly common for the age bins specified in the data file to start at age 1. This means that any age 0 fish selected are pooled up into the age-1 bin, which will have a detrimental effect on fitting age-composition data. In order to prevent the selection of age 0 (or older) fish, the user can specify the minimum selected age for some selectivity patterns (12, 13, 14, 16, 18, 26, or 27) in versions of v.3.30.15 and later. For example, if the minimum selected age is 1 (so that age 0 fish are not selected), the selectivity pattern type can be specified as 1XX, where XX is the selectivity pattern. As a more specific example, if selectivity is age-logistic and the desired minimum selected age is 1, the selectivity pattern would be specified as 112 (the regular age-logistic selectivity pattern is option 12). The user can also select higher minimum selected ages, if desired; for example, 212 would be the age-logistic selectivity pattern with a minimum selected age of 2 (so that age 0 and 1 fish are not selected).

 \subsubsection{Reading the Selectivity and Retention Parameters}
 Read the required number of parameter setup lines as specified by the definition lines above. The complete order of the parameter setup lines is:
@@ -2525,7 +2525,7 @@ \subsection{Lambdas (Emphasis Factors)}
 \myparagraph{Lambda Usage Notes}
 \hypertarget{SaAlambda}{If} the CV for size-at-age is being estimated and the model contains mean size-at-age data, then the flag for inclusion of the + ln(stddev) term in the likelihood must be included. Otherwise, the model will always get a better fit to the mean size-at-age data by increasing the parameter for CV of size-at-age.

-The reading of the lambda values has been substantially altered with SS3 v.3.30. Instead of reading a matrix containing all the needed lambda values, the model now just reads those elements that will be given a value other than 1.0. After reading the datafile, the model sets lambda equal to 0.0 if there are no data for a particular fleet/data type, and a value of 1.0 if data exist.
So beware if your data files had data but you had set the lambda to 0.0 in a previous version of SS3. First read an integer for the number of changes.
+The reading of the lambda values has been substantially altered with v.3.30. Instead of reading a matrix containing all the needed lambda values, the model now just reads those elements that will be given a value other than 1.0. After reading the data file, the model sets lambda equal to 0.0 if there are no data for a particular fleet/data type, and a value of 1.0 if data exist. So beware if your data files had data but you had set the lambda to 0.0 in a previous version of SS3. First, read an integer for the number of changes.

 \begin{longtable}{p{3cm} p{3cm} p{2cm} p{3cm} p{3cm}}
@@ -2560,7 +2560,7 @@ \subsection{Lambdas (Emphasis Factors)}
 \end{longtable}
 \end{center}

-Starting in SS3 v.3.30.16, the application of a lambda to initial equilibrium catch is now fleet specific. In previous versions, a single lambda was applied in the same manner across all fleets with an initial equilibrium catch specified.
+Starting in v.3.30.16, the application of a lambda to initial equilibrium catch is now fleet-specific. In previous versions, a single lambda was applied in the same manner across all fleets with an initial equilibrium catch specified.

 \pagebreak
diff --git a/User_Guides/getting_started/Getting_Started_SS3.Rmd b/User_Guides/getting_started/Getting_Started_SS3.Rmd
index 116347e8..aa190ab8 100644
--- a/User_Guides/getting_started/Getting_Started_SS3.Rmd
+++ b/User_Guides/getting_started/Getting_Started_SS3.Rmd
@@ -138,7 +138,7 @@ ADMB options can be added to the run when calling the SS3 executable from the co
 To list all command line options, use one of these calls: `SS3 -?` or `SS3 -help`. More info about the ADMB command line options is available in the [ADMB Manual](http://www.admb-project.org/docs/manuals/) (Chapter 12: Command line options).

-To run SS3 without estimation use: `ss3 -stopph 0`. This will speed up your run by not optimizing. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in SS3 v.3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `–nohess` option.
+To run SS3 without estimation, use `ss3 -stopph 0`. This will speed up your run by skipping optimization. Often `-stopph 0` is used with the `-nohess` option to speed up the run even more. To run SS3 with no estimation in v.3.30.15 and earlier, change the max phase in the starter.ss file to 0 and run the exe with the `-nohess` option.

 ## Using ss.par for initial values
diff --git a/User_Guides/model_step_by_step/model_tutorial.Rmd b/User_Guides/model_step_by_step/model_tutorial.Rmd
index 23054074..f36afebd 100644
--- a/User_Guides/model_step_by_step/model_tutorial.Rmd
+++ b/User_Guides/model_step_by_step/model_tutorial.Rmd
@@ -327,7 +327,7 @@ Option 0 is used for natural mortality because only 1 value is being assumed.
Gr
 1 # GrowthModel: 1=vonBert with L1&L2; 2=Richards with L1&L2; 3=age_specific_K_incr; 4=age_specific_K_decr; 5=age_specific_K_each; 6=NA; 7=NA; 8=growth cessation
 0 #_Age(post-settlement)_for_L1;linear growth below this
 25 #_Growth_Age_for_L2 (999 to use as Linf)
--999 #_exponential decay for growth above maxage (value should approx initial Z; -999 replicates 3.24; -998 to not allow growth above maxage)
+-999 #_exponential decay for growth above maxage (value should approx initial Z; -999 replicates v.3.24; -998 to not allow growth above maxage)
 0 #_placeholder for future growth feature
 #
 0 #_SD_add_to_LAA (set to 0.1 for SS2 V1.x compatibility)
diff --git a/User_Guides/ss3_model_tips/ss3_model_tips.Rmd b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd
index 79dcc1f3..95edf269 100644
--- a/User_Guides/ss3_model_tips/ss3_model_tips.Rmd
+++ b/User_Guides/ss3_model_tips/ss3_model_tips.Rmd
@@ -47,7 +47,7 @@ SS3 has a rich set of features. Some required inputs are conditional on other in
 The [SS3 user manual](https://github.com/nmfs-stock-synthesis/doc/releases) can be used as a guide to help you edit your model. Conditional inputs are noted in the manual. The SSI can also help guide you through changes in model inputs required as you select different SS3 model options.

-If you are unsure if you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for SS3 v.3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```, no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup.
+If you are unsure if you got the setup right (e.g., adding the correct number of parameter lines for a chosen catchability setup), try running the model with ```maxphase = 0``` in the starter file and ADMB option ```-nohess``` (or for v.3.30.16 and greater, run the model with command line options ```-stopph 0 -nohess```; no need to change the starter file). If the model run completes, you can compare the **control.ss_new** file and the first data set in **data.ss_new** to your SS3 input files to make sure SS3 interpreted the values as intended. If the run exits before completion, you can look at **warning.sso** and **echoinput.sso** for clues as to what was wrong with your setup.

 For additional help with model specification, please post your questions on the vlab [forums](https://vlab.noaa.gov/web/stock-synthesis/public-forums) (for registered SS3 users) or send an email to the SS3 team at NMFS.Stock.Synthesis@noaa.gov.
diff --git a/_data_weighting.tex b/_data_weighting.tex
index 0129e29c..7bf93b15 100644
--- a/_data_weighting.tex
+++ b/_data_weighting.tex
@@ -192,7 +192,7 @@ \subsection{Data Weighting}
 \end{minipage}
 \end{small}

-\item Reset any existing variance adjustments factors that might have been used for the McAllister-Ianelli or Francis tuning methods. In 3.24 this means setting the values to 1, in SS3 v.3.30, you can simply delete or comment-out the rows with the adjustments.
+\item Reset any existing variance adjustment factors that might have been used for the McAllister-Ianelli or Francis tuning methods.
In v.3.24 this means setting the values to 1; in v.3.30, you can simply delete or comment out the rows with the adjustments.
 \end{itemize}

 The \texttt{SS\_output} function in r4ss returns table like the following:
@@ -210,7 +210,7 @@ \subsection{Data Weighting}
 If the reported $\theta/(1+\theta)$ ratio is close to 1.0, that indicates that the model is trying to tune the sample size as high as possible. In this case, the $ln(\theta)$ parameters should be fixed at a high value, like the upper bound of 20, which will result in 100\% weight being applied to the input sample sizes. An alternative would be to manually change the input sample sizes to a higher value so that the estimated weighting will be less than 100%.

-Note that a constant of integration was added to the Dirichlet-multinomial likelihood equation in SS3 v.3.30.17. This will change the likelihood value, but parameter estimates and expected values should remain the same as in previous versions of SS3.
+Note that a constant of integration was added to the Dirichlet-multinomial likelihood equation in v.3.30.17. This will change the likelihood value, but parameter estimates and expected values should remain the same as in previous versions of SS3.

 Some challenges posed by the Dirichlet-multinomial data-weighting approach:
 \begin{enumerate}
diff --git a/_forecast_module.tex b/_forecast_module.tex
index 391d8975..91a13a00 100644
--- a/_forecast_module.tex
+++ b/_forecast_module.tex
@@ -2,7 +2,7 @@ \subsection{Forecast Module: Benchmark and Forecasting Calculations}
 \label{sec:forecast}

-SS3 v.3.20 introduced substantial upgrades to the benchmark and forecast module. The general intent was to make the forecast outputs more consistent with the requirement to set catch limits that have a known probability of exceeding the overfishing limit. In addition, this upgrade addressed several inadequacies with the previous module, including:
+Stock Synthesis v.3.20 introduced substantial upgrades to the benchmark and forecast module. The general intent was to make the forecast outputs more consistent with the requirement to set catch limits that have a known probability of exceeding the overfishing limit. In addition, this upgrade addressed several inadequacies with the previous module, including:

 \begin{itemize}
 \item The average selectivity and relative F was the same for the benchmark and the forecast calculations;
@@ -14,7 +14,7 @@ \subsection{Forecast Module: Benchmark and Forecasting Calculations}
 \item The forecast allowed for a blend of fixed input catches and catches calculated from target F; this is not optimal for calculation of the variance of F conditioned on a catch policy that sets annual catch limits (ACLs).
 \end{itemize}

-The V3.20 module addressed these issues by:
+The v.3.20 module addressed these issues by:
 \begin{itemize}
 \item Providing for unique specification of a range of years from which to calculate average selectivity for benchmark, average selectivity for forecast, relative F for benchmark, and relative F for forecast;
 \item Create a new specification for the range of years over which to average size-at-age and fecundity-at-age for the benchmark calculation. In a setup with time-varying growth, it may make sense to do this over the entire range of years in the time series. Note that some additional quantities still use their endyr values, notably the migration rates and the allocation of recruitments among areas.
This will be addressed shortly;
diff --git a/tv_parameter_description.tex b/tv_parameter_description.tex
index 74b4991e..df295b8b 100644
--- a/tv_parameter_description.tex
+++ b/tv_parameter_description.tex
@@ -49,7 +49,7 @@ \subsubsection{Specification of Time-Varying Parameters: Long Parameter Lines}
 \item $X_y = \rho*X_{y-1} + \text{dev}_y*\text{dev}_{se}$
 \item $P_y = P_{base,y} + X_y$
 \end{itemize}
- \item 5 = mean reverting random walk with $\rho$ and a logit transformation to stay within the minimum and maximum parameter bounds (approach added in SS3 v.3.30.16)
+ \item 5 = mean reverting random walk with $\rho$ and a logit transformation to stay within the minimum and maximum parameter bounds (approach added in v.3.30.16)
 \begin{itemize}
 \item $X_1 = \text{dev}_1*\text{dev}_{se}$
 \item $R = P_{max} - P_{min}$
@@ -150,7 +150,7 @@ \subsubsection{Specification of Time-Varying Parameters: Short Parameter Lines}
 \hline
 \end{longtable}

-In SS3 v.3.30, the time-varying input short parameter lines are organized such that all parameters that affect a base parameter are clustered together with time blocks (or trend) first, then environmental linkages, then parameter deviations. For example, if the mortality-growth (MG) base parameters 3 and 7 had time varying changes, the order would look like:
+In Stock Synthesis v.3.30, the time-varying input short parameter lines are organized such that all parameters that affect a base parameter are clustered together with time blocks (or trend) first, then environmental linkages, then parameter deviations. For example, if the mortality-growth (MG) base parameters 3 and 7 had time-varying changes, the order would look like:
 \begin{center}
 \begin{longtable}{p{5cm} p{10cm}}