Appendix E. Notes on the Jack Mackerel MSE Framework
Published
2025-08-20
1 Overview
This document is a collection of notes on the structure and behavior of key objects used in the jmMSE Management Strategy Evaluation (MSE) framework for SPRFMO Jack Mackerel. It documents the modeling components (h1, om, perf), performance calculations, and the use of getSlick() and FLslick() to generate evaluation plots.
This modular framework allows for flexible and transparent testing of MPs within the MSE simulation, including full customization of estimation, control rules, and implementation behavior. This notebook supports the JM MSE development process and is intended for use during scenario comparison, workshop reporting, and trade-off evaluation.
1.1 Defining Management Procedures using mpCtrl and mseCtrl
In the mse package, Management Procedures (MPs) are constructed as modular sequences of functional components that simulate how a fishery would be managed under alternative strategies. These strategies are defined using mpCtrl, which organizes component modules defined via mseCtrl.
This modular design allows you to:
Select estimation methods for stock status (est)
Define harvest control rules (hcr, phcr)
Simulate implementation systems (isys)
Include optional technical measures (tm)
1.1.1 Structure of mpCtrl
The mpCtrl() constructor takes a named list of components. Each component must be an mseCtrl object that defines the function to use (method) and its input parameters (args).
Each component also integrates flexibly with the simulated operating models. By separating each component of an MP into a function-object pair, the mse package supports reproducible, configurable, and extensible MSE design workflows.
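The function-object pattern above can be sketched as follows. This is an illustrative sketch only, assuming the FLR mse package conventions described in this section: the use of cpuescore.z as an estimation method, and the nyears argument, are hypothetical, while bufferdelta.hcr and its target/width/sloperatio arguments appear elsewhere in this document.

```r
# Sketch of an MP definition using the mpCtrl()/mseCtrl() pattern.
# Requires the FLR mse package; method and argument choices below are
# illustrative, not the jmMSE configuration.
library(mse)

ctrl <- mpCtrl(list(
  # stock status estimation (hypothetical choice of method and args)
  est = mseCtrl(method = cpuescore.z, args = list(nyears = 5)),
  # harvest control rule: buffered response around a target metric
  hcr = mseCtrl(method = bufferdelta.hcr,
                args = list(target = 0.4, width = 0.1, sloperatio = 0.2))
))
```

Optional components (phcr, isys, tm) would be added to the same named list as further mseCtrl entries.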
1.1.7 Harvest Control Rule (HCR) with Buffer Delta
To analyze the behavior of bufferdelta.hcr() over the range of index values used as input (i.e., a stock status metric such as "depletion" or "zscore"), the key output to examine is the harvest control multiplier hcrm, which determines how much the TAC is adjusted relative to the previous TAC. This multiplier is a piecewise function of the index value in the data year.
This HCR formulation uses a smoothed transition based on a buffer zone around a biomass or metric target. The response scalar \(h(m)\), applied to the previous catch, is defined based on the relative metric value \(m\) (e.g., standardized index or depletion level), and follows a piecewise logic:
Let:
\(m\): observed metric (e.g., index value)
\(t\): target level
\(w\): buffer width
\(l = t - 2w\): limit threshold
\(b_{\text{low}} = t - w\): buffer lower bound
\(b_{\text{upp}} = t + w\): buffer upper bound
\(r\): slope ratio
Then the Harvest Control Rule (HCR) response multiplier \(h(m)\) is:
\[
h(m) =
\begin{cases}
\frac{1}{2} \left(\frac{m}{l}\right)^2, & \text{if } m \leq l \\
\frac{1}{2} \left(1 + \frac{m - l}{b_{\text{low}} - l} \right), & \text{if } l < m < b_{\text{low}} \\
1, & \text{if } b_{\text{low}} \leq m < b_{\text{upp}} \\
1 + r \cdot \frac{1}{2(b_{\text{low}} - l)} (m - b_{\text{upp}}), & \text{if } m \geq b_{\text{upp}} \\
\end{cases}
\]
The resulting Total Allowable Catch (TAC) is the previous TAC scaled by the multiplier:
\[
\text{TAC}_{y+1} = h(m) \cdot \text{TAC}_{y}
\]
Below \(b_{\text{low}}\) the multiplier ramps down (moderately between \(l\) and \(b_{\text{low}}\), quadratically below \(l\)), while above \(b_{\text{upp}}\) it increases linearly from 1 at a rate set by the slope ratio \(r\) (sloperatio).
🔎 Example with Default Parameters
If you use the defaults:
target = 0.5
width = 1
sloperatio = 0.2
Then:
bufflow = -0.5, buffupp = 1.5, lim = -1.5
The flat zone spans -0.5 to 1.5; this wide range is sensible for standardized metrics such as z-scores, but not for raw depletion.
If using depletion as the metric, you’d typically want:
target = 0.4
width = 0.1
sloperatio = 0.2
→ lim = 0.2, bufflow = 0.3, buffupp = 0.5
The resulting response curve (figure not shown here) illustrates the piecewise nature of the multiplier, which can be tailored to any input metric (depletion, zscore, etc.).
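The piecewise rule above can be sketched in base R. This is an illustrative reimplementation of the formula for \(h(m)\), not the bufferdelta.hcr() function from the jmMSE codebase; it uses the depletion-style parameters given above.

```r
# Sketch of the piecewise HCR multiplier h(m) described above
# (illustrative; not the jmMSE bufferdelta.hcr() implementation).
hcr_mult <- function(m, target = 0.4, width = 0.1, sloperatio = 0.2) {
  lim     <- target - 2 * width   # limit threshold l
  bufflow <- target - width       # lower buffer bound
  buffupp <- target + width       # upper buffer bound
  if (m <= lim) {
    0.5 * (m / lim)^2
  } else if (m < bufflow) {
    0.5 * (1 + (m - lim) / (bufflow - lim))
  } else if (m < buffupp) {
    1
  } else {
    1 + sloperatio * (m - buffupp) / (2 * (bufflow - lim))
  }
}

# Depletion-style defaults: lim = 0.2, bufflow = 0.3, buffupp = 0.5
sapply(c(0.1, 0.25, 0.4, 0.6), hcr_mult)
# → 0.125 0.750 1.000 1.100
```

Evaluating at points in each zone confirms the continuity of the rule: 0.5 at the limit, 1 across the buffer, and a linear increase above it.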
1.2 Overview of cpuescore
In the jmMSE framework, different CPUE scoring functions are used to inform harvest control rules (HCRs). These functions standardize or compare CPUE time series across simulations and reference periods. The three primary scoring methods are:
cpuescore.z
cpuescore.mean
cpuescore.level
These names were modified from the original cpuescore functions for clarity. Future iterations may refactor this naming convention to reflect that these are indices more generally (e.g., from surveys), avoiding confusion with terminology that refers to fishery-dependent indices, i.e., CPUEs.
1.3 1. Z-score Standardization: cpuescore.z
This method standardizes the CPUE values by subtracting the mean and dividing by the standard deviation across simulations (written here per simulation \(i\), relative to a reference period, by analogy with the mean-ratio score below): \[
\text{score}_i = \frac{\bar{\text{CPUE}}_{dy, i} - \bar{\text{CPUE}}_{ref, i}}{\text{sd}\left(\text{CPUE}_{ref, i}\right)}
\] This is useful when you want to assess relative anomalies in CPUE from expected trends.
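A minimal base-R sketch of this kind of standardization follows. The function name and its arguments are hypothetical, not the jmMSE cpuescore.z implementation, and it assumes the mean and standard deviation are taken over a reference-period series.

```r
# Illustrative z-score standardization of a CPUE value against a
# reference period (hypothetical helper, not jmMSE's cpuescore.z).
cpue_z <- function(cpue, ref) {
  (cpue - mean(ref)) / sd(ref)
}

ref <- c(1.0, 1.2, 0.9, 1.1, 0.8)  # reference-period CPUE
cpue_z(1.3, ref)                   # positive: recent CPUE above the reference mean
```

A score near zero indicates a CPUE close to the reference expectation; large positive or negative scores flag anomalies.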
1.4 2. Mean Ratio: cpuescore.mean
This method compares the mean CPUE in recent years (dy) to a reference mean CPUE: \[
\text{score}_i = \frac{\bar{\text{CPUE}}_{dy, i}}{\bar{\text{CPUE}}_{ref, i}}
\] This is a relative index level and is not standardized.
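The ratio above can be sketched in base R as follows. The function name is hypothetical, not the jmMSE cpuescore.mean implementation; it simply takes the mean over recent data years divided by the mean over a reference period.

```r
# Illustrative mean-ratio score: mean CPUE over recent data years
# relative to a reference-period mean (hypothetical helper, not
# jmMSE's cpuescore.mean).
cpue_mean_ratio <- function(recent, ref) {
  mean(recent) / mean(ref)
}

cpue_mean_ratio(c(0.9, 1.1), c(1.0, 1.2, 0.8))
# → 1 (recent mean equals the reference mean)
```

Values above 1 indicate recent CPUE above the reference level; values below 1 indicate decline.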
| Object | Description |
| --- | --- |
| h1 | A list containing the full OM, OEM, and IEM for hypothesis H1 (qs file) |
| om | Iterated subset of the Operating Model from h1 |
| oem, iem | Observation and implementation error models; extracted from h1 |
| omperf | Performance metrics of OM alone, usually C, F, SB for conditioning years |
| perf | Combined data frame of MP simulation performance results |
| getSlick | Function that merges MP/OM results and constructs a Slick summary object |
| FLslick | Constructor function that builds and returns a Slick object for plotting |
| sli | The returned Slick object for visualization (Kobe, Quilt, Spider, etc.) |
| ctrl | A list of control parameters for MPs (e.g., estimation methods, tuning devs) |
| condition | Not found in current project files; possibly a misidentified object |
1.8.1 Helper Functions to Plot Results
The Slick software, in addition to providing an interface for examining OMs and MPs, also provides helper functions to visualize the results of MP simulations directly. The examples in the following code chunk illustrate some of these features.
The Slick object is the core summary container created by the getSlick() and FLslick() functions. It contains performance data used for visualization and evaluation across multiple Management Procedures (MPs) and Operating Models (OMs).
| Slot | Contents |
| --- | --- |
| @Boxplot | MP × OM × performance indicators (boxplots) |
| @Kobe | SB/SBMSY vs F/FMSY over kobeyrs |
| @Quilt | Heatmap of average performance |
| @Spider | Scaled performance for visual trade-offs |
| @Timeseries | Time series of F, C, SB |
| @Tradeoff | Mean trade-off indicators (post-OM years) |
| @MPs, @OMs | Metadata: MP and OM definitions and labels |
The R code below demonstrates how to load the operating model (OM) and compute baseline performance metrics, so that OM performance can be combined with the MP results using the getSlick() function.
```r
# Load OM and compute baseline performance
h1 <- om  # qread("data/h1_1.07.qs")
om <- iter(h1$om, seq(100))
omperf <- performance(om, years = 1970:2023,
                      statistics = statistics[c("C", "F", "SB")])
perf <- readPerformance(here::here("demo", "performance.dat.gz"))
head(perf)
head(omperf)
summary(omperf)

# Combine with MP results (perf), filtering to "tune" runs
sli <- getSlick(perf, omperf, kobeyrs = 2034:2042)
sli <- getSlick(perf[grep("tune", run)], omperf, kobeyrs = 2034:2042)
```