`A207-NBG.Rmd`

Neuenschwander, Branson, and Gsponer (2008) (NBG) introduced a derivative of the CRM for dose-escalation clinical trials using the model:

\[ \text{logit}\, p_i = \alpha + \exp(\beta) \log(x_i / d^*), \]

where \(p_i\) is the probability of toxicity at the \(i\)th dose, \(x_i\), and \(d^*\) is a reference dose. Here \(\alpha\) and \(\beta\) are model parameters on which the authors place a bivariate normal prior. This model is very similar to the two-parameter logistic CRM, implemented with `stan_crm(model = 'logistic2')`. A notable difference, however, is that the dose, \(x_i\), enters the model as a covariate, dispensing with the toxicity skeleton used in the CRM.
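To make the dose-toxicity curve concrete, here is a minimal base-R sketch (the function name `nbg_prob_tox` is ours, not part of any package) that evaluates the model at the prior means used later in this vignette, \(\alpha = 2.15\) and \(\beta = 0.52\):

```r
# NBG dose-toxicity model: logit p = alpha + exp(beta) * log(x / d_star)
nbg_prob_tox <- function(x, alpha, beta, d_star) {
  plogis(alpha + exp(beta) * log(x / d_star))
}

# Prior-mean curve over the doses used in this vignette
dose <- c(1, 2.5, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 150, 200, 250)
round(nbg_prob_tox(dose, alpha = 2.15, beta = 0.52, d_star = 250), 3)
```

Because \(\exp(\beta) > 0\), the fitted curve is guaranteed to be monotonically increasing in dose, whatever the posterior values of \(\alpha\) and \(\beta\).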

`escalation` supports this model. The heavy lifting required to fit it is performed by `trialr` and `rstan`; `escalation` merely composes the model fit so that it can be used with the myriad dose-selection options provided in this package.

For illustration, let us reproduce the analysis in Neuenschwander, Branson, and Gsponer (2008) that the authors used to demonstrate the flexibility of a two-parameter approach. In a trial of 15 doses, the investigators saw outcomes:

```r
library(escalation)
#> Loading required package: magrittr

dose <- c(1, 2.5, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 150, 200, 250)
outcomes <- '1NNN 2NNNN 3NNNN 4NNNN 7TT'
```

Creating a dose-escalation model with NBG’s parameters:

```r
model <- get_trialr_nbg(real_doses = dose, d_star = 250, target = 0.3,
                        alpha_mean = 2.15, alpha_sd = 0.84,
                        beta_mean = 0.52, beta_sd = 0.8,
                        seed = 2020)
```

and fitting the model to the observed outcomes:

```r
fit <- model %>% fit(outcomes)
fit
#> Patient-level data:
#> # A tibble: 17 x 4
#>    Patient Cohort  Dose   Tox
#>      <int>  <int> <int> <int>
#>  1       1      1     1     0
#>  2       2      1     1     0
#>  3       3      1     1     0
#>  4       4      2     2     0
#>  5       5      2     2     0
#>  6       6      2     2     0
#>  7       7      2     2     0
#>  8       8      3     3     0
#>  9       9      3     3     0
#> 10      10      3     3     0
#> 11      11      3     3     0
#> 12      12      4     4     0
#> 13      13      4     4     0
#> 14      14      4     4     0
#> 15      15      4     4     0
#> 16      16      5     7     1
#> 17      17      5     7     1
#> 
#> Dose-level data:
#> # A tibble: 16 x 9
#>    RealDose dose    tox     n empiric_tox_rate mean_prob_tox median_prob_tox
#>       <dbl> <ord> <dbl> <dbl>            <dbl>         <dbl>           <dbl>
#>  1     NA   NoDo…     0     0                0        0              0
#>  2      1   1         0     3                0        0.0126         0.00563
#>  3      2.5 2         0     4                0        0.0325         0.0197
#>  4      5   3         0     4                0        0.0675         0.0493
#>  5     10   4         0     4                0        0.138          0.119
#>  6     15   5         0     0              NaN        0.206          0.189
#>  7     20   6         0     0              NaN        0.269          0.256
#>  8     25   7         2     2                1        0.326          0.319
#>  9     30   8         0     0              NaN        0.378          0.374
#> 10     40   9         0     0              NaN        0.466          0.469
#> 11     50   10        0     0              NaN        0.537          0.547
#> 12     75   11        0     0              NaN        0.662          0.677
#> 13    100   12        0     0              NaN        0.739          0.758
#> 14    150   13        0     0              NaN        0.826          0.847
#> 15    200   14        0     0              NaN        0.872          0.893
#> 16    250   15        0     0              NaN        0.899          0.920
#> # … with 2 more variables: admissible <lgl>, recommended <lgl>
#> 
#> The model targets a toxicity level of 0.3.
#> The model advocates continuing at dose 7.
```

We see that dose 7 is selected for the next cohort, using the metric of choosing the dose whose posterior expected probability of toxicity is closest to the target. In the output above, `mean_prob_tox` broadly matches the values plotted in the lower-right panel of Figure 1 in Neuenschwander, Branson, and Gsponer (2008).
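The selection metric itself is easy to verify by hand. A minimal base-R check, using the `mean_prob_tox` column copied from the fit summary above:

```r
# Posterior mean probabilities of toxicity at doses 1-15, from the fit above
mean_prob_tox <- c(0.0126, 0.0325, 0.0675, 0.138, 0.206, 0.269, 0.326,
                   0.378, 0.466, 0.537, 0.662, 0.739, 0.826, 0.872, 0.899)
target <- 0.3

# Dose with posterior mean probability of toxicity closest to the target
which.min(abs(mean_prob_tox - target))
#> [1] 7
```

Dose 7 (at 0.326) is 0.026 from the target, narrowly beating dose 6 (at 0.269, 0.031 away).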

There are a few minor shortcomings of the NBG implementation in `escalation` and `trialr`. Firstly, NBG propose a bivariate normal prior distribution on \(\alpha\) and \(\beta\); however, the implementation in `trialr` currently uses independent normal priors. Hopefully, this will be addressed in a future release of `trialr`.

Furthermore, NBG propose a method for selecting dose that accounts for the probability of recommending an overdose. That logic is not currently implemented in `escalation`. However, a proposal that addresses the same issue was presented by Mozgunov and Jaki (2020); it is implemented in `escalation` and can be applied to the NBG method:

```r
model2 <- model %>% select_dose_by_cibp(a = 0.3)
```

Fitting the new model to the same outcomes:

```r
fit2 <- model2 %>% fit(outcomes)
```

Rather than sticking at dose 7, the design now prefers to de-escalate to dose 6:

```r
fit2 %>% recommended_dose()
#> [1] 6
```

Mozgunov & Jaki’s method was published in relation to the CRM design, but in `escalation` it can be applied to any model that provides posterior samples via the `prob_tox_samples` method, including Neuenschwander *et al.*’s method illustrated here.

We can use the `get_dose_paths` function in `escalation` to calculate exhaustive model recommendations in response to every possible set of outcomes in future cohorts.

For instance, at the start of a trial using the NBG model detailed above, we can examine all possible paths a trial might take in the first two cohorts of three patients, starting at dose 2:

```r
paths1 <- model %>% get_dose_paths(cohort_sizes = c(3, 3), next_dose = 2)
graph_paths(paths1)
```

We can then compare these to the similar advice from the model that adds Mozgunov & Jaki’s criterion:

```r
paths2 <- model2 %>% get_dose_paths(cohort_sizes = c(3, 3), next_dose = 2)
graph_paths(paths2)
```

We can see that in several situations the second model is more conservative in escalation, achieving the authors' goal. Perhaps unexpectedly, however, the second design escalates to dose 11 after initial outcomes `2NNN`, slightly more aggressively than the default model, which identifies dose 10.

Dose-paths can also be run for in-progress trials where some outcomes have been established. For more information on working with dose-paths, refer to the dose-paths vignette.

We can use the `simulate_trials` function to calculate operating characteristics for a design. Let us take the example above and append behaviour to stop when the lowest dose is too toxic, when 9 patients have already been evaluated at the candidate dose, or when a sample size of \(n=24\) is reached:

```r
dose <- c(1, 2.5, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 150, 200, 250)

model <- get_trialr_nbg(real_doses = dose, d_star = 250, target = 0.3,
                        alpha_mean = 2.15, alpha_sd = 0.84,
                        beta_mean = 0.52, beta_sd = 0.8,
                        seed = 2020) %>%
  stop_when_too_toxic(dose = 1, tox_threshold = 0.3, confidence = 0.8) %>%
  stop_when_n_at_dose(dose = 'recommended', n = 9) %>%
  stop_at_n(n = 24)
```

For the sake of speed, we will run just ten iterations:

```r
num_sims <- 10
```

In real life, however, we would naturally run many thousands of iterations.

Then let us investigate under the following true probabilities of toxicity:

```r
sc1 <- c(0.01, 0.03, 0.10, 0.17, 0.25, 0.35, 0.45, 0.53, 0.60, 0.65,
         0.69, 0.72, 0.75, 0.79, 0.80)
```

The simulated behaviour is:

```r
set.seed(123)
sims <- model %>% simulate_trials(num_sims = num_sims, true_prob_tox = sc1,
                                  next_dose = 1)
sims
#> Number of iterations: 10
#> 
#> Number of doses: 15
#> 
#> True probability of toxicity:
#>    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15 
#> 0.01 0.03 0.10 0.17 0.25 0.35 0.45 0.53 0.60 0.65 0.69 0.72 0.75 0.79 0.80 
#> 
#> Probability of recommendation:
#> NoDose      1      2      3      4      5      6      7      8      9     10 
#>    0.0    0.0    0.0    0.0    0.2    0.2    0.3    0.3    0.0    0.0    0.0 
#>     11     12     13     14     15 
#>    0.0    0.0    0.0    0.0    0.0 
#> 
#> Probability of administration:
#>      1      2      3      4      5      6      7      8      9     10     11 
#> 0.1282 0.0000 0.0128 0.1410 0.1667 0.1538 0.1026 0.0641 0.0897 0.1410 0.0000 
#>     12     13     14     15 
#> 0.0000 0.0000 0.0000 0.0000 
#> 
#> Sample size:
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#>    21.0    24.0    24.0    23.4    24.0    24.0 
#> 
#> Total toxicities:
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#>    6.00    7.25    8.00    7.80    8.75    9.00 
#> 
#> Trial duration:
#>    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
#>   17.20   21.70   22.80   22.32   23.20   24.60
```

We see that the chances of stopping for excess toxicity and recommending no dose are low. Doses 4-7 are the favourites to be identified.
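As a quick sanity check on that observation, we can compute which dose under `sc1` has true toxicity probability closest to the 0.3 target. Dose 5 (at 0.25) and dose 6 (at 0.35) are equally close, and `which.min` reports the first:

```r
sc1 <- c(0.01, 0.03, 0.10, 0.17, 0.25, 0.35, 0.45, 0.53, 0.60, 0.65,
         0.69, 0.72, 0.75, 0.79, 0.80)

# True MTD under the 'closest to target' metric
which.min(abs(sc1 - 0.3))
#> [1] 5
```

So the doses the simulations most often recommend straddle the true maximum tolerated dose, as we would hope.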

For more information on running dose-finding simulations, refer to the simulation vignette.

Mozgunov, Pavel, and Thomas Jaki. 2020. “Improving Safety of the Continual Reassessment Method via a Modified Allocation Rule.” *Statistics in Medicine* 39 (7): 906–22. https://doi.org/10.1002/sim.8450.

Neuenschwander, Beat, Michael Branson, and Thomas Gsponer. 2008. “Critical aspects of the Bayesian approach to phase I cancer trials.” *Statistics in Medicine* 27: 2420–39. https://doi.org/10.1002/sim.3230.