BASSLINE (BAyeSian Survival anaLysIs usiNg shapE mixtures of log-normal distributions) uses shape mixtures of log-normal distributions to fit data with fat tails and has been adapted from code produced for Objective Bayesian Survival Analysis Using Shape Mixtures of Log-Normal Distributions (Vallejos and Steel 2015). Some of the functions have been rewritten in C++ for increased performance.
Five distributions from the log-normal family are supported by BASSLINE:

- log-normal (LN)
- log-Student's t (LST)
- log-Laplace (LLAP)
- log-exponential power (LEP)
- log-logistic (LLOG)

As well as MCMC (Markov chain Monte Carlo) algorithms for the five distributions, functions are provided to calculate log-marginal likelihood estimators and deviance information criteria. Case-deletion analysis and outlier detection are also supported.
This vignette demonstrates how to use the BASSLINE package to carry out survival analysis using the included cancer data set from the US Veterans Administration lung cancer trial.
Essential parameters for running the MCMC are:

- N: total number of iterations
- thin: length of the thinning period (i.e. only every thin-th iteration will be stored in the output)
- burn: length of the burn-in period (i.e. the initial burn iterations that will be discarded from the output)
- Time: vector of survival times
- Cens: vector indicating whether observations are censored
- X: matrix of covariates for each observation

Starting values for the βs and σ2 are randomly sampled from an appropriate distribution if not given by the user as arguments. Additional arguments allow the type of prior to be specified, and the observations to be specified as set or point observations. See the documentation for any of the MCMC functions included with BASSLINE for more information on these additional arguments.
Note that BASSLINE does not support factors/levels. Factors should be converted to separate binary variables for each level, which can easily be done via the provided BASSLINE_convert function. For example:
Time | Cens | treatment |
---|---|---|
10 | 1 | A |
15 | 1 | B |
24 | 0 | C |
21 | 1 | A |
can be converted by simply passing the data frame to the function:
Time | Cens | treatment.A | treatment.B | treatment.C |
---|---|---|---|---|
10 | 1 | 1 | 0 | 0 |
15 | 1 | 0 | 1 | 0 |
24 | 0 | 0 | 0 | 1 |
21 | 1 | 1 | 0 | 0 |
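The conversion above can be sketched as a single call; the data frame name `df` is illustrative, and the exact column naming of the output may differ from this sketch:

```r
library(BASSLINE)

# Illustrative data frame matching the table above
df <- data.frame(Time = c(10, 15, 24, 21),
                 Cens = c(1, 1, 0, 1),
                 treatment = c("A", "B", "C", "A"))

# Expand the factor column into one binary column per level
converted <- BASSLINE_convert(df)
```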
Included with BASSLINE is an example data set, cancer. This data was obtained from a study conducted by the US Veterans Administration in which male patients with advanced inoperable lung cancer received either standard or experimental chemotherapy treatment (Kalbfleisch and Prentice 2002). 137 patients took part in the trial, 9 of whom left the study before their death and are thus right-censored. Various covariates were also documented for each patient.

Viewing the first 5 observations shows the data set’s format:
Time | Cens | Intercept | Treat | Type.1 | Type.2 | Type.3 | Status | MFD | Age | Prior |
---|---|---|---|---|---|---|---|---|---|---|
72 | 1 | 1 | 0 | 1 | 0 | 0 | 60 | 7 | 69 | 0 |
411 | 1 | 1 | 0 | 1 | 0 | 0 | 70 | 5 | 64 | 1 |
228 | 1 | 1 | 0 | 1 | 0 | 0 | 60 | 3 | 38 | 0 |
126 | 1 | 1 | 0 | 1 | 0 | 0 | 60 | 9 | 63 | 1 |
118 | 1 | 1 | 0 | 1 | 0 | 0 | 70 | 11 | 65 | 1 |
The first column of cancer denotes the survival time for the observation. The second column denotes the censoring status for the observation (0 for right-censored; 1 for not censored). All remaining columns are covariates. Additional information can be found in the documentation for cancer.
MCMC chains can be easily generated by providing the aforementioned essential parameters. As previously discussed, starting values are randomly sampled if not provided by the user. The user may obtain improved results by experimenting with these starting values.
Please note that N = 1000, as used in these examples, is not enough to reach convergence and is used only as a demonstration. The user is advised to run longer chains with a longer burn-in period for more accurate estimates (especially for the log-exponential power model).
# Log-normal
library(BASSLINE)
Time <- cancer[, 1]  # survival times (first column of cancer)
Cens <- cancer[, 2]  # censoring indicators (second column)
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = Time,
              Cens = Cens, X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 1.1
#> Initial beta 2 : 0.67
#> Initial beta 3 : -0.03
#> Initial beta 4 : -1.97
#> Initial beta 5 : 0.69
#> Initial beta 6 : -0.7
#> Initial beta 7 : 0.38
#> Initial beta 8 : 0.57
#> Initial beta 9 : -1.22
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 0.96
# Log-Student's t
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40 , Time = Time, Cens = Cens,
X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 0.32
#> Initial beta 2 : -1.29
#> Initial beta 3 : 0.98
#> Initial beta 4 : -0.08
#> Initial beta 5 : -0.13
#> Initial beta 6 : 0.29
#> Initial beta 7 : 0.9
#> Initial beta 8 : 0.52
#> Initial beta 9 : 0.78
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 4.1
#>
#> Sampling initial nu from a Gamma(2, 2) distribution
#> Initial nu : 0.79
#>
#> AR nu : 0.34
# Log-Laplace
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = Time, Cens = Cens,
X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 1.21
#> Initial beta 2 : 0.46
#> Initial beta 3 : 0.21
#> Initial beta 4 : 0.25
#> Initial beta 5 : -0.63
#> Initial beta 6 : 1.27
#> Initial beta 7 : 0.55
#> Initial beta 8 : 0.88
#> Initial beta 9 : 0.21
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 1.79
# Log-exponential power
LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = Time, Cens = Cens,
X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 1.18
#> Initial beta 2 : -0.35
#> Initial beta 3 : -0.72
#> Initial beta 4 : 0.07
#> Initial beta 5 : -0.14
#> Initial beta 6 : 0.49
#> Initial beta 7 : 0.18
#> Initial beta 8 : -0.95
#> Initial beta 9 : 0.16
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 0.69
#>
#> Sampling initial alpha from a Uniform(1, 2) distribution
#> Initial alpha : 1.71
#>
#> AR beta 1 : 0.58
#> AR beta 2 : 0.67
#> AR beta 3 : 0.74
#> AR beta 4 : 0.68
#> AR beta 5 : 0.78
#> AR beta 6 : 0.03
#> AR beta 7 : 0.06
#> AR beta 8 : 0.03
#> AR beta 9 : 0.71
#> AR sigma2 : 0.88
#> AR alpha : 0.07
# Log-logistic
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = Time, Cens = Cens,
X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : -0.78
#> Initial beta 2 : 0.74
#> Initial beta 3 : 1.19
#> Initial beta 4 : -1.48
#> Initial beta 5 : -0.12
#> Initial beta 6 : -1.64
#> Initial beta 7 : 1.38
#> Initial beta 8 : -1.15
#> Initial beta 9 : 0.81
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 2.89
After generating MCMC chains, their suitability should be assessed. This can be done, in part, via the Trace_Plot function included with BASSLINE, which plots a chain for a variable across (non-discarded) iterations. We will investigate the chain for β1 from the log-normal model.

For additional analysis of chains, the coda package is recommended, which offers many different functions. ACF plots are also available via coda:
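As a sketch, the standard coda utilities below (not part of BASSLINE) can summarise a chain and draw trace and ACF plots; this assumes the chain is stored as a matrix with one column per parameter, as in the examples above:

```r
library(coda)

# Wrap the (already thinned) chain matrix in a coda mcmc object
LN_mcmc <- mcmc(LN)

summary(LN_mcmc)            # posterior summaries for each parameter
traceplot(LN_mcmc[, 1])     # trace plot for beta 1
autocorr.plot(LN_mcmc[, 1]) # ACF plot for beta 1
```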
The deviance information criterion (DIC), a hierarchical modeling generalization of the Akaike information criterion (Spiegelhalter et al. 2002), can be easily calculated for the 5 models.
If the deviance is defined as

D(θ, y) = −2 log(f(y ∣ θ))

then

DIC = D(θ̄) + 2pD

where pD is the effective number of parameters.
Each member of the log-normal family has a function to calculate DIC. We will present an example using the log-normal distribution and compare two models with differing covariates (all of the available covariates vs. only the last 4 covariates).
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = Time,
Cens = Cens, X = cancer[,3:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 1.44
#> Initial beta 2 : 0.22
#> Initial beta 3 : -0.46
#> Initial beta 4 : 1.34
#> Initial beta 5 : 0.55
#> Initial beta 6 : 0.02
#> Initial beta 7 : -2.55
#> Initial beta 8 : -0.05
#> Initial beta 9 : 0.3
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 1.33
DIC_LN(Time = Time, Cens = Cens, X = cancer[,3:11], chain = LN)
#> Effective number of parameters : 7.63
#> Actual number of parameters : 10
#> [1] 1447.125
# Reduced model
LN.2 <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = Time,
Cens = Cens, X = cancer[,8:11])
#> Sampling initial betas from a Normal(0, 1) distribution
#> Initial beta 1 : 0.74
#> Initial beta 2 : 1.28
#> Initial beta 3 : 0.63
#> Initial beta 4 : 0.52
#>
#> Sampling initial sigma^2 from a Gamma(2, 2) distribution
#> Initial sigma^2 : 2.28
DIC_LN(Time = Time, Cens = Cens, X = cancer[,8:11], chain = LN.2)
#> Effective number of parameters : 4.89
#> Actual number of parameters : 5
#> [1] 1455.021
The log-marginal likelihood is given by:
log(m(t)) = log(∫−∞∞ ∫0∞ ∫Θ f(t ∣ β, σ2, θ) π(β, σ2, θ) dθ dσ2 dβ)

It can be easily estimated using the supplied function for each distribution, which is based on the work of Chib (1995) and Chib and Jeliazkov (2001). The function returns a list which includes the log-marginal likelihood estimate, the log-likelihood ordinate, the log-prior ordinate, and the log-posterior ordinates. Messages detailing the progress of the algorithm are provided to the user.
LML_LN(thin = 20, Time, Cens = Cens, X = cancer[,3:11], chain = LN)
#> Likelihood ordinate ready!
#> Prior ordinate ready!
#> Posterior ordinate sigma2 ready!
#> Posterior ordinate beta ready!
#> $LL.ord
#> [1] -715.9365
#>
#> $LP.ord
#> [1] -0.2470357
#>
#> $LPO.sigma2
#> [1] 0.8033505
#>
#> $LPO.beta
#> [1] 14.97406
#>
#> $LML
#> [1] -731.961
Leave-one-out cross-validation analysis is also available for all 5 supported distributions. The functions return matrices with n rows.

The first column contains the logarithm of the conditional predictive ordinate (CPO) (Geisser and Eddy 1979). Larger values of the CPO indicate better predictive accuracy.

The second and third columns contain the Kullback–Leibler (KL) divergence between π(β, σ2, θ ∣ t−i) and π(β, σ2, θ ∣ t) and its calibration index pi (Cho et al. 2008), respectively. The latter is used to evaluate the existence of influential observations. If pi is substantially larger than 0.5, observation i is declared influential. Suggested cut-off values are 0.8 and 0.9.
logCPO | KL | CALIBRATION |
---|---|---|
-5.440174 | 0.0067233 | 0.5577853 |
-7.590519 | 0.0524852 | 0.6578360 |
-6.925747 | 0.0315844 | 0.6237084 |
-5.935745 | 0.0090586 | 0.5669963 |
-5.861062 | 0.0059235 | 0.5542611 |
It is sensible to report the number of observations deemed influential for a given cut-off, which can easily be found by the user.
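For example, such a count can be obtained directly from the returned matrix. The function name CaseDeletion_LN below is an assumption, inferred from the DIC_LN / LML_LN naming pattern (check the package index for the exact name); the CALIBRATION column name matches the table above:

```r
# Hypothetical function name, following the DIC_LN / LML_LN pattern;
# consult the BASSLINE documentation for the exact name.
cd <- CaseDeletion_LN(Time = Time, Cens = Cens,
                      X = cancer[, 3:11], chain = LN)

# Number of observations flagged as influential at each suggested cut-off
sum(cd[, "CALIBRATION"] > 0.8)
sum(cd[, "CALIBRATION"] > 0.9)
```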
Outlier detection is available for a specified observation (obs) for the log-Student's t, log-Laplace, log-logistic, and log-exponential power models. This returns a unique number corresponding to the Bayes factor associated with the test M0 : Λobs = λref versus M1 : Λobs ≠ λref (with all other Λj, j ≠ obs, free). The value of λref is required as input.

The recommended value of λref is 1, with the exception of the log-logistic model, for which we recommend 0.4 instead. The calculations supporting these recommendations can be found in the original paper (Vallejos and Steel 2015).
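A sketch of such a call for the log-Student's t model is given below. The function name BF_lambda_obs_LST and its argument names are hypothetical (check the package index for the exact outlier-detection function for each model); the chain LST is the one generated earlier:

```r
# Hypothetical function and argument names; consult the BASSLINE
# documentation for the exact interface.
# Bayes factor for observation 1 under the log-Student's t model,
# using the recommended reference value lambda_ref = 1.
BF_lambda_obs_LST(ref = 1, obs = 1, Time = Time, Cens = Cens,
                  X = cancer[, 3:11], chain = LST)
```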