Title: Bayesian Survival Analysis Using Shape Mixtures of Log-Normal Distributions
Description: Mixtures of life distributions provide a convenient framework for survival analysis, particularly when standard models such as the Weibull are unable to capture some features of the data. These mixtures can also account for unobserved heterogeneity or outlying observations. BASSLINE uses shape mixtures of log-normal distributions and has particular applicability to data with fat tails.
Authors: Catalina Vallejos [aut], Nathan Constantine-Cooke [cre, aut]
Maintainer: Nathan Constantine-Cooke <[email protected]>
License: GPL-3
Version: 0.0.0.9010
Built: 2024-10-13 04:19:19 UTC
Source: https://github.com/nathansam/BASSLINE
BASSLINE's functions require that a numeric matrix be provided. This function converts a data frame of mixed variable types (numeric and factor) to a numeric matrix. A factor with $m$ levels is converted to $m$ columns of binary values denoting the level to which each observation belongs.
BASSLINE_convert(df)
df |
A data frame intended for conversion |
A numeric matrix suitable for BASSLINE functions
library(BASSLINE)
Time <- c(5, 15, 15)
Cens <- c(1, 0, 1)
experiment <- as.factor(c("chem1", "chem2", "chem3"))
age <- c(15, 35, 20)
df <- data.frame(Time, Cens, experiment, age)
converted <- BASSLINE_convert(df)
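The factor expansion described above mirrors base R's dummy coding; a minimal sketch using `model.matrix` (an illustrative equivalent for the factor columns, not the package's implementation):

```r
# Assumes a data frame mixing a numeric column and a factor, as above.
Time <- c(5, 15, 15)
experiment <- as.factor(c("chem1", "chem2", "chem3"))
df <- data.frame(Time, experiment)

# A 0-intercept formula expands the m-level factor into m binary
# indicator columns, one per level.
converted <- cbind(Time = df$Time,
                   model.matrix(~ 0 + experiment, data = df))
colnames(converted)
```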
Returns a single number corresponding to the Bayes factor associated with the test $M_0: \Lambda_{obs} = \lambda_{ref}$ versus $M_1: \Lambda_{obs} \neq \lambda_{ref}$ (with all other $\Lambda_j$, $j \neq obs$, free). The value of $\lambda_{ref}$ is required as input. The user should expect long running times for the log-Student's t model, in which case a reduced chain given $\Lambda_{obs} = \lambda_{ref}$ needs to be generated.
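A Bayes factor of this nested, point-null form can be written as a ratio of marginal likelihoods; under the usual prior-matching condition it reduces to the Savage-Dickey density ratio (a sketch of the general identity, not necessarily the estimator implemented here):

```latex
% M0: Lambda_obs = lambda_ref  vs  M1: Lambda_obs free
\mathrm{BF}_{01}
  = \frac{m(t \mid M_0)}{m(t \mid M_1)}
  = \frac{\pi(\Lambda_{obs} = \lambda_{ref} \mid t)}{\pi(\Lambda_{obs} = \lambda_{ref})}
```

The second equality holds when the prior on the remaining parameters under $M_0$ equals their conditional prior under $M_1$ given $\Lambda_{obs} = \lambda_{ref}$; small values of $\mathrm{BF}_{01}$ can flag observation $obs$ as a potential outlier.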
BF_lambda_obs_LLAP(obs, ref, X, chain)
obs |
Indicates the number of the observation under analysis |
ref |
Reference value |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLAP.outlier <- BF_lambda_obs_LLAP(1, 1, X = cancer[, 3:11], chain = LLAP)
Returns a single number corresponding to the Bayes factor associated with the test $M_0: \Lambda_{obs} = \lambda_{ref}$ versus $M_1: \Lambda_{obs} \neq \lambda_{ref}$ (with all other $\Lambda_j$, $j \neq obs$, free). The value of $\lambda_{ref}$ is required as input. The user should expect long running times for the log-Student's t model, in which case a reduced chain given $\Lambda_{obs} = \lambda_{ref}$ needs to be generated.
BF_lambda_obs_LLOG(ref, obs, X, chain)
ref |
Reference value |
obs |
Indicates the number of the observation under analysis |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLOG.Outlier <- BF_lambda_obs_LLOG(1, 1, X = cancer[, 3:11], chain = LLOG)
Returns a single number corresponding to the Bayes factor associated with the test $M_0: \Lambda_{obs} = \lambda_{ref}$ versus $M_1: \Lambda_{obs} \neq \lambda_{ref}$ (with all other $\Lambda_j$, $j \neq obs$, free). The value of $\lambda_{ref}$ is required as input. The user should expect long running times for the log-Student's t model, in which case a reduced chain given $\Lambda_{obs} = \lambda_{ref}$ needs to be generated.
BF_lambda_obs_LST(N, thin, burn, ref, obs, Time, Cens, X, chain, Q = 1, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, ar = 0.44)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period |
ref |
Reference value |
obs |
Indicates the number of the observation under analysis |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
Q |
Update period for the mixing parameters |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
ar |
Optimal acceptance rate for the adaptive Metropolis-Hastings updates |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LST.Outlier <- BF_lambda_obs_LST(N = 100, thin = 20, burn = 1, ref = 1,
                                 obs = 1, Time = cancer[, 1],
                                 Cens = cancer[, 2], X = cancer[, 3:11],
                                 chain = LST)
Returns a single number corresponding to the Bayes factor associated with the test $M_0: \Lambda_{obs} = \lambda_{ref}$ versus $M_1: \Lambda_{obs} \neq \lambda_{ref}$ (with all other $\Lambda_j$, $j \neq obs$, free). The value of $\lambda_{ref}$ is required as input. The user should expect long running times for the log-Student's t model, in which case a reduced chain given $\Lambda_{obs} = \lambda_{ref}$ needs to be generated.
BF_u_obs_LEP(N, thin, burn, ref, obs, Time, Cens, X, chain, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, ar = 0.44)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period |
ref |
Reference value |
obs |
Indicates the number of the observation under analysis |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
ar |
Optimal acceptance rate for the adaptive Metropolis-Hastings updates |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations (especially for the log-exponential power model).
LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
alpha <- mean(LEP[, 11])
uref <- 1.6 + 1 / alpha
LEP.Outlier <- BF_u_obs_LEP(N = 100, thin = 20, burn = 1, ref = uref,
                            obs = 1, Time = cancer[, 1], Cens = cancer[, 2],
                            X = cancer[, 3:11], chain = LEP)
Data from a trial in which a therapy (standard or test chemotherapy) was randomly applied to 137 patients who were diagnosed with inoperable lung cancer. The survival times of the patients were measured in days since treatment.
cancer
A matrix with 137 rows and 11 columns:
Survival time (in days)
0 or 1; if 0, the observation is right-censored
The intercept
The treatment applied to the patient (0: standard, 1: test)
The histological type of the tumor (1: type 1, 0: otherwise)
The histological type of the tumor (1: type 2, 0: otherwise)
The histological type of the tumor (1: type 3, 0: otherwise)
A continuous index representing the status of the patient: 10-30 completely hospitalized, 40-60 partial confinement, 70-90 able to care for self.
The time between the diagnosis and the treatment (in months)
Age (in years)
Prior therapy, 0 or 10 (0: no, 10: yes)
Appendix I of Kalbfleisch and Prentice (1980).
Leave-one-out cross-validation analysis. The function returns a matrix with $n$ rows. The first column contains the logarithm of the CPO (Geisser and Eddy, 1979); larger values of the CPO indicate better predictive accuracy of the model. The second and third columns contain the KL divergence between $\pi(\beta, \sigma^2, \theta \mid t_{-i})$ and $\pi(\beta, \sigma^2, \theta \mid t)$ and its calibration index $p_i$, respectively.
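The calibration index $p_i$ can be obtained from the KL divergence via the transformation of McCulloch (1989), which maps it onto the $[0.5, 1]$ scale (the standard calibration, stated here for reference; the package's exact computation is assumed to follow it):

```latex
p_i = \frac{1}{2}\left(1 + \sqrt{1 - e^{-2\,\mathrm{KL}_i}}\right)
```

Values of $p_i$ near 0.5 indicate that deleting observation $i$ barely changes the posterior, while values near 1 indicate a highly influential observation.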
CaseDeletion_LEP(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations (especially for the log-exponential power model).
LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LEP.CD <- CaseDeletion_LEP(Time = cancer[, 1], Cens = cancer[, 2],
                           X = cancer[, 3:11], chain = LEP)
Leave-one-out cross-validation analysis. The function returns a matrix with $n$ rows. The first column contains the logarithm of the CPO (Geisser and Eddy, 1979); larger values of the CPO indicate better predictive accuracy of the model. The second and third columns contain the KL divergence between $\pi(\beta, \sigma^2, \theta \mid t_{-i})$ and $\pi(\beta, \sigma^2, \theta \mid t)$ and its calibration index $p_i$, respectively.
CaseDeletion_LLAP(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLAP.CD <- CaseDeletion_LLAP(Time = cancer[, 1], Cens = cancer[, 2],
                             X = cancer[, 3:11], chain = LLAP)
Leave-one-out cross-validation analysis. The function returns a matrix with $n$ rows. The first column contains the logarithm of the CPO (Geisser and Eddy, 1979); larger values of the CPO indicate better predictive accuracy of the model. The second and third columns contain the KL divergence between $\pi(\beta, \sigma^2, \theta \mid t_{-i})$ and $\pi(\beta, \sigma^2, \theta \mid t)$ and its calibration index $p_i$, respectively.
CaseDeletion_LLOG(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLOG.CD <- CaseDeletion_LLOG(Time = cancer[, 1], Cens = cancer[, 2],
                             X = cancer[, 3:11], chain = LLOG)
Leave-one-out cross-validation analysis. The function returns a matrix with $n$ rows. The first column contains the logarithm of the CPO (Geisser and Eddy, 1979); larger values of the CPO indicate better predictive accuracy of the model. The second and third columns contain the KL divergence between $\pi(\beta, \sigma^2, \theta \mid t_{-i})$ and $\pi(\beta, \sigma^2, \theta \mid t)$ and its calibration index $p_i$, respectively.
CaseDeletion_LN(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
              Cens = cancer[, 2], X = cancer[, 3:11])
LN.CD <- CaseDeletion_LN(Time = cancer[, 1], Cens = cancer[, 2],
                         X = cancer[, 3:11], chain = LN)
Leave-one-out cross-validation analysis. The function returns a matrix with $n$ rows. The first column contains the logarithm of the CPO (Geisser and Eddy, 1979); larger values of the CPO indicate better predictive accuracy of the model. The second and third columns contain the KL divergence between $\pi(\beta, \sigma^2, \theta \mid t_{-i})$ and $\pi(\beta, \sigma^2, \theta \mid t)$ and its calibration index $p_i$, respectively.
CaseDeletion_LST(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LST.CD <- CaseDeletion_LST(Time = cancer[, 1], Cens = cancer[, 2],
                           X = cancer[, 3:11], chain = LST)
The deviance information criterion (DIC) is based on the deviance function $D(\theta, y) = -2 \log f(y \mid \theta)$, but also incorporates a penalty for the complexity of the model.
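In the standard formulation of Spiegelhalter et al. (2002), the complexity penalty is the effective number of parameters $p_D$ (shown here as background; the package's exact estimator may differ):

```latex
p_D = \overline{D(\theta, y)} - D(\bar{\theta}, y),
\qquad
\mathrm{DIC} = \overline{D(\theta, y)} + p_D
```

where $\overline{D(\theta, y)}$ is the posterior mean deviance and $\bar{\theta}$ the posterior mean of the parameters; smaller DIC values indicate a better balance of fit and complexity.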
DIC_LEP(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations (especially for the log-exponential power model).
LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LEP.DIC <- DIC_LEP(Time = cancer[, 1], Cens = cancer[, 2],
                   X = cancer[, 3:11], chain = LEP)
The deviance information criterion (DIC) is based on the deviance function $D(\theta, y) = -2 \log f(y \mid \theta)$, but also incorporates a penalty for the complexity of the model.
DIC_LLAP(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLAP.DIC <- DIC_LLAP(Time = cancer[, 1], Cens = cancer[, 2],
                     X = cancer[, 3:11], chain = LLAP)
The deviance information criterion (DIC) is based on the deviance function $D(\theta, y) = -2 \log f(y \mid \theta)$, but also incorporates a penalty for the complexity of the model.
DIC_LLOG(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLOG.DIC <- DIC_LLOG(Time = cancer[, 1], Cens = cancer[, 2],
                     X = cancer[, 3:11], chain = LLOG)
The deviance information criterion (DIC) is based on the deviance function $D(\theta, y) = -2 \log f(y \mid \theta)$, but also incorporates a penalty for the complexity of the model.
DIC_LN(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
              Cens = cancer[, 2], X = cancer[, 3:11])
LN.DIC <- DIC_LN(Time = cancer[, 1], Cens = cancer[, 2],
                 X = cancer[, 3:11], chain = LN)
The deviance information criterion (DIC) is based on the deviance function $D(\theta, y) = -2 \log f(y \mid \theta)$, but also incorporates a penalty for the complexity of the model.
DIC_LST(Time, Cens, X, chain, set = TRUE, eps_l = 0.5, eps_r = 0.5)
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LST.DIC <- DIC_LST(Time = cancer[, 1], Cens = cancer[, 2],
                   X = cancer[, 3:11], chain = LST)
Log-marginal likelihood estimator for the log-exponential power model
LML_LEP(thin, Time, Cens, X, chain, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
thin |
Thinning period. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 100 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations (especially for the log-exponential power model).
LEP <- MCMC_LEP(N = 100, thin = 2, burn = 20, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LEP.LML <- LML_LEP(thin = 2, Time = cancer[, 1], Cens = cancer[, 2],
                   X = cancer[, 3:11], chain = LEP)
Log-marginal likelihood estimator for the log-Laplace model
LML_LLAP(thin, Time, Cens, X, chain, Q = 1, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
thin |
Thinning period. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
Q |
Update period for the mixing parameters |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision |
eps_r |
Upper imprecision |
library(BASSLINE)
# Please note: N = 1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLAP.LML <- LML_LLAP(thin = 20, Time = cancer[, 1], Cens = cancer[, 2],
                     X = cancer[, 3:11], chain = LLAP)
Log-marginal likelihood estimator for the log-logistic model
LML_LLOG(thin, Time, Cens, X, chain, Q = 10, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, N.AKS = 3)
thin |
Thinning period. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$ ($n$: number of observations; $k$: number of covariates) |
chain |
MCMC chains generated by a BASSLINE MCMC function |
Q |
Update period for the mixing parameters |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
N.AKS |
Maximum number of terms of the Kolmogorov-Smirnov density used for the rejection sampling when updating mixing parameters (default value: 3) |
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
LLOG.LML <- LML_LLOG(thin = 20, Time = cancer[, 1], Cens = cancer[, 2],
                     X = cancer[, 3:11], chain = LLOG)
Log-marginal likelihood estimator for the log-normal model
LML_LN(thin, Time, Cens, X, chain, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
thin |
Thinning period. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
chain |
MCMC chains generated by a BASSLINE MCMC function |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
              Cens = cancer[, 2], X = cancer[, 3:11])
LN.LML <- LML_LN(thin = 20, Time = cancer[, 1], Cens = cancer[, 2],
                 X = cancer[, 3:11], chain = LN)
Log-marginal likelihood estimator for the log-Student's t model
LML_LST(thin, Time, Cens, X, chain, Q = 1, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
thin |
Thinning period. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
chain |
MCMC chains generated by a BASSLINE MCMC function |
Q |
Update period for the mixing parameters. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
LST.LML <- LML_LST(thin = 20, Time = cancer[, 1], Cens = cancer[, 2],
                   X = cancer[, 3:11], chain = LST)
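The log-marginal likelihood estimators can be used for model comparison via Bayes factors. A minimal sketch, assuming each LML_* function returns the log-marginal likelihood as a single numeric value (the numbers below are placeholders, not real estimates):

```r
# Placeholder log-marginal likelihoods; in practice these come from
# LML_LN() and LML_LST() applied to converged chains.
LN.LML <- -512.3
LST.LML <- -508.9

# Bayes factor of the log-Student's t model against the log-normal
# model, computed on the log scale for numerical stability.
log.BF <- LST.LML - LN.LML
BF <- exp(log.BF)
BF # values above 1 favour the log-Student's t model
```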
Adaptive Metropolis-within-Gibbs algorithm with univariate Gaussian random walk proposals for the log-exponential model
MCMC_LEP(N, thin, burn, Time, Cens, X, beta0 = NULL, sigma20 = NULL, alpha0 = NULL, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, ar = 0.44)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period. Must be a multiple of thin. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
beta0 |
Starting values for $\beta$. |
sigma20 |
Starting value for $\sigma^2$. |
alpha0 |
Starting value for $\alpha$. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
ar |
Optimal acceptance rate for the adaptive Metropolis-Hastings updates |
A matrix with $N / thin + 1$ rows. The columns are the MCMC chains for $\beta$ ($k$ columns), $\sigma^2$ (1 column), $\theta$ (1 column, if appropriate), $u$ ($n$ columns, not provided for the log-normal model), $\log(t)$ ($n$ columns, simulated via data augmentation) and the logarithm of the adaptive variances (the number varies among models). The latter allows the user to evaluate whether the adaptive variances have stabilized.
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations (especially for the log-exponential power model).
LEP <- MCMC_LEP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
Adaptive Metropolis-within-Gibbs algorithm with univariate Gaussian random walk proposals for the log-Laplace model
MCMC_LLAP(N, thin, burn, Time, Cens, X, Q = 1, beta0 = NULL, sigma20 = NULL, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period. Must be a multiple of thin. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
Q |
Update period for the mixing parameters. |
beta0 |
Starting values for $\beta$. |
sigma20 |
Starting value for $\sigma^2$. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
A matrix with $N / thin + 1$ rows. The columns are the MCMC chains for $\beta$ ($k$ columns), $\sigma^2$ (1 column), $\theta$ (1 column, if appropriate), $\lambda$ ($n$ columns, not provided for the log-normal model), $\log(t)$ ($n$ columns, simulated via data augmentation) and the logarithm of the adaptive variances (the number varies among models). The latter allows the user to evaluate whether the adaptive variances have stabilized.
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLAP <- MCMC_LLAP(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
Adaptive Metropolis-within-Gibbs algorithm with univariate Gaussian random walk proposals for the log-logistic model
MCMC_LLOG(N, thin, burn, Time, Cens, X, Q = 10, beta0 = NULL, sigma20 = NULL, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, N.AKS = 3)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period. Must be a multiple of thin. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
Q |
Update period for the mixing parameters. |
beta0 |
Starting values for $\beta$. |
sigma20 |
Starting value for $\sigma^2$. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
N.AKS |
Maximum number of terms of the Kolmogorov-Smirnov density used for the rejection sampling when updating mixing parameters (default value: 3) |
A matrix with $N / thin + 1$ rows. The columns are the MCMC chains for $\beta$ ($k$ columns), $\sigma^2$ (1 column), $\theta$ (1 column, if appropriate), $\lambda$ ($n$ columns, not provided for the log-normal model), $\log(t)$ ($n$ columns, simulated via data augmentation) and the logarithm of the adaptive variances (the number varies among models). The latter allows the user to evaluate whether the adaptive variances have stabilized.
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LLOG <- MCMC_LLOG(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                  Cens = cancer[, 2], X = cancer[, 3:11])
Adaptive Metropolis-within-Gibbs algorithm with univariate Gaussian random walk proposals for the log-normal model (no mixture)
MCMC_LN(N, thin, burn, Time, Cens, X, beta0 = NULL, sigma20 = NULL, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period. Must be a multiple of thin. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
beta0 |
Starting values for $\beta$. |
sigma20 |
Starting value for $\sigma^2$. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
A matrix with $(N - burn) / thin + 1$ rows. The columns are the MCMC chains for $\beta$ ($k$ columns), $\sigma^2$ (1 column), $\theta$ (1 column, if appropriate), $\log(t)$ ($n$ columns, simulated via data augmentation) and the logarithm of the adaptive variances (the number varies among models). The latter allows the user to evaluate whether the adaptive variances have stabilized.
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
              Cens = cancer[, 2], X = cancer[, 3:11])
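The returned chain matrix can be summarised with base R. A minimal sketch, using a toy matrix in place of real MCMC_LN output and assuming the first $k$ columns hold the $\beta$ chains with $\sigma^2$ in column $k + 1$, as described above:

```r
set.seed(1)

# Toy stand-in for MCMC_LN output: 49 stored draws of k = 3 regression
# coefficients followed by one sigma^2 column.
k <- 3
chain <- cbind(matrix(rnorm(49 * k), ncol = k),
               rgamma(49, shape = 2, rate = 2))

beta.chain <- chain[, 1:k]     # beta draws (k columns)
sigma2.chain <- chain[, k + 1] # sigma^2 draws (1 column)

# Posterior means and a 95% credible interval
colMeans(beta.chain)
quantile(sigma2.chain, c(0.025, 0.5, 0.975))
```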
Adaptive Metropolis-within-Gibbs algorithm with univariate Gaussian random walk proposals for the log-Student's t model (no mixture)
MCMC_LST(N, thin, burn, Time, Cens, X, Q = 1, beta0 = NULL, sigma20 = NULL, nu0 = NULL, prior = 2, set = TRUE, eps_l = 0.5, eps_r = 0.5, ar = 0.44)
N |
Total number of iterations. Must be a multiple of thin. |
thin |
Thinning period. |
burn |
Burn-in period. Must be a multiple of thin. |
Time |
Vector containing the survival times. |
Cens |
Censoring indication (1: observed, 0: right-censored). |
X |
Design matrix with dimensions $n \times k$. |
Q |
Update period for the mixing parameters. |
beta0 |
Starting values for $\beta$. |
sigma20 |
Starting value for $\sigma^2$. |
nu0 |
Starting value for $\nu$. |
prior |
Indicator of prior (1: Jeffreys, 2: Type I Ind. Jeffreys, 3: Ind. Jeffreys). |
set |
Indicator for the use of set observations (1: set observations, 0: point observations). The former is strongly recommended over the latter as point observations cause problems in the context of Bayesian inference (due to continuous sampling models assigning zero probability to a point). |
eps_l |
Lower imprecision $\epsilon_l$ for the set observations (default 0.5). |
eps_r |
Upper imprecision $\epsilon_r$ for the set observations (default 0.5). |
ar |
Optimal acceptance rate for the adaptive Metropolis-Hastings updates |
A matrix with $N / thin + 1$ rows. The columns are the MCMC chains for $\beta$ ($k$ columns), $\sigma^2$ (1 column), $\theta$ (1 column, if appropriate), $\lambda$ ($n$ columns, not provided for the log-normal model), $\log(t)$ ($n$ columns, simulated via data augmentation) and the logarithm of the adaptive variances (the number varies among models). The latter allows the user to evaluate whether the adaptive variances have stabilized.
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LST <- MCMC_LST(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
                Cens = cancer[, 2], X = cancer[, 3:11])
Plots the chain across (non-discarded) iterations for a specified variable
Trace_plot(variable = NULL, chain = NULL)
variable |
Indicates the index of the variable to be plotted. |
chain |
MCMC chains generated by a BASSLINE MCMC function |
A ggplot2 object
library(BASSLINE)

# Please note: N=1000 is not enough to reach convergence.
# This is only an illustration. Run longer chains for more accurate
# estimations.
LN <- MCMC_LN(N = 1000, thin = 20, burn = 40, Time = cancer[, 1],
              Cens = cancer[, 2], X = cancer[, 3:11])
Trace_plot(1, LN)