Fit lasso penalized logistic regression path for a Filebacked Big Matrix. Covariates can be added to correct for confounders.

big_spLogReg(
  X,
  y01.train,
  ind.train = rows_along(X),
  ind.col = cols_along(X),
  covar.train = NULL,
  base.train = NULL,
  pf.X = NULL,
  pf.covar = NULL,
  alphas = 1,
  K = 10,
  ind.sets = NULL,
  nlambda = 200,
  nlam.min = 50,
  n.abort = 10,
  dfmax = 50000,
  warn = TRUE,
  ncores = 1,
  ...
)

Arguments

X

An object of class FBM.

y01.train

Vector of responses, corresponding to ind.train. Must be only 0s and 1s.

ind.train

An optional vector of the row indices that are used, for the training part. If not specified, all rows are used. Don't use negative indices.

ind.col

An optional vector of the column indices that are used. If not specified, all columns are used. Don't use negative indices.

covar.train

Matrix of covariables to be added in each model to correct for confounders (e.g. the scores of PCA), corresponding to ind.train. Default is NULL and corresponds to only adding an intercept to each model. You can use covar_from_df() to convert from a data frame.

base.train

Vector of base predictions. Model will be learned starting from these predictions. This can be useful if you want to previously fit a model with large-effect variables that you don't want to penalize.

pf.X

A multiplicative factor for the penalty applied to each coefficient. If supplied, pf.X must be a numeric vector of the same length as ind.col. Default is all 1. The purpose of pf.X is to apply differential penalization if some coefficients are thought to be more likely than others to be in the model. Setting some entries to 0 leaves the corresponding coefficients unpenalized.
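For instance, to leave a few variables unpenalized while keeping the default penalty on the rest, one could pass (hedged sketch; assumes X, y01.train and ind.col are already defined as in the usage above, and the chosen indices are purely illustrative):

```r
# All variables get the default penalty factor of 1...
pf <- rep(1, length(ind.col))
# ...except the first two, which are left unpenalized (factor 0).
pf[1:2] <- 0
mod <- big_spLogReg(X, y01.train, ind.col = ind.col, pf.X = pf)
```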

pf.covar

Same as pf.X, but for covar.train.

alphas

The elastic-net mixing parameter that controls the relative contribution of the lasso (l1) and the ridge (l2) penalties. The penalty is defined as $$ \alpha||\beta||_1 + (1-\alpha)/2||\beta||_2^2.$$ alpha = 1 is the lasso penalty and values of alpha between 1e-4 and 1 give elastic-net penalties. Default is 1. You can pass multiple values; only the one that performs best in the grid search is used.

K

Number of sets used in the Cross-Model Selection and Averaging (CMSA) procedure. Default is 10.

ind.sets

Integer vector of values between 1 and K specifying which set each index of the training set is in. Default randomly assigns these values, but it can be useful to set this vector for reproducibility, or if you want to refine the grid search over alphas using the same sets.
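For reproducibility, you can build this vector yourself and reuse it across calls, e.g. when refining the grid over alphas (hedged sketch; assumes X, y01 and ind.train are defined as in the examples below):

```r
# Randomly assign each training index to one of K = 10 sets, once.
K <- 10
ind.sets <- sample(rep_len(1:K, length(ind.train)))

# Reusing the same sets makes the two grid searches comparable.
mod1 <- big_spLogReg(X, y01[ind.train], ind.train = ind.train,
                     alphas = 1, ind.sets = ind.sets)
mod2 <- big_spLogReg(X, y01[ind.train], ind.train = ind.train,
                     alphas = 0.5, ind.sets = ind.sets)
```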

nlambda

The number of lambda values. Default is 200.

nlam.min

Minimum number of lambda values to investigate. Default is 50.

n.abort

Number of lambda values for which prediction on the validation set must decrease before stopping. Default is 10.

dfmax

Upper bound for the number of nonzero coefficients. Default is 50e3 because, for large data sets, computational burden may be heavy for models with a large number of nonzero coefficients.

warn

Whether to warn if some models may not have reached a minimum. Default is TRUE.

ncores

Number of cores used. Default doesn't use parallelism. You may use nb_cores().

...

Arguments passed on to COPY_biglasso_main

lambda.min.ratio

The smallest value for lambda, as a fraction of lambda.max. Default is .0001 if the number of observations is larger than the number of variables and .001 otherwise.

eps

Convergence threshold for inner coordinate descent. The algorithm iterates until the maximum change in the objective after any coefficient update is less than eps times the null deviance. Default value is 1e-5.

max.iter

Maximum number of iterations. Default is 1000.

return.all

Deprecated. All models are now always returned.

Value

An object of class big_sp_list (a list of length(alphas) x K models) with 3 methods: predict, summary and plot.

Details

This is a modified version of a function from the package biglasso. It adds the possibility to train models with covariables and to use many types of FBM (not only double ones). However, it only corresponds to screen = "SSR" (Sequential Strong Rules).

Also, to remove the choice of the lambda parameter, we introduce the Cross-Model Selection and Averaging (CMSA) procedure:

  1. This function separates the training set into K folds (e.g. 10).

  2. In turn,

    • each fold is considered as an inner validation set and the other (K - 1) folds form an inner training set,

    • the model is trained on the inner training set and the corresponding predictions (scores) for the inner validation set are computed,

    • the vector of scores which maximizes log-likelihood is determined,

    • the vector of coefficients corresponding to the previous vector of scores is chosen.

  3. The K resulting vectors of coefficients are then averaged into one final vector of coefficients.
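The three steps above can be sketched in plain R with unpenalized glm() models (illustrative only: big_spLogReg() applies this scheme to the whole penalized regression path, keeping the coefficients that maximize the log-likelihood on each inner validation set):

```r
# Toy data standing in for the training set.
set.seed(1)
n <- 200; p <- 5; K <- 10
X <- matrix(rnorm(n * p), n)
y <- rbinom(n, 1, plogis(X %*% rnorm(p)))

# Step 1: separate the training set into K folds.
ind.sets <- sample(rep_len(1:K, n))

# Step 2: in turn, train on each inner training set of (K - 1) folds.
# (Here a plain logistic model; the real procedure would also use the
# held-out fold to choose along the regularization path.)
betas <- sapply(1:K, function(k) {
  in.train <- ind.sets != k
  coef(glm(y[in.train] ~ X[in.train, ], family = binomial()))
})

# Step 3: average the K resulting vectors of coefficients.
beta.final <- rowMeans(betas)
```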

References

Tibshirani, R., Bien, J., Friedman, J., Hastie, T., Simon, N., Taylor, J. and Tibshirani, R. J. (2012), Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74: 245-266. https://doi.org/10.1111/j.1467-9868.2011.01004.x.

Zeng, Y., and Breheny, P. (2017). The biglasso Package: A Memory- and Computation-Efficient Solver for Lasso Model Fitting with Big Data in R. arXiv preprint arXiv:1701.05936. https://arxiv.org/abs/1701.05936.

Privé, F., Aschard, H., and Blum, M. G.B. (2019). Efficient implementation of penalized regression for genetic risk prediction. Genetics, 212: 65-74. https://doi.org/10.1534/genetics.119.302019.

Examples

set.seed(2)

# simulating some data
N <- 230
M <- 730
X <- FBM(N, M, init = rnorm(N * M, sd = 5))
y01 <- as.numeric((rowSums(X[, 1:10]) + 2 * rnorm(N)) > 0)
covar <- matrix(rnorm(N * 3), N)

ind.train <- sort(sample(nrow(X), 150))
ind.test <- setdiff(rows_along(X), ind.train)

# fitting model for multiple lambdas and alphas
test <- big_spLogReg(X, y01[ind.train], ind.train = ind.train,
                     covar.train = covar[ind.train, ],
                     alphas = c(1, 0.5, 0.1, 0.01), warn = FALSE)

# peek at the models
plot(test)
summary(test, sort = TRUE)
#> # A tibble: 4 x 7
#>   alpha validation_loss intercept beta        nb_var message    all_conv
#>   <dbl>           <dbl>     <dbl> <list>       <int> <list>     <lgl>
#> 1  1              0.425    0.0615 <dbl [733]>    154 <chr [10]> TRUE
#> 2  0.5            0.435    0.0758 <dbl [733]>    164 <chr [10]> TRUE
#> 3  0.1            0.482    0.139  <dbl [733]>    215 <chr [10]> TRUE
#> 4  0.01           0.554    0.258  <dbl [733]>    454 <chr [10]> TRUE
summary(test, sort = TRUE)$message
#> [[1]]
#>  [1] "No more improvement" "No more improvement" "No more improvement"
#>  [4] "No more improvement" "No more improvement" "No more improvement"
#>  [7] "No more improvement" "No more improvement" "No more improvement"
#> [10] "No more improvement"
#>
#> [[2]]
#>  [1] "No more improvement" "No more improvement" "No more improvement"
#>  [4] "No more improvement" "No more improvement" "No more improvement"
#>  [7] "No more improvement" "No more improvement" "No more improvement"
#> [10] "No more improvement"
#>
#> [[3]]
#>  [1] "No more improvement" "No more improvement" "No more improvement"
#>  [4] "No more improvement" "No more improvement" "No more improvement"
#>  [7] "No more improvement" "No more improvement" "No more improvement"
#> [10] "No more improvement"
#>
#> [[4]]
#>  [1] "No more improvement" "No more improvement" "No more improvement"
#>  [4] "No more improvement" "No more improvement" "No more improvement"
#>  [7] "No more improvement" "No more improvement" "No more improvement"
#> [10] "No more improvement"
# prediction for other data -> only the best alpha is used
summary(test, best.only = TRUE)
#> # A tibble: 1 x 7
#>   alpha validation_loss intercept beta        nb_var message    all_conv
#>   <dbl>           <dbl>     <dbl> <list>       <int> <list>     <lgl>
#> 1     1           0.425    0.0615 <dbl [733]>    154 <chr [10]> TRUE
pred <- predict(test, X, ind.row = ind.test, covar.row = covar[ind.test, ])
AUC(pred, y01[ind.test])
#> [1] 0.8693182
library(ggplot2)
qplot(pred, fill = as.logical(y01[ind.test]),
      geom = "density", alpha = I(0.4)) +
  labs(fill = "Case?") +
  theme_bigstatsr() +
  theme(legend.position = c(0.52, 0.8))