Fit solution paths for linear or logistic regression models penalized by lasso (alpha = 1) or elastic-net (1e-4 < alpha < 1) over a grid of values for the regularization parameter lambda.

COPY_biglasso_main(
X,
y.train,
ind.train,
ind.col,
covar.train,
family = c("gaussian", "binomial"),
alphas = 1,
K = 10,
ind.sets = NULL,
nlambda = 200,
lambda.min.ratio = if (n > p) 1e-04 else 0.001,
nlam.min = 50,
n.abort = 10,
base.train = NULL,
pf.X = NULL,
pf.covar = NULL,
eps = 1e-05,
max.iter = 1000,
dfmax = 50000,
lambda.min = if (n > p) 1e-04 else 0.001,
power_scale = 1,
power_adaptive = 0,
return.all = FALSE,
warn = TRUE,
ncores = 1
)
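
For orientation, here is a minimal sketch of a call. All data objects below (`X`, `y`, `covar`) are hypothetical placeholders; in particular, a plain matrix is used for `X` only for illustration, and the function may require a specific (e.g. file-backed) matrix class instead.

```r
## Illustrative only: placeholder data, not the class X may actually require.
set.seed(1)
n <- 200; p <- 500
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)
covar <- matrix(rnorm(n), ncol = 1)

mod <- COPY_biglasso_main(
  X, y.train = y,
  ind.train   = seq_len(n),
  ind.col     = seq_len(p),
  covar.train = covar,
  family      = "gaussian",
  alphas      = c(1, 0.5),   # both are tried; the best is chosen by CMSA
  K           = 10
)
```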

## Arguments

family

Either "gaussian" (linear) or "binomial" (logistic).

alphas

The elastic-net mixing parameter, which controls the relative contributions of the lasso (L1) and ridge (L2) penalties. The penalty is defined as $$\alpha||\beta||_1 + \frac{1-\alpha}{2}||\beta||_2^2.$$ alpha = 1 gives the lasso penalty and alpha strictly between 0 (down to 1e-4) and 1 gives the elastic-net penalty. Default is 1. You can pass multiple values; only the best one is kept, chosen by grid search within CMSA.
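
As a numeric illustration of the penalty above (not part of the function's API), it can be written out directly:

```r
## Elastic-net penalty for a coefficient vector `beta` and mixing parameter `alpha`,
## as defined above: alpha * ||beta||_1 + (1 - alpha) / 2 * ||beta||_2^2
enet_penalty <- function(beta, alpha) {
  alpha * sum(abs(beta)) + (1 - alpha) / 2 * sum(beta^2)
}
enet_penalty(c(0.5, -1, 0), alpha = 1)    # pure lasso: 1.5
enet_penalty(c(0.5, -1, 0), alpha = 0.5)  # elastic-net mix
```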

K

Number of sets used in the Cross-Model Selection and Averaging (CMSA) procedure. Default is 10.

ind.sets

Integer vector of values between 1 and K specifying which set each index of the training set belongs to. By default these values are assigned at random, but it can be useful to set this vector for reproducibility, or to refine the grid search over alphas using the same sets.
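
For reproducibility, such a vector can be built by hand, e.g. (a sketch; `n.train`, the number of training indices, is a hypothetical value):

```r
## Assign each of n.train training indices to one of K = 10 sets, reproducibly.
K <- 10
n.train <- 1000
set.seed(42)
ind.sets <- sample(rep_len(1:K, n.train))
table(ind.sets)   # roughly balanced set sizes
```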

nlambda

The number of lambda values. Default is 200.

lambda.min.ratio

The smallest value for lambda, as a fraction of lambda.max. Default is .0001 if the number of observations is larger than the number of variables and .001 otherwise.
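
Lasso-type solvers commonly build the lambda path as a log-spaced grid from lambda.max (the smallest lambda at which all penalized coefficients are zero) down to lambda.min.ratio * lambda.max. A sketch, with a hypothetical lambda.max:

```r
## Log-spaced grid of nlambda values from lambda.max down to
## lambda.min.ratio * lambda.max (a common construction, shown for illustration).
lambda.max <- 0.8            # hypothetical value
nlambda <- 200
lambda.min.ratio <- 1e-4
lambdas <- exp(seq(log(lambda.max),
                   log(lambda.max * lambda.min.ratio),
                   length.out = nlambda))
```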

nlam.min

Minimum number of lambda values to investigate. Default is 50.

n.abort

Number of lambda values for which prediction on the validation set must decrease before stopping. Default is 10.
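
One plausible reading of this early-stopping rule, sketched for illustration only (`val.losses`, the validation losses along the path, is simulated here):

```r
## Stop the lambda path once the validation loss has failed to improve for
## n.abort consecutive values, after at least nlam.min lambdas have been explored.
set.seed(1)
val.losses <- cumsum(rnorm(200, mean = -0.05)) + 10   # hypothetical losses
n.abort <- 10; nlam.min <- 50
best <- Inf; n.worse <- 0; stop.at <- length(val.losses)
for (i in seq_along(val.losses)) {
  if (val.losses[i] < best) { best <- val.losses[i]; n.worse <- 0 }
  else n.worse <- n.worse + 1
  if (i >= nlam.min && n.worse >= n.abort) { stop.at <- i; break }
}
stop.at
```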

base.train

Vector of base predictions. The model will be learned starting from these predictions. This can be useful if you have previously fitted a model with large-effect variables that you do not want to penalize.
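
A sketch of that workflow (all variable names here are hypothetical): fit an unpenalized model on a few large-effect variables first, then pass its fitted values as base.train so those effects are not penalized along the path.

```r
## Hypothetical example: `big_effect_vars` holds a few large-effect predictors.
set.seed(2)
y <- rnorm(100)
big_effect_vars <- data.frame(v1 = y + rnorm(100), v2 = rnorm(100))
fit0 <- lm(y ~ ., data = big_effect_vars)
base.train <- unname(fitted(fit0))   # then pass base.train = base.train
```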

pf.X

A multiplicative factor for the penalty applied to each coefficient. If supplied, pf.X must be a numeric vector of the same length as ind.col. The default is all 1s. The purpose of pf.X is to apply differential penalization when some coefficients are thought to be more likely than others to be in the model. Setting some factors to 0 leaves the corresponding coefficients unpenalized (see the sketch after pf.covar below).

pf.covar

Same as pf.X, but for covar.train. You might want to set some of these to 0, as variables with large effects can mask smaller effects in penalized regression.
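
For instance (a sketch; the column counts and the indices of unpenalized variables are hypothetical):

```r
## Leave a few known large-effect variables unpenalized, penalize the rest equally.
ind.col <- 1:5000                 # hypothetical set of columns used
pf.X <- rep(1, length(ind.col))   # default: every coefficient penalized equally
unpenalized <- c(17, 256, 1024)   # hypothetical indices within ind.col
pf.X[unpenalized] <- 0            # zero penalty factor = unpenalized coefficient

## Covariates are often left unpenalized altogether:
pf.covar <- rep(0, 4)             # hypothetical: 4 covariate columns
```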

eps

Convergence threshold for inner coordinate descent. The algorithm iterates until the maximum change in the objective after any coefficient update is less than eps times the null deviance. Default value is 1e-5.
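
Schematically, the convergence test described above amounts to (illustration only, not the package's internal code):

```r
## Stop the inner coordinate descent once the largest change in the objective
## after any single coefficient update is below eps * (null deviance).
converged <- function(max.change, null.dev, eps = 1e-5) {
  max.change < eps * null.dev
}
converged(max.change = 1e-6, null.dev = 250)   # TRUE for this hypothetical case
```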

max.iter

Maximum number of iterations. Default is 1000.

dfmax

Upper bound for the number of nonzero coefficients. Default is 50e3 because, for large data sets, computational burden may be heavy for models with a large number of nonzero coefficients.

lambda.min

This parameter has been renamed lambda.min.ratio and is now deprecated.

power_scale

When using the lasso (alpha = 1), the penalization applied is equivalent to scaling genotypes by dividing them by (standard deviation)^power_scale. Default is 1, corresponding to standard scaling. Using 0 corresponds to using unscaled variables, and 0.5 corresponds to Pareto scaling. If you pass e.g. power_scale = c(0, 0.5, 1), the best value in CMSA will be used (just like with alphas).

power_adaptive

Multiplicative penalty factor to apply to variables, of the form 1 / m_j^power_adaptive, where m_j is the marginal statistic for variable j. Default is 0, which effectively disables this option. If you pass e.g. power_adaptive = c(0, 0.5, 1.5), the best value in CMSA will be used (just like with alphas).
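
A sketch of how these two options translate into per-variable penalty multipliers; `sd_j` (per-variable standard deviations) and `m_j` (marginal statistics) are hypothetical summaries, and the combination shown is only illustrative:

```r
## How power_scale and power_adaptive map to per-variable penalty multipliers.
set.seed(3)
sd_j <- runif(10, 0.5, 2)   # hypothetical standard deviations
m_j  <- abs(rnorm(10))      # hypothetical marginal statistics

power_scale    <- 0.5       # 0 = unscaled, 0.5 = Pareto scaling, 1 = standard scaling
power_adaptive <- 0.5       # 0 disables adaptive weighting

scale.mult    <- sd_j^power_scale      # equivalent to dividing genotypes by sd^power_scale
adaptive.mult <- 1 / m_j^power_adaptive
pf.equivalent <- scale.mult * adaptive.mult
```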

return.all

Deprecated. All models are now always returned.

warn

Whether to warn if some models may not have reached a minimum. Default is TRUE.

## Details

The objective function for linear regression (family = "gaussian") is $$\frac{1}{2n}\textrm{RSS} + \textrm{penalty},$$ and for logistic regression (family = "binomial") it is $$-\frac{1}{n}\textrm{loglik} + \textrm{penalty}.$$
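
As a numeric illustration (not the package's internal code), these two objectives can be written out directly; `penalty` is the elastic-net penalty defined under `alphas`:

```r
## Objective functions written out for illustration.
obj_gaussian <- function(y, pred, penalty) {
  rss <- sum((y - pred)^2)
  rss / (2 * length(y)) + penalty
}
obj_binomial <- function(y, prob, penalty) {   # y in {0, 1}, prob = fitted probabilities
  loglik <- sum(y * log(prob) + (1 - y) * log(1 - prob))
  -loglik / length(y) + penalty
}
```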