graphnet native implementation of the graphnet algorithm.
DESCRIPTION
Elastic net linear and logistic regression. This algorithm supports the
GraphNet generalization, which allows coupling between features. It
requires specification of the lasso penalty term L1 and (optionally) the
ridge penalty term L2; the latter can also be given as a matrix to allow
coupling of variables.
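For example, L2 can be passed as a coupling matrix such as a graph
Laplacian. The sketch below is illustrative only: the chain graph over 20
features and the use of its Laplacian as the coupling matrix are
assumptions, not prescriptions of the toolbox.
A = diag(ones(19,1),1) + diag(ones(19,1),-1); % adjacency of an assumed chain graph: neighbouring features are coupled
G = diag(sum(A,2)) - A;                       % graph Laplacian of that chain graph
m = dml.graphnet('family','gaussian','L1',0.05,'L2',G);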
REFERENCE
Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized
linear models via coordinate descent.
Grosenick L, Klingenberg B, Knutson B. A family of interpretable
multivariate models for regression and classification of whole-brain
fMRI data.
EXAMPLE
X = rand(10,20); Y = [1 1 1 1 1 2 2 2 2 2]';
m = dml.graphnet('family','binomial','L1',0.1);
m = m.train(X,Y);
Z = m.test(X);
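Assuming Z contains one column of class posteriors per class (a
convention assumed here, not verified), predictions and accuracy for the
example above could be obtained as follows:
[~,pred] = max(Z,[],2);  % predicted class = column with the highest posterior (assumes Z holds class posteriors)
acc = mean(pred == Y);   % proportion of correctly classified trials
b = m.weights;           % learned regression weights, with the offset as the last element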
DEVELOPER
Marcel van Gerven (m.vangerven@donders.ru.nl)
PARAMETERS
L1        | lasso penalty
L2        | nfeatures x nfeatures ridge penalty
conv      | plot of the convergence of the parameters: sum(abs(beta-betaold))
df        | degrees of freedom
family    | gaussian, binomial, or multinomial
indims    | dimensions of the input data (excluding the trial dimension and the time dimension in time series data)
maxiter   | maximum number of iterations for the native elastic net
restart   | when false, starts at the previously learned parameters; needed for online learning and grid search (see the sketch below)
tolerance | tolerance in the error for the native elastic net
verbose   | whether or not to generate diagnostic output
weights   | regression weights (offset last)
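Because restart set to false reuses the previously learned parameters, a
path of decreasing L1 values can be trained with warm starts, e.g. during
a grid search. A minimal sketch, continuing the example above and
assuming the L1 property can be set directly between fits:
m = dml.graphnet('family','binomial','L1',1,'restart',false);
for L1 = [1 0.5 0.1 0.05 0.01]   % decreasing lasso penalties
  m.L1 = L1;                     % assumption: L1 is a writable property
  m = m.train(X,Y);              % with restart=false, training starts from the previous weights
end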