MADlib 1.1
User Documentation
This module implements elastic net regularization for linear and logistic regression problems.
View short help messages using the following statements:
-- Summary of Elastic Net Regularization
madlib.elastic_net_train()
-- Training function syntax and output table format
madlib.elastic_net_train('usage')
-- Prediction function syntax
madlib.elastic_net_train('predict')
-- Syntax for gaussian/linear model
madlib.elastic_net_train('gaussian')
madlib.elastic_net_train('linear')
-- Syntax for binomial/logistic model
madlib.elastic_net_train('binomial')
madlib.elastic_net_train('logistic')
-- Parameter formats for optimizers
madlib.elastic_net_train('fista')
madlib.elastic_net_train('igd')
madlib.elastic_net_train(
    tbl_source,
    tbl_result,
    col_dep_var,
    col_ind_var,
    regress_family,
    alpha,
    lambda_value,
    standardize,
    grouping_col,
    optimizer := NULL,
    optimizer_params := NULL,
    excluded := NULL,
    max_iter := 10000,
    tolerance := 1e-6
)
It is often useful to run elastic_net_train() on a subset of the data with a limited max_iter before applying it to the full data set with a large max_iter. In such a pre-run you can tune the parameters for the best performance, and then apply the best set of parameters to the whole data set.

tbl_source: Text value. The name of the table containing the training data.
tbl_result: Text value. The name of the generated table containing the output model.
col_dep_var: Text value. An expression for the dependent variable. Both col_dep_var and col_ind_var can be valid Postgres expressions; for example, col_dep_var = 'log(y+1)' and col_ind_var = 'array[exp(x[1]), x[2], 1/(1+x[3])]'. In the binomial case, you can use a Boolean expression, for example, col_dep_var = 'y < 0'.
col_ind_var: Text value. An expression for the independent variables. Use '*' to specify all columns of tbl_source except those listed in the excluded string. If col_dep_var is a column name, it is automatically excluded from the independent variables. However, if col_dep_var is a valid Postgres expression, any column names used within the expression are excluded only if they are explicitly listed in the excluded argument. It is therefore a good idea to add all column names involved in the dependent variable expression to the excluded string.
regress_family: Text value. The regression type, either 'gaussian' ('linear') or 'binomial' ('logistic').
alpha: Float8 value. Elastic net control parameter; the value must be in [0, 1].
lambda_value: Float8 value. Regularization parameter; must be positive.
standardize: Boolean value. Whether to normalize the data. Setting this to True usually yields better results and faster convergence. Default: True.
grouping_col: Text value. Not currently implemented; any non-NULL value is ignored. An expression list used to group the input dataset into discrete groups, running one regression per group, similar to the SQL GROUP BY clause. When this value is NULL, no grouping is used and a single result model is generated. Default: NULL.
optimizer: Text value. Name of the optimizer, either 'fista' or 'igd'. Default: 'fista'.
optimizer_params: Text value. Optimizer parameters, delimited with commas. The parameters differ depending on the value of optimizer; see the descriptions below for details. Default: NULL.
excluded: Text value. A comma-delimited list of column names excluded from the features, for example 'col1, col2'. If col_ind_var is an array, excluded is a list of the integer array positions to exclude, for example '1,2'. If this argument is NULL or an empty string '', no columns are excluded.
max_iter: Integer value. The maximum number of iterations allowed. Default: 10000.

tolerance: Float8 value. The convergence criterion; iteration ends when the change in the solution between consecutive iterations falls below this value. Default: 1e-6.
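To illustrate the col_ind_var = '*' convention together with the excluded argument, a call along the following lines uses every column of a hypothetical houses table as a feature except the id key (the dependent variable price, being a plain column name, is excluded automatically). The table and column names here are illustrative only:

```sql
-- Sketch: regress price on all columns except id and price itself.
-- 'houses' and its columns are assumed names for illustration.
SELECT madlib.elastic_net_train(
    'houses',        -- tbl_source
    'houses_en_all', -- tbl_result
    'price',         -- col_dep_var (a column name, so auto-excluded)
    '*',             -- col_ind_var: all columns of tbl_source
    'gaussian',      -- regress_family
    0.5,             -- alpha
    0.1,             -- lambda_value
    TRUE,            -- standardize
    NULL,            -- grouping_col
    'fista',         -- optimizer
    '',              -- optimizer_params
    'id',            -- excluded: drop the key column from the features
    10000,           -- max_iter
    1e-6             -- tolerance
);
```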
When the elastic_net_train() optimizer argument value is 'fista', the optimizer_params argument has the following format:
'max_stepsize = ..., eta = ..., warmup = ..., warmup_lambdas = ..., warmup_lambda_no = ..., warmup_tolerance = ..., use_active_set = ..., activeset_tolerance = ..., random_stepsize = ...'
eta: If stepsize does not work, stepsize / eta is tried. Must be greater than 1. The default is 2.
warmup: If warmup is True, a strictly decreasing series of lambda values, ending at the lambda value that the user wants to calculate, is used. A larger lambda gives a very sparse solution, and that sparse solution is in turn used as the initial guess for the next lambda's solution, which speeds up the computation for the next lambda. For larger data sets this can sometimes accelerate the whole computation, and may be faster than computing with only the one final lambda value. The default is False.
warmup_lambdas: The lambda value series to use when warmup is True. The default is NULL, which means that the lambda values are generated automatically.
warmup_lambda_no: The number of lambda values used in warm-up. If warmup_lambdas is not NULL, this value is overridden by the number of provided lambda values. The default is 15.
warmup_tolerance: The value of tolerance used during warmup. The default is the same as the tolerance argument.
use_active_set: If use_active_set is True, an active-set method is used to speed up the computation. Considerable speedup is obtained by organizing the iterations around the active set of features, that is, those with nonzero coefficients. After a complete cycle through all the variables, we iterate only on the active set until convergence. If another complete cycle does not change the active set, we are done; otherwise the process is repeated. The default is False.
activeset_tolerance: The value of tolerance used during the active-set calculation. The default is the same as the tolerance argument.
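To make the optimizer_params format concrete, a training call with explicit FISTA parameters might look like the following sketch. The table and column names are taken from the houses example in this document; the specific parameter values are illustrative, not recommendations:

```sql
-- Illustrative only: FISTA with warmup and the active-set method enabled.
SELECT madlib.elastic_net_train(
    'houses', 'houses_en_fista', 'price', 'array[tax, bath, size]',
    'gaussian', 0.5, 0.1, TRUE, NULL,
    'fista',
    'eta = 2, warmup = True, warmup_lambda_no = 15, use_active_set = True',
    NULL, 10000, 1e-6
);
```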
When the elastic_net_train() optimizer argument value is 'igd', the optimizer_params argument has the following format:
'stepsize = ..., step_decay = ..., threshold = ..., warmup = ..., warmup_lambdas = ..., warmup_lambda_no = ..., warmup_tolerance = ..., parallel = ...'
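Analogously, a sketch of a call using the 'igd' optimizer follows the same pattern; again the parameter values are illustrative only and should be tuned for your data:

```sql
-- Illustrative only: IGD with a manually chosen stepsize and warmup enabled.
SELECT madlib.elastic_net_train(
    'houses', 'houses_en_igd', 'price', 'array[tax, bath, size]',
    'gaussian', 0.5, 0.1, TRUE, NULL,
    'igd',
    'stepsize = 0.01, warmup = True, parallel = True',
    NULL, 10000, 1e-6
);
```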
There are several different formats of the prediction function. The per-row form is:

madlib.elastic_net_predict(
    '<regress_family>',
    coefficients,
    intercept,
    ind_var
)
FROM tbl_result, tbl_new_source

The above function returns a double value for each data point. When predicting with binomial models, the return value is 1 if the predicted result is True, and 0 if it is False.

Alternatively, you can use another prediction function that stores the prediction result in a table. This is useful if you want to use elastic net together with the general cross-validation function.
You do not need to specify whether the model is "linear" or "logistic" because this information is already included in the result table.
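A sketch of the table-storing form of the prediction function is shown below. The argument order (model table, new data table, id column, output table) is the common MADlib convention, but the exact signature may vary by version, so check madlib.elastic_net_train('predict') for your installation:

```sql
-- Sketch only (argument names assumed): write one prediction per row id
-- of 'houses' into a new table 'houses_pred', using the model 'houses_en'.
SELECT madlib.elastic_net_predict(
    'houses_en',   -- model table produced by elastic_net_train()
    'houses',      -- table containing the new data to predict on
    'id',          -- name of the id column in the data table
    'houses_pred'  -- name of the output table to create
);
```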
sql> DROP TABLE IF EXISTS houses;
sql> CREATE TABLE houses (id INT, tax INT, bedroom INT, bath FLOAT,
                          price INT, size INT, lot INT);
sql> COPY houses FROM STDIN WITH DELIMITER '|';
 1 |  590 | 2 | 1   |  50000 |  770 | 22100
 2 | 1050 | 3 | 2   |  85000 | 1410 | 12000
 3 |   20 | 3 | 1   |  22500 | 1060 |  3500
 4 |  870 | 2 | 2   |  90000 | 1300 | 17500
 5 | 1320 | 3 | 2   | 133000 | 1500 | 30000
 6 | 1350 | 2 | 1   |  90500 |  820 | 25700
 7 | 2790 | 3 | 2.5 | 260000 | 2130 | 25000
 8 |  680 | 2 | 1   | 142500 | 1170 | 22000
 9 | 1840 | 3 | 2   | 160000 | 1500 | 19000
10 | 3680 | 4 | 2   | 240000 | 2790 | 20000
11 | 1660 | 3 | 1   |  87000 | 1030 | 17500
12 | 1620 | 3 | 2   | 118600 | 1250 | 20000
13 | 3100 | 3 | 2   | 140000 | 1760 | 38000
14 | 2070 | 2 | 3   | 148000 | 1550 | 14000
15 |  650 | 3 | 1.5 |  65000 | 1450 | 12000
\.
sql> DROP TABLE IF EXISTS houses_en;
sql> SELECT madlib.elastic_net_train(
         'houses',                 -- tbl_source
         'houses_en',              -- tbl_result
         'price',                  -- col_dep_var
         'array[tax, bath, size]', -- col_ind_var
         'gaussian',               -- regress_family
         0.5,                      -- alpha
         0.1,                      -- lambda_value
         TRUE,                     -- standardize
         NULL,                     -- grouping_col
         'fista',                  -- optimizer
         '',                       -- optimizer_params
         NULL,                     -- excluded
         10000,                    -- max_iter
         1e-6                      -- tolerance
     );
-- Turn on expanded display to make the results easier to read.
sql> \x on
sql> SELECT * FROM houses_en;
sql> SELECT *, price - predict AS residual
     FROM (
         SELECT houses.*,
                madlib.elastic_net_predict(
                    'gaussian',
                    m.coef_nonzero,
                    m.intercept,
                    array[tax, bath, size]
                ) AS predict
         FROM houses, houses_en m
     ) s;
Elastic net regularization seeks to find a weight vector that, for any given training example set, minimizes:
\[\min_{w \in R^N} L(w) + \lambda \left(\frac{(1-\alpha)}{2} \|w\|_2^2 + \alpha \|w\|_1 \right)\]
where \(L\) is the metric function that the user wants to minimize. Here \( \alpha \in [0,1] \) and \( \lambda \geq 0 \). If \(\alpha = 0\), we have ridge regularization (also known as Tikhonov regularization), and if \(\alpha = 1\), we have LASSO regularization.
For the Gaussian response family (or linear model), we have
\[L(\vec{w}) = \frac{1}{2}\left[\frac{1}{M} \sum_{m=1}^M (\vec{w} \cdot \vec{x}_m + w_{0} - y_m)^2 \right] \]
For the Binomial response family (or logistic model), we have
\[ L(\vec{w}) = \sum_{m=1}^M\left[y_m \log\left(1 + e^{-(w_0 + \vec{w}\cdot\vec{x}_m)}\right) + (1-y_m) \log\left(1 + e^{w_0 + \vec{w}\cdot\vec{x}_m}\right)\right]\ , \]
where \(y_m \in \{0,1\}\).
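The binomial loss above is simply the negative log-likelihood of the logistic model. Writing \(z_m = w_0 + \vec{w}\cdot\vec{x}_m\) and \(p_m = 1/(1+e^{-z_m})\), and using \(\log p_m = -\log(1+e^{-z_m})\) and \(\log(1-p_m) = -\log(1+e^{z_m})\), the expression is equivalent to

\[ L(\vec{w}) = -\sum_{m=1}^M \left[ y_m \log p_m + (1 - y_m) \log(1 - p_m) \right]. \]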
To get better convergence, one can rescale the value of each element of \(x\):
\[ x' \leftarrow \frac{x - \bar{x}}{\sigma_x} \]
and for Gaussian case we also let
\[y' \leftarrow y - \bar{y} \]
and then minimize with the regularization terms. At the end of the calculation, the original scales are restored, and an intercept term is obtained as a by-product.

Note that fitting after scaling is not equivalent to fitting directly on the unscaled data.
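For the Gaussian case, the back-transformation that restores the original scales can be written out explicitly. If \(\vec{w}'\) solves the standardized problem \(y' = \vec{w}' \cdot \vec{x}'\) with the rescalings above, then by substituting \(x'_j = (x_j - \bar{x}_j)/\sigma_{x_j}\) and \(y' = y - \bar{y}\) one obtains (a sketch of the standard algebra, not a statement of MADlib's internal implementation)

\[ w_j = \frac{w'_j}{\sigma_{x_j}}, \qquad w_0 = \bar{y} - \sum_j w_j \bar{x}_j , \]

which is how an intercept appears as a by-product even though the standardized fit has none.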
[1] Elastic net regularization. http://en.wikipedia.org/wiki/Elastic_net_regularization
[2] Beck, A. and M. Teboulle (2009), A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. on Imaging Sciences 2(1), 183-202.
[3] Shai Shalev-Shwartz and Ambuj Tewari, Stochastic Methods for l1 Regularized Loss Minimization. Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009.