Linear regression models a linear relationship between a scalar dependent variable \( y \) and one or more explanatory independent variables \( x \), producing a model of coefficients. The training function is:
linregr_train( source_table,
               out_table,
               dependent_varname,
               independent_varname,
               input_group_cols := NULL,
               heteroskedasticity_option := NULL )
Arguments:

source_table: Text value. The name of the table containing the training data.

out_table: Text value. Name of the generated table containing the output model.

dependent_varname: Text value. Expression to evaluate for the dependent variable.

independent_varname: Text value. Expression list to evaluate for the independent variables. An intercept variable is not assumed; it is common to provide an explicit intercept term by including a single constant 1 term in the independent variable list.

input_group_cols: Text value. An expression list used to group the input dataset into discrete groups, running one regression per group, similar to the SQL GROUP BY clause. When this value is NULL, no grouping is used and a single result model is generated. Default value: NULL.

heteroskedasticity_option: Boolean value. When true, the heteroskedasticity of the model is also calculated and returned with the results. Default value: NULL.
The output model table contains the following columns:

<grouping columns>: Any grouping columns provided during training. Present only if the grouping option is used.

coef: Float array. Vector of the coefficients of the regression.

r2: Float. R-squared coefficient of determination of the model.

std_err: Float array. Vector of the standard errors of the coefficients.

t_stats: Float array. Vector of the t-statistics of the coefficients.

p_values: Float array. Vector of the p-values of the coefficients.

condition_no: Float. The condition number of the \(X^{*}X\) matrix. A high condition number usually indicates that the result may be numerically unstable, yielding a less reliable model. It often results from a significant amount of collinearity in the underlying design matrix, in which case other regression techniques, such as elastic net regression, may be more appropriate.

bp_stats: Float. The Breusch–Pagan statistic of heteroskedasticity [4]. Present only if the heteroskedasticity_option argument was set to true when the model was trained.
The corresponding prediction function is

linregr_predict( coef, col_ind )

where coef is the coefficient vector produced by linregr_train and col_ind is an array of independent-variable values in the same order used during training; it returns the predicted value \( \boldsymbol c^T \boldsymbol x \).
CREATE TABLE houses ( id INT,
                      tax INT,
                      bedroom INT,
                      bath FLOAT,
                      price INT,
                      size INT,
                      lot INT);
COPY houses FROM STDIN WITH DELIMITER '|';
  1 |  590 | 2 |   1 |  50000 |  770 | 22100
  2 | 1050 | 3 |   2 |  85000 | 1410 | 12000
  3 |   20 | 3 |   1 |  22500 | 1060 |  3500
  4 |  870 | 2 |   2 |  90000 | 1300 | 17500
  5 | 1320 | 3 |   2 | 133000 | 1500 | 30000
  6 | 1350 | 2 |   1 |  90500 |  820 | 25700
  7 | 2790 | 3 | 2.5 | 260000 | 2130 | 25000
  8 |  680 | 2 |   1 | 142500 | 1170 | 22000
  9 | 1840 | 3 |   2 | 160000 | 1500 | 19000
 10 | 3680 | 4 |   2 | 240000 | 2790 | 20000
 11 | 1660 | 3 |   1 |  87000 | 1030 | 17500
 12 | 1620 | 3 |   2 | 118600 | 1250 | 20000
 13 | 3100 | 3 |   2 | 140000 | 1760 | 38000
 14 | 2070 | 2 |   3 | 148000 | 1550 | 14000
 15 |  650 | 3 | 1.5 |  65000 | 1450 | 12000
\.
-- A single regression for all the data
SELECT madlib.linregr_train( 'houses',
                             'houses_linregr',
                             'price',
                             'array[1, tax, bath, size]');

-- Three output models, one for each value of "bedroom"
SELECT madlib.linregr_train( 'houses',
                             'houses_linregr_bedroom',
                             'price',
                             'array[1, tax, bath, size]',
                             'bedroom');
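The Breusch–Pagan statistic can be requested at training time by passing the final argument. A minimal sketch, assuming the hypothetical output table name houses_linregr_bp and the bp_stats output column described above:

-- One regression for all the data, also computing the Breusch-Pagan statistic
SELECT madlib.linregr_train( 'houses',
                             'houses_linregr_bp',
                             'price',
                             'array[1, tax, bath, size]',
                             NULL,
                             True);
SELECT bp_stats FROM houses_linregr_bp;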
-- Set extended display on for easier reading of output
\x on
SELECT * FROM houses_linregr;
SELECT * FROM houses_linregr_bedroom;

-- Alternatively you can unnest the results for easier reading of output
\x off
SELECT unnest(array['intercept','tax','bath','size']) as attribute,
       unnest(coef) as coefficient,
       unnest(std_err) as standard_error,
       unnest(t_stats) as t_stat,
       unnest(p_values) as pvalue
FROM houses_linregr;
-- Compute predictions and residuals using the single-model coefficients
SELECT houses.*,
       madlib.linregr_predict(m.coef, array[1,tax,bath,size]) as predict,
       price - madlib.linregr_predict(m.coef, array[1,tax,bath,size]) as residual
FROM houses, houses_linregr m;
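For the per-group models, each row can be matched to the model for its group before predicting. A minimal sketch, assuming the bedroom grouping column is carried into houses_linregr_bedroom as described in the output table above:

-- Predict each house price using the model trained for its bedroom count
SELECT houses.*,
       madlib.linregr_predict(m.coef, array[1,tax,bath,size]) as predict
FROM houses
JOIN houses_linregr_bedroom m USING (bedroom);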
Ordinary least-squares (OLS) linear regression refers to a stochastic model in which the conditional mean of the dependent variable (usually denoted \( Y \)) is an affine function of the vector of independent variables (usually denoted \( \boldsymbol x \)). That is,
\[ E[Y \mid \boldsymbol x] = \boldsymbol c^T \boldsymbol x \]
for some unknown vector of coefficients \( \boldsymbol c \). The assumption is that the residuals are independent and identically distributed Gaussians. That is, the (conditional) probability density of \( Y \) is given by
\[ f(y \mid \boldsymbol x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \exp\left(-\frac{1}{2 \sigma^2} \cdot (y - \boldsymbol x^T \boldsymbol c)^2 \right) \,. \]
OLS linear regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.
Let \( \boldsymbol y \in \mathbb{R}^n \) denote the vector of observed values of the dependent variable, let \( X \in \mathbb{R}^{n \times k} \) denote the design matrix whose \( n \) rows are the observed vectors of independent variables \( \boldsymbol x_i \), let \( X^T \) denote the transpose of \( X \), and let \( X^+ \) denote the pseudo-inverse of \( X \).
Maximizing the likelihood is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log f(y_i \mid \boldsymbol x_i) \), which simplifies to minimizing the residual sum of squares \( RSS \) (also called sum of squared residuals or sum of squared errors of prediction),
\[ RSS = \sum_{i=1}^n ( y_i - \boldsymbol c^T \boldsymbol x_i )^2 = (\boldsymbol y - X \boldsymbol c)^T (\boldsymbol y - X \boldsymbol c) \,. \]
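To make the simplification explicit, expanding the log-likelihood under the Gaussian density above gives
\[ \sum_{i=1}^n \log f(y_i \mid \boldsymbol x_i) = -\frac{n}{2} \log(2 \pi \sigma^2) - \frac{1}{2 \sigma^2} \sum_{i=1}^n (y_i - \boldsymbol c^T \boldsymbol x_i)^2 = -\frac{n}{2} \log(2 \pi \sigma^2) - \frac{RSS}{2 \sigma^2} \,, \]
so for any fixed \( \sigma^2 \), maximizing the log-likelihood over \( \boldsymbol c \) amounts to minimizing \( RSS \).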
The first-order conditions yield that the \( RSS \) is minimized at
\[ \boldsymbol c = (X^T X)^+ X^T \boldsymbol y \,. \]
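This follows from setting the gradient of \( RSS \) with respect to \( \boldsymbol c \) to zero, which yields the normal equations
\[ \nabla_{\boldsymbol c} RSS = -2 X^T (\boldsymbol y - X \boldsymbol c) = 0 \quad \Longrightarrow \quad X^T X \boldsymbol c = X^T \boldsymbol y \,, \]
and applying the pseudo-inverse \( (X^T X)^+ \) gives the stated solution even when \( X^T X \) is not invertible.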
Computing the total sum of squares \( TSS \), the explained sum of squares \( ESS \) (also called the regression sum of squares), and the coefficient of determination \( R^2 \) is done according to the following formulas:
\begin{align*} ESS & = \boldsymbol y^T X \boldsymbol c - \frac{ \| y \|_1^2 }{n} \\ TSS & = \sum_{i=1}^n y_i^2 - \frac{ \| y \|_1^2 }{n} \\ R^2 & = \frac{ESS}{TSS} \end{align*}
Note: The last equality follows from the definition \( R^2 = 1 - \frac{RSS}{TSS} \) and the fact that for linear regression \( TSS = RSS + ESS \). A proof of the latter can be found, e.g., at: http://en.wikipedia.org/wiki/Sum_of_squares
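As a quick sanity check, \( R^2 \) can be recomputed in SQL from the residuals using the equivalent \( 1 - \frac{RSS}{TSS} \) form; this sketch assumes the houses data and houses_linregr model from the examples section above:

-- Recompute R-squared from residuals; should match the r2 column of the model
SELECT 1 - sum((price - madlib.linregr_predict(m.coef, array[1,tax,bath,size]))^2)
         / sum((price - a.avg_price)^2) AS r2_check
FROM houses,
     houses_linregr m,
     (SELECT avg(price) AS avg_price FROM houses) a;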
We estimate the variance \( Var[Y - \boldsymbol c^T \boldsymbol x \mid \boldsymbol x] \) as
\[ \sigma^2 = \frac{RSS}{n - k} \]
and compute the t-statistic for coefficient \( i \) as
\[ t_i = \frac{c_i}{\sqrt{\sigma^2 \cdot \left( (X^T X)^{-1} \right)_{ii} }} \,. \]
The \( p \)-value for coefficient \( i \) gives the probability of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( \( c_i = 0 \) ) is true. Letting \( F_\nu \) denote the cumulative distribution function of Student's \( t \)-distribution with \( \nu \) degrees of freedom, the \( p \)-value for coefficient \( i \) is therefore
\[ p_i = \Pr(|T| \geq |t_i|) = 2 \cdot (1 - F_{n - k}( |t_i| )) \]
where \( T \) is a Student's \( t \)-distributed random variable with \( n - k \) degrees of freedom.
The condition number [2] \( \kappa(X) = \|X\|_2\cdot\|X^{+}\|_2 \) (where \( X^{+} \) denotes the pseudo-inverse, which equals \( X^{-1} \) when \( X \) is invertible) is computed as the product of two spectral norms [3]. The spectral norm of a matrix \( X \) is its largest singular value, i.e., the square root of the largest eigenvalue of the positive-semidefinite matrix \( X^{*}X \):
\[ \|X\|_2 = \sqrt{\lambda_{\max}\left(X^{*}X\right)}\ , \]
where \(X^{*}\) is the conjugate transpose of \(X\). The condition number of a linear regression problem is a worst-case measure of how sensitive the result is to small perturbations of the input. A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.
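Equivalently, in terms of the singular values of \( X \),
\[ \kappa(X) = \|X\|_2 \cdot \|X^{+}\|_2 = \frac{\sigma_{\max}(X)}{\sigma_{\min}(X)} \,, \]
since \( \|X\|_2 = \sigma_{\max}(X) \) and \( \|X^{+}\|_2 = 1/\sigma_{\min}(X) \). Near-collinear columns drive \( \sigma_{\min}(X) \) toward zero, which is why a large condition number signals multicollinearity.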
[1] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 21 October 2009, http://www.stat.cmu.edu/~cshalizi/350/lectures/17/lecture-17.pdf
[2] Wikipedia: Condition Number, http://en.wikipedia.org/wiki/Condition_number
[3] Wikipedia: Spectral Norm, http://en.wikipedia.org/wiki/Spectral_norm#Spectral_norm
[4] Wikipedia: Breusch–Pagan test, http://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test
[5] Wikipedia: Heteroscedasticity-consistent standard errors, http://en.wikipedia.org/wiki/Heteroscedasticity-consistent_standard_errors