Linear Regression

About:
Ordinary Least Squares Regression, also called Linear Regression, is a statistical method for fitting linear models.

It models the relationship between a scalar dependent variable \( y \) and one or more explanatory (independent) variables \( x \), producing a vector of fitted coefficients.

Training Function
linregr_train(source_table, out_table,
              dependent_varname,
              independent_varname,
              input_group_cols := NULL,
              heteroskedasticity_option := NULL)
source_table

Text value. The name of the table containing the training data.

out_table

Text value. Name of the generated table containing the output model.

dependent_varname

Text value. Expression to evaluate for the dependent variable.

independent_varname

Text value. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant 1 term in the independent variable list.

input_group_cols

Text value. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL GROUP BY clause. When this value is null, no grouping is used and a single result model is generated. Default value: NULL.

heteroskedasticity_option

Boolean value. When True, heteroskedasticity statistics for the model are also calculated and returned with the results. Default value: False.
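
A call that exercises both optional arguments might look like the following sketch; it assumes the houses table used in the Examples section below, and the output table name houses_linregr_hsk is hypothetical:

SELECT madlib.linregr_train(
    'houses',                        -- source_table
    'houses_linregr_hsk',            -- out_table (hypothetical name)
    'price',                         -- dependent_varname
    'array[1, tax, bath, size]',     -- independent_varname, with an explicit intercept term
    'bedroom',                       -- input_group_cols: one model per bedroom value
    True                             -- heteroskedasticity_option
);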

Output Table
The output table produced by the linear regression training function contains the following columns.
<...>

Any grouping columns provided during training. Present only if the grouping option is used.

coef

Float array. Vector of the coefficients of the regression.

r2

Float. R-squared coefficient of determination of the model.

std_err

Float array. Vector of the standard error of the coefficients.

t_stats

Float array. Vector of the t-statistics of the coefficients.

p_values

Float array. Vector of the p-values of the coefficients.

condition_no

Float. The condition number of the \(X^{*}X\) matrix. A high condition number usually indicates possible numerical instability in the result, yielding a less reliable model. A high condition number often results when there is a significant amount of collinearity in the underlying design matrix, in which case other regression techniques, such as elastic net regression, may be more appropriate.

bp_stats

Float. The Breusch-Pagan statistic of heteroskedasticity. Present only if the heteroskedasticity_option argument was set to True when the model was trained.

bp_p_value

Float. The Breusch-Pagan calculated p-value. Present only if the heteroskedasticity_option argument was set to True when the model was trained.
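
For example, per-group fit quality can be inspected with a query like the following sketch; it assumes the houses_linregr_bedroom table produced in the Examples section below:

SELECT bedroom, r2, condition_no
FROM houses_linregr_bedroom
ORDER BY bedroom;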

Prediction Function
linregr_predict(
  coef,
  col_ind
)
coef

Float array. Vector of the coefficients of the regression, as produced by the training function.

col_ind

Float array. An array of the values of the independent variables, in the same order as the coefficients (for example, array[1, tax, bath, size] in the Examples below).
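
As a minimal illustration, assuming the prediction is the dot product of the coefficient vector and the independent variable values, a one-off call on literal arrays might look like:

-- Hypothetical coefficients (intercept 10.0, slope 2.5) and values (constant 1 for
-- the intercept, x = 4.0); the expected result is 10.0 + 2.5 * 4.0 = 20.0.
SELECT madlib.linregr_predict(ARRAY[10.0, 2.5]::float8[],
                              ARRAY[1.0, 4.0]::float8[]);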

Examples
  1. Create an input data set.
    CREATE TABLE houses (id INT, tax INT, bedroom INT, bath FLOAT, price INT,
                size INT, lot INT);
    COPY houses FROM STDIN WITH DELIMITER '|';
      1 |  590 |       2 |    1 |  50000 |  770 | 22100
      2 | 1050 |       3 |    2 |  85000 | 1410 | 12000
      3 |   20 |       3 |    1 |  22500 | 1060 |  3500
      4 |  870 |       2 |    2 |  90000 | 1300 | 17500
      5 | 1320 |       3 |    2 | 133000 | 1500 | 30000
      6 | 1350 |       2 |    1 |  90500 |  820 | 25700
      7 | 2790 |       3 |  2.5 | 260000 | 2130 | 25000
      8 |  680 |       2 |    1 | 142500 | 1170 | 22000
      9 | 1840 |       3 |    2 | 160000 | 1500 | 19000
     10 | 3680 |       4 |    2 | 240000 | 2790 | 20000
     11 | 1660 |       3 |    1 |  87000 | 1030 | 17500
     12 | 1620 |       3 |    2 | 118600 | 1250 | 20000
     13 | 3100 |       3 |    2 | 140000 | 1760 | 38000
     14 | 2070 |       2 |    3 | 148000 | 1550 | 14000
     15 |  650 |       3 |  1.5 |  65000 | 1450 | 12000
    \.
    
  2. Train a regression model.
    -- A single regression for all the data
    SELECT madlib.linregr_train(
        'houses', 'houses_linregr', 'price', 'array[1, tax, bath, size]');
    
    -- 3 output models, one for each value of "bedroom"
    SELECT madlib.linregr_train(
        'houses', 'houses_linregr_bedroom', 'price', 'array[1, tax, bath, size]',
        'bedroom');
    
  3. Examine the resulting models.
    -- Set extended display on for easier reading of output
    \x on
    SELECT * from houses_linregr;
    SELECT * FROM houses_linregr_bedroom;
    
    -- Alternatively you can unnest the results for easier reading of output
    \x off
    SELECT unnest(array['intercept','tax','bath','size']) as attribute,
           unnest(coef) as coefficient,
           unnest(std_err) as standard_error,
           unnest(t_stats) as t_stat,
           unnest(p_values) as pvalue
    FROM houses_linregr;
    
  4. Use the prediction function to evaluate residuals.
    SELECT houses.*,
           madlib.linregr_predict(m.coef, array[1,tax,bath,size]) as predict,
           price - madlib.linregr_predict(m.coef, array[1,tax,bath,size])
               as residual
    FROM houses, houses_linregr m;
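
    -- As a follow-up sanity check (a sketch reusing the tables above), the residuals
    -- can be aggregated, e.g. into a mean squared residual for the single model:
    SELECT avg(power(price - madlib.linregr_predict(m.coef, array[1,tax,bath,size]), 2))
               AS mean_squared_residual
    FROM houses, houses_linregr m;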
    

See Also
File linear.sql_in, documenting the SQL training functions
linregr()
logregr_train()
elastic_net_train()
Huber White Variance
Clustered Variance
Cross Validation

Technical Background

Ordinary least-squares (OLS) linear regression refers to a stochastic model in which the conditional mean of the dependent variable (usually denoted \( Y \)) is an affine function of the vector of independent variables (usually denoted \( \boldsymbol x \)). That is,

\[ E[Y \mid \boldsymbol x] = \boldsymbol c^T \boldsymbol x \]

for some unknown vector of coefficients \( \boldsymbol c \). The assumption is that the residuals are independent and identically distributed (i.i.d.) Gaussian. That is, the (conditional) probability density of \( Y \) is given by

\[ f(y \mid \boldsymbol x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \exp\left(-\frac{1}{2 \sigma^2} \cdot (y - \boldsymbol x^T \boldsymbol c)^2 \right) \,. \]

OLS linear regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.

Let \( \boldsymbol y = (y_1, \dots, y_n)^T \) denote the vector of the \( n \) observed values of the dependent variable, and let \( X \in \mathbf R^{n \times k} \) denote the design matrix whose \( i \)-th row is the (transposed) vector of independent variables \( \boldsymbol x_i^T \) for the \( i \)-th observation.

Maximizing the likelihood is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log f(y_i \mid \boldsymbol x_i) \), which simplifies to minimizing the residual sum of squares \( RSS \) (also called sum of squared residuals or sum of squared errors of prediction),

\[ RSS = \sum_{i=1}^n ( y_i - \boldsymbol c^T \boldsymbol x_i )^2 = (\boldsymbol y - X \boldsymbol c)^T (\boldsymbol y - X \boldsymbol c) \,. \]

The first-order conditions yield that the \( RSS \) is minimized at

\[ \boldsymbol c = (X^T X)^+ X^T \boldsymbol y \,. \]
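
To make the intermediate step explicit: setting the gradient of the \( RSS \) with respect to \( \boldsymbol c \) to zero yields the normal equations, from which the expression above follows (the pseudo-inverse covers the case where \( X^T X \) is singular):

\[ \nabla_{\boldsymbol c} RSS = -2 X^T (\boldsymbol y - X \boldsymbol c) = 0 \quad\Longrightarrow\quad X^T X \, \boldsymbol c = X^T \boldsymbol y \,. \]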

Computing the total sum of squares \( TSS \), the explained sum of squares \( ESS \) (also called the regression sum of squares), and the coefficient of determination \( R^2 \) is done according to the following formulas:

\begin{align*} ESS & = \boldsymbol y^T X \boldsymbol c - \frac{ \| y \|_1^2 }{n} \\ TSS & = \sum_{i=1}^n y_i^2 - \frac{ \| y \|_1^2 }{n} \\ R^2 & = \frac{ESS}{TSS} \end{align*}

Note: The last equality follows from the definition \( R^2 = 1 - \frac{RSS}{TSS} \) and the fact that for linear regression \( TSS = RSS + ESS \). A proof of the latter can be found, e.g., at: http://en.wikipedia.org/wiki/Sum_of_squares

We estimate the variance \( Var[Y - \boldsymbol c^T \boldsymbol x \mid \boldsymbol x] \) as

\[ \sigma^2 = \frac{RSS}{n - k} \]

and compute the t-statistic for coefficient \( i \) as

\[ t_i = \frac{c_i}{\sqrt{\sigma^2 \cdot \left( (X^T X)^{-1} \right)_{ii} }} \,. \]

The \( p \)-value for coefficient \( i \) gives the probability of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( \( c_i = 0 \)) is true. Letting \( F_\nu \) denote the cumulative distribution function of the Student's t-distribution with \( \nu \) degrees of freedom, the \( p \)-value for coefficient \( i \) is therefore

\[ p_i = \Pr(|T| \geq |t_i|) = 2 \cdot (1 - F_{n - k}( |t_i| )) \]

where \( T \) is a Student's t-distributed random variable with mean 0 and \( n - k \) degrees of freedom.

The condition number [2] \( \kappa(X) = \|X\|_2\cdot\|X^{-1}\|_2\) is computed as the product of two spectral norms [3]. The spectral norm of a matrix \(X\) is the largest singular value of \(X\), i.e., the square root of the largest eigenvalue of the positive-semidefinite matrix \(X^{*}X\):

\[ \|X\|_2 = \sqrt{\lambda_{\max}\left(X^{*}X\right)}\ , \]

where \(X^{*}\) is the conjugate transpose of \(X\). The condition number of a linear regression problem is a worst-case measure of how sensitive the result is to small perturbations of the input. A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.
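
In practice this can be checked directly on the output table, for example (a sketch, assuming the houses_linregr table from the Examples section above):

    SELECT condition_no,
           condition_no > 1000 AS possible_multicollinearity
    FROM houses_linregr;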

Literature:

[1] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 21 October 2009, http://www.stat.cmu.edu/~cshalizi/350/lectures/17/lecture-17.pdf

[2] Wikipedia: Condition Number, http://en.wikipedia.org/wiki/Condition_number

[3] Wikipedia: Spectral Norm, http://en.wikipedia.org/wiki/Spectral_norm#Spectral_norm

[4] Wikipedia: Breusch–Pagan test, http://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test

[5] Wikipedia: Heteroscedasticity-consistent standard errors, http://en.wikipedia.org/wiki/Heteroscedasticity-consistent_standard_errors