Logistic Regression
User Documentation for Apache MADlib 2.1.0

Binomial logistic regression models the relationship between a dichotomous dependent variable and one or more predictor variables. The dependent variable may be a Boolean value or a categorical variable that can be represented with a Boolean expression. The probabilities describing the possible outcomes of a single trial are modeled, as a function of the predictor variables, using a logistic function.

Training Function
The logistic regression training function has the following format:
logregr_train( source_table,
               out_table,
               dependent_varname,
               independent_varname,
               grouping_cols,
               max_iter,
               optimizer,
               tolerance,
               verbose
             )
Arguments
source_table

TEXT. Name of the table containing the training data.

out_table

TEXT. Name of the generated table containing the output model.

The output table produced by the logistic regression training function contains the following columns:

<...>

TEXT. Grouping columns, if provided in input. This could be multiple columns depending on the grouping_cols input.

coef

FLOAT8[]. Vector of the coefficients of the regression.

log_likelihood

FLOAT8. The log-likelihood \( l(\boldsymbol c) \).

std_err

FLOAT8[]. Vector of the standard errors of the coefficients.

z_stats

FLOAT8[]. Vector of the z-statistics of the coefficients.

p_values

FLOAT8[]. Vector of the p-values of the coefficients.

odds_ratios

FLOAT8[]. Vector of the odds ratios of the coefficients, \( \exp(c_i) \).

condition_no

FLOAT8. The condition number of the \( X^T X \) matrix. A high condition number usually indicates that there may be some numeric instability in the result, yielding a less reliable model. A high condition number often results when there is a significant amount of collinearity in the underlying design matrix, in which case other regression techniques may be more appropriate.

num_rows_processed

INTEGER. The number of rows actually processed, which is equal to the total number of rows in the source table minus the number of skipped rows.

num_missing_rows_skipped

INTEGER. The number of rows skipped during the training. A row will be skipped if the independent_varname is NULL or contains NULL values.

num_iterations

INTEGER. The number of iterations actually completed. This can be less than the 'max_iter' argument if the algorithm converges, per the 'tolerance' parameter, before all iterations are completed.

variance_covariance

FLOAT8[]. Variance-covariance matrix.

A summary table named <out_table>_summary is also created at the same time, which has the following columns:

method

'logregr' for logistic regression.

source_table

The data source table name.

out_table

The output table name.

dependent_varname

The dependent variable name.

independent_varname

The independent variable names.

optimizer_params

A string that contains all the optimizer parameters, in the form 'optimizer=..., max_iter=..., tolerance=...'

num_all_groups

How many groups of data were fit by the logistic model.

num_failed_groups

How many groups failed in training.

num_rows_processed

The total number of rows used in the computation.

num_missing_rows_skipped

The total number of rows skipped.

grouping_cols

Names of the grouping columns.

dependent_varname

TEXT. Name of the dependent variable column (of type BOOLEAN) in the training data, or an expression evaluating to a BOOLEAN.

independent_varname

TEXT. Expression list to evaluate for the independent variables. An intercept variable is not assumed, so it is common to provide an explicit intercept term by including a single constant 1 term in the independent variable list.

grouping_cols (optional)

TEXT, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single model is generated for the whole data set. (A grouped training call is sketched after this argument list.)

max_iter (optional)

INTEGER, default: 20. The maximum number of iterations allowed.

optimizer (optional)

TEXT, default: 'irls'. The name of the optimizer to use:

'newton' or 'irls': iteratively reweighted least squares
'cg': conjugate gradient
'igd': incremental gradient descent

tolerance (optional)

FLOAT8, default: 0.0001. The difference between log-likelihood values in successive iterations that indicates convergence. A value of zero disables the convergence criterion, so that execution stops after the maximum number of iterations, as set in the 'max_iter' parameter above, has completed.

verbose (optional)
BOOLEAN, default: FALSE. Provides verbose output of the results of training.
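
To illustrate the optional arguments (including grouping, as noted above), the following sketch fits one model per distinct value of treatment in the patients table used in the Examples section below. The output table name is illustrative, and the values shown for max_iter, optimizer, and tolerance are the defaults documented above:

SELECT madlib.logregr_train( 'patients',                  -- Source table
                             'patients_logregr_by_trt',   -- Output table (illustrative name)
                             'second_attack',             -- Dependent variable
                             'ARRAY[1, trait_anxiety]',   -- Feature vector
                             'treatment',                 -- Grouping column: one model per value
                             20,                          -- Max iterations (default)
                             'irls',                      -- Optimizer (default)
                             0.0001,                      -- Tolerance (default)
                             FALSE                        -- Verbose (default)
                           );

The output table then contains one row per distinct value of treatment, along with the usual <out_table>_summary table.
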
Note
For p-values, we return the computed result directly. Other statistical packages, such as R, produce the same result, but when printing to the screen a formatting function is applied, and any p-value smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') is printed as "< xxx", where xxx is the value of the machine epsilon. Although the results may look different, they are in fact the same.

Prediction Function
Two prediction functions are provided. One predicts the Boolean value of the dependent variable, and the other predicts the probability that the dependent variable is 'True'. The syntax is the same for both functions.

The function to predict the boolean value (True/False) of the dependent variable has the following syntax:

logregr_predict(coefficients,
                ind_var
               )

The function to predict the probability of the dependent variable being 'True' has the following syntax:

logregr_predict_prob(coefficients,
                     ind_var
                    )

Arguments

coefficients

DOUBLE PRECISION[]. Model coefficients obtained from training with logregr_train().

ind_var
DOUBLE PRECISION[]. Independent variables. This array should have the same length as the array obtained by evaluating the 'independent_varname' argument in logregr_train().
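
As a quick illustration of the calling convention, both functions can also be applied to literal arrays; the coefficient values below are the ones produced in the Examples section that follows:

SELECT madlib.logregr_predict(
         ARRAY[-6.36346994178192, -1.02410605239327, 0.119044916668607]::FLOAT8[],
         ARRAY[1, 1, 70]::FLOAT8[]   -- intercept, treatment, trait_anxiety
       );

This returns true, since the predicted probability (about 0.72, as computed by logregr_predict_prob on the same inputs) exceeds 0.5.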

Examples
  1. Create the training data table. This data set is related to predicting a second heart attack given treatment and health factors.
    DROP TABLE IF EXISTS patients;
    CREATE TABLE patients( id INTEGER NOT NULL,
                           second_attack INTEGER,
                           treatment INTEGER,
                           trait_anxiety INTEGER);
    INSERT INTO patients VALUES
    (1,  1, 1, 70),
    (2,  1, 1, 80),
    (3,  1, 1, 50),
    (4,  1, 0, 60),
    (5,  1, 0, 40),
    (6,  1, 0, 65),
    (7,  1, 0, 75),
    (8,  1, 0, 80),
    (9,  1, 0, 70),
    (10, 1, 0, 60),
    (11, 0, 1, 65),
    (12, 0, 1, 50),
    (13, 0, 1, 45),
    (14, 0, 1, 35),
    (15, 0, 1, 40),
    (16, 0, 1, 50),
    (17, 0, 0, 55),
    (18, 0, 0, 45),
    (19, 0, 0, 50),
    (20, 0, 0, 60);
    
  2. Train a regression model.
    DROP TABLE IF EXISTS patients_logregr, patients_logregr_summary;
    SELECT madlib.logregr_train( 'patients',                             -- Source table
                                 'patients_logregr',                     -- Output table
                                 'second_attack',                        -- Dependent variable
                                 'ARRAY[1, treatment, trait_anxiety]',   -- Feature vector
                                 NULL,                                   -- Grouping
                                 20,                                     -- Max iterations
                                 'irls'                                  -- Optimizer to use
                               );
    
    Note that in the example above we are dynamically creating the array of independent variables from column names. If you have a large number of independent variables, beyond the PostgreSQL limit on the number of columns per table, you would typically pre-build the arrays and store them in a single column, as sketched below.
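    A hypothetical way to pre-build such a feature array (the feature table and output table names here are illustrative only):
    DROP TABLE IF EXISTS patients_features;
    CREATE TABLE patients_features AS
        SELECT id,
               second_attack,
               ARRAY[1, treatment, trait_anxiety]::FLOAT8[] AS features
        FROM patients;
    -- Train using the pre-built array column:
    DROP TABLE IF EXISTS patients_logregr2, patients_logregr2_summary;
    SELECT madlib.logregr_train('patients_features', 'patients_logregr2',
                                'second_attack', 'features');
    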
  3. View the regression results.
    -- Set extended display on for easier reading of output
    \x on
    SELECT * from patients_logregr;
    
    Result:
    coef                     | {-6.36346994178192,-1.02410605239327,0.119044916668607}
    log_likelihood           | -9.41018298388876
    std_err                  | {3.21389766375099,1.17107844860319,0.0549790458269317}
    z_stats                  | {-1.97998524145757,-0.874498248699539,2.16527796868916}
    p_values                 | {0.0477051870698145,0.381846973530455,0.0303664045046183}
    odds_ratios              | {0.00172337630923221,0.359117354054956,1.12642051220895}
    condition_no             | 326.081922791575
    num_rows_processed       | 20
    num_missing_rows_skipped | 0
    num_iterations           | 5
    variance_covariance      | {{10.329138193064,-0.474304665195738,-0.171995901260057}, ...
    
  4. Alternatively, unnest the arrays in the results for easier reading of output:
    \x off
    SELECT unnest(array['intercept', 'treatment', 'trait_anxiety']) as attribute,
           unnest(coef) as coefficient,
           unnest(std_err) as standard_error,
           unnest(z_stats) as z_stat,
           unnest(p_values) as pvalue,
           unnest(odds_ratios) as odds_ratio
        FROM patients_logregr;
    
    Result:
       attribute   |    coefficient    |   standard_error   |       z_stat       |       pvalue       |     odds_ratio
    ---------------+-------------------+--------------------+--------------------+--------------------+---------------------
     intercept     | -6.36346994178192 |   3.21389766375099 |  -1.97998524145757 | 0.0477051870698145 | 0.00172337630923221
     treatment     | -1.02410605239327 |   1.17107844860319 | -0.874498248699539 |  0.381846973530455 |   0.359117354054956
     trait_anxiety | 0.119044916668607 | 0.0549790458269317 |   2.16527796868916 | 0.0303664045046183 |    1.12642051220895
    (3 rows)
    
  5. Predict the dependent variable using the logistic regression model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.)
    \x off
    -- Display prediction value along with the original value
    SELECT p.id, madlib.logregr_predict(coef, ARRAY[1, treatment, trait_anxiety]),
           p.second_attack::BOOLEAN
    FROM patients p, patients_logregr m
    ORDER BY p.id;
    
    Result:
     id | logregr_predict | second_attack
    ----+-----------------+---------------
      1 | t               | t
      2 | t               | t
      3 | f               | t
      4 | t               | t
      5 | f               | t
      6 | t               | t
      7 | t               | t
      8 | t               | t
      9 | t               | t
     10 | t               | t
     11 | t               | f
     12 | f               | f
     13 | f               | f
     14 | f               | f
     15 | f               | f
     16 | f               | f
     17 | t               | f
     18 | f               | f
     19 | f               | f
     20 | t               | f
    (20 rows)
    
  6. Predict the probability of the dependent variable being TRUE.
    \x off
    -- Display prediction value along with the original value
    SELECT p.id, madlib.logregr_predict_prob(coef, ARRAY[1, treatment, trait_anxiety]),
           p.second_attack::BOOLEAN
    FROM patients p, patients_logregr m
    ORDER BY p.id;
    
    Result:
     id | logregr_predict_prob | second_attack
    ----+----------------------+---------------
      1 |    0.720223028941527 | t
      2 |    0.894354902502048 | t
      3 |    0.192269541755171 | t
      4 |    0.685513072239347 | t
      5 |    0.167747881508857 | t
      6 |     0.79809810891514 | t
      7 |    0.928568075752503 | t
      8 |    0.959305763693571 | t
      9 |    0.877576117431452 | t
     10 |    0.685513072239347 | t
     11 |    0.586700895943317 | f
     12 |    0.192269541755171 | f
     13 |    0.116032010632994 | f
     14 |   0.0383829143134982 | f
     15 |   0.0674976224147597 | f
     16 |    0.192269541755171 | f
     17 |    0.545870774302621 | f
     18 |    0.267675422387132 | f
     19 |    0.398618639285111 | f
     20 |    0.685513072239347 | f
    (20 rows)
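
  7. (Optional) Summarize prediction accuracy on the training data. This is a sketch using only the functions documented above; per the output in step 5, it counts 15 of 20 predictions as correct:
    SELECT COUNT(*) AS correct_predictions
    FROM patients p, patients_logregr m
    WHERE madlib.logregr_predict(coef, ARRAY[1, treatment, trait_anxiety])
          = p.second_attack::BOOLEAN;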
    

Notes
All table names can be optionally schema-qualified (current_schemas() is searched if a schema name is not provided), and all table and column names should follow the database's case-sensitivity and quoting rules. (For instance, 'mytable' and 'MyTable' both resolve to the same entity, i.e. 'mytable'. If mixed-case or multi-byte characters are desired for entity names, then the string should be double-quoted; in this case the input would be '"MyTable"'.)
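
For instance, with illustrative schema and table names, a schema-qualified, mixed-case source table would be passed like this:

SELECT madlib.logregr_train( 'myschema."MyPatients"',
                             'myschema.my_model_out',
                             'second_attack',
                             'ARRAY[1, treatment, trait_anxiety]'
                           );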

Technical Background

(Binomial) logistic regression refers to a stochastic model in which the conditional mean of the dependent dichotomous variable (usually denoted \( Y \in \{ 0,1 \} \)) is the logistic function of an affine function of the vector of independent variables (usually denoted \( \boldsymbol x \)). That is,

\[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \]

for some unknown vector of coefficients \( \boldsymbol c \) and where \( \sigma(x) = \frac{1}{1 + \exp(-x)} \) is the logistic function. Logistic regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.

Let

\( y_1, \dots, y_n \in \{ 0, 1 \} \) denote the observed values of the dependent variable, and
\( \boldsymbol x_1, \dots, \boldsymbol x_n \) denote the corresponding vectors of independent variables.

By definition,

\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{(1 - y_i)} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]

Maximizing the likelihood \( \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) \) is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) \). Using \( \log \sigma(z) = -\log(1 + \exp(-z)) \), this simplifies to

\[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \]

The Hessian of this objective is \( H = -X^T A X \) where \( A = \text{diag}(a_1, \dots, a_n) \) is the diagonal matrix with \( a_i = \sigma(\boldsymbol c^T \boldsymbol x_i) \cdot \sigma(-\boldsymbol c^T \boldsymbol x_i) \,. \) Since \( H \) is non-positive definite, \( l(\boldsymbol c) \) is concave, so maximizing it is a convex optimization problem. There are many techniques for solving convex optimization problems. Currently, logistic regression in MADlib can use one of three algorithms:

iteratively reweighted least squares ('irls' or 'newton')
conjugate gradient ('cg')
incremental gradient descent ('igd')
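
As a sketch of the first of these, a textbook Newton/IRLS step for this objective (not necessarily the exact update MADlib implements) is

\[ \boldsymbol c^{(k+1)} = \boldsymbol c^{(k)} + (X^T A X)^{-1} X^T (\boldsymbol y - \sigma(X \boldsymbol c^{(k)})) \,, \]

where \( \sigma \) is applied element-wise and \( A \) is evaluated at the current iterate \( \boldsymbol c^{(k)} \). Each step amounts to solving a weighted least-squares problem, hence the name iteratively reweighted least squares; the iteration stops when the log-likelihood changes by less than 'tolerance' or after 'max_iter' steps.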

We estimate the standard error for coefficient \( i \) as

\[ \mathit{se}(c_i) = \sqrt{\left( (X^T A X)^{-1} \right)_{ii}} \,. \]

The Wald z-statistic is

\[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \]

The Wald \( p \)-value for coefficient \( i \) gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis (\( c_i = 0 \)) is true. Letting \( F \) denote the cumulative distribution function of a standard normal distribution, the Wald \( p \)-value for coefficient \( i \) is therefore

\[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \]

where \( Z \) is a standard normally distributed random variable.
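
As a concrete check against the example output above: for the trait_anxiety coefficient, the z-statistic is \( z = 2.16527796868916 \), so

\[ p = 2 \cdot (1 - F(2.1653)) \approx 0.0304 \,, \]

which matches the corresponding entry of the p_values column in the regression results.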

The odds ratio for coefficient \( i \) is estimated as \( \exp(c_i) \).
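
For example, for the trait_anxiety coefficient in the example above, \( \exp(0.119044916668607) \approx 1.1264 \), matching the odds_ratios output: each unit increase in trait_anxiety multiplies the estimated odds of a second attack by about 1.13.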

The condition number is computed as \( \kappa(X^T A X) \) during the iteration immediately preceding convergence (i.e., \( A \) is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.

Literature

A selection of references pertaining to logistic regression, with some good pointers to other literature.

[1] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf

[2] Thomas P. Minka: A comparison of numerical optimizers for logistic regression, 2003 (revised Mar 26, 2007), http://research.microsoft.com/en-us/um/people/minka/papers/logreg/minka-logreg.pdf

[3] Paul Komarek, Andrew W. Moore: Making Logistic Regression A Core Data Mining Tool With TR-IRLS, IEEE International Conference on Data Mining 2005, pp. 685-688, http://komarix.org/ac/papers/tr-irls.short.pdf

[4] D. P. Bertsekas: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey, Technical report, Laboratory for Information and Decision Systems, 2010, http://web.mit.edu/dimitrib/www/Incremental_Survey_LIDS.pdf

[5] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro: Robust stochastic approximation approach to stochastic programming, SIAM Journal on Optimization, 19(4), 2009, http://www2.isye.gatech.edu/~nemirovs/SIOPT_RSA_2009.pdf

Related Topics

File logistic.sql_in documenting the training function

logregr_train()

elastic_net_train()

Linear Regression

Multinomial Regression

Ordinal Regression

Robust Variance

Clustered Variance

Cross Validation

Marginal Effects