MADlib  1.4.1
User Documentation
Principal Component Analysis

Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (i.e., accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components.

See the Technical Background for an introduction to principal component analysis and the implementation notes.

Training Function
The training functions have the following formats:
pca_train( source_table,  
           out_table, 
           row_id, 
           k, 
           grouping_cols, 
           lanczos_iter, 
           use_correlation, 
           result_summary_table
         ) 
and
pca_sparse_train( source_table, 
                  out_table, 
                  row_id, 
                  col_id, 
                  val_id, 
                  row_dim,  
                  col_dim, 
                  k, 
                  grouping_cols, 
                  lanczos_iter, 
                  use_correlation, 
                  result_summary_table
                )

Arguments

source_table

TEXT. Name of the input table containing the data for PCA training. The input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points, and \( M \) is the number of features for each data point.

A dense input table is expected to be in one of the two standard MADlib dense matrix formats, and a sparse input table should be in the standard MADlib sparse matrix format.

The two standard MADlib dense matrix formats are

{TABLE|VIEW} source_table (
    row_id INTEGER,
    row_vec FLOAT8[]
)

and

{TABLE|VIEW} source_table (
    row_id INTEGER,
    col1 FLOAT8,
    col2 FLOAT8,
    ...
)

Note that the name of the row_id column is passed as an input parameter; this column should contain the row indices (starting at 0) of the input matrix.

The input table for sparse PCA is expected to be in the form:

{TABLE|VIEW} source_table (
    ...
    row_id INTEGER,
    col_id INTEGER,
    val_id FLOAT8,
    ...
)

The row_id and col_id columns specify which entries in the matrix are nonzero, and the val_id column defines the values of the nonzero entries.
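
As a concrete illustration of the sparse format, the following Python sketch (not MADlib code; the function name and data are hypothetical) densifies a set of (row_id, col_id, val) triples, treating unlisted entries as zero. This mirrors what the training function does internally, since sparse input is densified before the SVD is computed (see the Notes section).

```python
# Illustration only: densify sparse (row_id, col_id, val) triples into a
# row-major dense matrix. Entries not listed in the triples are zero.
def densify(triples, row_dim, col_dim):
    """Build a dense matrix from 0-based (row, col, value) triples."""
    dense = [[0.0] * col_dim for _ in range(row_dim)]
    for r, c, v in triples:
        dense[r][c] = v
    return dense

# A hypothetical 2x3 sparse matrix with two nonzero entries:
entries = [(0, 0, 5.0), (1, 2, 3.0)]
dense = densify(entries, row_dim=2, col_dim=3)
```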

out_table

TEXT. The name of the table that will contain the output. The output is divided into three tables.

The primary output table (out_table) encodes the principal components with the k highest eigenvalues. The table has the following columns:

row_id: Rank of the eigenvalue, in descending order of eigenvalue size.
principal_components: Vectors containing the elements of the principal components.
eigen_values: The eigenvalue associated with each principal component.

The table out_table_means contains the column means. This table has just one column:

column_means: A vector containing the column means for the input matrix.

The optional table result_summary_table contains information about the performance of the PCA. The contents of this table are described under the result_summary_table argument.

row_id

TEXT. Column name containing the row IDs in the input source table.

col_id

TEXT. Name of the column containing the column indices in the sparse matrix representation (sparse matrices only).

val_id

TEXT. Name of the column containing the nonzero entry values in the sparse matrix representation (sparse matrices only).

row_dim

INTEGER. The number of rows in the sparse matrix (sparse matrices only).

col_dim

INTEGER. The number of columns in the sparse matrix (sparse matrices only).

k

INTEGER. The number of principal components to calculate from the input data.

grouping_cols (optional)

TEXT, default: NULL. Currently grouping_cols is present as a placeholder for forward compatibility. The parameter is planned to be implemented as a comma-separated list of column names, with the source data grouped using the combination of all the columns. An independent PCA model will be computed for each combination of the grouping columns.

lanczos_iter (optional)

INTEGER, default: minimum of {k+40, smallest matrix dimension}. The number of Lanczos iterations for the SVD calculation. The Lanczos iteration number roughly corresponds to the accuracy of the SVD calculation, and a higher iteration number corresponds to greater accuracy but longer computation time. The number of iterations must be at least as large as the value of k, but no larger than the smallest dimension of the matrix. If the iteration number is given as zero, then the default number of iterations is used.
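
The default and the bounds on lanczos_iter can be sketched as follows (a Python illustration of the rules above, not MADlib code; the function name is hypothetical):

```python
def resolve_lanczos_iter(k, row_dim, col_dim, lanczos_iter=0):
    """Apply the lanczos_iter rules: 0 selects the default
    min(k + 40, smallest matrix dimension); otherwise the value
    must lie between k and the smallest matrix dimension."""
    smallest_dim = min(row_dim, col_dim)
    if lanczos_iter == 0:
        return min(k + 40, smallest_dim)
    if not (k <= lanczos_iter <= smallest_dim):
        raise ValueError("lanczos_iter must be in [k, smallest matrix dimension]")
    return lanczos_iter
```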

use_correlation (optional)

BOOLEAN, default: FALSE. Whether to use the correlation matrix for calculating the principal components instead of the covariance matrix. Currently use_correlation is a placeholder for forward compatibility, and this value must be set to FALSE.

result_summary_table (optional)

TEXT, default: NULL. Name of the optional summary table. When NULL, no summary table is generated.

This summary table has the following columns:

rows_used: INTEGER. Number of data points in the input.
exec_time (ms): FLOAT8. Number of milliseconds for the PCA calculation to run.
iter: INTEGER. Number of iterations used in the SVD calculation.
recon_error: FLOAT8. The absolute error in the SVD approximation.
relative_recon_error: FLOAT8. The relative error in the SVD approximation.
use_correlation: BOOLEAN. Indicates if the correlation matrix was used.
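
For intuition, the two reconstruction-error quantities can be computed from a truncated SVD as in the NumPy sketch below. The norm is an assumption here (the Frobenius norm is a common choice for this metric); this illustrates the quantities, not MADlib's exact computation.

```python
import numpy as np

def reconstruction_errors(x_hat, u, s, vt):
    """Absolute and relative error of the SVD approximation of the
    centered matrix x_hat (Frobenius norm assumed)."""
    approx = u @ (s[:, None] * vt)      # U * Sigma * V^T
    abs_err = np.linalg.norm(x_hat - approx)
    return abs_err, abs_err / np.linalg.norm(x_hat)

x_hat = np.array([[1.0, 0.0], [0.0, -1.0]])
u, s, vt = np.linalg.svd(x_hat)         # full SVD: error is ~0
abs_err, rel_err = reconstruction_errors(x_hat, u, s, vt)
```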

Examples
  1. View online help for the PCA training function.
    SELECT madlib.pca_train();
    
  2. Create the sample data.
    DROP TABLE IF EXISTS mat;
    CREATE TABLE mat (
        row_id integer,
        row_vec double precision[]
    );
    COPY mat (row_id, row_vec) FROM stdin;
    0   {1,2,3}
    1   {2,1,2}
    2   {3,2,1}
    \.
    
  3. Run the PCA function:
    DROP TABLE IF EXISTS result_table;
    SELECT madlib.pca_train( 'mat',
                             'result_table',
                             'row_id',
                             3
                           );
    
  4. View the PCA results:
    SELECT * FROM result_table;
    
    Result
     row_id |                     principal_components                     |     eigen_values
    --------+--------------------------------------------------------------+----------------------
          0 | {0.707106781186547,0.408248290459781,-0.577350269192513}     |                    2
          2 | {-0.707106781186547,0.408248290459781,-0.577350269192512}    | 1.26294130828989e-08
          1 | {2.08166817117217e-17,-0.816496580931809,-0.577350269183852} |    0.816496580927726
    

Notes
  • Table names can be optionally schema qualified (current_schemas() would be searched if a schema name is not provided) and all table and column names should follow case-sensitivity and quoting rules per the database. (For instance, 'mytable' and 'MyTable' both resolve to the same entity, i.e. 'mytable'. If mixed-case or multi-byte characters are desired for entity names then the string should be double-quoted; in this case the input would be '"MyTable"').
  • Because of the centering step in PCA (see Technical Background), sparse matrices almost always become dense during the training process. Thus, this implementation automatically densifies sparse matrix input, and there should be no expected performance improvement in using sparse matrix input over dense matrix input.

Technical Background

The PCA implemented here uses an SVD implementation to recover the principal components (as opposed to directly computing the eigenvectors of the covariance matrix). Let \( \boldsymbol X \) be the data matrix, and let \( \hat{x} \) be the vector of column averages of \( \boldsymbol{X}\). PCA computes the matrix \( \hat{\boldsymbol X} \) as

\[ \hat{\boldsymbol X} = {\boldsymbol X} - \vec{e} \hat{x}^T \]

where \( \vec{e} \) is the vector of all ones.

PCA then computes the SVD matrix factorization

\[ \hat{\boldsymbol X} = {\boldsymbol U}{\boldsymbol \Sigma}{\boldsymbol V}^T \]

where \( {\boldsymbol \Sigma} \) is a diagonal matrix. The eigenvalues are recovered as the entries of \( {\boldsymbol \Sigma}/(\sqrt{N-1}) \), and the principal components are the rows of \( {\boldsymbol V} \).
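
The centering and SVD steps can be reproduced in NumPy for the 3x3 matrix from the Examples section (an illustration of the math above, not of MADlib's distributed implementation). The singular values come out as \( 2 \), \( \sqrt{2/3} \approx 0.8165 \), and \( 0 \), matching the eigen_values column shown in the example output.

```python
import numpy as np

# The input matrix from the Examples section.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])
x_bar = X.mean(axis=0)               # column means, stored in out_table_means
X_hat = X - x_bar                    # X_hat = X - e * x_bar^T
u, s, vt = np.linalg.svd(X_hat)      # X_hat = U * Sigma * V^T
components = vt                      # right singular vectors, one per row
# Textbook relation: the eigenvalues of the covariance matrix are
# sigma_i^2 / (N - 1) for singular values sigma_i.
cov_eigenvalues = s ** 2 / (X.shape[0] - 1)
```

Note that the third singular value is (numerically) zero; as discussed below, the component associated with a zero eigenvalue is not guaranteed to be computed correctly.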

It is important to note that this PCA implementation assumes that the user will use only the principal components that have non-zero eigenvalues. The SVD calculation is performed with the Lanczos method, which does not guarantee correctness for singular vectors associated with zero-valued eigenvalues. Consequently, principal components with zero-valued eigenvalues are not guaranteed to be correct. Generally, this will not be a problem unless the user wants to use the principal components for the entire eigenspectrum.

Literature

[1] Principal Component Analysis. http://en.wikipedia.org/wiki/Principal_component_analysis

[2] Shlens, Jonathon (2009), A Tutorial on Principal Component Analysis

Related Topics

File pca.sql_in documenting the SQL functions

Principal Component Projection