PCA Training

About:
Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (i.e., accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components.
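
Formally, if \( \hat{\boldsymbol X} \) denotes the column-centered data matrix (defined in the Technical Background below), the first principal component is the direction of maximum variance. This is the standard textbook formulation, shown here only for orientation:

\[ \vec{w}_1 = \arg\max_{\Vert \vec{w} \Vert = 1} \Vert \hat{\boldsymbol X} \vec{w} \Vert^2 \]

Each succeeding component solves the same maximization under the additional constraint of orthogonality to all earlier components.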

See the Technical Background for an introduction to principal component analysis and the implementation notes.

Online Help
View short help messages using the following statements:
-- Summary of PCA training
SELECT madlib.pca_train();
SELECT madlib.pca_train('?');
SELECT madlib.pca_train('help');

-- Training function syntax and output table format
SELECT madlib.pca_train('usage');

-- Summary of PCA training with sparse matrices
SELECT madlib.pca_sparse_train();
SELECT madlib.pca_sparse_train('?');
SELECT madlib.pca_sparse_train('help');

-- Training function syntax and output table format
SELECT madlib.pca_sparse_train('usage');

Training Function
The training functions have the following formats:
pca_train( source_table, out_table, row_id,
    k, grouping_cols := NULL,
    lanczos_iter := min(k+40, <smallest_matrix_dimension>),
    use_correlation := False, result_summary_table := NULL)
and
pca_sparse_train( source_table, out_table,
    row_id, col_id, val_id, row_dim, col_dim, k,
    grouping_cols := NULL,
    lanczos_iter := min(k+40, <smallest_matrix_dimension>),
    use_correlation := False, result_summary_table := NULL)
Note
Because of the centering step in PCA (see Technical Background), sparse matrices almost always become dense during the training process. Thus, this implementation automatically densifies sparse matrix input, and there should be no expected performance improvement in using sparse matrix input over dense matrix input.
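
For reference, a sparse training call following the signature above might look like this sketch; the source table name sparse_mat and the matrix dimensions are illustrative:

    SELECT madlib.pca_sparse_train(
        'sparse_mat',       -- source table in sparse matrix format (illustrative name)
        'result_table',     -- output table for the principal components
        'row_id',           -- column containing row indices
        'col_id',           -- column containing column indices
        'val_id',           -- column containing the nonzero values
        1000,               -- row dimension of the matrix
        100,                -- column dimension of the matrix
        10                  -- number of principal components to compute
    );
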
Arguments
source_table

Text value. Name of the input table containing the data for PCA training. The input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points, and \( M \) is the number of features for each data point.

A dense input table is expected to be in one of the two standard MADlib dense matrix formats, and a sparse input table should be in the standard MADlib sparse matrix format.

The two standard MADlib dense matrix formats are

{TABLE|VIEW} source_table (
    row_id INTEGER,
    row_vec FLOAT8[]
)

and

{TABLE|VIEW} source_table (
    row_id INTEGER,
    col1 FLOAT8,
    col2 FLOAT8,
    ...
)

Note that the column name row_id is passed as an input parameter; the column itself should contain the row indices (starting at 0) of the input matrix.
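
For instance, the 2×3 matrix with rows (1,2,3) and (4,5,6) could be loaded in the array format as follows; the table name mat_dense is illustrative:

    CREATE TABLE mat_dense (
        row_id INTEGER,
        row_vec FLOAT8[]
    );
    INSERT INTO mat_dense VALUES
        (0, ARRAY[1, 2, 3]),   -- row 0 of the matrix
        (1, ARRAY[4, 5, 6]);   -- row 1 of the matrix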

The input table for sparse PCA is expected to be in the form:

{TABLE|VIEW} source_table (
    ...
    row_id INTEGER,
    col_id INTEGER,
    val_id FLOAT8,
    ...
)

The row_id and col_id columns specify which entries in the matrix are nonzero, and the val_id column defines the values of the nonzero entries.
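
As a concrete sketch, a matrix whose only nonzero entries are (0,0) = 4.0 and (1,2) = 7.5 could be stored as follows; the table name sparse_mat is illustrative:

    CREATE TABLE sparse_mat (
        row_id INTEGER,
        col_id INTEGER,
        val_id FLOAT8
    );
    -- only the nonzero entries need to be stored
    INSERT INTO sparse_mat VALUES
        (0, 0, 4.0),
        (1, 2, 7.5);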

out_table

Text value. Name of the table that will contain the principal components of the input data.

row_id

Text value. Column name containing the row IDs in the input source table.

col_id

Text value. Name of the column containing the column indices in the sparse matrix representation (sparse matrices only).

val_id

Text value. Name of the column containing the nonzero entry values in the sparse matrix representation (sparse matrices only).

row_dim

Integer value. The number of rows in the sparse matrix (sparse matrices only).

col_dim

Integer value. The number of columns in the sparse matrix (sparse matrices only).

k

Integer value. The number of principal components to calculate from the input data.

grouping_cols

Text value. Currently grouping_cols is present as a placeholder for forward compatibility. The parameter is planned to be implemented as a comma-separated list of column names, with the source data grouped using the combination of all the columns. An independent PCA model will be computed for each combination of the grouping columns. Default: NULL.

lanczos_iter

Integer value. The number of Lanczos iterations for the SVD calculation. The Lanczos iteration number roughly corresponds to the accuracy of the SVD calculation, and a higher iteration number corresponds to greater accuracy but longer computation time. The number of iterations must be at least as large as the value of k, but no larger than the smallest dimension of the matrix. If the iteration number is given as zero, then the default number of iterations is used. Default: minimum of {k+40, smallest matrix dimension}.

use_correlation

Boolean value. Whether to use the correlation matrix for calculating the principal components instead of the covariance matrix. Currently use_correlation is a placeholder for forward compatibility, and this value must be set to false. Default: False.

result_summary_table
Text value. Name of the optional summary table. Default: NULL.
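
Putting the optional arguments together, a dense training call that also requests a summary table might look like the following sketch (table names are illustrative):

    SELECT madlib.pca_train(
        'mat',              -- source table
        'result_table',     -- output table
        'row_id',           -- column containing row indices
        3,                  -- number of principal components
        NULL,               -- grouping_cols (placeholder, leave as NULL)
        0,                  -- lanczos_iter (0 means use the default)
        FALSE,              -- use_correlation (must currently be false)
        'result_summary'    -- name of the optional summary table
    );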

Output Tables

The output is divided into three tables (one of which is optional). The primary output table ('out_table' above) encodes the principal components with the k highest eigenvalues. The table has the following columns:

row_id

Rank of the associated eigenvalue, in descending order of eigenvalue size.

principal_components

Vectors containing elements of the principal components.

eigen_values
The eigenvalues associated with each principal component.

In addition to the output table, a table containing the column means is also generated. This table has the same name as the output table, with the string "_mean" appended to the end. This table has only one column:

column_mean
A vector containing the column means for the input matrix.
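
For example, if the output table is named result_table, the means can be inspected with a query such as:

    SELECT column_mean FROM result_table_mean;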

The optional summary table contains information about the performance of the PCA. This table has the following columns:

rows_used
Number of data points in the input.
exec_time (ms)
Number of milliseconds for the PCA calculation to run.
iter
Number of iterations used in the SVD calculation.
recon_error
The absolute error in the SVD approximation.
relative_recon_error
The relative error in the SVD approximation.
use_correlation
Indicates if the correlation matrix was used.
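
If a summary table name was supplied (for instance 'result_summary' in the sketch above), its contents can be viewed with an ordinary query:

    SELECT * FROM result_summary;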

Examples:
  1. Create the sample data.
    sql> DROP TABLE IF EXISTS mat;
    CREATE TABLE mat (
        row_id integer,
        row_vec double precision[]
    );
    
    sql> COPY mat (row_id, row_vec) FROM stdin;
    0   {1,2,3}
    1   {2,1,2}
    2   {3,2,1}
    \.
  2. Run the PCA function:
    sql> DROP TABLE IF EXISTS result_table;
    sql> SELECT pca_train(
        'mat',              -- name of the input table
        'result_table',     -- name of the output table
        'row_id',           -- column containing the matrix indices
        3                   -- number of principal components to compute
    );
    
  3. View the PCA results:
    sql> SELECT * from result_table;
     row_id |                     principal_components                     |     eigen_values
    --------+--------------------------------------------------------------+----------------------
          0 | {0.707106781186547,0.408248290459781,-0.577350269192513}     |                    2
          2 | {-0.707106781186547,0.408248290459781,-0.577350269192512}    | 1.26294130828989e-08
          1 | {2.08166817117217e-17,-0.816496580931809,-0.577350269183852} |    0.816496580927726
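
  4. (Optional) View the column means computed during the centering step. This table has the output table name with "_mean" appended; for this input, the single column_mean row should be approximately {2, 1.667, 2}:
    sql> SELECT * FROM result_table_mean;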

See Also
File pca.sql_in documenting the SQL functions
PCA Projection

Technical Background

The PCA implemented here uses an SVD decomposition to recover the principal components (as opposed to directly computing the eigenvectors of the covariance matrix). Let \( \boldsymbol X \) be the data matrix, and let \( \hat{x} \) be the vector of column averages of \( \boldsymbol{X} \). PCA computes the matrix \( \hat{\boldsymbol X} \) as

\[ \hat{\boldsymbol X} = {\boldsymbol X} - \vec{e} \hat{x}^T \]

where \( \vec{e} \) is the vector of all ones.
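
As a small worked instance, for the 3×3 matrix used in the example above, \( \hat{x} = (2, 5/3, 2)^T \), so

\[ \hat{\boldsymbol X} = \begin{pmatrix} -1 & 1/3 & 1 \\ 0 & -2/3 & 0 \\ 1 & 1/3 & -1 \end{pmatrix} \]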

PCA then computes the SVD matrix factorization

\[ \hat{\boldsymbol X} = {\boldsymbol U}{\boldsymbol \Sigma}{\boldsymbol V}^T \]

where \( {\boldsymbol \Sigma} \) is a diagonal matrix. The eigenvalues are recovered as the entries of \( {\boldsymbol \Sigma}/(\sqrt{N-1}) \), and the principal components are the rows of \( {\boldsymbol V} \).

It is important to note that the PCA implementation assumes that the user will use only the principal components that have non-zero eigenvalues. The SVD calculation is done with the Lanczos method, which does not guarantee correctness for singular vectors with zero-valued eigenvalues. Consequently, principal components with zero-valued eigenvalues are not guaranteed to be correct. Generally, this will not be a problem unless the user wants to use the principal components for the entire eigenspectrum.
