User Documentation

About:

Principal component projection is a mathematical procedure that projects high-dimensional data onto a lower-dimensional space. This lower-dimensional space is defined by the \( k \) principal components with the highest variance in the training data. More details on the mathematics of PCA can be found in pca_train, and some details about the principal component projection calculations can be found in the Technical Background.

Online Help

View short help messages using the following statements:

-- Summary of PCA projection
madlib.pca_project()
madlib.pca_project('?')
madlib.pca_project('help')

-- Projection function syntax and output table format
madlib.pca_project('usage')

-- Summary of PCA projection with sparse matrices
madlib.pca_sparse_project()
madlib.pca_sparse_project('?')
madlib.pca_sparse_project('help')

-- Projection function syntax and output table format
madlib.pca_sparse_project('usage')

Projection Functions
The projection functions have the following formats:
madlib.pca_project( source_table, pc_table, out_table, row_id,
    residual_table := NULL, result_summary_table := NULL)
and
madlib.pca_sparse_project( source_table, pc_table, out_table, row_id,
    col_id, val_id, row_dim, col_dim, residual_table := NULL,
    result_summary_table := NULL)
Note
This function is intended to operate on the principal component tables generated by pca_train or pca_sparse_train. In addition to the table containing the principal components, the MADlib PCA functions generate a table containing the column means. If the column-means table is not found by the MADlib projection function, an error is raised. As long as the principal component tables are created with the MADlib functions, the column-means table will be found automatically by the MADlib projection functions.
Because of the centering step in PCA projection (see Technical Background), sparse matrices almost always become dense during the projection process. This implementation therefore automatically densifies sparse matrix input, and no performance improvement should be expected from using sparse rather than dense matrix input.
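To see why centering densifies a sparse matrix, consider a minimal pure-Python sketch (illustrative only, not MADlib code): subtracting a nonzero column mean turns every stored zero in that column into a nonzero entry.

```python
# Illustrative sketch: centering a sparse matrix destroys its sparsity,
# because subtracting a nonzero column mean makes every stored zero nonzero.

X = [
    [0.0, 0.0, 3.0],
    [0.0, 2.0, 0.0],
    [1.0, 0.0, 0.0],
]  # mostly zeros: 3 nonzeros out of 9 entries

n = len(X)
col_means = [sum(row[j] for row in X) / n for j in range(len(X[0]))]
X_centered = [[x - m for x, m in zip(row, col_means)] for row in X]

nnz_before = sum(1 for row in X for x in row if x != 0.0)
nnz_after = sum(1 for row in X_centered for x in row if x != 0.0)
print(nnz_before, nnz_after)  # 3 nonzeros before centering, 9 after
```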
Arguments
source_table

Text value. Source table name. As in pca_train, the input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points and \( M \) is the number of features for each data point.

The input table for pca_project is expected to be in one of the two standard MADlib dense matrix formats, and the sparse input table for pca_sparse_project should be in the standard MADlib sparse matrix format. These formats are described in the documentation for pca_train.

pc_table

Text value. Table name for the table containing principal components.

out_table

Text value. Name of the table that will contain the low-dimensional representation of the input data.

row_id

Text value. Column name containing the row IDs in the input source table.

col_id

Text value. Name of 'col_id' column in sparse matrix representation (sparse matrices only).

val_id

Text value. Name of 'val_id' column in sparse matrix representation (sparse matrices only).

row_dim

Integer value. The number of rows in the sparse matrix (sparse matrices only).

col_dim

Integer value. The number of columns in the sparse matrix (sparse matrices only).

residual_table

Text value. Name of the optional residual table. Default: NULL.

result_summary_table
Text value. Name of the optional summary table. Default: NULL.

Output Tables

The output is divided into three tables (two of which are optional).

The output table ('out_table' above) encodes a dense matrix with the projection onto the principal components. The table has the following columns:

row_id
Row id of the output matrix.
row_vec
A vector containing elements in the row of the matrix.

The residual table ('residual_table' above) encodes a dense residual matrix. The table has the following columns:

row_id
Row id of the residual matrix.
row_vec
A vector containing elements in the row of the residual matrix.

The result summary table ('result_summary_table' above) contains information about the performance time of the PCA projection. The table has the following columns:

exec_time
Wall clock time (ms) of the function.
residual_norm
Absolute error of the residuals.
relative_residual_norm
Relative error of the residuals.

Examples:
  1. Create the sample data.
    sql> DROP TABLE IF EXISTS mat;
    sql> CREATE TABLE mat (
        row_id integer,
        row_vec double precision[]
    );
    
    sql> COPY mat (row_id, row_vec) FROM stdin;
    1   {1,2,5}
    0   {4,7,5}
    3   {9,2,4}
    2   {7,4,4}
    5   {0,5,5}
    4   {8,5,7}
    \.
  2. Run the PCA function and keep only the top two PCs:
    sql> DROP TABLE IF EXISTS result_table;
    sql> SELECT pca_train(
        'mat',              -- name of the input table
        'result_table',     -- name of the output table
        'row_id',           -- column containing the matrix indices
        2                   -- Number of PCA components to compute
    );
    
  3. Project the original data into a low-dimensional representation.
    sql> DROP TABLE IF EXISTS residual_table, result_summary_table, out_table;
    sql> SELECT pca_project(
        'mat',              -- name of the input table
        'result_table',     -- name of the table containing the PCs
    'out_table',        -- name of the table containing the projection
        'row_id',           -- column containing the input matrix indices
        'residual_table',         -- Name of the optional residual table
        'result_summary_table'    -- Name of the optional summary table
    );
    
  4. Check the error in the projection.
    sql> SELECT * FROM result_summary_table;
       exec_time   | residual_norm | relative_residual_norm
    ---------------+---------------+------------------------
     5685.40501595 | 2.19726255664 |         0.099262204234
    

See Also
File pca_project.sql_in documenting the SQL functions.
PCA Training

Technical Background

Given a table containing some principal components \( \boldsymbol P \) and some input data \( \boldsymbol X \), the low-dimensional representation \( {\boldsymbol X}' \) is computed as

\begin{align*} {\boldsymbol {\hat{X}}} & = {\boldsymbol X} - \vec{e} \hat{x}^T \\ {\boldsymbol X}' & = {\boldsymbol {\hat {X}}} {\boldsymbol P}. \end{align*}

where \( \hat{x} \) is the vector of column means of \( \boldsymbol X \) and \( \vec{e} \) is the vector of all ones. This step is equivalent to centering the data around the origin.
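The two steps above can be sketched in plain Python (variable names here are illustrative, not MADlib internals):

```python
# Projection sketch: X_hat = X - e x_bar^T, then X' = X_hat P.
X = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]                       # N x M input data
P = [[0.7071067811865475],             # M x k matrix whose columns are
     [0.7071067811865475]]             # the top-k principal components

n, m, k = len(X), len(X[0]), len(P[0])

# Column means x_bar, and the centered matrix X_hat
x_bar = [sum(row[j] for row in X) / n for j in range(m)]
X_hat = [[row[j] - x_bar[j] for j in range(m)] for row in X]

# Low-dimensional representation X' = X_hat P  (N x k)
X_prime = [[sum(X_hat[i][j] * P[j][c] for j in range(m)) for c in range(k)]
           for i in range(n)]
print(X_prime)  # each row is the projection of one centered data point
```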

The residual table \( \boldsymbol R \) is a measure of how well the low-dimensional representation approximates the true input data, and is computed as

\[ {\boldsymbol R} = {\boldsymbol {\hat{X}}} - {\boldsymbol X}' {\boldsymbol P}^T. \]

A residual matrix with entries mostly close to zero indicates a good representation.

The residual norm \( r \) is simply

\[ r = \|{\boldsymbol R}\|_F \]

where \( \|\cdot\|_F \) is the Frobenius norm. The relative residual norm \( r' \) is

\[ r' = \frac{ \|{\boldsymbol R}\|_F }{\|{\boldsymbol X}\|_F } \]
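Putting the pieces together, the residual matrix and both norms can be checked with a short pure-Python sketch (names are illustrative, not MADlib code). In this toy example the centered data lie exactly in the span of the single principal component, so both norms come out essentially zero:

```python
import math

# Sketch of the residual computation: R = X_hat - X' P^T, plus the
# Frobenius norms reported in the result summary table.
X = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]                        # N x M input data
P = [[0.7071067811865475],              # single principal component (M x 1)
     [0.7071067811865475]]

n, m, k = len(X), len(X[0]), len(P[0])
x_bar = [sum(row[j] for row in X) / n for j in range(m)]
X_hat = [[row[j] - x_bar[j] for j in range(m)] for row in X]
X_prime = [[sum(X_hat[i][j] * P[j][c] for j in range(m)) for c in range(k)]
           for i in range(n)]

# R = X_hat - X' P^T  (error of projecting back to the original space)
R = [[X_hat[i][j] - sum(X_prime[i][c] * P[j][c] for c in range(k))
      for j in range(m)] for i in range(n)]

def frobenius(A):
    return math.sqrt(sum(x * x for row in A for x in row))

residual_norm = frobenius(R)                       # r  = ||R||_F
relative_residual_norm = frobenius(R) / frobenius(X)  # r' = ||R||_F / ||X||_F
print(residual_norm, relative_residual_norm)
```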