MADlib  1.4.1
User Documentation
Principal Component Projection

Principal component projection is a mathematical procedure that projects high-dimensional data onto a lower-dimensional space. This lower-dimensional space is defined by the \( k \) principal components with the highest variance in the training data. More details on the mathematics of PCA can be found in pca_train; details of the principal component projection calculation are given in the Technical Background section below.

Projection Function
The projection functions have the following formats:
madlib.pca_project( source_table, 
                    pc_table, 
                    out_table, 
                    row_id, 
                    residual_table, 
                    result_summary_table
                  )
and
madlib.pca_sparse_project( source_table, 
                           pc_table, 
                           out_table, 
                           row_id, 
                           col_id, 
                           val_id, 
                           row_dim, 
                           col_dim, 
                           residual_table, 
                           result_summary_table
                         ) 
Arguments
source_table

TEXT. Source table name. As in pca_train, the input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points and \( M \) is the number of features per data point.

The input table for pca_project is expected to be in one of the two standard MADlib dense matrix formats, and the sparse input table for pca_sparse_project should be in the standard MADlib sparse matrix format. These formats are described in the documentation for pca_train.
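
For illustration, a minimal sketch of a sparse source table for pca_sparse_project might look like the following (the column names here are arbitrary and are passed to the function via the row_id, col_id, and val_id arguments; see pca_train for the authoritative format):

    CREATE TABLE mat_sparse (
        row_id integer,           -- row index of a non-zero entry
        col_id integer,           -- column index of a non-zero entry
        value  double precision   -- value of the entry at (row_id, col_id)
    );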

pc_table

TEXT. Table name for the table containing principal components.

out_table

TEXT. Name of the table that will contain the low-dimensional representation of the input data.

The out_table encodes a dense matrix with the projection onto the principal components. The table has the following columns:

row_id Row id of the output matrix.
row_vec A vector containing elements in the row of the matrix.

row_id

TEXT. Column name containing the row IDs in the input source table.

col_id

TEXT. Name of 'col_id' column in sparse matrix representation (sparse matrices only).

val_id

TEXT. Name of 'val_id' column in sparse matrix representation (sparse matrices only).

row_dim

INTEGER. The number of rows in the sparse matrix (sparse matrices only).

col_dim

INTEGER. The number of columns in the sparse matrix (sparse matrices only).

residual_table (optional)

TEXT, default: NULL. Name of the optional residual table.

The residual_table encodes a dense residual matrix. The table has the following columns:

row_id Row id of the output matrix.
row_vec A vector containing elements in the row of the residual matrix.

result_summary_table (optional)

TEXT, default: NULL. Name of the optional summary table.

The result_summary_table contains information about the performance time of the PCA projection. The table has the following columns:

exec_time Wall clock time (ms) of the function.
residual_norm Absolute error of the residuals.
relative_residual_norm Relative error of the residuals.

Examples
  1. View online help for the PCA projection function.
    SELECT madlib.pca_project();
    
  2. Create the sample data.
    DROP TABLE IF EXISTS mat;
    CREATE TABLE mat (
        row_id integer,
        row_vec double precision[]
    );
    COPY mat (row_id, row_vec) FROM stdin;
    1   {1,2,5}
    0   {4,7,5}
    3   {9,2,4}
    2   {7,4,4}
    5   {0,5,5}
    4   {8,5,7}
    \.
    
  3. Run the PCA training function, keeping only the top two principal components:
    DROP TABLE IF EXISTS result_table;
    SELECT madlib.pca_train( 'mat', 
                             'result_table', 
                             'row_id', 
                             2 
                           );
    
  4. Project the original data into a low-dimensional representation. (The projected values themselves are viewed in step 6 below.)
    DROP TABLE IF EXISTS residual_table, result_summary_table, out_table;
    SELECT madlib.pca_project( 'mat',
                               'result_table',
                               'out_table',
                               'row_id',
                               'residual_table',
                               'result_summary_table'
                             );
    
  5. Check the error in the projection.
    SELECT * FROM result_summary_table;
    
    Result:
       exec_time   | residual_norm | relative_residual_norm
    ---------------+---------------+------------------------
     5685.40501595 | 2.19726255664 |         0.099262204234
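    
  6. View the projected data (optional). The projected values depend on the sign and ordering of the principal components computed by pca_train, so the output is not reproduced here.
    SELECT * FROM out_table ORDER BY row_id;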
    

Notes
  • This function is intended to operate on the principal component tables generated by pca_train or pca_sparse_train. In addition to the table of principal components, the MADlib PCA training functions generate a table containing the column means; if the projection function cannot find this table, it raises an error. As long as the principal component tables are created with the MADlib training functions, the column-means table will be found automatically by the MADlib projection functions.
  • Because of the centering step in PCA projection (see "Technical Background"), sparse matrices almost always become dense during the projection process. Thus, this implementation automatically densifies sparse matrix input, and there should be no expected performance improvement in using sparse matrix input over dense matrix input.
  • Table names can be optionally schema qualified (current_schemas() is searched if a schema name is not provided) and all table and column names should follow case-sensitivity and quoting rules per the database. (For instance, 'mytable' and 'MyTable' both resolve to the same entity, i.e. 'mytable'. If mixed-case or multi-byte characters are desired for entity names then the string should be double-quoted; in this case the input would be '"MyTable"').
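
For example, a call on a hypothetical mixed-case source table "MyTable" might look like the following sketch (all table names here are illustrative; NULL is passed for the optional residual and summary tables, matching their documented defaults):

    SELECT madlib.pca_project( '"MyTable"',
                               'pc_table',
                               'my_out_table',
                               'row_id',
                               NULL,
                               NULL
                             );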

Technical Background

Given a table containing some principal components \( \boldsymbol P \) and some input data \( \boldsymbol X \), the low-dimensional representation \( {\boldsymbol X}' \) is computed as

\begin{align*} {\boldsymbol {\hat{X}}} & = {\boldsymbol X} - \vec{e} \hat{x}^T \\ {\boldsymbol X}' & = {\boldsymbol {\hat {X}}} {\boldsymbol P}. \end{align*}

Here \( \hat{x} \) is the vector of column means of \( \boldsymbol X \) and \( \vec{e} \) is the vector of all ones. This step is equivalent to centering the data around the origin.
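
For example, with the six-row matrix used in the examples above, the column means are \( \hat{x} = (29/6,\ 25/6,\ 5)^T \), so centering the data row \( (1, 2, 5) \) yields \( (1 - 29/6,\ 2 - 25/6,\ 5 - 5) = (-23/6,\ -13/6,\ 0) \).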

The residual matrix \( \boldsymbol R \), stored in the optional residual table, measures how well the low-dimensional representation approximates the true input data. It is computed as

\[ {\boldsymbol R} = {\boldsymbol {\hat{X}}} - {\boldsymbol X}' {\boldsymbol P}^T. \]

A residual matrix with entries mostly close to zero indicates a good representation.

The residual norm \( r \) is simply

\[ r = \|{\boldsymbol R}\|_F \]

where \( \|\cdot\|_F \) is the Frobenius norm. The relative residual norm \( r' \) is

\[ r' = \frac{ \|{\boldsymbol R}\|_F }{\|{\boldsymbol X}\|_F } \]
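
As a sanity check, the relative residual norm can be recomputed directly from the residual and source tables produced in the examples above. The following is a sketch assuming a PostgreSQL backend with the unnest() array function; it mirrors the formula above, so minor differences from the value reported in result_summary_table are possible:

    SELECT
        (SELECT sqrt(sum(r.val * r.val))
           FROM (SELECT unnest(row_vec) AS val FROM residual_table) r)
        /
        (SELECT sqrt(sum(x.val * x.val))
           FROM (SELECT unnest(row_vec) AS val FROM mat) x)
        AS relative_residual_norm_check;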

Related Topics
File pca_project.sql_in documenting the SQL functions

Principal Component Analysis