Principal component projection is a mathematical procedure that projects high-dimensional data onto a lower-dimensional space. This lower-dimensional space is defined by the \( k \) principal components with the highest variance in the training data. More details on the mathematics of PCA can be found in pca_train, and details of the principal component projection calculations can be found in the Technical Background.
madlib.pca_project( source_table, pc_table, out_table, row_id, residual_table, result_summary_table )

and
madlib.pca_sparse_project( source_table, pc_table, out_table, row_id, col_id, val_id, row_dim, col_dim, residual_table, result_summary_table )
TEXT. Source table name. As in pca_train, the input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points and \( M \) is the number of features for each data point.
The input table for pca_project is expected to be in one of the two standard MADlib dense matrix formats, and the sparse input table for pca_sparse_project should be in the standard MADlib sparse matrix format. These formats are described in the documentation for pca_train.
TEXT. Table name for the table containing principal components.
TEXT. Name of the table that will contain the low-dimensional representation of the input data.
The out_table encodes a dense matrix with the projection onto the principal components. The table has the following columns:
| Column | Description |
|---|---|
| row_id | Row id of the output matrix. |
| row_vec | A vector containing the elements in that row of the matrix. |
TEXT. Column name containing the row IDs in the input source table.
TEXT. Name of 'col_id' column in sparse matrix representation (sparse matrices only).
TEXT. Name of 'val_id' column in sparse matrix representation (sparse matrices only).
INTEGER. The number of rows in the sparse matrix (sparse matrices only).
INTEGER. The number of columns in the sparse matrix (sparse matrices only).
TEXT, default: NULL. Name of the optional residual table.
The residual_table encodes a dense residual matrix. The table has the following columns:
| Column | Description |
|---|---|
| row_id | Row id of the residual matrix. |
| row_vec | A vector containing the elements in that row of the residual matrix. |
TEXT, default: NULL. Name of the optional summary table.
The result_summary_table contains information about the execution time and accuracy of the PCA projection. The table has the following columns:
| Column | Description |
|---|---|
| exec_time | Wall clock time (ms) of the function. |
| residual_norm | Absolute error of the residuals. |
| relative_residual_norm | Relative error of the residuals. |
SELECT madlib.pca_project();
DROP TABLE IF EXISTS mat;
CREATE TABLE mat (
    row_id integer,
    row_vec double precision[]
);
COPY mat (row_id, row_vec) FROM stdin;
1 {1,2,5}
0 {4,7,5}
3 {9,2,4}
2 {7,4,4}
5 {0,5,5}
4 {8,5,7}
\.
DROP TABLE IF EXISTS result_table;
SELECT madlib.pca_train( 'mat',
                         'result_table',
                         'row_id',
                         2
                       );
DROP TABLE IF EXISTS residual_table, result_summary_table, out_table;
SELECT madlib.pca_project( 'mat',
                           'result_table',
                           'out_table',
                           'row_id',
                           'residual_table',
                           'result_summary_table'
                         );
SELECT * FROM result_summary_table;

Result:
   exec_time   | residual_norm | relative_residual_norm
---------------+---------------+------------------------
 5685.40501595 | 2.19726255664 |         0.099262204234
Given a table containing some principal components \( \boldsymbol P \) and some input data \( \boldsymbol X \), the low-dimensional representation \( {\boldsymbol X}' \) is computed as
\begin{align*} {\boldsymbol {\hat{X}}} & = {\boldsymbol X} - \vec{e} \hat{x}^T \\ {\boldsymbol X}' & = {\boldsymbol {\hat {X}}} {\boldsymbol P}. \end{align*}
where \( \hat{x} \) is the vector of column means of \( \boldsymbol X \) and \( \vec{e} \) is the vector of all ones. This step is equivalent to centering the data around the origin.
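The centering and projection steps above can be sketched outside the database with NumPy. This is an illustration only, not the MADlib implementation; the matrix names mirror the formulas, and the principal components \( \boldsymbol P \) are obtained here via an SVD purely for the sake of a self-contained example:

```python
import numpy as np

# X: N x M input data matrix (N data points, M features).
X = np.array([[1.0, 2.0, 5.0],
              [4.0, 7.0, 5.0],
              [9.0, 2.0, 4.0]])

x_hat = X.mean(axis=0)        # column means of X
X_centered = X - x_hat        # X_hat = X - e * x_hat^T (centering)

# P: M x k matrix whose columns are unit-norm principal components.
# Here k = 2, computed from the SVD of the centered data for illustration.
P = np.linalg.svd(X_centered, full_matrices=False)[2][:2].T

X_proj = X_centered @ P       # X' = X_hat P, the low-dimensional representation
```

Since three centered points in three dimensions span at most a two-dimensional subspace, \( k = 2 \) components reconstruct the centered data exactly in this toy case.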
The residual table \( \boldsymbol R \) is a measure of how well the low-dimensional representation approximates the true input data, and is computed as
\[ {\boldsymbol R} = {\boldsymbol {\hat{X}}} - {\boldsymbol X}' {\boldsymbol P}^T. \]
A residual matrix with entries mostly close to zero indicates a good representation.
The residual norm \( r \) is simply
\[ r = \|{\boldsymbol R}\|_F \]
where \( \|\cdot\|_F \) is the Frobenius norm. The relative residual norm \( r' \) is
\[ r' = \frac{ \|{\boldsymbol R}\|_F }{\|{\boldsymbol X}\|_F } \]
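A short NumPy sketch of the residual calculations, under the same hypothetical names as the formulas (here \( k = 1 \) so that the residual is nonzero):

```python
import numpy as np

X = np.array([[1.0, 2.0, 5.0],
              [4.0, 7.0, 5.0],
              [9.0, 2.0, 4.0],
              [7.0, 4.0, 4.0]])

x_hat = X.mean(axis=0)
X_centered = X - x_hat                                       # X_hat
P = np.linalg.svd(X_centered, full_matrices=False)[2][:1].T  # M x 1 component
X_proj = X_centered @ P                                      # X'

R = X_centered - X_proj @ P.T       # R = X_hat - X' P^T

r = np.linalg.norm(R, 'fro')                 # residual norm r = ||R||_F
r_rel = r / np.linalg.norm(X, 'fro')         # relative norm r' = ||R||_F / ||X||_F
```

Note that the denominator of the relative residual norm uses the original (uncentered) matrix \( \boldsymbol X \), matching the formula above.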