MADlib User Documentation
Principal component projection is a mathematical procedure that projects high-dimensional data onto a lower-dimensional space. This lower-dimensional space is defined by the \( k \) principal components with the highest variance in the training data. More details on the mathematics of PCA can be found in pca_train, and details about the principal component projection calculations can be found in the Technical Background.
View short help messages using the following statements:
-- Summary of PCA projection
madlib.pca_project()
madlib.pca_project('?')
madlib.pca_project('help')

-- Projection function syntax and output table format
madlib.pca_project('usage')

-- Summary of PCA projection with sparse matrices
madlib.pca_sparse_project()
madlib.pca_sparse_project('?')
madlib.pca_sparse_project('help')

-- Projection function syntax and output table format
madlib.pca_sparse_project('usage')
madlib.pca_project(
    source_table,
    pc_table,
    out_table,
    row_id,
    residual_table := NULL,
    result_summary_table := NULL
)

and

madlib.pca_sparse_project(
    source_table,
    pc_table,
    out_table,
    row_id,
    col_id,
    val_id,
    row_dim,
    col_dim,
    residual_table := NULL,
    result_summary_table := NULL
)
Text value. Source table name. As in pca_train, the input data matrix should have \( N \) rows and \( M \) columns, where \( N \) is the number of data points and \( M \) is the number of features for each data point.
The input table for pca_project is expected to be in one of the two standard MADlib dense matrix formats, and the sparse input table for pca_sparse_project should be in the standard MADlib sparse matrix format. These formats are described in the documentation for pca_train.
Text value. Table name for the table containing principal components.
Text value. Name of the table that will contain the low-dimensional representation of the input data.
Text value. Column name containing the row IDs in the input source table.
Text value. Name of 'col_id' column in sparse matrix representation (sparse matrices only).
Text value. Name of 'val_id' column in sparse matrix representation (sparse matrices only).
Integer value. The number of rows in the sparse matrix (sparse matrices only).
Integer value. The number of columns in the sparse matrix (sparse matrices only).
Text value. Name of the optional residual table. Default: NULL.
Text value. Name of the optional result summary table. Default: NULL.
The output is divided into three tables (two of which are optional).
The output table ('out_table' above) encodes a dense matrix with the projection onto the principal components. The table has the following columns:
The residual table ('residual_table' above) encodes a dense residual matrix. The table has the following columns:
The result summary table ('result_summary_table' above) contains information about the performance of the PCA projection. The table has the following columns: exec_time, residual_norm, and relative_residual_norm (see the example below).
sql> DROP TABLE IF EXISTS mat;
sql> CREATE TABLE mat (
         row_id integer,
         row_vec double precision[]
     );
sql> COPY mat (row_id, row_vec) FROM stdin;
1	{1,2,5}
0	{4,7,5}
3	{9,2,4}
2	{7,4,4}
5	{0,5,5}
4	{8,5,7}
\.
sql> DROP TABLE IF EXISTS result_table;
sql> SELECT pca_train(
         'mat',          -- name of the input table
         'result_table', -- name of the output table
         'row_id',       -- column containing the matrix indices
         2               -- number of PCA components to compute
     );
sql> DROP TABLE IF EXISTS residual_table, result_summary_table, out_table;
sql> SELECT pca_project(
         'mat',                   -- name of the input table
         'result_table',          -- name of the table containing the PCs
         'out_table',             -- name of the table containing the projection
         'row_id',                -- column containing the input matrix indices
         'residual_table',        -- name of the optional residual table
         'result_summary_table'   -- name of the optional summary table
     );
sql> SELECT * FROM result_summary_table;
   exec_time   | residual_norm | relative_residual_norm
---------------+---------------+------------------------
 5685.40501595 | 2.19726255664 | 0.099262204234
Given a table containing some principal components \( \boldsymbol P \) and some input data \( \boldsymbol X \), the low-dimensional representation \( {\boldsymbol X}' \) is computed as
\begin{align*} {\boldsymbol {\hat{X}}} & = {\boldsymbol X} - \vec{e} \hat{x}^T \\ {\boldsymbol X}' & = {\boldsymbol {\hat {X}}} {\boldsymbol P}. \end{align*}
where \( \hat{x} \) is the vector of column means of \( \boldsymbol X \) and \( \vec{e} \) is the vector of all ones. This step is equivalent to centering the data around the origin.
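The centering and projection steps above are plain matrix arithmetic. As a minimal NumPy sketch (not MADlib's implementation), using the 6 x 3 matrix from the example and obtaining the principal components \( \boldsymbol P \) from an SVD of the centered data:

```python
import numpy as np

# The 6 x 3 example matrix (N = 6 data points, M = 3 features).
X = np.array([[1., 2., 5.],
              [4., 7., 5.],
              [7., 4., 4.],
              [9., 2., 4.],
              [8., 5., 7.],
              [0., 5., 5.]])

x_bar = X.mean(axis=0)   # column means, the vector x-hat
X_hat = X - x_bar        # centered data: X - e x-hat^T

# P: M x k matrix whose columns are the top-k principal components;
# here taken from an SVD of the centered data for illustration.
k = 2
_, _, Vt = np.linalg.svd(X_hat, full_matrices=False)
P = Vt[:k].T             # shape (3, 2)

X_prime = X_hat @ P      # low-dimensional representation X', shape (6, 2)
```

Note that the columns of \( \boldsymbol P \) are orthonormal, so the projection simply expresses each centered row in the basis of the top-\( k \) components.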
The residual table \( \boldsymbol R \) is a measure of how well the low-dimensional representation approximates the true input data, and is computed as
\[ {\boldsymbol R} = {\boldsymbol {\hat{X}}} - {\boldsymbol X}' {\boldsymbol P}^T. \]
A residual matrix with entries mostly close to zero indicates a good representation.
The residual norm \( r \) is simply
\[ r = \|{\boldsymbol R}\|_F \]
where \( \|\cdot\|_F \) is the Frobenius norm. The relative residual norm \( r' \) is
\[ r' = \frac{ \|{\boldsymbol R}\|_F }{ \|{\boldsymbol X}\|_F }. \]
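Continuing the NumPy sketch (again, an illustration rather than MADlib's own code), the residual matrix and both norms can be computed for the example data. Since a truncated SVD gives the best possible rank-2 approximation of the centered matrix, its relative residual norm can be no larger than the value reported in the summary table above (about 0.099):

```python
import numpy as np

# Same 6 x 3 example matrix as above.
X = np.array([[1., 2., 5.],
              [4., 7., 5.],
              [7., 4., 4.],
              [9., 2., 4.],
              [8., 5., 7.],
              [0., 5., 5.]])

X_hat = X - X.mean(axis=0)          # centered data
_, _, Vt = np.linalg.svd(X_hat, full_matrices=False)
P = Vt[:2].T                        # top-2 principal components
X_prime = X_hat @ P                 # low-dimensional representation X'

R = X_hat - X_prime @ P.T           # residual matrix R
r = np.linalg.norm(R)               # Frobenius norm ||R||_F
r_rel = r / np.linalg.norm(X)       # relative residual norm r'

print(r, r_rel)
```

A small \( r' \) indicates that the first two components capture most of the structure in the data, consistent with the summary table in the example.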