Hypothesis tests are used to confirm or reject a null hypothesis \( H_0 \) about the distribution of random variables, given realizations of these random variables. Since it is generally not possible to make such statements with certainty, one is interested in the probability \( p \) of seeing random variates at least as extreme as the ones observed, assuming that \( H_0 \) is true. If this probability \( p \) is small, \( H_0 \) can be rejected at significance level \( p \). Rejecting \( H_0 \) is the canonical goal when employing a hypothesis test: that is, hypothesis tests are typically used to substantiate that the alternative hypothesis \( H_1 \) is true instead.
Hypothesis tests may be divided into parametric and non-parametric tests. A parametric test assumes certain distributions and makes inferences about parameters of the distributions (e.g., the mean of a normal distribution). Formally, there is a given domain of possible parameters \( \Gamma \) and the null hypothesis \( H_0 \) is the event that the true parameter \( \gamma_0 \in \Gamma_0 \), where \( \Gamma_0 \subsetneq \Gamma \). Non-parametric tests, on the other hand, do not assume any particular distribution of the sample (e.g., a non-parametric test may simply test if two distributions are similar).
The first step of a hypothesis test is to compute a test statistic, which is a function of the random variates, i.e., a random variate itself. A hypothesis test relies on the distribution of the test statistic being (approximately) known. Now, the \( p \)-value is the probability of seeing a test statistic at least as extreme as the one observed, assuming that \( H_0 \) is true. In a case where the null hypothesis corresponds to a family of distributions (e.g., in a parametric test where \( \Gamma_0 \) is not a singleton set), the \( p \)-value is the supremum, over all possible distributions according to the null hypothesis, of these probabilities.
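To make the relationship between test statistic and \( p \)-value concrete, the following sketch (Python, purely for illustration and not part of MADlib) computes one- and two-sided \( p \)-values for a test statistic that is standard normal under \( H_0 \):

```python
import math

def normal_p_values(z):
    """p-values for a test statistic that is standard normal under H0."""
    # One-sided: P(Z >= z), computed via the complementary error function.
    one_sided = 0.5 * math.erfc(z / math.sqrt(2.0))
    # Two-sided: P(|Z| >= |z|), i.e., "at least as extreme in either direction".
    two_sided = math.erfc(abs(z) / math.sqrt(2.0))
    return one_sided, two_sided

one_sided, two_sided = normal_p_values(1.96)
# two_sided is close to 0.05, the conventional significance threshold
```

For statistics that are only asymptotically normal (as for several of the rank-based tests below), such \( p \)-values are approximations that improve with sample size.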
Input data is assumed to be normalized with all values stored row-wise. In general, the following inputs are expected.
One-sample tests expect the following form:
{TABLE|VIEW} source ( ... value DOUBLE PRECISION ... )
Two-sample tests expect the following form:
{TABLE|VIEW} source ( ... first BOOLEAN, value DOUBLE PRECISION ... )
The first column indicates whether a value is from the first sample (if TRUE) or the second sample (if FALSE).
Many-sample tests expect the following form:
{TABLE|VIEW} source ( ... group INTEGER, value DOUBLE PRECISION ... )
All tests are implemented as aggregate functions. The non-parametric (rank-based) tests are implemented as ordered aggregate functions and thus require an ORDER BY clause. In the following, the simplest forms of usage are given. Specific function signatures, as described in hypothesis_tests.sql_in, may require more arguments or a different ORDER BY clause.
Run a parametric one-sample test:

SELECT test(value) FROM source

where 'test' can be one of:
- t_test_one (one-sample or dependent paired Student's t-test)
- chi2_gof_test (Pearson's chi-squared goodness-of-fit test, also used for the chi-squared independence test as shown in the example section below)

Run a parametric two-sample or multi-sample test:

SELECT test(first/group, value) FROM source

where 'test' can be one of:
- f_test (Fisher F-test)
- t_test_two_pooled (two-sample pooled Student's t-test, i.e., equal variances)
- t_test_two_unpooled (two-sample unpooled t-test, i.e., unequal variances, also known as Welch's t-test)
- one_way_anova (one-way analysis of variance, multi-sample)

Run a non-parametric two-sample/multi-sample test:

SELECT test(first/group, value ORDER BY value) FROM source

where 'test' can be one of:
- ks_test (Kolmogorov-Smirnov test)
- mw_test (Mann-Whitney test)
- wsr_test (Wilcoxon signed-rank test; applied to paired differences in the example below)

Note on non-parametric tests: The Kolmogorov-Smirnov two-sample test is based on asymptotic theory. The p-value is obtained by comparing the test statistic with the Kolmogorov distribution. The p-value is also adjusted for data with heavy-tailed distributions, which may give different results than those of R's ks.test function. See [3] for a detailed explanation.

The literature is not unanimous about the definitions of the Wilcoxon rank-sum and Mann-Whitney tests. There are two possible definitions of the statistic; MADlib outputs the minimum of the two and uses it for significance testing. This may give different results for both mw_test and wsr_test compared to statistical functions in other popular packages (such as R's wilcox.test function). See [4] for a detailed explanation.
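The min-of-two convention for the Mann-Whitney statistic can be illustrated in a few lines of Python (a sketch with made-up data, not MADlib code): counting, for each cross-sample pair of observations, whether the first-sample value exceeds the second (ties counting one half) yields one U statistic; the complementary count yields the other, and the two always sum to \( n_1 n_2 \).

```python
def mann_whitney_u(sample1, sample2):
    """Both Mann-Whitney U statistics; MADlib reports the smaller one."""
    # U1 counts pairs (a, b) with a > b; ties contribute 1/2.
    u1 = sum(1.0 if a > b else 0.5 if a == b else 0.0
             for a in sample1 for b in sample2)
    u2 = len(sample1) * len(sample2) - u1  # complementary definition
    return min(u1, u2), max(u1, u2)

u_min, u_max = mann_whitney_u([1.1, 2.3, 3.5], [0.5, 2.9, 4.0, 4.2])
# u_min + u_max == 12 == n1 * n2
```

Packages that report the other U (or the rank sum W) will therefore show a different statistic for the same data, even though the two are deterministically related.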
-- Load data
DROP TABLE IF EXISTS auto83b;
CREATE TABLE auto83b (
    id SERIAL,
    mpg_us DOUBLE PRECISION,
    mpg_j DOUBLE PRECISION
);
COPY auto83b (mpg_us, mpg_j) FROM stdin DELIMITER '|';
18|24
15|27
18|27
16|25
17|31
15|35
14|24
14|19
21|31
10|32
10|24
11|26
9| 9
\N|32
\N|37
\N|38
\N|34
\N|34
\N|32
\N|33
\N|32
\N|25
\N|24
\N|37
13|\N
12|\N
18|\N
21|\N
19|\N
21|\N
15|\N
16|\N
15|\N
11|\N
20|\N
21|\N
19|\N
15|\N
\.
-- Create table for one-sample tests
DROP TABLE IF EXISTS auto83b_one_sample;
CREATE TABLE auto83b_one_sample AS
    SELECT mpg_us AS mpg FROM auto83b WHERE mpg_us IS NOT NULL;

-- Print table
SELECT * FROM auto83b_one_sample;
 mpg
-----
  18
  15
  18
  16
  17
  15
  14
  14
  21
  10
  10
  11
   9
  13
  12
  18
  21
  19
  21
  15
  16
  15
  11
  20
  21
  19
  15
(27 rows)
-- Create table for two-sample tests
DROP TABLE IF EXISTS auto83b_two_sample;
CREATE TABLE auto83b_two_sample AS
    SELECT TRUE AS is_us, mpg_us AS mpg FROM auto83b WHERE mpg_us IS NOT NULL
    UNION ALL
    SELECT FALSE, mpg_j FROM auto83b WHERE mpg_j IS NOT NULL;

-- Print table
SELECT * FROM auto83b_two_sample;
 is_us | mpg
-------+-----
 t     |  18
 t     |  15
 t     |  18
 t     |  16
 t     |  17
 t     |  15
 t     |  14
 t     |  14
 t     |  21
 t     |  10
 t     |  10
 t     |  11
 t     |   9
 t     |  13
 t     |  12
 t     |  18
 t     |  21
 t     |  19
 t     |  21
 t     |  15
 t     |  16
 t     |  15
 t     |  11
 t     |  20
 t     |  21
 t     |  19
 t     |  15
 f     |  24
 f     |  27
 f     |  27
 f     |  25
 f     |  31
 f     |  35
 f     |  24
 f     |  19
 f     |  31
 f     |  32
 f     |  24
 f     |  26
 f     |   9
 f     |  32
 f     |  37
 f     |  38
 f     |  34
 f     |  34
 f     |  32
 f     |  33
 f     |  32
 f     |  25
 f     |  24
 f     |  37
(51 rows)
-- One-sample tests
SELECT (madlib.t_test_one(mpg - 20)).* FROM auto83b_one_sample;  -- test rejected for mean = 20
statistic        | df | p_value_one_sided | p_value_two_sided
-----------------+----+-------------------+----------------------
-6.0532478722666 | 26 | 0.999998926789141 | 2.14642171769697e-06
SELECT (madlib.t_test_one(mpg - 15.7)).* FROM auto83b_one_sample; -- test not rejected
statistic           | df | p_value_one_sided | p_value_two_sided
--------------------+----+-------------------+-------------------
0.00521831713126531 | 26 | 0.497938118950661 | 0.995876237901321
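The t statistic above can be cross-checked outside the database. The following Python sketch (data transcribed from auto83b_one_sample; not MADlib code) reproduces the statistic for \( H_0\colon \mu = 20 \):

```python
import math
import statistics

# The 27 non-NULL mpg_us values from auto83b
mpg = [18, 15, 18, 16, 17, 15, 14, 14, 21, 10, 10, 11, 9,
       13, 12, 18, 21, 19, 21, 15, 16, 15, 11, 20, 21, 19, 15]

def t_one_sample(xs, mu0):
    """One-sample t statistic for H0: mean == mu0."""
    n = len(xs)
    return (statistics.mean(xs) - mu0) / (statistics.stdev(xs) / math.sqrt(n))

t = t_one_sample(mpg, 20)  # about -6.0532, matching the output above
df = len(mpg) - 1          # 26
```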
-- Two-sample tests
SELECT (madlib.t_test_two_pooled(is_us, mpg)).* FROM auto83b_two_sample;
statistic         | df | p_value_one_sided | p_value_two_sided
------------------+----+-------------------+----------------------
-8.89342267075968 | 49 | 0.999999999995748 | 8.50408632402377e-12
SELECT (madlib.t_test_two_unpooled(is_us, mpg)).* FROM auto83b_two_sample;
statistic         | df               | p_value_one_sided | p_value_two_sided
------------------+------------------+-------------------+----------------------
-8.61746388524314 | 35.1283818346179 | 0.999999999821218 | 3.57563867403599e-10
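Both two-sample statistics follow directly from the group means and variances. A Python cross-check (data transcribed from auto83b; not MADlib code) showing the pooled statistic and the Welch statistic with its Welch-Satterthwaite degrees of freedom:

```python
import math
import statistics

us = [18, 15, 18, 16, 17, 15, 14, 14, 21, 10, 10, 11, 9,
      13, 12, 18, 21, 19, 21, 15, 16, 15, 11, 20, 21, 19, 15]
jp = [24, 27, 27, 25, 31, 35, 24, 19, 31, 32, 24, 26,
      9, 32, 37, 38, 34, 34, 32, 33, 32, 25, 24, 37]

n1, n2 = len(us), len(jp)
m1, m2 = statistics.mean(us), statistics.mean(jp)
v1, v2 = statistics.variance(us), statistics.variance(jp)

# Pooled (equal variances): weighted average of the two sample variances.
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_pooled = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))  # about -8.8934

# Unpooled (Welch): separate variances, Welch-Satterthwaite df.
se2 = v1 / n1 + v2 / n2
t_welch = (m1 - m2) / math.sqrt(se2)                       # about -8.6175
df_welch = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1)
                       + (v2 / n2) ** 2 / (n2 - 1))        # about 35.128
```

The non-integer df returned by t_test_two_unpooled above is exactly this Welch-Satterthwaite quantity.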
SELECT (madlib.f_test(is_us, mpg)).* FROM auto83b_two_sample; -- Test result indicates that the two distributions have different variances
statistic         | df1 | df2 | p_value_one_sided | p_value_two_sided
------------------+-----+-----+-------------------+---------------------
0.311786921089247 | 26  | 23  | 0.997559863672441 | 0.00488027265511803
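The F statistic is simply the ratio of the two sample variances, with \( (n_1 - 1, n_2 - 1) \) degrees of freedom. A Python cross-check (same transcribed data; not MADlib code):

```python
import statistics

us = [18, 15, 18, 16, 17, 15, 14, 14, 21, 10, 10, 11, 9,
      13, 12, 18, 21, 19, 21, 15, 16, 15, 11, 20, 21, 19, 15]
jp = [24, 27, 27, 25, 31, 35, 24, 19, 31, 32, 24, 26,
      9, 32, 37, 38, 34, 34, 32, 33, 32, 25, 24, 37]

# Ratio of sample variances (first sample over second sample).
f = statistics.variance(us) / statistics.variance(jp)  # about 0.3118
df1, df2 = len(us) - 1, len(jp) - 1                    # 26, 23
```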
CREATE TABLE chi2_test_blood_group (
    id SERIAL,
    blood_group VARCHAR,
    observed BIGINT,
    expected DOUBLE PRECISION
);
INSERT INTO chi2_test_blood_group(blood_group, observed, expected) VALUES
    ('O', 67, 82.28),
    ('A', 83, 84.15),
    ('B', 29, 14.96),
    ('AB', 8, 5.61);
SELECT (madlib.chi2_gof_test(observed, expected)).* FROM chi2_test_blood_group;
statistic        | p_value              | df | phi              | contingency_coef
-----------------+----------------------+----+------------------+-------------------
17.0481013341976 | 0.000690824622923826 | 3  | 2.06446732440826 | 0.899977280680593
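Pearson's statistic is \( \sum_i (o_i - e_i)^2 / e_i \). The following Python sketch (not MADlib code) reproduces the value above; the expected counts in this data set already sum to the observed total (187), so the rescaling step is a no-op here but is included for generality:

```python
observed = [67, 83, 29, 8]
expected = [82.28, 84.15, 14.96, 5.61]

# Rescale expected counts so they sum to the observed total
# (a no-op for this data, but required when they are given
# only up to a constant factor).
scale = sum(observed) / sum(expected)
chi2 = sum((o - scale * e) ** 2 / (scale * e)
           for o, e in zip(observed, expected))  # about 17.0481
df = len(observed) - 1                           # 3
```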
The chi-squared independence test uses the chi-squared goodness-of-fit function, as shown in the example below. The expected value needs to be computed and passed to the goodness-of-fit function. For element (i,j) of the input matrix, the expected value is computed as (sum of row i) * (sum of column j); for example, the expected value for element (2,1) is the sum of row 2 times the sum of column 1. Note that the expected values need only be correct up to a constant factor (the goodness-of-fit function rescales them to the observed total), which is why the division by the grand total from the classical formula can be omitted here.
CREATE TABLE chi2_test_friendly (
    id_x SERIAL,
    values INTEGER[]
);
INSERT INTO chi2_test_friendly(values) VALUES
    (array[5, 29, 14, 16]),
    (array[15, 54, 14, 10]),
    (array[20, 84, 17, 94]),
    (array[68, 119, 26, 7]);
-- The test expects one observed value per row, so unnest the arrays
CREATE TABLE chi2_test_friendly_unpivoted AS
    SELECT id_x, id_y, values[id_y] AS observed
    FROM chi2_test_friendly, generate_series(1,4) AS id_y;
-- Compute the chi-squared independence statistic by calculating the
-- expected values in SQL and calling the goodness-of-fit function
SELECT (madlib.chi2_gof_test(observed, expected, deg_freedom)).*
FROM (
    -- Compute expected values
    SELECT
        observed,
        sum(observed) OVER (PARTITION BY id_x)::DOUBLE PRECISION
            * sum(observed) OVER (PARTITION BY id_y) AS expected
    FROM chi2_test_friendly_unpivoted
) p, (
    -- Compute degrees of freedom
    SELECT (count(DISTINCT id_x) - 1) * (count(DISTINCT id_y) - 1) AS deg_freedom
    FROM chi2_test_friendly_unpivoted
) q;
statistic        | p_value              | df | phi              | contingency_coef
-----------------+----------------------+----+------------------+-------------------
138.289841626008 | 2.32528678709871e-25 | 9  | 2.93991753313346 | 0.946730727519112
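The SQL above can be cross-checked in Python with the classical formula, where the expected count for cell (i,j) is (row sum) * (column sum) / (grand total); the sketch below (not MADlib code) reproduces the statistic:

```python
table = [[5, 29, 14, 16],
         [15, 54, 14, 10],
         [20, 84, 17, 94],
         [68, 119, 26, 7]]

row_sums = [sum(row) for row in table]
col_sums = [sum(col) for col in zip(*table)]
total = sum(row_sums)

# Classical expected count: row sum * column sum / grand total.
chi2 = sum((table[i][j] - row_sums[i] * col_sums[j] / total) ** 2
           / (row_sums[i] * col_sums[j] / total)
           for i in range(len(table)) for j in range(len(table[0])))
df = (len(table) - 1) * (len(table[0]) - 1)  # 9
```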
CREATE TABLE nist_anova_test (
    id SERIAL,
    resistance FLOAT8[]
);
INSERT INTO nist_anova_test(resistance) VALUES
    (array[6.9,8.3,8.0]),
    (array[5.4,6.8,10.5]),
    (array[5.8,7.8,8.1]),
    (array[4.6,9.2,6.9]),
    (array[4.0,6.5,9.3]);
SELECT (madlib.one_way_anova(level, value)).*
FROM (
    SELECT level, resistance[level] AS value
    FROM nist_anova_test,
         (SELECT * FROM generate_series(1,3) level) q1
) q2;
sum_squares_between | sum_squares_within | df_between | df_within | mean_squares_between | mean_squares_within | statistic        | p_value
--------------------+--------------------+------------+-----------+----------------------+---------------------+------------------+--------------------
27.8973333333333    | 17.452             | 2          | 12        | 13.9486666666667     | 1.45433333333333    | 9.59110703644281 | 0.0032482226008593
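The ANOVA decomposition can be verified directly: the between-group sum of squares measures how far the group means deviate from the grand mean, and the within-group sum measures deviations inside each group. A Python sketch (groups transcribed from the resistance arrays above, one group per array position; not MADlib code):

```python
groups = [
    [6.9, 5.4, 5.8, 4.6, 4.0],   # resistance[1]
    [8.3, 6.8, 7.8, 9.2, 6.5],   # resistance[2]
    [8.0, 10.5, 8.1, 6.9, 9.3],  # resistance[3]
]

all_values = [v for g in groups for v in g]
grand_mean = sum(all_values) / len(all_values)

# Between: group sizes times squared deviation of group mean from grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within: squared deviations of each value from its own group mean.
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)

df_between = len(groups) - 1                   # 2
df_within = len(all_values) - len(groups)      # 12
f = (ss_between / df_between) / (ss_within / df_within)  # about 9.5911
```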
CREATE TABLE ks_sample_1 AS
    SELECT TRUE AS first, unnest(ARRAY[
        0.22, -0.87, -2.39, -1.79, 0.37, -1.54, 1.28, -0.31, -0.74, 1.72,
        0.38, -0.17, -0.62, -1.10, 0.30, 0.15, 2.30, 0.19, -0.50, -0.09
    ]) AS value
    UNION ALL
    SELECT FALSE, unnest(ARRAY[
        -5.13, -2.19, -2.43, -3.83, 0.50, -3.25, 4.32, 1.63, 5.18, -0.43,
        7.11, 4.87, -3.10, -5.81, 3.76, 6.31, 2.58, 0.07, 5.76, 3.50
    ]);
SELECT (madlib.ks_test(
    first, value,
    (SELECT count(value) FROM ks_sample_1 WHERE first),
    (SELECT count(value) FROM ks_sample_1 WHERE NOT first)
    ORDER BY value
)).*
FROM ks_sample_1;
statistic | k_statistic     | p_value
----------+-----------------+--------------------
0.45      | 1.4926782214936 | 0.0232132758544496
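The D statistic is the maximum absolute difference between the two empirical CDFs, and the printed k_statistic is consistent with Stephens' finite-sample scaling \( (\sqrt{n_e} + 0.12 + 0.11/\sqrt{n_e})\,D \), where \( n_e = n_1 n_2 / (n_1 + n_2) \) (see [3]). A Python cross-check with the data above (not MADlib code):

```python
import math

s1 = [0.22, -0.87, -2.39, -1.79, 0.37, -1.54, 1.28, -0.31, -0.74, 1.72,
      0.38, -0.17, -0.62, -1.10, 0.30, 0.15, 2.30, 0.19, -0.50, -0.09]
s2 = [-5.13, -2.19, -2.43, -3.83, 0.50, -3.25, 4.32, 1.63, 5.18, -0.43,
      7.11, 4.87, -3.10, -5.81, 3.76, 6.31, 2.58, 0.07, 5.76, 3.50]

def ks_d(a, b):
    """Maximum distance between the two empirical CDFs."""
    points = sorted(set(a) | set(b))
    return max(abs(sum(x <= p for x in a) / len(a)
                   - sum(y <= p for y in b) / len(b)) for p in points)

d = ks_d(s1, s2)                                 # 0.45
ne = len(s1) * len(s2) / (len(s1) + len(s2))     # effective sample size, 10
k = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d  # about 1.4927
```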
SELECT (madlib.mw_test(is_us, mpg ORDER BY mpg)).* FROM auto83b_two_sample;  -- Note first parameter above is BOOLEAN
statistic         | u_statistic | p_value_one_sided | p_value_two_sided
------------------+-------------+-------------------+----------------------
-5.50097925755249 | 32.5        | 0.999999981115618 | 3.77687645883758e-08
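The u_statistic above can be reproduced by direct pair counting (ties count one half), taking the smaller of the two complementary U values, as MADlib does. A Python cross-check with the transcribed auto83b data (not MADlib code):

```python
us = [18, 15, 18, 16, 17, 15, 14, 14, 21, 10, 10, 11, 9,
      13, 12, 18, 21, 19, 21, 15, 16, 15, 11, 20, 21, 19, 15]
jp = [24, 27, 27, 25, 31, 35, 24, 19, 31, 32, 24, 26,
      9, 32, 37, 38, 34, 34, 32, 33, 32, 25, 24, 37]

# Pairs where the US value exceeds the Japanese value; ties contribute 1/2.
u1 = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in us for b in jp)
u2 = len(us) * len(jp) - u1  # complementary U
u = min(u1, u2)              # 32.5, matching u_statistic above
```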
DROP TABLE IF EXISTS test_wsr;
CREATE TABLE test_wsr (
    x DOUBLE PRECISION,
    y DOUBLE PRECISION
);
COPY test_wsr (x, y) FROM stdin DELIMITER '|';
0.32|0.39
0.4|0.47
0.11|0.11
0.47|0.43
0.32|0.42
0.35|0.3
0.32|0.43
0.63|0.98
0.5|0.86
0.6|0.79
0.38|0.33
0.46|0.45
0.2|0.22
0.31|0.3
0.62|0.6
0.52|0.53
0.77|0.85
0.23|0.21
0.3|0.33
0.7|0.57
0.41|0.43
0.53|0.49
0.19|0.2
0.31|0.35
0.48|0.4
\.

SELECT (madlib.wsr_test(
    x - y,
    2 * 2^(-52) * greatest(x,y)
    ORDER BY abs(x - y)
)).*
FROM test_wsr;
statistic | rank_sum_pos | rank_sum_neg | num | z_statistic       | p_value_one_sided | p_value_two_sided
----------+--------------+--------------+-----+-------------------+-------------------+-------------------
105.5     | 105.5        | 194.5        | 24  | -1.27318365656729 | 0.898523560667509 | 0.202952878664983
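The signed-rank computation can be sketched in Python (not the MADlib implementation): discard zero differences, rank the remaining differences by absolute value with ties sharing their mean rank, and sum the ranks of the positive and negative differences. In this sketch the differences are rounded to two decimals to sidestep floating-point noise in tie detection, whereas the SQL call above uses an explicit tolerance argument instead:

```python
import math

pairs = [(0.32, 0.39), (0.4, 0.47), (0.11, 0.11), (0.47, 0.43), (0.32, 0.42),
         (0.35, 0.3), (0.32, 0.43), (0.63, 0.98), (0.5, 0.86), (0.6, 0.79),
         (0.38, 0.33), (0.46, 0.45), (0.2, 0.22), (0.31, 0.3), (0.62, 0.6),
         (0.52, 0.53), (0.77, 0.85), (0.23, 0.21), (0.3, 0.33), (0.7, 0.57),
         (0.41, 0.43), (0.53, 0.49), (0.19, 0.2), (0.31, 0.35), (0.48, 0.4)]

# Non-zero differences, rounded to avoid spurious floating-point ties.
diffs = [round(x - y, 2) for x, y in pairs if round(x - y, 2) != 0.0]
n = len(diffs)  # 24: one pair has zero difference

# Rank by |d|; tied values share the mean of the ranks they occupy.
by_abs = sorted(range(n), key=lambda i: abs(diffs[i]))
ranks = [0.0] * n
i = 0
while i < n:
    j = i
    while j < n and abs(diffs[by_abs[j]]) == abs(diffs[by_abs[i]]):
        j += 1
    for k in range(i, j):
        ranks[by_abs[k]] = (i + j + 1) / 2.0  # mean of ranks i+1 .. j
    i = j

w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)  # 105.5
w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)  # 194.5

# Normal approximation with tie correction for the variance.
mean_w = n * (n + 1) / 4.0
tie_sizes = {}
for d in diffs:
    tie_sizes[abs(d)] = tie_sizes.get(abs(d), 0) + 1
var_w = (n * (n + 1) * (2 * n + 1) / 24.0
         - sum(t ** 3 - t for t in tie_sizes.values()) / 48.0)
z = (w_pos - mean_w) / math.sqrt(var_w)  # about -1.2732
```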
[1] M. Hollander, D. Wolfe: Nonparametric Statistical Methods, 2nd edition, Wiley, 1999
[2] E. Lehmann, J. Romano: Testing Statistical Hypotheses, 3rd edition, Springer, 2005
[3] M. Stephens: Use of the Kolmogorov-Smirnov, Cramer-Von Mises and related statistics without extensive tables, Journal of the Royal Statistical Society. Series B (Methodological) (1970): 115-122.
[4] Wikipedia: Mann–Whitney U test calculation, http://en.wikipedia.org/wiki/Mann-Whitney_test#Calculations
File hypothesis_tests.sql_in documenting the SQL functions.