MADlib 1.1 User Documentation
SQL functions for statistical hypothesis tests.
Functions

- aggregate t_test_result t_test_one(float8 value)
  Perform one-sample or dependent paired Student t-test.
- aggregate t_test_result t_test_two_pooled(boolean first, float8 value)
  Perform two-sample pooled (i.e., equal variances) Student t-test.
- aggregate t_test_result t_test_two_unpooled(boolean first, float8 value)
  Perform unpooled (i.e., unequal variances) t-test (also known as Welch's t-test).
- aggregate f_test_result f_test(boolean first, float8 value)
  Perform Fisher F-test.
- aggregate chi2_test_result chi2_gof_test(bigint observed, float8 expected = 1, bigint df = 0)
  Perform Pearson's chi-squared goodness-of-fit test.
- aggregate ks_test_result ks_test(boolean first, float8 value, bigint m, bigint n)
  Perform Kolmogorov-Smirnov test.
- aggregate mw_test_result mw_test(boolean first, float8 value)
  Perform Mann-Whitney test.
- aggregate wsr_test_result wsr_test(float8 value, float8 precision = -1)
  Perform Wilcoxon signed-rank test.
- aggregate one_way_anova_result one_way_anova(integer group, float8 value)
  Perform one-way analysis of variance.
Definition in file hypothesis_tests.sql_in.
aggregate chi2_test_result chi2_gof_test(bigint observed, float8 expected = 1, bigint df = 0)
Let \( n_1, \dots, n_k \) be a realization of a (vector) random variable \( N = (N_1, \dots, N_k) \) that follows the multinomial distribution with parameters \( k \) and \( p = (p_1, \dots, p_k) \). Test the null hypothesis \( H_0 : p = p^0 \).
Parameters:
- observed: Number \( n_i \) of observations of the current event/row.
- expected: Expected number of observations of the current event/row. This number is not required to be normalized. That is, \( p^0_i \) will be taken as expected divided by sum(expected). Hence, if this parameter is not specified, chi2_gof_test() will by default use \( p^0 = (\frac 1k, \dots, \frac 1k) \), i.e., test that \( p \) is a discrete uniform distribution.
- df: Degrees of freedom. This is the number of events reduced by the degrees of freedom lost by using the observed numbers for defining the expected number of observations. If this parameter is 0, the degrees of freedom are taken as \( (k - 1) \).
Returns:
- statistic FLOAT8: Statistic
  \[ \chi^2 = \sum_{i=1}^k \frac{(n_i - np_i)^2}{np_i} \]
  The corresponding random variable is approximately chi-squared distributed with df degrees of freedom.
- df BIGINT: Degrees of freedom
- p_value FLOAT8: Approximate p-value, i.e., \( \Pr[X^2 \geq \chi^2 \mid p = p^0] \). Computed as (1.0 - chi_squared_cdf(statistic)).
- phi FLOAT8: Phi coefficient, i.e., \( \phi = \sqrt{\frac{\chi^2}{n}} \)
- contingency_coef FLOAT8: Contingency coefficient, i.e., \( \sqrt{\frac{\chi^2}{n + \chi^2}} \)

Usage, assuming a table source with columns var1, var2, and observed:
- Goodness-of-fit test against the discrete uniform distribution:
    SELECT (chi2_gof_test(observed, 1, NULL)).* FROM source
- Chi-squared test of independence:
    SELECT (chi2_gof_test(observed, expected, deg_freedom)).*
    FROM (
        SELECT
            observed,
            sum(observed) OVER (PARTITION BY var1)::DOUBLE PRECISION
                * sum(observed) OVER (PARTITION BY var2) AS expected
        FROM source
    ) p, (
        SELECT (count(DISTINCT var1) - 1) * (count(DISTINCT var2) - 1) AS deg_freedom
        FROM source
    ) q;
Definition at line 549 of file hypothesis_tests.sql_in.
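The \( \chi^2 \) formula above is easy to sanity-check outside the database. The following plain-Python sketch is our own illustration (not MADlib code; all names are made up): it computes the statistic from observed counts and unnormalized expected counts, exactly as the parameter description specifies.

```python
def chi2_statistic(observed, expected):
    """Pearson chi-squared statistic: sum over cells of (n_i - n*p_i)^2 / (n*p_i),
    where p_i = expected_i / sum(expected), per the formula above."""
    n = sum(observed)                      # total number of observations
    total_expected = sum(expected)         # normalization constant for p^0
    stat = 0.0
    for n_i, e_i in zip(observed, expected):
        np_i = n * e_i / total_expected    # expected count n * p^0_i
        stat += (n_i - np_i) ** 2 / np_i
    return stat

# Uniform null p^0 = (1/k, ..., 1/k): pass equal (unnormalized) expected values.
stat = chi2_statistic([16, 18, 16, 14, 12, 12], [1] * 6)
```

Note that, as in the SQL interface, the expected values need not sum to one; only their ratios matter.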
aggregate f_test_result f_test(boolean first, float8 value)
Given realizations \( x_1, \dots, x_m \) and \( y_1, \dots, y_n \) of i.i.d. random variables \( X_1, \dots, X_m \sim N(\mu_X, \sigma^2) \) and \( Y_1, \dots, Y_n \sim N(\mu_Y, \sigma^2) \) with unknown parameters \( \mu_X, \mu_Y, \) and \( \sigma^2 \), test the null hypotheses \( H_0 : \sigma_X \leq \sigma_Y \) and \( H_0 : \sigma_X = \sigma_Y \).
Parameters:
- first: Indicator whether value is from the first sample \( x_1, \dots, x_m \) (if TRUE) or from the second sample \( y_1, \dots, y_n \) (if FALSE)
- value: Value of random variate \( x_i \) or \( y_i \)
Returns:
- statistic FLOAT8: Statistic
  \[ f = \frac{s_Y^2}{s_X^2} \]
  The corresponding random variable is F-distributed with \( (n - 1) \) degrees of freedom in the numerator and \( (m - 1) \) degrees of freedom in the denominator.
- df1 BIGINT: Degrees of freedom in the numerator \( (n - 1) \)
- df2 BIGINT: Degrees of freedom in the denominator \( (m - 1) \)
- p_value_one_sided FLOAT8: Lower bound on one-sided p-value. In detail, the result is \( \Pr[F \geq f \mid \sigma_X = \sigma_Y] \), which is a lower bound on \( \Pr[F \geq f \mid \sigma_X \leq \sigma_Y] \). Computed as (1.0 - fisher_f_cdf(statistic)).
- p_value_two_sided FLOAT8: Two-sided p-value, i.e., \( 2 \cdot \min \{ p, 1 - p \} \) where \( p = \Pr[ F \geq f \mid \sigma_X = \sigma_Y] \). Computed as (2 * min(p_value_one_sided, 1. - p_value_one_sided)).

Usage:
    SELECT (f_test(first, value)).* FROM source
Definition at line 419 of file hypothesis_tests.sql_in.
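For reference, the f statistic and its degrees of freedom can be reproduced in a few lines of stock Python. This is an illustrative sketch under our own naming, not part of MADlib; it uses the sample variances \( s_X^2, s_Y^2 \) exactly as in the formula above.

```python
from statistics import variance  # sample variance, (len - 1) denominator

def f_statistic(xs, ys):
    """f = s_Y^2 / s_X^2, with df1 = n - 1 (numerator) and df2 = m - 1
    (denominator), matching the return columns documented above."""
    f = variance(ys) / variance(xs)
    return f, len(ys) - 1, len(xs) - 1

f, df1, df2 = f_statistic([0, 2], [0, 4])  # variances 2 and 8, so f = 4
```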
aggregate ks_test_result ks_test(boolean first, float8 value, bigint m, bigint n)
Given realizations \( x_1, \dots, x_m \) and \( y_1, \dots, y_n \) of i.i.d. random variables \( X_1, \dots, X_m \) and i.i.d. \( Y_1, \dots, Y_n \), respectively, test the null hypothesis that the underlying distribution functions \( F_X, F_Y \) are identical, i.e., \( H_0 : F_X = F_Y \).
Parameters:
- first: Determines whether the value belongs to the first (if TRUE) or the second sample (if FALSE)
- value: Value of random variate \( x_i \) or \( y_i \)
- m: Size \( m \) of the first sample. See usage instructions below.
- n: Size of the second sample. See usage instructions below.
Returns:
- statistic FLOAT8: Kolmogorov–Smirnov statistic
  \[ d = \max_{t \in \mathbb R} |F_x(t) - F_y(t)| \]
  where \( F_x(t) := \frac 1m |\{ i \mid x_i \leq t \}| \) and \( F_y \) (defined likewise) are the empirical distribution functions.
- k_statistic FLOAT8: Kolmogorov statistic \( k = \left( r + 0.12 + \frac{0.11}{r} \right) d \) where \( r = \sqrt{\frac{m n}{m+n}} \). Then \( k \) is approximately Kolmogorov distributed.
- p_value FLOAT8: Approximate p-value, i.e., an approximate value for \( \Pr[D \geq d \mid F_X = F_Y] \). Computed as (1.0 - kolmogorov_cdf(k_statistic)).

Usage:
    SELECT (ks_test(first, value,
        (SELECT count(value) FROM source WHERE first),
        (SELECT count(value) FROM source WHERE NOT first)
        ORDER BY value
    )).* FROM source

Note: This aggregate must be used as an ordered aggregate (ORDER BY value) and will raise an exception if values are not ordered.

Definition at line 655 of file hypothesis_tests.sql_in.
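The d statistic above can be cross-checked with a direct Python implementation of the two empirical distribution functions (our own sketch, not MADlib code). Since \( |F_x(t) - F_y(t)| \) only changes at observed values, it suffices to evaluate the difference at those points.

```python
def ks_statistic(xs, ys):
    """d = max_t |F_x(t) - F_y(t)|, evaluated at every observed value,
    where F_x and F_y are the empirical CDFs defined above."""
    m, n = len(xs), len(ys)
    d = 0.0
    for t in xs + ys:
        f_x = sum(1 for x in xs if x <= t) / m  # empirical CDF of first sample
        f_y = sum(1 for y in ys if y <= t) / n  # empirical CDF of second sample
        d = max(d, abs(f_x - f_y))
    return d

d = ks_statistic([1, 2], [3, 4])  # fully separated samples give d = 1
```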
aggregate mw_test_result mw_test(boolean first, float8 value)
Given realizations \( x_1, \dots, x_m \) and \( y_1, \dots, y_n \) of i.i.d. random variables \( X_1, \dots, X_m \) and i.i.d. \( Y_1, \dots, Y_n \), respectively, test the null hypothesis that the underlying distributions are equal, i.e., \( H_0 : \forall i,j: \Pr[X_i > Y_j] + \frac{\Pr[X_i = Y_j]}{2} = \frac 12 \).
Parameters:
- first: Determines whether the value belongs to the first (if TRUE) or the second sample (if FALSE)
- value: Value of random variate \( x_i \) or \( y_i \)
Returns:
- statistic FLOAT8: Statistic
  \[ z = \frac{u - \frac{mn}{2}}{\sqrt{\frac{mn(m+n+1)}{12}}} \]
  where \( u \) is the u-statistic computed as follows, and \( \frac{mn}{2} \) is its mean under \( H_0 \). The z-statistic is approximately standard normally distributed.
- u_statistic FLOAT8: Statistic \( u = \min \{ u_x, u_y \} \) where
  \[ u_x = mn + \binom{m+1}{2} - \sum_{i=1}^m r_{x,i} \]
  where
  \[ r_{x,i} = |\{ j \mid x_j < x_i \}| + |\{ j \mid y_j < x_i \}| + \frac{|\{ j \mid x_j = x_i \}| + |\{ j \mid y_j = x_i \}| + 1}{2} \]
  is defined as the rank of \( x_i \) in the combined list of all \( m+n \) observations. For ties, the average rank of all equal values is used.
- p_value_one_sided FLOAT8: Approximate one-sided p-value, i.e., an approximate value for \( \Pr[Z \geq z \mid H_0] \). Computed as (1.0 - normal_cdf(z_statistic)).
- p_value_two_sided FLOAT8: Approximate two-sided p-value, i.e., an approximate value for \( \Pr[|Z| \geq |z| \mid H_0] \). Computed as (2 * normal_cdf(-abs(z_statistic))).

Usage:
    SELECT (mw_test(first, value ORDER BY value)).* FROM source

Note: This aggregate must be used as an ordered aggregate (ORDER BY value) and will raise an exception if values are not ordered.

Definition at line 744 of file hypothesis_tests.sql_in.
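The rank and u-statistic definitions above translate directly into Python. The sketch below is our own illustration (not MADlib code): ranks are computed as "count strictly smaller, plus half the tie group", which is exactly the \( r_{x,i} \) formula.

```python
def mw_u_statistic(xs, ys):
    """u = min{u_x, u_y}, where u_x = m*n + C(m+1, 2) - sum of the ranks of
    the first sample in the combined list (ties get the average rank)."""
    combined = xs + ys
    def rank(v):
        less = sum(1 for c in combined if c < v)
        equal = sum(1 for c in combined if c == v)  # includes v itself
        return less + (equal + 1) / 2               # average rank over ties
    m, n = len(xs), len(ys)
    u_x = m * n + m * (m + 1) // 2 - sum(rank(x) for x in xs)
    u_y = m * n + n * (n + 1) // 2 - sum(rank(y) for y in ys)
    return min(u_x, u_y)
```

A useful invariant for checking such an implementation is \( u_x + u_y = mn \).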
aggregate one_way_anova_result one_way_anova(integer group, float8 value)
Given realizations \( x_{1,1}, \dots, x_{1, n_1}, x_{2,1}, \dots, x_{2,n_2}, \dots, x_{k,n_k} \) of i.i.d. random variables \( X_{i,j} \sim N(\mu_i, \sigma^2) \) with unknown parameters \( \mu_1, \dots, \mu_k \) and \( \sigma^2 \), test the null hypothesis \( H_0 : \mu_1 = \dots = \mu_k \).
Parameters:
- group: Group which value is from. Note that group can assume arbitrary values not limited to a contiguous range of integers.
- value: Value of random variate \( x_{i,j} \)
Returns:
- sum_squares_between DOUBLE PRECISION: Sum of squares between the group means, i.e., \( \mathit{SS}_b = \sum_{i=1}^k n_i (\overline{x_i} - \bar x)^2 \)
- sum_squares_within DOUBLE PRECISION: Sum of squares within the groups, i.e., \( \mathit{SS}_w = \sum_{i=1}^k (n_i - 1) s_i^2 \)
- df_between BIGINT: Degrees of freedom for between-group variation \( (k-1) \)
- df_within BIGINT: Degrees of freedom for within-group variation \( (n-k) \)
- mean_squares_between DOUBLE PRECISION: Mean square between groups, i.e., \( s_b^2 := \frac{\mathit{SS}_b}{k-1} \)
- mean_squares_within DOUBLE PRECISION: Mean square within groups, i.e., \( s_w^2 := \frac{\mathit{SS}_w}{n-k} \)
- statistic DOUBLE PRECISION: Statistic computed as
  \[ f = \frac{s_b^2}{s_w^2} \]
  This statistic is Fisher F-distributed with \( (k-1) \) degrees of freedom in the numerator and \( (n-k) \) degrees of freedom in the denominator.
- p_value DOUBLE PRECISION: p-value, i.e., \( \Pr[ F \geq f \mid H_0] \)

Usage:
    SELECT (one_way_anova(group, value)).* FROM source
Definition at line 987 of file hypothesis_tests.sql_in.
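The sums of squares and the f statistic above can be verified with a short Python sketch (our own code under made-up names, not MADlib's implementation), taking the data pre-split into per-group lists.

```python
from statistics import mean, variance  # variance is the sample variance s_i^2

def one_way_anova_f(groups):
    """Returns (SS_b, SS_w, f) for a list of per-group observation lists,
    following the formulas documented above."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = mean([x for g in groups for x in g])
    ss_b = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_w = sum((len(g) - 1) * variance(g) for g in groups)
    ms_b = ss_b / (k - 1)   # mean square between groups, s_b^2
    ms_w = ss_w / (n - k)   # mean square within groups, s_w^2
    return ss_b, ss_w, ms_b / ms_w
```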
aggregate t_test_result t_test_one(float8 value)
Given realizations \( x_1, \dots, x_n \) of i.i.d. random variables \( X_1, \dots, X_n \sim N(\mu, \sigma^2) \) with unknown parameters \( \mu \) and \( \sigma^2 \), test the null hypotheses \( H_0 : \mu \leq 0 \) and \( H_0 : \mu = 0 \).
Parameters:
- value: Value of random variate \( x_i \)
Returns:
- statistic FLOAT8: Statistic
  \[ t = \frac{\sqrt n \cdot \bar x}{s} \]
  The corresponding random variable is Student-t distributed with \( (n - 1) \) degrees of freedom.
- df FLOAT8: Degrees of freedom \( (n - 1) \)
- p_value_one_sided FLOAT8: Lower bound on one-sided p-value. In detail, the result is \( \Pr[\bar X \geq \bar x \mid \mu = 0] \), which is a lower bound on \( \Pr[\bar X \geq \bar x \mid \mu \leq 0] \). Computed as (1.0 - students_t_cdf(statistic)).
- p_value_two_sided FLOAT8: Two-sided p-value, i.e., \( \Pr[ |\bar X| \geq |\bar x| \mid \mu = 0] \). Computed as (2 * students_t_cdf(-abs(statistic))).

Usage:
- One-sample t-test:
    SELECT (t_test_one(value - mu_0)).* FROM source
- Dependent paired t-test:
    SELECT (t_test_one(first - second - mu_0)).* FROM source
Definition at line 224 of file hypothesis_tests.sql_in.
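As a numeric cross-check of the one-sample formula, here is a minimal Python sketch (illustrative only, not MADlib code); s is the sample standard deviation, as in the statistic above.

```python
import math
from statistics import mean, stdev  # stdev uses the (n - 1) denominator

def t_one(values):
    """t = sqrt(n) * xbar / s, with df = n - 1."""
    n = len(values)
    return math.sqrt(n) * mean(values) / stdev(values), n - 1

t, df = t_one([1, 2, 3])  # mean 2, s = 1, so t = 2 * sqrt(3)
```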
aggregate t_test_result t_test_two_pooled(boolean first, float8 value)
Given realizations \( x_1, \dots, x_n \) and \( y_1, \dots, y_m \) of i.i.d. random variables \( X_1, \dots, X_n \sim N(\mu_X, \sigma^2) \) and \( Y_1, \dots, Y_m \sim N(\mu_Y, \sigma^2) \) with unknown parameters \( \mu_X, \mu_Y, \) and \( \sigma^2 \), test the null hypotheses \( H_0 : \mu_X \leq \mu_Y \) and \( H_0 : \mu_X = \mu_Y \).
Parameters:
- first: Indicator whether value is from the first sample \( x_1, \dots, x_n \) (if TRUE) or from the second sample \( y_1, \dots, y_m \) (if FALSE)
- value: Value of random variate \( x_i \) or \( y_i \)
Returns:
- statistic FLOAT8: Statistic
  \[ t = \frac{\bar x - \bar y}{s_p \sqrt{1/n + 1/m}} \]
  where
  \[ s_p^2 = \frac{\sum_{i=1}^n (x_i - \bar x)^2 + \sum_{i=1}^m (y_i - \bar y)^2}{n + m - 2} \]
  is the pooled variance. The corresponding random variable is Student-t distributed with \( (n + m - 2) \) degrees of freedom.
- df FLOAT8: Degrees of freedom \( (n + m - 2) \)
- p_value_one_sided FLOAT8: Lower bound on one-sided p-value. In detail, the result is \( \Pr[\bar X - \bar Y \geq \bar x - \bar y \mid \mu_X = \mu_Y] \), which is a lower bound on \( \Pr[\bar X - \bar Y \geq \bar x - \bar y \mid \mu_X \leq \mu_Y] \). Computed as (1.0 - students_t_cdf(statistic)).
- p_value_two_sided FLOAT8: Two-sided p-value, i.e., \( \Pr[ |\bar X - \bar Y| \geq |\bar x - \bar y| \mid \mu_X = \mu_Y] \). Computed as (2 * students_t_cdf(-abs(statistic))).

Usage:
    SELECT (t_test_two_pooled(first, value)).* FROM source
Definition at line 300 of file hypothesis_tests.sql_in.
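The pooled statistic can likewise be reproduced directly from the two formulas above. This is an illustrative sketch with our own names, not MADlib code.

```python
import math

def t_two_pooled(xs, ys):
    """Pooled two-sample t: t = (xbar - ybar) / (s_p * sqrt(1/n + 1/m)),
    where s_p^2 is the pooled variance and df = n + m - 2."""
    n, m = len(xs), len(ys)
    xbar, ybar = sum(xs) / n, sum(ys) / m
    ss = sum((x - xbar) ** 2 for x in xs) + sum((y - ybar) ** 2 for y in ys)
    sp2 = ss / (n + m - 2)  # pooled variance s_p^2
    return (xbar - ybar) / math.sqrt(sp2 * (1 / n + 1 / m)), n + m - 2
```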
aggregate t_test_result t_test_two_unpooled(boolean first, float8 value)
Given realizations \( x_1, \dots, x_n \) and \( y_1, \dots, y_m \) of i.i.d. random variables \( X_1, \dots, X_n \sim N(\mu_X, \sigma_X^2) \) and \( Y_1, \dots, Y_m \sim N(\mu_Y, \sigma_Y^2) \) with unknown parameters \( \mu_X, \mu_Y, \sigma_X^2, \) and \( \sigma_Y^2 \), test the null hypotheses \( H_0 : \mu_X \leq \mu_Y \) and \( H_0 : \mu_X = \mu_Y \).
Parameters:
- first: Indicator whether value is from the first sample \( x_1, \dots, x_n \) (if TRUE) or from the second sample \( y_1, \dots, y_m \) (if FALSE)
- value: Value of random variate \( x_i \) or \( y_i \)
Returns:
- statistic FLOAT8: Statistic
  \[ t = \frac{\bar x - \bar y}{\sqrt{s_X^2/n + s_Y^2/m}} \]
  The corresponding random variable is approximately Student-t distributed with
  \[ \frac{(s_X^2 / n + s_Y^2 / m)^2}{(s_X^2 / n)^2/(n-1) + (s_Y^2 / m)^2/(m-1)} \]
  degrees of freedom (Welch–Satterthwaite formula).
- df FLOAT8: Degrees of freedom (as above)
- p_value_one_sided FLOAT8: Lower bound on one-sided p-value. In detail, the result is \( \Pr[\bar X - \bar Y \geq \bar x - \bar y \mid \mu_X = \mu_Y] \), which is a lower bound on \( \Pr[\bar X - \bar Y \geq \bar x - \bar y \mid \mu_X \leq \mu_Y] \). Computed as (1.0 - students_t_cdf(statistic)).
- p_value_two_sided FLOAT8: Two-sided p-value, i.e., \( \Pr[ |\bar X - \bar Y| \geq |\bar x - \bar y| \mid \mu_X = \mu_Y] \). Computed as (2 * students_t_cdf(-abs(statistic))).

Usage:
    SELECT (t_test_two_unpooled(first, value)).* FROM source
Definition at line 364 of file hypothesis_tests.sql_in.
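The Welch statistic and the Welch–Satterthwaite degrees of freedom above can be checked with stock Python (our own sketch, not MADlib code). A handy property for testing: with equal sample sizes and equal sample variances, the Welch df coincides with the pooled df \( n + m - 2 \).

```python
import math
from statistics import mean, variance

def t_welch(xs, ys):
    """Welch t and the Welch-Satterthwaite degrees of freedom."""
    n, m = len(xs), len(ys)
    vx, vy = variance(xs), variance(ys)   # per-sample variances s_X^2, s_Y^2
    se2 = vx / n + vy / m                 # squared standard error of xbar - ybar
    t = (mean(xs) - mean(ys)) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / n) ** 2 / (n - 1) + (vy / m) ** 2 / (m - 1))
    return t, df
```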
aggregate wsr_test_result wsr_test(float8 value, float8 precision = -1)
Given realizations \( x_1, \dots, x_n \) of i.i.d. random variables \( X_1, \dots, X_n \) with unknown mean \( \mu \), test the null hypotheses \( H_0 : \mu \leq 0 \) and \( H_0 : \mu = 0 \).
Parameters:
- value: Value of random variate \( x_i \). Values of 0 are ignored (i.e., they do not count towards \( n \)).
- precision: The precision \( \epsilon_i \) with which value is known. The precision determines the handling of ties. The current value \( v_i \) is regarded as a tie with the previous value \( v_{i-1} \) if \( v_i - \epsilon_i \leq \max_{j=1, \dots, i-1} v_j + \epsilon_j \). If precision is negative, then it will be treated as value * 2^(-52). (Note that \( 2^{-52} \) is the machine epsilon for type DOUBLE PRECISION.)
Returns:
- statistic FLOAT8: Statistic computed as follows. Let \( w^+ = \sum_{i \mid x_i > 0} r_i \) and \( w^- = \sum_{i \mid x_i < 0} r_i \) be the signed rank sums, where
  \[ r_i = |\{ j \mid |x_j| < |x_i| \}| + \frac{|\{ j \mid |x_j| = |x_i| \}| + 1}{2}. \]
  The Wilcoxon signed-rank statistic is \( w = \min \{ w^+, w^- \} \).
- rank_sum_pos FLOAT8: Rank sum of all positive values, i.e., \( w^+ \)
- rank_sum_neg FLOAT8: Rank sum of all negative values, i.e., \( w^- \)
- num BIGINT: Number \( n \) of non-zero values
- z_statistic FLOAT8: z-statistic
  \[ z = \frac{w^+ - \frac{n(n+1)}{4}}{\sqrt{\frac{n(n+1)(2n+1)}{24} - \sum_{i=1}^n \frac{t_i^2 - 1}{48}}} \]
  where \( t_i \) is the number of values with absolute value equal to \( |x_i| \). The corresponding random variable is approximately standard normally distributed.
- p_value_one_sided FLOAT8: One-sided p-value, i.e., \( \Pr[Z \geq z \mid \mu \leq 0] \). Computed as (1.0 - normal_cdf(z_statistic)).
- p_value_two_sided FLOAT8: Two-sided p-value, i.e., \( \Pr[ |Z| \geq |z| \mid \mu = 0] \). Computed as (2 * normal_cdf(-abs(z_statistic))).

Usage:
- One-sample test:
    SELECT (wsr_test(value - mu_0 ORDER BY abs(value))).* FROM source
- Dependent paired test:
    SELECT (wsr_test(first - second - mu_0 ORDER BY abs(first - second))).* FROM source

If correctly determining ties is important (e.g., you may want to do so when comparing to software products that take first, second, and mu_0 as individual parameters), supply the precision parameter. This can be done as follows:

    SELECT (wsr_test(
        first - second - mu_0,
        3 * 2^(-52) * greatest(first, second, mu_0)
        ORDER BY abs(first - second)
    )).* FROM source

Here \( 2^{-52} \) is the machine epsilon, which we scale to the magnitude of the input data and multiply by 3 because we have a sum with three terms.
Note: This aggregate must be used as an ordered aggregate (ORDER BY abs(value)) and will raise an exception if the absolute values are not ordered.

Definition at line 872 of file hypothesis_tests.sql_in.
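The signed rank sums above can be sketched in a few lines of Python. This illustration is our own (not MADlib code) and uses exact equality of absolute values to detect ties, i.e., it ignores the precision-based tie handling described above.

```python
def wsr_statistic(values):
    """Wilcoxon signed-rank sums over the nonzero values, ranked by absolute
    value with average ranks on ties; returns (w, w_plus, w_minus)."""
    nonzero = [v for v in values if v != 0]       # zeros do not count towards n
    abs_vals = [abs(v) for v in nonzero]
    def rank(a):
        less = sum(1 for b in abs_vals if b < a)
        equal = sum(1 for b in abs_vals if b == a)  # includes a itself
        return less + (equal + 1) / 2               # average rank over ties
    w_plus = sum(rank(abs(v)) for v in nonzero if v > 0)
    w_minus = sum(rank(abs(v)) for v in nonzero if v < 0)
    return min(w_plus, w_minus), w_plus, w_minus
```

A useful invariant: \( w^+ + w^- = \frac{n(n+1)}{2} \) for \( n \) nonzero values.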