Hypothesis Tests

Hypothesis Testing

  • Reframes our qualitative question ("Is this difference real?") into a mathematical question ("What is the probability that the difference I am observing is due to chance?")

  • Goal: reject the null hypothesis: "The two populations I am comparing are identical and the differences I observe are due to chance."

  • We reject the null hypothesis by showing that it is unlikely. We do that by calculating a p-value (using a hypothesis test). We generally want p < 0.05, i.e., if the two populations really were identical, there would be less than a 5% chance of seeing a difference at least this large (see the decision-rule sketch below)
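
A minimal sketch of that decision rule in code (the p-value here is an illustrative placeholder, not from these notes):

# hypothetical p-value returned by one of the tests below
pval = 0.03

significance_threshold = 0.05
if pval < significance_threshold:
    print("Reject the null hypothesis: the difference is unlikely to be due to chance.")
else:
    print("Fail to reject the null hypothesis: the difference could plausibly be due to chance.")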

Is my data Categorical or Numerical?

Numerical

  • A professor expects an exam average to be roughly 75%, and wants to know if the actual scores line up with this expectation. Was the test actually too easy or too hard?

  • A PM for a website wants to compare the time spent on different versions of a homepage. Does one version make users stay on the page significantly longer?

Categorical

  • A pollster wants to know if men and women have significantly different yogurt flavor preferences. Does a result where men more often answer "chocolate" as their favorite reflect a significant difference in the population?

  • Do different age groups have significantly different emotional reactions to different ads?

How many samples am I comparing?

  • 1 Sample, i.e., comparing an actual result against an ideal target such as a desired KPI

  • 2 Samples, i.e., comparing a control and a treatment group, as in an A/B test

  • More than 2 Samples, i.e., comparing three different variants of a landing page

Hypothesis Testing Options

1 Sample T-Test

When to Use

Compares a sample mean to a hypothetical population mean. It answers the question "What is the probability that the sample came from a distribution with the desired mean?" Use this when you are comparing against a known target (like a statistic from a paper or a target metric).

Usage

ttest_1samp requires two inputs, a distribution of values and an expected mean:

from scipy.stats import ttest_1samp

# returns the t-statistic and the two-sided p-value
tstat, pval = ttest_1samp(example_distribution, expected_mean)
print(pval)
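
For instance, a sketch of the exam example above (the scores below are invented for illustration):

from scipy.stats import ttest_1samp

# hypothetical exam scores; the professor expects an average of 75
exam_scores = [68, 74, 81, 62, 77, 70, 85, 66, 72, 79]
tstat, pval = ttest_1samp(exam_scores, 75)
print(pval)  # if pval < 0.05, the class average likely differs from 75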

2 Sample T-Test

When to Use

A 2 Sample T-Test compares two sets of data, which are both approximately normally distributed.

The null hypothesis, in this case, is that the two distributions have the same mean. Use this when you are comparing two different numerical samples.

Usage

ttest_ind requires two distributions of values:

from scipy.stats import ttest_ind

# returns the t-statistic and the two-sided p-value
tstat, pval = ttest_ind(example_distribution1,
                        example_distribution2)
print(pval)
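
For instance, a sketch of the homepage example above (the time-on-page values are invented for illustration):

from scipy.stats import ttest_ind

# hypothetical time-on-page (seconds) for two homepage versions
version_a = [42.1, 50.3, 38.7, 55.0, 47.2, 44.8, 52.6]
version_b = [58.4, 61.0, 49.9, 65.3, 57.1, 60.8, 54.2]
tstat, pval = ttest_ind(version_a, version_b)
print(pval)  # if pval < 0.05, the two versions likely have different mean times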

ANOVA

When to Use

ANOVA compares 2 or more numerical datasets without increasing the probability of a false positive. In order to use ANOVA, the following assumptions must hold:

  1. The samples are independent.

  2. Each sample is from a normally distributed population.

  3. The population standard deviations of the groups are all equal. This property is known as homoscedasticity (one way to sanity-check it is sketched after this list).
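
As referenced above, a minimal sketch of one way to sanity-check the equal-variance assumption; using scipy.stats.levene for this, and the values shown, are assumptions of this sketch rather than something these notes prescribe:

from scipy.stats import levene

# hypothetical groups; a small p-value suggests the variances are not equal
group1 = [10.1, 9.8, 10.4, 10.0, 9.7]
group2 = [10.3, 10.6, 9.9, 10.2, 10.5]
group3 = [9.5, 10.8, 10.1, 9.6, 10.9]
stat, pval = levene(group1, group2, group3)
print(pval)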

Usage

f_oneway (scipy.stats) requires two or more groups:

from scipy.stats import f_oneway

# returns the F-statistic and the p-value
fstat, pval = f_oneway(data_group1, data_group2,
                       data_group3, data_groupN)
print(pval)

ols (statsmodels)

from statsmodels.formula.api import ols

# assumes long-format data: one outcome column and one categorical group column
model_name = ols('outcome_variable ~ C(group_variable)',
                 data=your_data).fit()
print(model_name.summary())
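
As a concrete sketch of the f_oneway approach above (the landing-page values are invented for illustration):

from scipy.stats import f_oneway

# hypothetical time-on-page (seconds) for three landing-page variants
variant_a = [42.0, 47.5, 44.1, 50.2, 46.3]
variant_b = [55.8, 52.3, 58.0, 54.6, 57.1]
variant_c = [49.9, 46.2, 51.4, 48.8, 50.5]
fstat, pval = f_oneway(variant_a, variant_b, variant_c)
print(pval)  # if pval < 0.05, at least one variant's mean likely differs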

Tukey

When to Use

Tukey's Range Test compares more than 2 numerical datasets without increasing the probability of a false positive. Unlike ANOVA, Tukey's test tells us which pairs of datasets are significantly different. Many statisticians use Tukey's test instead of ANOVA for this reason.

Note: pairwise_tukeyhsd is from StatsModels, not SciPy!

Usage

pairwise_tukeyhsd requires three arguments:

  • A vector of all data (concatenated using np.concatenate)

  • A vector of labels for the data

  • A level of significance (usually 0.05)

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

v = np.concatenate([a, b, c])
labels = ['a'] * len(a) + ['b'] * len(b) + ['c'] * len(c)
tukey_results = pairwise_tukeyhsd(v, labels, 0.05)
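
For instance, with invented data for three groups (values are illustrative only), printing the result shows a summary table whose "reject" column flags which pairs differ significantly:

import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical time-on-page (seconds) for three landing-page variants
a = [42.0, 47.5, 44.1, 50.2, 46.3]
b = [55.8, 52.3, 58.0, 54.6, 57.1]
c = [49.9, 46.2, 51.4, 48.8, 50.5]
v = np.concatenate([a, b, c])
labels = ['a'] * len(a) + ['b'] * len(b) + ['c'] * len(c)
print(pairwise_tukeyhsd(v, labels, 0.05))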

Binomial Test

When to Use

Compares an observed proportion to a theoretical ideal.

Examples:

  • Comparing the actual percent of emails that were opened to the quarterly goals

  • Comparing the actual percentage of respondents who gave a certain survey response to the expected survey response

Usage

binomtest requires three arguments:

  • The number of successes (the numerator of your proportion)

  • n - the number of trials (the denominator of your proportion)

  • p - the proportion you are comparing to

from scipy.stats import binomtest

# binomtest returns a result object; the p-value is its .pvalue attribute
result = binomtest(numerator, n=denominator, p=proportion)
print(result.pvalue)
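
For example, a sketch of the email scenario above (the counts are invented for illustration):

from scipy.stats import binomtest

# hypothetical campaign: 342 opens out of 500 emails, against a 75% open-rate goal
result = binomtest(342, n=500, p=0.75)
print(result.pvalue)  # if pvalue < 0.05, the open rate likely differs from 75%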

Chi Squared Test

When to Use

If we have two or more categorical datasets that we want to compare, we should use a Chi Square test. It is useful in situations like:

  • An A/B test where half of users were shown a green submit button and the other half were shown a purple submit button. Was one group more likely to click the submit button?

  • Men and women were both given a survey asking "Which of the following three products is your favorite?" Did the men and women have significantly different preferences?

Usage

chi2_contingency requires a contingency table of all results:

from scipy.stats import chi2_contingency

# the contingency table is passed as a single nested list
chi2, pval, dof, expected = chi2_contingency([[cat1yes, cat1no],
                                              [cat2yes, cat2no]])
print(pval)
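
As a concrete sketch of the button A/B test above (the counts are invented for illustration):

from scipy.stats import chi2_contingency

# hypothetical contingency table: rows are button colors, columns are click / no click
table = [[120, 380],   # green button: clicked, did not click
         [152, 348]]   # purple button: clicked, did not click
chi2, pval, dof, expected = chi2_contingency(table)
print(pval)  # if pval < 0.05, click behavior likely differs between colors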
