Tests Overview

This section provides a compact overview of commonly used statistical tests. The tests are grouped by purpose to make navigation easier. Each description explains what the test is used for and when it is typically applied.

One variable

The chi-square goodness-of-fit test checks whether the observed frequencies of categories differ from expected proportions. It is often used when comparing actual frequencies with a benchmark or target distribution. If the difference is large enough, the result suggests that the observed pattern is unlikely to be due to random variation.
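
A quick sketch in Python with scipy.stats, using made-up counts and an assumed 25/25/50 benchmark distribution:

    from scipy import stats

    # Hypothetical observed counts across three categories
    observed = [18, 30, 52]
    # Expected counts under a 25/25/50 benchmark (same total as the observed counts)
    expected = [25, 25, 50]

    stat, p = stats.chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")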

The one-sample t-test compares the mean of one quantitative variable with a reference value. It assumes that the data follow an approximately normal distribution. It is useful when you want to test whether a measured average differs from a standard or target level.
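
A short illustration with scipy.stats, assuming a hypothetical sample and a target value of 50:

    from scipy import stats

    sample = [48.2, 51.5, 49.8, 52.1, 50.4, 47.9, 53.0, 49.5]  # hypothetical measurements
    stat, p = stats.ttest_1samp(sample, popmean=50)            # compare the sample mean with 50
    print(f"t = {stat:.2f}, p = {p:.3f}")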

The one-sample Wilcoxon signed-rank test compares the median of a sample with a reference value and does not require a normal distribution. It works with ranked data and is robust to outliers. It is useful when the data are skewed or contain extreme values.
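
One way to run this with scipy.stats, assuming invented data and a reference value of 14; the test is applied to the differences from the reference:

    from scipy import stats

    sample = [12, 15, 14, 13, 16, 11, 14, 41]   # skewed sample with one extreme value
    reference = 14
    # Signed-rank test on the differences from the reference value
    stat, p = stats.wilcoxon([x - reference for x in sample])
    print(f"W = {stat:.1f}, p = {p:.3f}")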

Two quantitative variables

The Pearson correlation coefficient measures the strength and direction of a linear relationship between two quantitative variables. It assumes that both variables follow an approximately normal distribution. A positive value means both increase together, while a negative value means one decreases as the other increases.
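
A brief sketch with scipy.stats, using invented paired measurements:

    from scipy import stats

    height = [160, 165, 170, 175, 180, 185]     # hypothetical heights (cm)
    weight = [55, 60, 66, 70, 77, 82]           # hypothetical weights (kg)
    r, p = stats.pearsonr(height, weight)
    print(f"r = {r:.2f}, p = {p:.3f}")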

Two qualitative variables

The chi-square test of independence checks whether two categorical variables are associated, that is, whether the distribution of one variable differs across the categories of the other. It is applied to a contingency table of observed frequencies. If the association is strong enough, the result suggests that the observed pattern is unlikely to be due to random variation.
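
A minimal sketch with scipy.stats, assuming a made-up 2x3 contingency table:

    from scipy import stats

    # Hypothetical counts: rows are two groups, columns are three answer categories
    table = [[20, 30, 25],
             [35, 25, 15]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")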

One quantitative and one qualitative variable

The independent-samples t-test compares the means of two independent groups when the data are normally distributed and the variances are similar. It evaluates whether any observed difference in means is likely due to chance. It is often used when comparing two populations or treatments.
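
A short sketch with scipy.stats, assuming two small hypothetical groups:

    from scipy import stats

    group_a = [5.1, 4.8, 5.5, 5.0, 4.9, 5.3]    # hypothetical measurements, group A
    group_b = [5.6, 5.9, 5.4, 6.0, 5.7, 5.8]    # hypothetical measurements, group B
    stat, p = stats.ttest_ind(group_a, group_b, equal_var=True)  # assumes similar variances
    print(f"t = {stat:.2f}, p = {p:.3f}")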

The Mann-Whitney U test compares the distributions or medians of two independent groups without assuming normality. It works with ranked values and is robust to outliers. It is useful for skewed or non-metric data.
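
A quick sketch with scipy.stats, using invented data where one group contains an outlier:

    from scipy import stats

    group_a = [3, 5, 4, 6, 30, 5]               # skewed group with an outlier
    group_b = [8, 9, 7, 10, 9, 8]
    stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    print(f"U = {stat:.1f}, p = {p:.3f}")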

The paired t-test compares two measurements from the same subjects, for example before and after an intervention. It assumes that the differences between the paired measurements follow a normal distribution. It is commonly used to evaluate changes over time.
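
A brief sketch with scipy.stats, assuming hypothetical before and after scores for the same subjects:

    from scipy import stats

    before = [82, 75, 90, 68, 77, 84]           # scores before the intervention
    after  = [85, 79, 93, 70, 80, 88]           # scores of the same subjects afterwards
    stat, p = stats.ttest_rel(before, after)
    print(f"t = {stat:.2f}, p = {p:.3f}")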

The Wilcoxon signed-rank test is the non-parametric alternative to the paired t-test. It compares paired data using ranks and does not require normality. It is useful when the differences are skewed or contain outliers.
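
The same paired setup with scipy.stats, using invented data in which one subject shows an extreme change:

    from scipy import stats

    before = [82, 75, 90, 68, 77, 84]
    after  = [85, 79, 120, 70, 80, 88]          # one extreme change among the pairs
    stat, p = stats.wilcoxon(before, after)
    print(f"W = {stat:.1f}, p = {p:.3f}")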

More than two groups

One-way ANOVA compares the means of three or more independent groups. It assumes normality and similar variances across groups. It determines whether at least one group mean differs significantly from the others.
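
A minimal sketch with scipy.stats, assuming three small hypothetical groups:

    from scipy import stats

    group_a = [5.1, 4.8, 5.5, 5.0, 4.9]
    group_b = [5.6, 5.9, 5.4, 6.0, 5.7]
    group_c = [6.3, 6.1, 6.6, 6.4, 6.2]
    stat, p = stats.f_oneway(group_a, group_b, group_c)
    print(f"F = {stat:.2f}, p = {p:.3f}")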

The Kruskal-Wallis test is the non-parametric alternative to one-way ANOVA. It compares the distributions or medians of three or more independent groups using ranks. It is useful when data are skewed or ordinal.
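
A quick sketch with scipy.stats, using invented skewed data:

    from scipy import stats

    group_a = [3, 5, 4, 6, 30]                  # skewed group with an outlier
    group_b = [8, 9, 7, 10, 9]
    group_c = [12, 14, 13, 15, 40]
    stat, p = stats.kruskal(group_a, group_b, group_c)
    print(f"H = {stat:.2f}, p = {p:.3f}")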

Repeated-measures ANOVA compares means across three or more repeated measurements from the same subjects. It evaluates whether there is a systematic change over time or across conditions. It assumes that the differences between conditions are normally distributed and satisfy sphericity, meaning their variances are roughly equal.
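
One way to run this in Python is with statsmodels' AnovaRM, assuming hypothetical long-format data with one score per subject per time point:

    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    # Hypothetical data: four subjects measured at three time points
    data = pd.DataFrame({
        "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "time":    ["t1", "t2", "t3"] * 4,
        "score":   [5.1, 6.0, 6.8, 4.8, 5.9, 6.5, 5.5, 6.2, 7.1, 5.0, 5.8, 6.6],
    })
    result = AnovaRM(data, depvar="score", subject="subject", within=["time"]).fit()
    print(result)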

The Friedman test is the non-parametric alternative to repeated-measures ANOVA. It compares ranked repeated measurements within the same subjects. It is useful when normality cannot be assumed.
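
A brief sketch with scipy.stats, assuming invented scores of the same five subjects under three conditions:

    from scipy import stats

    cond_1 = [5, 6, 4, 7, 5]                    # condition 1, one score per subject
    cond_2 = [7, 8, 6, 9, 7]                    # condition 2, same subjects
    cond_3 = [6, 7, 5, 8, 6]                    # condition 3, same subjects
    stat, p = stats.friedmanchisquare(cond_1, cond_2, cond_3)
    print(f"chi2 = {stat:.2f}, p = {p:.3f}")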

Modelling multiple variables

Multiple linear regression models the relationship between a quantitative outcome and several predictors. It estimates how each predictor contributes to the outcome while holding the others constant. It is widely used for forecasting and for explaining variation.
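
A minimal sketch with statsmodels, fitting an ordinary least squares model to simulated data with two hypothetical predictors:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))                    # two hypothetical predictors
    y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=50)

    model = sm.OLS(y, sm.add_constant(X)).fit()     # intercept added explicitly
    print(model.params)                             # estimated intercept and coefficients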

Multinomial logistic regression extends binary logistic regression to outcomes with more than two categories. It estimates how the predictors influence the probability of each category. It is useful when the outcome is grouped into several unordered classes; when the categories are ordered, ordinal logistic regression is usually preferred.
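
A short sketch with statsmodels' MNLogit, assuming simulated predictors and an outcome with three categories:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = sm.add_constant(rng.normal(size=(150, 2)))  # two hypothetical predictors plus intercept
    y = rng.integers(0, 3, size=150)                # outcome with three unordered categories

    model = sm.MNLogit(y, X).fit(disp=False)        # multinomial logistic regression
    print(model.params)                             # coefficients for each non-reference category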

© 2025 Peter Sikabonyi. All rights reserved.