About Statistical Tests
Statistical tests help determine whether patterns in data are meaningful or merely random. They are used in areas such as quality management, customer insights and process improvement. The choice of test depends on the business question, the type of data and its distribution. Each method rests on certain assumptions, so results should always be interpreted in context. This site provides a clear overview and practical guidance.
When should a statistical test be used?
Statistical tests are useful when you want to make decisions based on data rather than intuition alone. They help assess whether an observed effect is likely to be real or just due to random variation. This is especially important when sample sizes are limited and natural fluctuations can easily mislead interpretation.
Examples from business practice
• Customer satisfaction
A company compares survey ratings before and after a service change to see whether satisfaction has genuinely improved or if the difference could be random.
• Sales performance
Two marketing campaigns are tested in different regions to determine whether one leads to significantly higher sales than the other.
• Quality control in production
Measured part dimensions from two machines are compared to find out whether one machine produces systematically different results.
• Website optimisation (A/B testing)
Two versions of a landing page are shown to users to check whether one version leads to a higher conversion rate.
• Employee training impact
Performance scores before and after training are analysed to see whether the training produced a measurable improvement.
• Cost reduction initiatives
A company tests whether a process change actually reduces process time or whether observed differences are just random fluctuations.
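As an illustrative sketch of the A/B-testing example above, the two landing-page versions could be compared with a two-proportion z-test (normal approximation). The conversion counts below are made up for demonstration only:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates
    (normal approximation with a pooled rate under the null hypothesis)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate if there is no true difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical data: version A converts 120 of 2000 visitors, version B 156 of 2000
z, p = two_proportion_z_test(120, 2000, 156, 2000)
```

With these invented numbers the p value falls below 0.05, so the difference would usually be treated as unlikely to be pure chance; with smaller samples the same rates could easily be inconclusive.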
Choosing the right statistical test follows a simple workflow. First, define your question: Do you want to compare groups, measure a relationship or test a proportion? Next, consider the type of data and whether it is normally distributed. Finally, check whether your samples are independent or paired.
The Test Selector on this site guides you through these steps and recommends suitable tests. Each method is explained in simple terms so you can understand when and why it is used.
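The workflow above — question, data type, distribution, independent or paired — can be sketched as a small decision helper. This is a simplified illustration of the idea behind a test selector, not the logic of the Test Selector on this site:

```python
def suggest_test(goal, data_type, normal=True, paired=False):
    """Rough, illustrative test suggestion following the three-step workflow:
    define the goal, consider the data type and distribution, check pairing."""
    if goal == "compare_groups":
        if data_type == "continuous":
            if normal:
                return "paired t-test" if paired else "independent t-test"
            return "Wilcoxon signed-rank test" if paired else "Mann-Whitney U test"
        if data_type == "categorical":
            return "McNemar test" if paired else "chi-squared test"
    if goal == "relationship":
        return "Pearson correlation" if normal else "Spearman correlation"
    if goal == "proportion":
        return "one-sample proportion z-test"
    return "no suggestion"

# Example: comparing two independent groups of skewed (non-normal) measurements
suggest_test("compare_groups", "continuous", normal=False)  # "Mann-Whitney U test"
```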
Whatever test you choose, keep a few caveats in mind:
• Results are always probabilistic, never absolute
• Good data quality is essential for reliable conclusions
• Violating test assumptions may lead to misleading results
• Effect size and confidence intervals matter in addition to p values
A p value expresses how surprising your data would be if there were actually no real effect. In other words, it tells you how likely it is to observe a difference at least as large as the one in your data purely by chance.
A small p value means that your result would be unlikely if there were truly no effect. This is usually taken as evidence that the effect may be real rather than random variation. A common threshold is p < 0.05, meaning there would be less than a 5 percent probability of seeing such a result by chance alone if no real effect existed.
However, a p value does not tell you how big or how important the effect is, and it does not guarantee that the result is true. It is simply one piece of evidence that should always be interpreted together with context, data quality and effect size.
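The idea of "a difference at least as large as the one in your data purely by chance" can be made concrete with a permutation test: repeatedly shuffle the group labels and count how often the shuffled data shows a difference as large as the observed one. The satisfaction scores below are invented for illustration:

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=42):
    """Estimate a two-sided p value by shuffling group labels and counting
    how often the shuffled mean difference is at least as large as observed."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    combined = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(combined)
        diff = abs(sum(combined[:n_a]) / n_a
                   - sum(combined[n_a:]) / (len(combined) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical satisfaction scores before and after a service change
before = [3.1, 3.4, 2.9, 3.2, 3.0, 3.3, 2.8, 3.1]
after = [3.6, 3.9, 3.4, 3.8, 3.5, 4.0, 3.3, 3.7]
p = permutation_p_value(before, after)
```

Here almost no random shuffle reproduces a gap as large as the observed one, so the estimated p value is very small — but, as the text notes, it still says nothing about how important a 0.55-point improvement is for the business.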