Sample Size Calculator

Calculate the required sample size for surveys and statistical studies

Results:

Required Sample Size: 35


Confidence Level: 95%

Margin of Error: ±5 (same units as σ)

Standard Deviation: 15

Z-Score: 1.96

About Sample Size

Sample size determines how many observations are needed for reliable results in a study.

Formula (Infinite Population): n = (z²σ²) / E²

Formula (Finite Population): n = n₀ / (1 + (n₀-1)/N)

Tips:

  • A higher confidence level requires a larger sample
  • A smaller margin of error requires a larger sample
  • For proportions, use p = 0.5 (equivalently, standard deviation 0.5) when the true proportion is unknown

What Is Sample Size and Why Does It Matter?

Sample size is the number of observations or participants in a study. Determining the right sample size is crucial: too small means unreliable results that may miss real effects; too large wastes resources without adding value. Sample size calculations balance precision, confidence, and practical constraints.

Sample Size | Pros | Cons
Too small | Cheaper, faster | Low power, wide confidence intervals, unreliable
Just right | Adequate power, valid conclusions | Requires careful calculation
Too large | High precision | Wasteful, finds trivial differences "significant"

Sample Size for Proportion (Survey)

n = (z² × p × (1-p)) / E²

Where:

  • n = Required sample size
  • z = Z-score for confidence level (1.96 for 95%)
  • p = Expected proportion (use 0.5 if unknown)
  • E = Margin of error (e.g., 0.05 for ±5%)
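As a sanity check, the proportion formula can be implemented directly. This is a minimal Python sketch (the function name is ours), rounding up because you cannot survey a fraction of a respondent:

```python
import math

def sample_size_proportion(z: float, p: float, margin: float) -> int:
    """n = z^2 * p * (1 - p) / E^2, rounded up to a whole respondent."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence (z = 1.96), unknown proportion (p = 0.5), ±5% margin
print(sample_size_proportion(1.96, 0.5, 0.05))  # → 385
```

With a ±4% margin instead, the same function returns 601, matching the worked survey example later in this article.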

Sample Size for Estimating Proportions

When conducting surveys or polls to estimate a percentage (proportion), sample size depends on desired confidence level, margin of error, and expected proportion.

Confidence Level | Z-Score | Interpretation
90% | 1.645 | 10% chance the interval misses the true value
95% | 1.960 | 5% chance the interval misses the true value
99% | 2.576 | 1% chance the interval misses the true value

Margin of Error | Sample Size (p = 0.5, 95% CI) | Precision
±10% | 97 | Low precision
±5% | 385 | Standard
±3% | 1,068 | High precision
±1% | 9,604 | Very high precision

Note: Using p = 0.5 (maximum uncertainty) gives the most conservative estimate—your actual required sample size may be smaller if the true proportion is far from 50%.

Sample Size for Estimating Means

When estimating a population mean (e.g., average income, average height), sample size depends on the desired precision and population variability.

Factor | Effect on Required n | Relationship
Higher confidence level | Increases n | n ∝ z²
Smaller margin of error | Increases n (dramatically) | n ∝ 1/E²
Higher population SD | Increases n | n ∝ σ²

Sample Size for Mean

n = (z × σ / E)²

Where:

  • n = Required sample size
  • z = Z-score for confidence level
  • σ = Population standard deviation (estimate)
  • E = Desired margin of error (same units as σ)
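The mean formula translates the same way. Using the values from the results box at the top of the page (z = 1.96, σ = 15, margin of error 5 in the same units as σ), this sketch reproduces the calculator's answer of 35:

```python
import math

def sample_size_mean(z: float, sigma: float, margin: float) -> int:
    """n = (z * sigma / E)^2, rounded up; margin is in the same units as sigma."""
    return math.ceil((z * sigma / margin) ** 2)

# 95% confidence, sigma = 15, margin of error = 5 (same units as sigma)
print(sample_size_mean(1.96, 15, 5))  # → 35
```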

Finite Population Correction

When sampling from a finite population where your sample is a significant portion (>5%) of the total, you need fewer observations than the infinite population formula suggests.

Population (N) | Sample % of N | Uncorrected n | Corrected n' | Reduction
10,000 | 3.7% | 385 | 371 | −4%
2,000 | 16% | 385 | 323 | −16%
500 | 44% | 385 | 218 | −43%
100 | 80% | 385 | 80 | −79%

Finite Population Correction

n' = n / [1 + (n-1)/N]

Where:

  • n' = Adjusted sample size
  • n = Sample size for infinite population
  • N = Total population size
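The correction is a one-liner in code. This sketch rounds up at every step, as the worked examples in this article do, so individual values may land one higher than tables that round to the nearest integer:

```python
import math

def finite_correction(n0: int, population: int) -> int:
    """Apply the finite population correction n' = n / (1 + (n - 1) / N)."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Correcting the standard n = 385 (95% CI, ±5%) for smaller populations
for n_pop in (10_000, 2_000, 500, 100):
    print(f"N = {n_pop}: n' = {finite_correction(385, n_pop)}")
```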

Sample Size for Hypothesis Testing (Power Analysis)

Statistical power is the probability of detecting an effect when it truly exists. Power analysis determines the sample size needed to detect a specified effect size at a given significance level and power.

Parameter | Typical Value | Effect on Sample Size
Significance level (α) | 0.05 (5%) | Lower α → larger n
Power (1−β) | 0.80 (80%) | Higher power → larger n
Effect size (d) | 0.2 small, 0.5 medium, 0.8 large | Smaller effect → much larger n

Test Type | Effect Size d | Sample Size per Group (α = 0.05, power = 0.80)
Two-sample t-test | 0.2 (small) | 394
Two-sample t-test | 0.5 (medium) | 64
Two-sample t-test | 0.8 (large) | 26

Key insight: Detecting small effects requires dramatically larger samples than detecting large effects.
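The per-group numbers in the table come from exact t-based calculations. The common normal-approximation formula n = 2(z_{α/2} + z_β)² / d² lands one or two participants lower (it gives 63 for d = 0.5, matching the worked example later in this article). A sketch using only the Python standard library:

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sample t-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: about {n_per_group(d)} per group")
# d = 0.2 → 393, d = 0.5 → 63, d = 0.8 → 25 (exact t-based values: 394, 64, 26)
```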

Practical Considerations

Real-world sample size planning must account for non-response, dropout, and other practical issues.

Factor | Adjustment | Example
Non-response | n_actual = n / response_rate | If 60% response expected, divide by 0.6
Dropout (longitudinal) | Account for attrition | Add 10-20% for expected dropout
Subgroup analysis | Size each subgroup adequately | If analyzing 4 subgroups, each needs sufficient n
Cluster sampling | Apply design effect | Multiply n by design effect (often 1.5-2)

Rule of thumb: When in doubt, add 10-20% to your calculated sample size to account for unforeseen issues.
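These adjustments compound, so it helps to apply them in one place. A minimal sketch (the function name and defaults are ours):

```python
import math

def adjusted_n(base_n: int, response_rate: float = 1.0,
               dropout_rate: float = 0.0, design_effect: float = 1.0) -> int:
    """Inflate a calculated sample size for clustering, attrition, and non-response."""
    n = base_n * design_effect   # cluster sampling design effect
    n /= (1 - dropout_rate)      # expected attrition in longitudinal studies
    n /= response_rate           # expected non-response
    return math.ceil(n)

# n = 385 needed, 60% response rate, 10% dropout expected
print(adjusted_n(385, response_rate=0.60, dropout_rate=0.10))  # → 713
```

Note the order does not change the result (the factors multiply), but keeping them separate documents which assumption each one encodes.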

Sample Size for Common Research Scenarios

Different research designs have different sample size requirements. Here are common scenarios with typical recommendations.

Research Type | Minimum Recommended n | Notes
Pilot study | 30-50 | Feasibility, not statistical inference
Survey (general population) | 384-400 | 95% CI, ±5% margin
A/B test (large effect) | 50-100 per group | 10%+ conversion difference
A/B test (small effect) | 1,000+ per group | 1-2% conversion difference
Regression (rule of thumb) | 10-20 per predictor | Minimum 50 total
Factor analysis | 300+ or 10 per variable | Whichever is larger

Worked Examples

Survey Sample Size Calculation

Problem:

A company wants to survey customers about satisfaction. They want 95% confidence with ±4% margin of error. What sample size is needed?

Solution Steps:

  1. Identify parameters: z = 1.96 (95% CI), E = 0.04, p = 0.5 (unknown proportion, use maximum)
  2. Apply formula: n = (z² × p × (1-p)) / E²
  3. Calculate: n = (1.96² × 0.5 × 0.5) / 0.04²
  4. n = (3.8416 × 0.25) / 0.0016 = 0.9604 / 0.0016 = 600.25
  5. Round up: n = 601

Result:

Need 601 survey responses for 95% confidence with ±4% margin of error. To account for 70% expected response rate, send surveys to 601/0.70 = 859 customers.

Sample Size for Comparing Two Means

Problem:

A researcher wants to detect a medium effect (d = 0.5) between two treatment groups with 80% power at α = 0.05. How many per group?

Solution Steps:

  1. Use power analysis formula: n per group = 2[(zα + zβ)²] / d²
  2. For α = 0.05 (two-tailed), zα = 1.96
  3. For power = 0.80, β = 0.20, zβ = 0.84
  4. n = 2 × [(1.96 + 0.84)²] / 0.5² = 2 × 7.84 / 0.25
  5. n = 15.68 / 0.25 = 62.7, round up to 63 per group

Result:

Need 63 participants per group (126 total) to detect a medium effect (d = 0.5) with 80% power. For 90% power, would need approximately 85 per group.

Finite Population Adjustment

Problem:

A school has 450 students. How many should be surveyed for 95% CI with ±5% margin?

Solution Steps:

  1. Calculate infinite population n: n = (1.96² × 0.5 × 0.5) / 0.05² = 384.16 → 385
  2. Apply finite population correction: n' = n / [1 + (n-1)/N]
  3. n' = 385 / [1 + (384)/450] = 385 / [1 + 0.853] = 385 / 1.853
  4. n' = 207.8 → round up to 208

Result:

Need only 208 students (not 385) because 450 is a small population. Survey about 46% of students rather than the 86% the infinite-population formula would suggest.
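All three worked examples can be checked with a few lines of Python, rounding up at each step as above (example 2 is covered by the power-analysis sketch earlier):

```python
import math

def n_proportion(z: float, p: float, e: float) -> int:
    """Infinite-population sample size for a proportion, rounded up."""
    return math.ceil(z * z * p * (1 - p) / e ** 2)

# Example 1: 95% CI, ±4% margin, then 70% expected response rate
n1 = n_proportion(1.96, 0.5, 0.04)
print(n1, math.ceil(n1 / 0.70))              # 601 responses, 859 invitations

# Example 3: finite population correction for N = 450 students
n0 = n_proportion(1.96, 0.5, 0.05)           # 385 for an infinite population
print(math.ceil(n0 / (1 + (n0 - 1) / 450)))  # 208 students
```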

Tips & Best Practices

  • Use p = 0.5 for proportion sample sizes when the true proportion is unknown—it's the most conservative choice.
  • Margin of error has a squared effect: halving the margin requires 4× the sample size.
  • Apply finite population correction when sampling more than 5% of a population.
  • Always plan for non-response by inflating sample size: n_needed / expected_response_rate.
  • For power analysis, 80% power with α = 0.05 is standard; use 90% power for important decisions.
  • Detecting small effects (d = 0.2) requires ~16× more participants than large effects (d = 0.8).
  • When in doubt, round up and add 10-20% buffer for unexpected dropouts or data quality issues.

Frequently Asked Questions

Why use p = 0.5 when the true proportion is unknown?

The value p = 0.5 maximizes the term p(1-p) = 0.25, giving the largest (most conservative) sample size. Any other proportion yields a smaller product: p = 0.3 gives 0.21, p = 0.1 gives 0.09. Using 0.5 ensures your sample is large enough regardless of the actual proportion. If you have a reasonable estimate, use it for a smaller required sample.

How does margin of error affect sample size?

Sample size is inversely proportional to the SQUARE of the margin of error. Halving the margin of error (e.g., from ±4% to ±2%) requires 4× the sample size. This is why very precise estimates are expensive: going from ±5% to ±1% requires 25× more data. Choose a margin of error based on practical needs, not just "smaller is better."

What minimum sample size do I need?

It depends on the analysis. General guidelines: For descriptive statistics, n ≥ 30 is often cited (Central Limit Theorem). For group comparisons, 20-30 per group minimum. For regression, 10-20 observations per predictor. For factor analysis, 300+ or 10 per variable. These are minimums—larger is better for reliability.

Should I calculate sample size before collecting data?

Yes! A priori power analysis (before data collection) is a cornerstone of good research design. It ensures you have enough data to detect meaningful effects and avoids wasting resources on underpowered studies. Post-hoc power calculations (after data collection) are generally discouraged as they're often misleading.

What if I don't know the population standard deviation?

Options include: (1) Use pilot study data, (2) Use results from similar published studies, (3) Use the range rule: σ ≈ range/4 or range/6, (4) For bounded scales (1-7 Likert), assume σ ≈ 1 to 1.5, (5) Make a conservative (larger) estimate—it's better to collect slightly more data than needed.

What does statistical power mean?

Power is the probability of detecting a true effect (avoiding false negatives). Power = 1 - β, where β is the false negative rate. 80% power is conventional, meaning 80% chance of detecting the effect if it exists (20% chance of missing it). Some fields use 90% for more important studies. Higher power requires larger samples.

Last updated: 2026-01-22