Sample size determination

Sample size determination is the act of choosing the number of observations or replicates to include in a statistical sample. The sample size is an important feature of any empirical study in which the goal is to make inferences about a population from a sample. In practice, the sample size used in a study is determined based on the expense of data collection and the need to have sufficient statistical power. In complicated studies there may be several different sample sizes: for example, in a stratified survey there would be different sample sizes for each stratum. In a census, data are collected on the entire population, hence the sample size is equal to the population size. In experimental design, where a study may be divided into different treatment groups, there may be different sample sizes for each group.

Sample sizes may be chosen in several different ways:

  • experience - a choice of small sample size, though sometimes necessary, can result in wide confidence intervals and a risk of errors in statistical hypothesis testing.
  • using a target variance for an estimate to be derived from the sample eventually obtained, i.e. if a high precision is required (narrow confidence interval) this translates to a low target variance of the estimator.
  • using a target for the power of a statistical test to be applied once the sample is collected.
  • using a confidence level, i.e. the larger the required confidence level, the larger the sample size (given a constant precision requirement).


Introduction

Larger sample sizes generally lead to increased precision when estimating unknown parameters. For example, if we wish to know the proportion of a certain species of fish that is infected with a pathogen, we would generally have a more precise estimate of this proportion if we sampled and examined 200 rather than 100 fish. Several fundamental facts of mathematical statistics describe this phenomenon, including the law of large numbers and the central limit theorem.

In some situations, the increase in precision for larger sample sizes is minimal, or even non-existent. This can result from the presence of systematic errors or strong dependence in the data, or if the data follows a heavy-tailed distribution.

Sample sizes are judged based on the quality of the resulting estimates. For example, if a proportion is being estimated, one may wish to have the 95% confidence interval be less than 0.06 units wide. Alternatively, sample size may be assessed based on the power of a hypothesis test. For example, if we are comparing the support for a certain political candidate among women with the support for that candidate among men, we may wish to have 80% power to detect a difference in the support levels of 0.04 units.
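
As a rough illustration of these two criteria, the sketch below (Python, assuming SciPy is available; the worst-case proportion of 0.5 and the normal approximation are assumptions made here, not statements from the text) computes the sample size needed for each requirement.

    from scipy.stats import norm

    # (1) Sample size so that a 95% confidence interval for a single proportion
    #     is at most 0.06 units wide (worst case p = 0.5).
    z = norm.ppf(0.975)                        # ~1.96
    width = 0.06
    n_ci = (2 * z * 0.5 / width) ** 2          # from width = 2*z*sqrt(0.25/n)
    print(f"n for CI width <= {width}: {n_ci:.0f}")      # ~1067

    # (2) Per-group sample size for 80% power to detect a 0.04 difference
    #     between two proportions (normal approximation, both proportions near 0.5).
    z_beta = norm.ppf(0.80)
    delta = 0.04
    n_power = 2 * 0.25 * (z + z_beta) ** 2 / delta ** 2
    print(f"n per group for 80% power: {n_power:.0f}")   # ~2453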



Estimation

Estimation of a proportion

A relatively simple situation is estimation of a proportion. For example, we may wish to estimate the proportion of residents in a community who are at least 65 years old.

The estimator of a proportion is $\hat{p} = X/n$, where X is the number of 'positive' observations (e.g. the number of people out of the n sampled people who are at least 65 years old). When the observations are independent, this estimator has a (scaled) binomial distribution (and is also the sample mean of data from a Bernoulli distribution). The maximum variance of this distribution is 0.25/n, which occurs when the true parameter is p = 0.5. In practice, since p is unknown, the maximum variance is often used for sample size assessments.

For sufficiently large n, the distribution of $\hat{p}$ will be closely approximated by a normal distribution. Using this approximation, it can be shown that around 95% of this distribution's probability lies within 2 standard deviations of the mean. Using the Wald method for the binomial distribution, an interval of the form

\left( \hat{p} - 2\sqrt{\frac{0.25}{n}},\; \hat{p} + 2\sqrt{\frac{0.25}{n}} \right)

will form a 95% confidence interval for the true proportion. If this interval needs to be no more than W units wide, the equation

4\sqrt{\frac{0.25}{n}} = W

can be solved for n, yielding n = 4/W² = 1/B², where B = W/2 is the error bound on the estimate, i.e., the estimate is usually given as within ±B. So, for B = 10% one requires n = 100, for B = 5% one needs n = 400, for B = 3% the requirement approximates to n = 1000, while for B = 1% a sample size of n = 10000 is required. These numbers are quoted often in news reports of opinion polls and other sample surveys.
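
A minimal sketch of this rule of thumb in Python (no external packages; rounding up to a whole observation is a choice made here, and note that 1/0.03² is about 1111, which the text rounds to roughly 1000):

    import math

    def n_for_margin(B):
        """Sample size from the simplified rule n = 1/B^2
        (worst case p = 0.5, 95% interval with z taken as 2)."""
        return math.ceil(1.0 / B ** 2)

    for B in (0.10, 0.05, 0.03, 0.01):
        print(f"margin of +/-{B:.0%}: n = {n_for_margin(B)}")
    # margin of +/-10%: n = 100
    # margin of +/-5%: n = 400
    # margin of +/-3%: n = 1112
    # margin of +/-1%: n = 10000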

Estimation of a mean

A proportion is a special case of a mean. When estimating the population mean using an independent and identically distributed (iid) sample of size n, where each data value has variance σ², the standard error of the sample mean is:

\frac{\sigma}{\sqrt{n}}.

This expression describes quantitatively how the estimate becomes more precise as the sample size increases. Using the central limit theorem to justify approximating the sample mean with a normal distribution yields an approximate 95% confidence interval of the form

\left( \bar{x} - \frac{2\sigma}{\sqrt{n}},\; \bar{x} + \frac{2\sigma}{\sqrt{n}} \right).

If we wish to have a confidence interval that is W units in width, we would solve

\frac{4\sigma}{\sqrt{n}} = W

for n, yielding the sample size n = 16σ²/W².

For example, if we are interested in estimating the amount by which a drug lowers a subject's blood pressure with a confidence interval that is six units wide, and we know that the standard deviation of blood pressure in the population is 15, then the required sample size is 100.
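
The blood-pressure example follows directly from the formula n = 16σ²/W²; a minimal sketch in Python:

    import math

    def n_for_mean_ci(sigma, width):
        """Approximate sample size so the ~95% CI for a mean is `width` units
        wide, using n = 16*sigma^2 / W^2 (z taken as 2)."""
        return math.ceil(16 * sigma ** 2 / width ** 2)

    print(n_for_mean_ci(sigma=15, width=6))   # 100, as in the example above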




Required sample sizes for hypothesis tests

A common problem faced by statisticians is calculating the sample size required to yield a certain power for a test, given a predetermined Type I error rate α. This can be estimated by pre-determined tables for certain values, by Mead's resource equation, or, more generally, by the cumulative distribution function, as follows:

Tables

Such a table can be used in a two-sample t-test to estimate the sample sizes of an experimental group and a control group of equal size; that is, the total number of individuals in the trial is twice the number given, and the desired significance level is 0.05. The parameters used are listed below, and a sketch of the corresponding calculation follows the list:

  • The desired statistical power of the trial.
  • Cohen's d (= effect size), the expected difference between the means of the target values in the experimental and control groups, divided by the expected standard deviation.
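
As a sketch of the kind of calculation such tables encode (using the normal approximation with SciPy, which slightly understates the exact t-based values; the effect sizes shown are illustrative):

    from math import ceil
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        """Per-group sample size for a two-sided two-sample comparison of means,
        normal approximation: n ~ 2 * (z_{alpha/2} + z_{beta})^2 / d^2."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return ceil(2 * (z_a + z_b) ** 2 / d ** 2)

    print(n_per_group(d=0.5))   # 63; exact t-based tables give about 64 per group
    print(n_per_group(d=0.2))   # 393 per group for a small effect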

Mead's resource equation

Mead's resource equation is often used for estimating sample sizes of laboratory animals, as well as in many other laboratory experiments. It may not be as accurate as other methods of estimating sample size, but it gives a hint of the appropriate sample size when parameters such as expected standard deviations or expected differences in values between groups are unknown or very hard to estimate.

All the parameters in the equation are in fact degrees of freedom of the corresponding counts, and hence each count has 1 subtracted from it before insertion into the equation.

The equation is:

E = N - B - T,

where:

  • N is the total number of individuals or units in the study (minus 1)
  • B is the blocking component, representing environmental effects allowed for in the design (minus 1)
  • T is the treatment component, corresponding to the number of treatment groups (including control group) being used, or the number of questions being asked (minus 1)
  • E is the degrees of freedom of the error component, and should be somewhere between 10 and 20.

For example, if a study using laboratory animals is planned with four treatment groups (T=3), with eight animals per group, making 32 animals total (N=31), without any further stratification (B=0), then E would equal 28, which is above the cutoff of 20, indicating that sample size may be a bit too large, and six animals per group might be more appropriate.
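
A small helper (written here for illustration, not taken from the article) that reproduces this bookkeeping:

    def mead_E(n_total, n_blocks, n_treatments):
        """Error degrees of freedom from Mead's resource equation, E = N - B - T,
        where each component enters as (count - 1)."""
        return (n_total - 1) - (n_blocks - 1) - (n_treatments - 1)

    # The example above: 4 treatment groups, 8 animals each, no blocking.
    print(mead_E(n_total=32, n_blocks=1, n_treatments=4))   # 28 (above the 10-20 range)
    print(mead_E(n_total=24, n_blocks=1, n_treatments=4))   # 20 (6 animals per group)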

Cumulative distribution function

Let X_i, i = 1, 2, ..., n be independent observations taken from a normal distribution with unknown mean μ and known variance σ². Let us consider two hypotheses, a null hypothesis:

H_0 : \mu = 0

and an alternative hypothesis:

H_a : \mu = \mu^{*}

for some 'smallest significant difference' μ* > 0. This is the smallest value for which we care about observing a difference. Now, if we wish to (1) reject H0 with a probability of at least 1−β when Ha is true (i.e. a power of 1−β), and (2) reject H0 with probability α when H0 is true, then we need the following:

If $z_\alpha$ is the upper α percentage point of the standard normal distribution, then

\Pr\left( \bar{x} > z_\alpha \sigma / \sqrt{n} \mid H_0 \text{ true} \right) = \alpha

and so

'Reject H0 if our sample average ($\bar{x}$) is more than $z_\alpha \sigma / \sqrt{n}$'

is a decision rule which satisfies (2). (Note, this is a 1-tailed test)

Now we wish for this to happen with a probability of at least 1−β when Ha is true. In this case, our sample average will come from a normal distribution with mean μ*. Therefore, we require

\Pr\left( \bar{x} > z_\alpha \sigma / \sqrt{n} \mid H_a \text{ true} \right) \geq 1 - \beta

Through careful manipulation, this can be shown (see Statistical power#Example) to happen when

n \geq \left( \frac{z_\alpha + \Phi^{-1}(1 - \beta)}{\mu^{*}/\sigma} \right)^{2}

where Φ is the standard normal cumulative distribution function.
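
This bound translates directly into code; a minimal sketch (SciPy assumed available; the numbers in the example call are arbitrary illustrations, not from the text):

    from math import ceil
    from scipy.stats import norm

    def n_one_sided(mu_star, sigma, alpha=0.05, power=0.80):
        """n >= ((z_alpha + Phi^{-1}(1 - beta)) / (mu*/sigma))^2
        for the one-tailed z-test described above."""
        z_alpha = norm.ppf(1 - alpha)     # upper alpha point of the standard normal
        z_beta = norm.ppf(power)          # Phi^{-1}(1 - beta)
        return ceil(((z_alpha + z_beta) / (mu_star / sigma)) ** 2)

    print(n_one_sided(mu_star=1.0, sigma=2.0))   # 25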




Stratified sample size

With more complicated sampling techniques, such as stratified sampling, the sample can often be split up into sub-samples. Typically, if there are H such sub-samples (from H different strata) then each of them will have a sample size nh, h = 1, 2, ..., H. These nh must conform to the rule that n1 + n2 + ... + nH = n (i.e. that the total sample size is given by the sum of the sub-sample sizes). Selecting these nh optimally can be done in various ways, using (for example) Neyman's optimal allocation.

There are many reasons to use stratified sampling: to decrease variances of sample estimates, to use partly non-random methods, or to study strata individually. A useful, partly non-random method would be to sample individuals where they are easily accessible, and, where they are not, to sample clusters to save travel costs.

In general, for H strata, a weighted sample mean is

\bar{x}_w = \sum_{h=1}^{H} W_h \bar{x}_h,

with

\operatorname{Var}(\bar{x}_w) = \sum_{h=1}^{H} W_h^{2} \operatorname{Var}(\bar{x}_h).

The weights, $W_h$, frequently, but not always, represent the proportions of the population elements in the strata, and $W_h = N_h/N$. For a fixed sample size, that is $n = \sum n_h$,

\operatorname{Var}(\bar{x}_w) = \sum_{h=1}^{H} W_h^{2} \operatorname{Var}_h \left( \frac{1}{n_h} - \frac{1}{N_h} \right),

which can be made a minimum if the sampling rate within each stratum is made proportional to the standard deviation within each stratum: $n_h/N_h = k S_h$, where $S_h = \sqrt{\operatorname{Var}_h}$ and $k$ is a constant such that $\sum n_h = n$.

An "optimum allocation" is reached when the sampling rates within the strata are made directly proportional to the standard deviations within the strata and inversely proportional to the square root of the sampling cost per element within the strata, C h {\displaystyle C_{h}} :

\frac{n_h}{N_h} = \frac{K S_h}{\sqrt{C_h}},

where $K$ is a constant such that $\sum n_h = n$, or, more generally, when

n_h = \frac{K' W_h S_h}{\sqrt{C_h}}.
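
A small sketch of this allocation rule (NumPy assumed; the strata weights, standard deviations, and costs below are made-up illustrations). With equal costs it reduces to Neyman allocation:

    import numpy as np

    def optimum_allocation(n_total, W, S, C=None):
        """Allocate a fixed total sample size across strata with
        n_h proportional to W_h * S_h / sqrt(C_h)
        (Neyman allocation when all unit costs C_h are equal)."""
        W, S = np.asarray(W, float), np.asarray(S, float)
        C = np.ones_like(W) if C is None else np.asarray(C, float)
        share = W * S / np.sqrt(C)
        n_h = np.rint(n_total * share / share.sum()).astype(int)
        return n_h   # rounding can make the total differ from n_total by a unit

    print(optimum_allocation(1000, W=[0.5, 0.3, 0.2], S=[4, 10, 6]))   # [323 484 194]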


Qualitative research

Sample size determination in qualitative studies takes a different approach. It is generally a subjective judgment, taken as the research proceeds. One approach is to continue to include further participants or material until saturation is reached. The number needed to reach saturation has been investigated empirically.

There is a paucity of reliable guidance on estimating sample sizes before starting the research, with a range of suggestions given. A tool akin to a quantitative power calculation, based on the negative binomial distribution, has been suggested for thematic analysis.




Software for power and sample size calculations

Numerous free and/or open-source programs are available for performing power and sample size calculations. These include:

  • G*Power (http://www.gpower.hhu.de/)
  • powerandsamplesize.com - free and open-source online calculators
  • PS
  • PowerUp! provides convenient Excel-based functions to determine minimum detectable effect size and minimum required sample size for various experimental and quasi-experimental designs.
  • PowerUpR is the R package version of PowerUp! and additionally includes functions to determine sample size for various multilevel randomized experiments with or without budgetary constraints.
  • R package pwr
  • Russ Lenth's power and sample-size page
  • WebPower - free online statistical power analysis (http://webpower.psychstat.org)
  • SampSize app for Android and iOS (https://www.epigenesys.org.uk/portfolio/sampsize/)


See also

  • Design of experiments
  • Engineering response surface example under Stepwise regression
  • Cohen's h




Further reading

  • NIST: Selecting Sample Sizes
  • ASTM E122-07: Standard Practice for Calculating Sample Size to Estimate, With Specified Precision, the Average for a Characteristic of a Lot or Process

Source of the article: Wikipedia
