Basic Laws of Sampling

Sometimes it is possible to obtain sufficiently accurate results by studying only a part, or segment, of the population. A few items are selected from the population in such a way that they represent the universe; in research these representatives are called the ‘sample’, and the process of selecting them is called ‘sampling’. Sampling is thus the process of learning about a population on the basis of a sample drawn from it: a small group from the universe is taken as representative of the whole mass, and conclusions are drawn from it. It makes social and business investigation practicable and economical. Sampling is therefore resorted to when it is impossible to enumerate all the units in the population, when complete enumeration is too costly in time and money, or when the uncertainty inherent in sampling is more than compensated for by the errors likely in a complete enumeration.

The two fundamental principles on which the sampling theory rests are:

1. Law of Statistical Regularity

The law states that if a moderately large number of items is selected at random from a given population, the characteristics of those items will reflect, to a fairly accurate degree, the characteristics of the entire population. For example, if 300 employees are picked at random from a company and their average height is computed, the result will be nearly the same as if all the employees of the company were measured.
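As a rough illustration of this law (not part of the original text), the following Python sketch simulates a hypothetical company of 10,000 employees and compares the mean height of a random sample of 300 with the population mean. The height distribution and its parameters are assumptions chosen only for the demonstration.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 employee heights in cm
# (assumed normal with mean 170 and standard deviation 8).
population = [random.gauss(170, 8) for _ in range(10_000)]

# Random sample of 300 employees, as in the example above.
sample = random.sample(population, 300)

print(f"Population mean: {statistics.mean(population):.2f} cm")
print(f"Sample mean:     {statistics.mean(sample):.2f} cm")
# The two means typically differ by well under 1 cm, illustrating
# that a moderately large random sample reflects the population.
```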

The reliability of the law depends on two factors: (i) the size of the sample, since the larger the sample, the more reliable its indications; the reliability of a sample is proportional to the square root of the number of items it contains, so larger samples are more representative and stable; and (ii) the requirement that the sample be chosen at random.
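The square-root relationship can be seen empirically. As a hedged sketch (the distribution and the sample sizes are assumptions chosen for illustration), the following Python snippet estimates the spread of the sample mean at several sample sizes; each fourfold increase in n roughly halves that spread.

```python
import random
import statistics

random.seed(0)

def sd_of_sample_mean(n, trials=2000):
    """Empirical standard deviation of the mean of n random draws."""
    means = [
        statistics.mean(random.gauss(170, 8) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

for n in (100, 400, 1600):
    print(f"n = {n:5d}  sd of sample mean = {sd_of_sample_mean(n):.3f}")
# Quadrupling n roughly halves the sd of the sample mean,
# i.e. reliability grows with the square root of the sample size.
```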

The applicability of the law rests on several characteristics. First, the law permits a part of the universe to be chosen: when the census method of collecting information is impracticable because of constraints, researchers can determine the sample units with the help of this law, using the method of random sampling. Second, when the selection is made at random, all units of the population, whether good, bad or average, have an equal chance of being selected. Third, inferences drawn from an inquiry conducted at one time and place can, with small adjustments, be applied to other times and places.

2. Law of Inertia of Large Numbers

The law of inertia of large numbers is a corollary of the law of statistical regularity. It lays down that ‘in large masses of data abnormalities will occur, but in all probability exceptional items will offset each other, leaving the average unchanged, subject, where the element of time enters, to the general trend of the data’. The law asserts that large aggregates are the results of the movements of their separate parts, and it is unlikely that all of these parts will be moving in the same direction at the same time. Consequently, their movements tend to compensate one another, and the larger the number involved, the more complete this compensation will be. In summary, the larger the number of items chosen from a universe, the greater the possibility of accuracy. The law rests on the fact that if one part of a large group varies in one direction, the probability is that another, equal part of the same group will vary in the opposite direction, so that the total change is insignificant.
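A small simulation makes this compensation visible. As an illustrative sketch (the coin-toss setting and the sample sizes are assumptions, not from the text), the following Python code shows that the proportion of heads in repeated fair coin tosses fluctuates far less as the number of tosses grows, because deviations in one direction are offset by deviations in the other.

```python
import random

random.seed(1)

def proportion_heads(n):
    """Proportion of heads in n fair coin tosses; the true value is 0.5."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

for n in (10, 100, 10_000, 1_000_000):
    p = proportion_heads(n)
    print(f"n = {n:9,d}  proportion of heads = {p:.4f}  "
          f"deviation from 0.5 = {abs(p - 0.5):.4f}")
# Individual tosses still deviate, but in large masses the excesses
# of heads and tails offset each other and the average stays near 0.5.
```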
