


Guru Search
Results:
8 matches were found
 Monday, December 20, 1999 #3061

Dear Media Guru, we are currently in the process of conducting a dipstick media survey to be integrated into a media review presentation for an FMCG. Based on your experience and references, what would be the minimum acceptable sample size to use in order to ensure the research findings are viable and reliable? Note that the total population in our market is close to eighteen million, and the specified target group is about ten million. Thanks.
 The Media Guru Answers (Monday, December 20, 1999):

The population is not really relevant to determining the reliability of a sample.
The key consideration really is: what size are the answers you expect, and how much potential error can you stand in your decision making?
The formula to calculate one standard error (68% confidence) is:
the square root of ((P times Q) divided by N), where
 P is the percentage of the sample offering the response being tested (treated as a decimal fraction),
 Q is the remaining percentage of the sample, and
 N is the sample size.
For 90% confidence, the result of the above formula is multiplied by 1.645.
When you hear that the results of a political poll are +/-3%, this is the range of error around a 50% answer, usually at "90% confidence," meaning that if the poll were repeated with the same sample size, 90 times out of 100 the same question would get a response between 47% and 53%. "+/-3%" on an answer of 50% means 6% relative error. A sample size of 750 would bring you +/-3% on a 50% answer:
100% - 50% = 50%
P = 0.50, Q = 0.50 and N = 750
0.50 x 0.50 = 0.25
0.25 / 750 = 0.000333333
Square root of 0.000333333 = 0.018257, or 1.8257 percentage points
1.8257 x 1.645 = 3.003
That's fine for political polls, where responses tend to hover around 50%. But in media studies, only 1% or 2% of the sample may be exposed to a given ad placement. The same 750 sample would give reliability of +/-0.84 points on a 2% response, which is a relative error of 42%.
So, use this formula with your anticipated answer sizes and the level of relative error with which you can comfortably make decisions to determine a suitable sample size.
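As an illustration, the formula above can be put into a few lines of Python. This is a minimal sketch (the function name and parameters are assumptions for the example; the 1.645 multiplier for 90% confidence comes from the answer):

```python
import math

def margin_of_error(p, n, z=1.645):
    """Sampling error for a proportion, in percentage points.

    p -- the expected answer as a decimal fraction (e.g. 0.50)
    n -- the sample size
    z -- confidence multiplier (1.0 for 68%, 1.645 for 90%)
    """
    q = 1.0 - p
    return 100 * z * math.sqrt(p * q / n)

# The political-poll example: a 50% answer from a sample of 750
print(round(margin_of_error(0.50, 750), 3))  # 3.003 points

# The media-survey case: a 2% answer from the same sample of 750
print(round(margin_of_error(0.02, 750), 2))  # 0.84 points, i.e. 42% relative error
```

Running the function with your own anticipated answer sizes shows directly how much sample a small expected response demands.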
 Tuesday, March 23, 1999 #2404

Dear Guru, Everyone in the cable industry is bragging about the large increases in cable ratings and spending. It appears that the large cable interconnects want to play in the same field as the broadcast networks in their respective markets. My question: is there a "reliable" way to post cable ratings on a local interconnect? If so, would you please explain how to do it and how it would be reliable? Thanks in advance.
 The Media Guru Answers (Wednesday, March 24, 1999):

Without examining the specific rating report you would use, the Guru cannot answer specifically. Cable interconnect sellers can extract audience numbers specific to their system lineups from the standard Nielsen NSI (local market spot TV) reports to make proposals. Obviously, a buyer can post on the same basis.
"Reliable" is a statistical term, meaning that if the same research was done again with the same size sample, the same result would be found within a given tolerance. The key variable is sample size. Because cable intereconnect
network ratings are typically tiny (less than 1.0 in most cases), these ratings are projections from smaller numbers of sample respondents and therefore are less "reliable" than typical ratings from local broadcast outlets. However, it is in the nature of reliability that while individual spot's ratings may be very unstable, entire schedules are far more stable as sampling errors cancel out.
 Friday, March 19, 1999 #2400

I need to know the calculation to work out the margin of error for TV reach and frequency results, e.g. what is the margin of error of a 40% reach at 2+, depending on the size of the sample, penetration, etc.
 The Media Guru Answers (Saturday, March 20, 1999):

Assuming you are using a model to calculate reach and frequency, your error is no longer a function of sample size but of the reliability of the model.
For instance, suppose your schedule consisted of 20 advertisements with an average rating of 10, and, based on sample size, the 10 rating was +/-2 rating points (20% relative error). Your total schedule of 200 GRPs is not going to be +/-40 points. Because error is plus or minus, there is an equal chance that one 10 rating is really plus 2 and the next 10 rating is really minus 2. So, in a schedule, most of the error cancels out. This is one reason why ratings minima for buying are often shortsighted.
When it comes to reach analysis, someone might have built a model by compiling several actual schedules measured by the original research and finding a formula for the line formed by the average frequency of each. Since the actual schedules came from the original research, the sampling error of each (minimized by the plus-or-minus aspect of the schedule elements, as above) could have been calculated. But the "curve" coming out of the model can only be judged by its ability to match back to actual schedules.
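The error-cancellation point can be checked with a small simulation. The sketch below uses the Guru's hypothetical (20 spots, each with a true rating of 10, where the per-spot sampling gives roughly +/-2 points at one standard error); the per-spot sample of 225, the trial count, and the assumption that each spot is measured independently are illustrative assumptions:

```python
import random
import statistics

random.seed(42)
SPOTS, TRUE_RATING, SAMPLE, TRIALS = 20, 0.10, 225, 1000

def survey_rating():
    # One rating survey: SAMPLE respondents, each a viewer with probability TRUE_RATING.
    viewers = sum(1 for _ in range(SAMPLE) if random.random() < TRUE_RATING)
    return 100.0 * viewers / SAMPLE  # rating points

# Standard error of a single spot's rating: about 2 points (20% relative error)
single_se = statistics.pstdev(survey_rating() for _ in range(TRIALS))

# Standard error of a whole 200-GRP schedule of 20 independently measured spots:
# about 2 * sqrt(20), i.e. roughly 9 points (under 5% relative error), not 40
schedule_se = statistics.pstdev(
    sum(survey_rating() for _ in range(SPOTS)) for _ in range(TRIALS)
)
print(round(single_se, 1), round(schedule_se, 1))
```

Because independent sampling errors add in quadrature rather than linearly, the schedule's relative error shrinks even though its absolute error in points grows.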
 Wednesday, March 25, 1998 #1553

How can measurement error be calculated? I would like to know: is there any correlation between sample size and data validity? Thank you.
 The Media Guru Answers (Monday, April 06, 1998):

Sample size and data reliability are in a "rule of squares" relationship:
a sample four times as large is twice as reliable. Note that "reliable" is the statistical term referring to the chances that a duplicate study with the same size random sample will get the same results, give or take a specified range of error.
"Validity" refers to correctness. It has to do with whether a question asked can correctly produce the result that is desired. For example, a question like "What will you have for breakfast a week from Tuesday?" may not be a valid predictor of what people will actually eat on that day. But, with a proper sample, it will be reliable, producing the same answers within a set degree of variability if the study is repeated.
The formula for calculation of error for a given sample is: the square root of (P x Q over N), where:
P = the percentage result to be tested (e.g. 10% of the people will have bacon)
Q = the complement, or difference vs. 100% (if P = 10%, then Q = 90%)
N = the sample size
So, if a sample of 500 produces the result that 10% will have bacon, the sampling error for this result is the square root of (10 x 90) ÷ 500, or +/-1.342,
so the answer of 10% should be read as "between 8.658% and 11.342%," and
really means that 68% of the time the same study, repeated, would produce a percentage of bacon eaters between 8.658 and 11.342.
If the sample is quadrupled, to 2000, then the error is halved, to 0.671.
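The bacon example and the "rule of squares" can be verified with a short calculation (the function name is an assumption for the illustration; the numbers are from the answer above):

```python
import math

def sampling_error(p_pct, n):
    # One standard error, in percentage points, for a result of p_pct from a sample of n.
    return math.sqrt(p_pct * (100 - p_pct) / n)

print(round(sampling_error(10, 500), 3))   # 1.342
print(round(sampling_error(10, 2000), 3))  # 0.671: quadruple the sample, halve the error
```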
 Friday, December 20, 1996 #1090

Media Guru:
What can you tell me about standard error? Specifically, I have three questions: 1. What goes into standard error? If not the actual calculations, can you tell me what affects standard error: it's not just sample size, is it? 2. What is the maximum standard error that is considered acceptable to the media, specifically the advertising, industry? 3. Related to the previous question, do you happen to know recent standard error levels for suppliers such as Nielsen (national), Simmons or MRI? Thank you for your attention to this humble query.
 The Media Guru Answers (Saturday, December 21, 1996):

As the adjacent answer to your previous Guru inquiry details, standard error depends on sample size and the size of the specific response. Absolute standard error is largest for a 50% response, and it is symmetrical: 10% and 90% answers have the same standard error. When you hear that a study, like a presidential poll, is "+/-3%," that is usually the standard error for a 50% response.
It is interesting to note that the size of the sample is the key, not the relationship of the size of the sample to the universe. In other words, when a broadcast rating service uses a larger sample for New York than for Klamath Falls, it does so because larger samples are more affordable for the larger market's media, who sponsor the research, not because a bigger market "needs" a bigger sample. Also note that, because of the square-root aspect of the calculation, a sample must grow by a factor of 4 to reduce error by half.
"Minimum acceptable error" is quite situational. While an error of +/-2 on a rating of 10 seems small, it becomes important when a buyer needs to decide between programs rated 8, 9, 10, 11 and 12, which might all have identical audiences yet seem to vary by 50%.
As stated above, sample size and response size control standard error. Nielsen, Simmons and MRI each provide the information with their reports to calculate the error appropriate to the individual report and findings. Most software used to generate reports has the option to display the error with each cell of data reported. (You may have noticed single and double asterisks on tabulations of Simmons or MRI data; these are indicators of standard error ranges.)
 Tuesday, December 17, 1996 #1092

Dear Media Guru Guy: Where can I find more info on standard error? First, how is it calculated, or what affects standard error? Second, I'm thinking about magazine readership research, and I'm wondering what standard errors some of the popular studies have (MRI or Simmons, for example). And then, I'm also curious as to what, among media gurus, is considered an acceptable or unacceptable error. Do the same standards apply to other media research studies, for example, Nielsen ratings? Thanks, Mr. Guru.
 The Media Guru Answers (Friday, December 20, 1996):

Standard error reflects the range of "tolerance," due to sample size, around the reported answer within which the "truth" lies. In other words, a statistic like "10% of women 18-49 read Magazine X" in reality means that, within the range of error expected, if the same study were repeated 100 times with a sample of 225, the result would be between 8 and 12 percent, 68 times out of the 100.
The formula is the square root of (P times Q divided by N), where:
P = the percentage to be tested (e.g. the 10% in the Guru's example, above)
Q = 100 minus P (90% in our example)
N = total sample size (225 in our example)
or:
10 x 90 = 900
900 divided by 225 = 4
square root of 4 = 2
One standard error is the amount of variance sampling causes 68% of the time. So, at one standard error, 10% is between 8% and 12% with 68% "confidence." At 2 standard errors, or within +/-4 points (6% to 14%), we have an answer we are confident our research will repeat 95% of the time. This is why the concept is also referred to as "reliability": it is really a way to express confidence that the same sampling procedure will produce the same result. Most statistical texts can give you considerably more on the topic.
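The magazine example works out neatly in code. In this sketch the helper name is an assumption; the 225 sample and 10% answer come from the example above:

```python
import math

def confidence_interval(p_pct, n, z):
    # z = 1 for one standard error (68% confidence), z = 2 for roughly 95%
    se = math.sqrt(p_pct * (100 - p_pct) / n)
    return (p_pct - z * se, p_pct + z * se)

print(confidence_interval(10, 225, 1))  # (8.0, 12.0)
print(confidence_interval(10, 225, 2))  # (6.0, 14.0)
```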
 Wednesday, February 21, 1996 #1755

Dear Media Guru, I have a two-part question, both parts dealing with the same subject: TV sampling error. Suppose ER gets a 20% rating and Seinfeld gets an 18%, both off a sample of 1000 respondents. What are the odds of there being absolutely no difference between these two ratings? This is not as simple as looking up the standard error of each rating. I remember that it has something to do with the standard error of the difference, but I just can't recall the calculating process. Could you please explain? Then, to complicate matters, I'm looking at the same phenomenon on a grander scale. Suppose the estimated delivery in rating points for a TV schedule is 1000 GRPs and it underdelivers by 10%, i.e. 900 GRPs. What is the likelihood that the difference was due to pure chance (sampling error), and how do I calculate that? I know this is more difficult, since you have to account for buying many programs in the estimate and the actual. Naturally, we are assuming that the error differences are all due to sampling, and not the idiosyncrasies of the marketplace or the impurity of the sample. In this case I know the answer is going to be technical, but that is what I need. Thanks
 The Media Guru Answers (Friday, February 23, 1996):

The Guru loves this kind of stuff. The answer is technical but, hopefully, in simple terms.
First, if ER has a 20 rating and Seinfeld has 18, with a sample of 1000 (for that demographic), then the ER 20 rating's standard error is +/-1.265 while Seinfeld's 18's is +/-1.215 (see the formulas in the January 25 Guru Q&A below). Note that the absolute size of the error on the 20 is larger, but it is relatively smaller. Also note that the ranges of these errors are such that they can make the two programs' ratings equal: 20 - 1.265 = 18.735, which overlaps 18 + 1.215 = 19.215. There is a 68% probability that these two ratings fall within these ranges, but the swing could go either way on either number and could fall anywhere within the +/- range specified. There is a 90% probability that the ratings fall within +/-1.999 on the 18 and +/-2.081 on the 20. The odds are 95% that they fall within +/-2.381 for the 18 and +/-2.479 for the 20. These odds actually relate to reliability: that is, if you repeated the same rating study 100 times with the same actual facts existing, 68% of those studies would give ER a rating between 18.735 and 21.265.
Now, the 1000 GRPs underdelivering by 10% is different. As the beginning of the explanation showed, while there is a swing around any rating (a 5 would be +/-0.689 in the same study), the odds equally favor underestimates and overestimates. (This is the same as the reason why small samples don't necessarily underestimate ratings.) So, in 1000 GRPs made up of 500 spots with an average 2 rating, the sampling error on the individual ratings largely cancels out. To calculate this for an Arbitron-measured radio buy using a single survey and one station, for example, the formula is:
the square root of ((GRP x ((100 x #spots) - GRP)) / (sample x Factor))
"Factor" is from a table provided, specific to the demographic and the number of quarter hours in the daypart of the buy. So, if your 1000 GRPs were based on Adults 18-49 (with a 1000 A18-49 sample) and a Mon-Fri, 6a-7p schedule, the calculation would be:
the square root of ((1000 x ((100 x 500) - 1000)) / (1000 x 2.42))
or about +/-143 GRPs at the 68% confidence interval.
Obviously, if the average rating were higher (hence fewer spots), or if the sample were larger, the variance would be smaller. With an average 20 rating, the swing is about +/-40 GRPs. So, depending on average ratings and sample sizes, the 10% underdelivery could be within the range of standard error.
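The calculations in this answer can be sketched in Python. The function names are assumptions for the illustration; the 2.42 factor is the table value quoted in the answer:

```python
import math

def rating_standard_error(rating, sample):
    # One standard error, in rating points, for a single program rating.
    return math.sqrt(rating * (100 - rating) / sample)

def schedule_grp_error(grps, spots, sample, factor):
    # One standard error, in GRPs, for a whole schedule, per the formula above.
    return math.sqrt(grps * ((100 * spots) - grps) / (sample * factor))

print(round(rating_standard_error(20, 1000), 3))  # 1.265 (ER)
print(round(rating_standard_error(18, 1000), 3))  # 1.215 (Seinfeld)

# 1000 GRPs as 500 spots averaging a 2 rating; A18-49 sample of 1000, factor 2.42
print(round(schedule_grp_error(1000, 500, 1000, 2.42), 1))  # 142.3, about the +/-143 quoted

# The same 1000 GRPs as 50 spots averaging a 20 rating
print(round(schedule_grp_error(1000, 50, 1000, 2.42), 1))   # 40.7, the "about +/-40" swing
```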
 Thursday, January 25, 1996 #1774

I have a total universe of 9,500 people, and I would like to know how big a sample I would need for a good study. This is for a phone interview.
 The Media Guru Answers (Friday, February 02, 1996):

The population size is a relatively insignificant factor in calculating the reliability of a sample; 500 respondents is just about as reliable for surveying a small town as for the United States as a whole.
To plan a "good study," you need to consider the size of the typical answer you will get. If your typical response will be "50% of respondents said yes," then a far smaller sample could be suitable than if answers are "10% use Brand B." You also need to decide what level of reliability you require, or how much swing, +/-, is acceptable, and at what tolerance.
For example: if your sample size is 500, a 50% answer is actually reliable to +/-4.4 percentage points 95% of the time, and +/-3.7 points 90% of the time. If the sample size is 125, a 50% answer is reliable to +/-8.8 points 95% of the time. This is double the relative error of the 500 sample (rule of thumb: 4x as much sample reduces error by half). If the answers anticipated were 10%, then for the 125 sample the result varies +/-5.2 points, or over 50% relative error. A 10% answer from a 500 total sample yields +/-2.6 points at 95% tolerance, or 26% relative error, which is possibly acceptable for your need.
To examine other possibilities, the formula for 95% tolerance is:
1.96 x the square root of ((P x Q) / N), where
P = the answer size expressed as a decimal fraction
Q = the remaining fraction of the sample
N = the total sample size
To examine 90% tolerance, substitute 1.645 for 1.96.
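Inverting the formula above gives a quick sample-size planner: solve for N given the answer size you expect and the error you can tolerate. This is a sketch (the function name is an assumption), using the 1.96 multiplier for 95% tolerance quoted in the answer:

```python
import math

def required_sample(p, margin_points, z=1.96):
    """Smallest sample size giving the desired margin of error.

    p             -- expected answer as a decimal fraction (e.g. 0.50)
    margin_points -- acceptable swing, +/-, in percentage points
    z             -- 1.96 for 95% tolerance, 1.645 for 90%
    """
    q = 1.0 - p
    return math.ceil(p * q * (100 * z / margin_points) ** 2)

# A 50% answer within +/-4.4 points at 95% tolerance needs roughly 500 respondents
print(required_sample(0.50, 4.4))

# A 10% answer within +/-2.6 points at 95% tolerance also needs roughly 500
print(required_sample(0.10, 2.6))
```

Note how a rarer expected answer, held to the same relative error, drives the required sample up sharply.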



