
### Margins

My friend Tom Lang tries to explain the math behind the margin of error, and while he does a better job than your average journalist, I think he's still getting the story a bit wrong. Tom's explanation suggests that if candidate X has 52 percent and candidate Y has 45 percent and the poll has a margin of error of +/- 4 percent this ought to be reported as "X and Y are in a statistical tie." The logic here is that the 95 percent confidence band for Y runs as high as 49 percent while the 95 percent confidence band for X runs as low as 48 percent. That makes it seem as if a 52-45 (MOE +/- 4) result is equivalent to a 50-50 (MOE +/- 4) result, but it really isn't. The thing is that there's no such thing as *the* margin of error, just different confidence levels you could be using in your analysis. 95 percent is traditional in the US but there's no reason handed down from the Lord on High that all statistics need to be done this way. If you accepted a 90 percent level of confidence, the margin of error would be smaller, and the 52-45 race would not be within the margin of error while the 50-50 race would be. What 52-45 (MOE +/- 4) tells you is that candidate X is probably ahead, but the chances that he's not ahead are larger than 5 percent. I would need to be smarter to calculate the precise chance that candidate Y is in the lead under this scenario, but it's still pretty low. A 51-49 (MOE +/- 4) result, on the other hand, really tells you very little about who's in the lead.
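
A quick numeric sketch of the point about confidence levels. The sample size here is hypothetical (about 600 respondents happens to produce a ±4-point figure at the 95% level for a candidate near 50 percent):

```python
from math import sqrt

def margin_of_error(p, n, z):
    """Half-width of a confidence interval for a sampled proportion."""
    return z * sqrt(p * (1 - p) / n)

# Hypothetical poll: 600 respondents, candidate X at 52 percent.
n, p = 600, 0.52
print(round(100 * margin_of_error(p, n, 1.96), 1))   # z for 95%: about 4.0 points
print(round(100 * margin_of_error(p, n, 1.645), 1))  # z for 90%: about 3.4 points
```

At the 90% level the intervals around 52 and 45 (each about ±3.4) no longer overlap, which is the sense in which "the" margin of error is a choice of convention rather than a fact about the poll.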

Ideally, instead of a quick summary you would want to publish some kind of table giving the level of confidence with which you can assert a variety of propositions about the race, but something like that would be very hard to read. At any rate, I know a lot of readers here understand this math better than me, so if I've got it wrong blame *The Cartoon Guide to Statistics* or else whoever at Dalton decided that the smart kids should go straight from geometry to calculus and not learn any stats.

August 5, 2004 | Permalink

## TrackBack

TrackBack URL for this entry:

http://www.typepad.com/services/trackback/6a00d8345160fd69e200d83456788c69e2

Listed below are links to weblogs that reference Margins:

» Lies, Damned Lies from Balloon Juice

This Yglesias post reminded me, once again, how little most people know about statistics (and this is not a dig... [Read More]

Tracked on Aug 5, 2004 9:29:43 AM

» Arguing the Margins from T.J. Lang

Matthew Yglesias takes issue with my explanation of the margin of error. Basically, he feels that polls that show large leads, but are technically still in the margin of error, can be reported as legitimate leads. I've received a few... [Read More]

Tracked on Aug 5, 2004 10:05:16 AM


» Margin of Error Again from the Greater Nomadic Council

Matthew Yglesias makes a few minor corrections to the Campaign Desk margin of error explanation.... [Read More]

Tracked on Aug 5, 2004 11:42:58 AM

» Margins of error from Majikthise

Matt Yglesias writes: The thing is that there's no such thing as the margin of error, just different confidence levels you could be using in your analysis. 95 percent is traditional in the US but there's no reason handed down [Read More]

Tracked on Aug 5, 2004 12:31:12 PM

## Comments

Yes. People take that 95% interval way too literally. A 52-45 +/-4 is not best described as a statistical tie. It's better described as a highly probable lead, though we can't be statistically certain it's not a tie.

Posted by: DJW | Aug 5, 2004 9:45:52 AM

I don't think anyone learns any stat in high school, do they? We went from geometry --> algebra II --> trig --> calc.

Posted by: JP | Aug 5, 2004 9:46:25 AM

What 52-45 (MOE +/- 4) tells you is that candidate X is probably ahead, but the chances that he's not ahead are larger than 5 percent.

I don't think that this is precisely true -- a percentage confidence level has a more complex mathematical interpretation.(*) However, it is a lot closer to the truth than the "statistical tie" characterization that MY is rebutting here.

(* Here's why: suppose you used a confidence level of just 40% and at that confidence level it appeared that X was ahead. It would not follow that there was at least a 60% chance that X was not ahead.)

Posted by: alkali | Aug 5, 2004 9:56:24 AM

The flip side of that is that these confidence bands in publicly distributed polls are generally calculated simply on the basis of the number of people polled, and don't take into account a huge variety of other sources of error besides sample size. Like who's available to answer the phone at certain times of the day, the fact that you can't really assume the people who refuse to talk to you have the same distribution of opinions as the people who do talk, things like that. Often they don't even account for the amplification of error due to renorming the sampled population, which CAN be calculated.

In other words, if the poll says +/- 3%, you can be pretty confident that the error band is quite a bit larger in reality. So it's probably a good practice to not take those polls too seriously unless the margin shown is quite large.

Posted by: Brett Bellmore | Aug 5, 2004 9:56:52 AM

"the chances that he's not ahead are larger than 5 percent"

I don't think you have this quite correct. It gets very complicated because there is a relationship between the sample and both numbers. For example, if we had a skewed sample, the higher value for one would likely result in a lower value for the other, but they are not strictly zero-sum.

Without that relationship, the likelihood that both numbers are at the low end of the 95% confidence band is less than the likelihood of one of the numbers being so. I forget, if I ever knew, the distribution from the center, but I think it is fair to say that since there is a relationship between the two numbers that is not exactly zero-sum, it is pretty complicated.

The other thing to take into account is the cumulative value of polls.

Posted by: theCoach | Aug 5, 2004 10:09:26 AM

The record of the poll-taker comes into play as well.

Posted by: Slothrop of Boulder | Aug 5, 2004 10:29:41 AM

Ok, I'm going to display some really woeful ignorance of statistical science here, but, nevermind: How does one find out what the confidence level is of a poll? Is this something that can be inferred from the margin of error? Polls never mention a confidence level, at least none that I can recall. I just did a quick check of some media outlets and didn't find any mention of this number (just the MOE). Quinnipiac doesn't mention it either (http://www.quinnipiac.edu/x660.xml).

Posted by: P.B. Almeida | Aug 5, 2004 10:51:13 AM

All of these discussions make my head hurt. It's like a bunch of 8-year-olds talking about sex -- a gallimaufry of half-truths, dimly-remembered facts, rumors, and deranged theories. Why, oh why, given the pervasiveness of the use of polls in our discourse, don't those who aspire to journalism take a course or two in probability and statistics? Do any journalism schools require even a "Stats for the Innumerate" course given by the applied math department?

Shouting at the wind, I know. I feel the same way about the knowledge of basic science in the journalism trade.

Posted by: Bob Munck | Aug 5, 2004 11:02:18 AM

P.B.--

In the U.S., just about every poll is reported with the margin of error calculated at the 95% confidence level, but you're right--they almost never bother saying explicitly what the confidence level is.

If you wanted to, you could "back out" what the confidence level being used was:

Margin of Error = Z * (standard error of the proportion)

The numbers going into the standard error of the proportion are the proportion voting for the guy and the sample size (things we usually know from articles). The formula for it is to take the square root of ((the proportion voting for your guy times the proportion voting against your guy) divided by the sample size). You also know the margin of error from the article. So if you wanted to find out the confidence level based on information included in a newspaper article, solve the above equation for Z and plug in the numbers in the article. Z comes from the standard normal table, and at the 95% confidence level, it's 1.96.

But it's easier to assume that they've used the 95% level unless it's reported otherwise.

Posted by: MaryGarth | Aug 5, 2004 11:06:37 AM
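
The "backing out" MaryGarth describes can be sketched in a few lines; the poll numbers here (52% support, 600 respondents, a reported MOE of 4 points) are hypothetical:

```python
from math import sqrt

def implied_z(moe, p, n):
    """Solve MOE = z * sqrt(p * (1 - p) / n) for z."""
    return moe / sqrt(p * (1 - p) / n)

# Hypothetical article: 52% support, n = 600, reported MOE of +/- 4 points.
print(round(implied_z(0.04, 0.52, 600), 2))  # close to 1.96, i.e. the 95% level
```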

Assuming that a poll reveals 52% for X vs. 45% for Y, it would be more useful if a pollster gives the probability that X actually is leading among the whole population. [What is Prob(X - Y > 0) ? ] This could be easily calculated. (I'm not sure whether X and Y are distributed normally or binomially... ah my head hurts)

Of course, as theCoach points out, the two numbers are not independent (they could even be perfectly correlated, i.e. X + Y = 100% always), so you have to figure out their covariance somehow...

Posted by: next big thing | Aug 5, 2004 11:12:11 AM
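
Under a normal approximation to the multinomial, Prob(X − Y > 0) can be sketched directly: the covariance between the two shares is −pX·pY/n, which is what pins down the variance of the difference below. The sample size of 600 is hypothetical, not from any actual poll:

```python
from math import sqrt, erf

def prob_leading(px, py, n):
    """Normal approximation to P(X's true support exceeds Y's).
    The variance of (px - py) folds in the negative covariance
    of the two shares under multinomial sampling."""
    var = (px * (1 - px) + py * (1 - py) + 2 * px * py) / n
    return 0.5 * (1 + erf((px - py) / sqrt(2 * var)))

# The 52-45 example from the post, with a hypothetical n of 600:
print(round(prob_leading(0.52, 0.45, 600), 2))  # about 0.96
```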

MaryGarth:

Much obliged.

Posted by: P.B. Almeida | Aug 5, 2004 11:14:40 AM

Before anyone jumps on me, a quick correction to what I posted above. Since the proportions for the two candidates might not add up to one (100%), in the formula for the s.e. of the proportion, I should have said, "the proportion voting for your guy times 1 minus the proportion voting for your guy"...

P.B.--any time!

Posted by: MaryGarth | Aug 5, 2004 11:21:39 AM

Anybody care to comment in an apolitical and educational way their take on the usefulness of the Iowa Electronic Markets as a predictive tool? Intuitively I would rely more on the predictive ability of people voting with their money.

http://www.biz.uiowa.edu/iem/markets/Pres04_WTA.html

Posted by: Warthog | Aug 5, 2004 11:23:08 AM

Not a lot to add, except that even a 51-49 with a MOE of 4 does tell you who has the highest probability of being in the lead: the 51. Otherwise it would be 50-50.

Posted by: Tim H. | Aug 5, 2004 11:26:46 AM

Warthog,

That probably depends on a lot of things - how mature is the market, how big an amount of money is being wagered, etc. D^2 at crooked timber weighs in on this with a little more knowledge. His bet is that the markets are not as good as some polls.

Let me give an example of why I think the maturity of the market matters. It is conventional wisdom that conventions give a candidate a bounce. Speculators in Iowa might presume that in an immature market they have better information about that bounce and be able to cash in on the less informed. With a more mature market speculators would assume that most investors have the basic knowledge.

Over on CrookedTimber, I believe there were some people commenting about the disparity between the state by state numbers and the overall numbers suggesting some opportunity for arbitrage, which would suggest to me that the market is not that reliable.

Posted by: theCoach | Aug 5, 2004 11:35:28 AM

As Tim H. says, the estimate, even when not significant, does matter somewhat. The rule of thumb I use is that if the margin of error of one estimate overlaps with the estimate of the other, then the difference is not significant. Many people think that the margins of error shouldn't overlap, but this is not necessary. So a 50%-45% (+-4%) split is statistically significant, as is the 52%-45% example.

Posted by: MC | Aug 5, 2004 11:38:07 AM

"What 52-45 (MOE +/- 4) tells you is that candidate X is probably ahead, but the chances that he's not ahead are larger than 5 percent."

To add to what Alkali and theCoach said, this is WRONG, and not a mistake a philosophy major should make. A 95% confidence interval says: if I ran lots of similar polls, all independent of one another, in approximately 95% of them, the estimated value would lie no more than this far (4 percentage points in the example) from the true value. Using this kind of analysis, it is incoherent (i.e., it makes no sense) to talk about "the probability the true value is within the confidence bounds". That probability is 0 if the true value is outside the bounds and 1 if it is inside: the true value lies where it lies.

You have to use Bayesian analysis if you want to talk about the probability of the true value lying within a particular range, and polling of the sort being quoted does not use Bayesian analysis.

Depending on what a statistical tie means (it is not a term I ever heard when I was in graduate school), what Mr. Lang says is likely nonsense. Ask him this question: if this is really a tie, then he should be willing to bet even money on either side (that is, willing to pay you $100 if the candidate you choose wins so long as you pay him $100 if the candidate you do not choose wins, he agrees first, then you choose). That is usually what is meant by a "tie". Assuming the same thing happens on the day before the election, go for the bet. If he won't take it, ask him what he means by a "statistical tie" exactly.

I only comment occasionally on blogs, but I have made a variant of this comment numerous times. Joining Matt in wrongly thinking confidence bounds say something about the probability the true value lies within the bounds are Mark Kleiman and Brad DeLong(!) and others I cannot now remember.

Posted by: David Margolies | Aug 5, 2004 11:55:26 AM

"That probability is 0 if the true value is outside the bounds and 1 if it is inside: the true value lies where it lies."

Could you expand on this portion of your post? It's the centerpiece of your post, but it's opaque to me. Is it mistaken to believe that such polls make any kind of inference about the 'true value'? Whew, my grad stat muscles are ten years out of shape.

Posted by: djangone | Aug 5, 2004 12:21:34 PM

"That probability is 0 if the true value is outside the bounds and 1 if it is inside: the true value lies where it lies."

Could you expand on this portion of your post? It's the centerpiece of your post, but it's opaque to me. Is it mistaken to believe that such polls make any kind of inference about the 'true value'? Whew, my grad stat muscles are ten years out of shape.

The model which is being used, which I think was formulated by Jerzy Neyman at Berkeley a fairly long time ago, posits an exact (but not directly knowable) true value. Then the problem is to estimate that value. The sampling method provides an estimate of that true value. When you have 95% confidence bounds of 4%, you are saying "roughly 95 times out of 100 samples of this kind, the estimate lies within 4 percentage points of the true value; in the roughly other 5% of such samples, the estimate is more than 4 percentage points away."

Now that statement is a probability statement about polls in general, not about the particular poll in hand. For the poll you are looking at, the true value either is or is not within the bounds. Consider this example: suppose very accurate surveys and censuses say that 95% of Matt's readers are under 25 years old. Then I can say with 95% confidence that you are under 25 (in roughly 95 out of 100 times, if I made that statement about a reader chosen at random, I would be right). But that doesn't mean that the chance you are under 25 is 95%: it is 100% if you are under 25 and 0% if you are older.

Now a Bayesian looks at things differently. Bayesians do not believe in true values, but instead believe everything has a probability distribution. You always start with an a priori distribution and then use the data to transform it to an a posteriori distribution. How much of an effect the data has depends on the a priori distribution. But Bayesians are always in a position to say `I believe the probability that x lies in this interval is y%'.

Posted by: David Margolies | Aug 5, 2004 1:02:11 PM
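
To make the Bayesian alternative concrete, here is a small Monte Carlo sketch. Everything in it is hypothetical: a flat Beta(1, 1) prior and a made-up poll of 600 respondents with 312 favoring X yield a Beta(313, 289) posterior for X's true share, which a Bayesian can query directly for P(share > 50%):

```python
import random

random.seed(0)

# Posterior for X's true share under a flat prior: Beta(1 + 312, 1 + 288).
draws = [random.betavariate(313, 289) for _ in range(100_000)]
p_ahead = sum(d > 0.5 for d in draws) / len(draws)
print(round(p_ahead, 2))  # a bit over 0.8 under these assumptions
```

A frequentist never gets to utter that number; the Bayesian, having committed to a prior, can.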

In a non-Bayesian approach, the "true value" is the percentage of the whole population that favors Mr X. This is a fixed number, but it would be extremely costly and difficult to measure directly.

Posted by: next big thing | Aug 5, 2004 1:05:38 PM

I think this argument is a little too academic (and I say this as an academic). We should all remember that MOE calculates the margin of sampling error only. There are, of course, numerous other sources of error related to methodology, failure of randomization, modeling and such. For example, I looked at SUSA, ARG and Zogby primary results compared to actual results and found that SUSA and ARG were outside MOE about half the time and Zogby was outside about 25%. Not great track records...

So, MOE has become a shorthand way to determine "significance". While MY is correct in the statistical analysis of poll results pushing the margin of error, I think it is still reasonable to say if you are within MOE (as in his example), the sampling error combined with other error could still, with high probability, make the trial heat a "tie".

Posted by: Scott Pauls | Aug 5, 2004 1:13:30 PM

>>Bayesians do not believe in true values, but instead believe everything has a probability distribution.

Uhmm... but even if everything has a probability distribution, there is a "true" value for the mean, right? That is to say, whether or not a person is likely to vote "1" or "0", and all votes are described by a probability distribution, the mean, or expected value, is defined. So, in some sense, the mean (and all moments of the distribution) have "true" values. (Or is this not the definition of "a true value"?)

Posted by: lucia | Aug 5, 2004 1:14:01 PM

"(* Here's why: suppose you used a confidence level of just 40% and at that confidence level it appeared that X was ahead. It would not follow that there was at least a 60% chance that X was not ahead.)"

You can't select a confidence interval of 40%. They are all 50+. Any result showing a difference has a better than 50% chance of being true, looking at sampling error alone.

Another thing people often miss is that the 95% confidence interval goes both ways. If Kerry is at 52%, with a 4% margin of error, there is 5% chance that his true support is outside that range, and (assuming a normal distribution), the probabilities are equal on either side. That is, there is a 2.5% chance he is actually over 56%, and there is a 2.5% chance he is actually under 48%.

Figuring out whether a poll really indicates a 95% chance that a difference is significant is trickier than just looking at margins of error. Most articles assume that a four point margin of error means that a *difference* of less than four points is insignificant. That isn't necessarily so. But that doesn't mean that you need 8% to achieve significance either. "4%" is almost always based on specific assumptions about the distributions of the underlying data (usually conservative assumptions). Relevant other factors are the number of choices, the distributions among those choices, and what you really want to know (are you just measuring how much support Kerry has, or do you want to know whether he will beat Bush? the second has to account for interdependencies between the answers).

Most surveys assume a 50/50 split, and simply set the margin of error around a given result. For example, the margin of error around Nader's support is less than 4%, because his share is so low, and it is bounded in that it can't go less than zero.

Also, sampling error isn't the most important problem that polls face, in my opinion. In a world where people have multiple phone lines, own mobile phones, screen their calls, and block numbers they don't know, random sampling doesn't really work, and one of the fundamental assumptions about significance testing goes out the window. To correct for problems in random sampling, pollsters apply models that weight by particular demographics to avoid possible under-representation. Additionally, they just have an educated guess at who is going to turn out to vote. If you use registered voters, you ignore that there may be a difference between people who have registered and those who haven't. If you use likely voters, you are dependent on your likely voter model, and even the best of those don't do a great job of predicting who will vote (although they do better than just assuming everyone will vote). This can cause a big problem in an election where past patterns may not apply (say, one party is more motivated than another party, or one party is doing a better job mobilizing previously alienated voters - I saw the latter firsthand with Jesse Ventura's unexpected victory in Minnesota in 1998).

All of this, I believe, has more to do than sampling error with explaining why different polls can show radically different results (CNN/Gallup's measure of Kerry's bounce compared with ABC's, for instance).

Which is why you shouldn't look at any one poll. The best results are obtained by looking at a broad basket of polls, and then looking closely at the internals to see if any problems with the models are evident (say, if Kerry's support is more enthusiastic than Bush's).

Posted by: Tom | Aug 5, 2004 1:26:47 PM
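
Tom's point that the reported "4%" is a worst-case figure (it assumes a share near 50%) is easy to check numerically; the sample size here is again a hypothetical:

```python
from math import sqrt

def moe_95(p, n):
    """95% margin of error for a single reported share."""
    return 1.96 * sqrt(p * (1 - p) / n)

n = 600  # hypothetical sample size
print(round(100 * moe_95(0.50, n), 1))  # 50% share: about 4.0 points
print(round(100 * moe_95(0.03, n), 1))  # a Nader-sized 3% share: about 1.4 points
```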

"Joining Matt in wrongly thinking confidence bounds say something about the probability the true value lies within the bounds are Mark Kleiman and Brad DeLong(!) and others I cannot now remember."

I don't follow this. Of course confidence bounds say something about the probability that the true value lies with the bounds. A 95% confidence interval is saying that there is a 95% chance that the true result is inside the margin of error (with assumptions about random sampling, etc.).

You seem to be saying, at least implicitly, that choosing a 67%, 99.99999999%, or 95% confidence has no impact on the probability of whether or not the result falls inside the stated margin of error. As that is obvious nonsense, I must be misunderstanding you.

Posted by: Tom | Aug 5, 2004 1:33:25 PM

David Margolies:

Thanks! That clarified a lot. I also think I can predict the response to Tom's post of 1:33 pm, but I'll sit back and see if I'm right.

Posted by: djangone | Aug 5, 2004 1:47:37 PM

The comments to this entry are closed.