Reading poll data this political season, I'm reminded of a saying bruited about during my student days: "Figures don't lie, but liars do figure."
Don't get me wrong: Mine is no tirade against statistics, against quantitative measures, against probability sampling of voters' intentions come election day. The fact is that anyone who pays attention to election campaigns finds a plethora of polling data giving disparate and changing answers to "who's ahead?"
Nor is my purpose to give a statistician's critique of any particular poll. Rather, I'll assume that every poll asking voters this week about a particular contest is flawless in design. But 10 such polls will give 10 different percentages to a particular candidate. The reason? Sampling variability. It's like tossing 20 coins many times over: the number of heads and tails will vary from toss to toss.
So ... if the question this week in 20 different polls is "would you vote for 'A' or 'B'?", the percent saying "A" will vary from poll to poll, just as in the coin toss example.
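The coin-toss logic is easy to see in a quick simulation. This sketch (with made-up numbers: 20 polls of 1,000 voters each, and a hypothetical "true" support of 52 percent for candidate A) shows how much the reported percentage bounces around even when every poll is flawless:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def simulate_poll(n_respondents, true_support):
    """Simulate one flawless poll: each respondent independently
    favors A with probability true_support; return percent for A."""
    votes_for_a = sum(random.random() < true_support
                      for _ in range(n_respondents))
    return 100 * votes_for_a / n_respondents

# 20 hypothetical polls, 1,000 voters each, true support 52%
results = [simulate_poll(1000, 0.52) for _ in range(20)]
print(f"lowest poll: {min(results):.1f}%  highest poll: {max(results):.1f}%")
```

Every poll draws from the same electorate, yet the numbers differ from poll to poll; that spread is sampling variability, not bias or bad design.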
My interest is in which of these several polls this week will be publicized, by whom, and on behalf of whose interests. This shifts the critique of polling from a blanket judgment of polls to a better understanding of how polling samples are (mis)used.