Sometimes I wish I wrote a social science blog. Like yesterday, when I came across this line in an email from a colleague:
"We're aiming for assured representation more than false statistical precision based on one random shot."
It was at the end of a long message detailing a sampling scheme so stratified as to remove anything resembling randomness.
As I wrote to a work-friend, "So all those years I spent in grad school learning methods and stats were a waste. Glad I finally found out!"
Scatterplot friends, see what we have to deal with in government work?
I'm used to explaining statistical significance, and when certain kinds of methods are and aren't appropriate to use. Usually my colleagues listen and my contractors take my direction. In this case, I'm a bystander watching a project managed by someone else go terribly wrong. This person has decided on the outcome and is steering the project, none too subtly, to prove his point.
Hey, you have a hypothesis. Great. Let's test it. But let's not constrain the sampling scheme so dramatically that we are only going to select sites that will confirm your convictions.
Oh, and the laws of probability? They aren't invalidated just because you say so.
My apologies to those readers who are not quantitative social scientists.
Grateful for: good training.