Lying to Gallup, part 2


Very close readers of this blog will notice a little dispute I’ve been having with SIU sociology professor Darren Sherkat (Go, Salukis!) over how to think about the tendency of Americans to fib about their church attendance. I made the argument that there may be less regional variation than Gallup indicates, because people in states where regular church attendance is more the norm are more likely to over-report their churchgoing. Sherkat takes me to task for not applying this logic to aggregate American church attendance over time. His point is that in all likelihood, the alleged decline in attendance over the past half century is a figment of self-reporting, because fibbing was more likely decades back, when weekly churchgoing was more the norm nationwide. So if anything, I suppose, church attendance should actually be up from what it used to be.

That there is a fibbing problem has been clear for a while now, thanks to studies by Hadaway et al. from the early 1990s showing that Gallup’s steady 40-percent weekly attendance rate is off by a good 15 points. Has actual attendance been just 25 percent all along? Presser and Stinson’s 1998 article using time-use studies compiled by the government suggests otherwise.

I propose a Catholic-Evangelical seesaw. Very high Catholic attendance rates prior to Vatican II made for an actual national attendance rate approaching 40 percent. Post-Vatican II, Catholics stopped going to Mass as often, but felt guilty about it, and so began fibbing more. Over time, Catholics have become less guilty about not attending, but meanwhile, in the evangelical heartland the churchgoing norm was ratcheting up, leaving non-churchgoers more guilty and thus more likely to fib. The result of all this is the substantial differentiation Gallup finds between Catholic and evangelical regions, but with the average self-reporting rate remaining constant.
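The fibbing arithmetic behind all this is simple enough to write down. A minimal sketch in Python, using Hadaway et al.’s figures from above (the 20 percent fib rate is my back-of-the-envelope assumption, chosen because it happens to reconcile the two numbers):

```python
def reported_rate(actual, fib_rate):
    """Share who SAY they attend weekly: actual attenders plus the
    fraction of non-attenders who over-report out of social pressure."""
    return actual + (1 - actual) * fib_rate

# If actual weekly attendance is 25 percent, and one in five
# non-attenders fibs, the survey still reads 40 percent.
print(round(reported_rate(0.25, 0.20), 2))  # -> 0.4
```

On the seesaw reading, the regional story is the same formula with different inputs: a falling `actual` in Catholic regions offset by a rising `fib_rate` in the evangelical heartland, leaving the national reported average roughly flat.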

  • Not quite so fast! I was not meaning to imply that declines in attendance are not real. The rates of attendance reported in the early 1970s, when we have the first high-quality studies with measures that continue until recently, were astronomical! I think the error around those point estimates was higher because of social desirability bias. I think social desirability biases are higher in places and times when religion is more normative, like in the South and among African Americans, and by reflection in the past IF RELIGIOUS PARTICIPATION HAS DECLINED IN GENERAL. I agree with Presser and others who chart declining attendance. Church attendance has declined substantially since the early 1970s, and BECAUSE OF THAT we should expect that social desirability biases are also declining.
    The cumulative GSS shows that “nearly weekly” to “more than weekly” attendance dropped from 41% in 1972 to 31% in 2008. During the same period, the GSS finds that the percentage of Americans who never attend or attend less than once a year increased from 18% in 1972 to 28% in 2008. The question is whether we should magnify decreases in religious participation based on Hadaway et al. (who suggest that bias is increasing, but have no proof of this), or whether we should minimize, very slightly, this decrease in attendance. My point is about the direction of statistical bias, not the direction of social change.
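The direction-of-bias point can be made concrete with a little arithmetic. Only the GSS percentages below come from the comment above; the fib rates among non-attenders are purely hypothetical, chosen just to illustrate a bias that shrinks over time:

```python
def reported_rate(actual, fib_rate):
    """Reported weekly attendance: actual attenders plus the
    fraction of non-attenders who over-report."""
    return actual + (1 - actual) * fib_rate

# GSS reported rates: 41% in 1972, 31% in 2008. Suppose (hypothetically)
# the fib rate among non-attenders fell from 15% to 10% as churchgoing
# became less normative. Inverting reported_rate gives the actual rates:
actual_1972 = (0.41 - 0.15) / (1 - 0.15)
actual_2008 = (0.31 - 0.10) / (1 - 0.10)

# The actual decline comes out smaller than the 10-point reported decline.
print(round(actual_1972 - actual_2008, 3))  # -> 0.073
```

If the bias moves in that direction, the observed GSS decline slightly overstates the true one, which is the “minimize, very slightly” conclusion rather than Hadaway et al.’s magnification.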

  • Mark Silk

    OK, good. But the question remains: How to account for the constancy of Gallup’s 40-percent weekly attendance result?

  • Gallup’s sampling design and quality control over response rates (and probably other aspects of the survey research process) have shifted quite a bit over time. I do not view those data as being of high enough quality to establish trends. It’s a journalism poll, meant to stimulate “news” stories. Science requires more.
    I was an original reviewer of the old Hadaway et al. paper in ASR (I think the statute of limitations has run out on anonymity), and I was shocked that the editors let the responses turn into a pissing match about trends. The paper was about bias, not trends. Hadaway et al. made a claim about trends in the conclusions, and those two or three sentences should have been deleted; instead, the editors allowed long responses from Presser and others.

  • Mark Silk

    So you’re suggesting that Gallup has arranged things in such a way as always to come up with something in the 40 percent range? If that’s being done to stimulate news stories, then they don’t understand journalism. What stimulates news is, well, news. And coming up with a 40 percent weekly attendance rate year after year is the opposite of news.

  • The real issue is quick and dirty polling versus expensive and tedious science. It was easier in the old days, when people were less overloaded with bad marketing polls and the like. But overload and hostility mean that Gallup’s response rates have fallen from what we used to call barely acceptable (60%, back when NORC was pulling over 80%) to what should be called schlock (under 20%; no peer-reviewed social scientific journal should publish a study with a response rate this low). They aren’t doing this out of a conspiracy; they just want to make money, and making money means getting results now, not waiting three months and spending lots of manpower to have more valid and reliable data.