
June 17, 2013

Is Congress Smarter Than the WaPo's Editorial Board?

In an overwrought editorial about the "epidemic" of military rape, the Washington Post editorial board once more shows that it can't even get basic facts right:

Of the 26,000 unwanted sexual contact incidents in 2012 contained in a recent Defense Department survey, only 3,374 were reported and, Ms. Gillibrand said, only one in 10 ended up going to trial.

The Post links to a list of DoD surveys. Apparently, selecting (never mind reading) the correct source and linking directly to it was a bridge too far. Had one of the Post's editorial board actually read the survey (you can stop laughing now), this paragraph in the summary section might have given them cause to doubt that the report describes an "epidemic" of rape. Can you spot the refrain running through the cited statistics?

Unwanted Sexual Contact. Overall, 6.1% of women and 1.2% of men indicated they experienced unwanted sexual contact in 2012. For women, this rate is statistically significantly higher in 2012 than in 2010 (6.1% vs. 4.4%); there is no statistically significant difference between 2012 and 2006 (6.1% vs. 6.8%). There is no statistically significant difference for men in the overall rate between 2012 and 2010 or 2006 (1.2% vs. 0.9% and 1.8%). Of the 6.1% of women who experienced unwanted sexual contact, 32% indicated the most serious behavior they experienced was unwanted sexual touching only, 26% indicated they experienced attempted sex, and 31% indicated they experienced completed sex. There were no statistically significant differences in the most serious behaviors for women between 2006, 2010, and 2012. Of the 1.2% of men who indicated experiencing unwanted sexual contact, 51% indicated the most serious behavior they experienced was unwanted sexual touching only, 5% indicated they experienced attempted sex, and 10% indicated they experienced completed sex. There were no statistically significant differences in the most serious behaviors for men between 2006, 2010, and 2012.

So much for the layers of editorial oversight and rigorous fact-checking that separate the pros from the wannabes.
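For anyone who wants to see what "statistically significant" actually turns on here, a minimal sketch of a two-proportion z-test. The sample sizes below are invented purely for illustration; the actual respondent counts in the DoD survey are what determine the real answer.

```python
# Hypothetical two-proportion z-test: is the gap between 6.1% (2012) and
# 4.4% (2010) statistically significant? Depends entirely on sample size.
from math import sqrt, erf

def two_prop_z(p1, n1, p2, n2):
    """Return the z statistic and two-sided p-value for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail x2
    return z, p_value

# n = 10,000 per year is made up; real survey counts would go here.
z, p = two_prop_z(0.061, 10000, 0.044, 10000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With samples that large, even a 1.7-point gap comes out "significant" - which is exactly why the men's numbers, drawn from the same survey, don't.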

Posted by Cassandra at June 17, 2013 05:49 AM

It's interesting that they took the trouble to separate out the numbers for "completed" rape, because most studies lump those in with the "attempted" statistics. That's where you get headlines about how 1 in 4 or 1 in 5 Americans have been raped (defined as attempted or completed rape). I was curious to see how the military numbers for completed rape compared to the general population, but I admit defeat. A lot of Google hits, all linking to "defined as attempted or completed" statistics.

"Attempted rape" could mean anything, especially in a culture increasingly prone to foolishness like "Well, isn't it the same as rape when . . . ." No, it's not the same.

Posted by: Texan99 at June 17, 2013 09:38 AM

What I can't get past is the fact that supposedly "professional" journalists have repeatedly misrepresented extrapolated responses from a survey (one we don't even know is representative) as actual "incidents" or "rapes" or "reported sexual assaults".

I can't get past the repeated flogging of the "sexual assaults up by 38% since 2010" while ignoring that they're down by 15% from 2006.

And I can't believe anyone is flogging all these numbers when the actual report states repeatedly that the year to year differences are not statistically significant.

I guess my expectations for professional work are too high.


Posted by: Cassandra at June 17, 2013 10:43 AM

It's a narrative. It's Propaganda. The purpose is not "the truth", but more types of "civilian" control over the military.

The equivalent of political commissars inserted into the chain of command will not be far off. Although they will be called something like "JAG" officers, they will be political appointees, and be able to interfere with good order and discipline.

For the children, or women, or gays, or something like that. What kind of regular citizen would then want to volunteer to be part of such a force?

Posted by: Don Brouhaha at June 17, 2013 11:33 AM

"Although they will be called something like "JAG" officers, they will be political appointees, and be able to interfere with good order and discipline."

So, following in the time-honored military tradition of acronym vernacular, their name will (appropriately) become JAGOFFs.
I like it.

Posted by: DL Sly at June 17, 2013 12:49 PM

The same sort of thing happened with the tobacco studies. The definitive study that actually showed that smoking cigarettes greatly increased your chances of getting lung cancer also included cigars in the study, and the results trumpeted were that smoking cigars also caused lung cancer. Reading into the depths of the study, one finds out that the only way to get any statistically significant "proof" that smoking cigars is bad is to lump the "5 cigars a day" category along with the "2-4 cigars a day" and "1 cigar a day" categories. That is, the "1 cigar a day" and "2-4 cigars a day" categories did NOT show any statistical significance when it came to lung cancer.

So when I'm smoking a cigar and someone tells me it's bad for my health (or worse), I simply sweetly smile and say nothing.

Posted by: Rex at June 17, 2013 12:50 PM

Depending on the methodology, actually both could be true. Remember, we aren't talking "cause" the same way we do when we say that dropping the temperature of water at STP below 32F "causes" water to freeze. "Cause" here simply means "more *likely* to occur."

In these types of analyses you are typically looking at odds ratios of an event either occurring or not. Let us say, for instance, that the odds of getting cancer double for every cigar one smokes. Let's also say that the odds of getting cancer if one smokes 5 cigars/day are 1/100. Smoking only 4 yields odds of 1/200, 3 yields odds of 1/400, 2 yields odds of 1/800 and 1 yields odds of 1/1,600.

We know this to be true because I have just been promoted to god of this hypothetical and have made it be true. :-)

It may very well be the case that, because of the sampling for any given study, odds of 1/1,600 are not distinguishable from 0. A finding of "not statistically different" does not mean that the risk of cancer from smoking 1 cigar is the same as the risk for not smoking. Statistics never show equality. We only say that we can't tell a difference, not that there isn't one.

Does smoking a single cigar/day increase your risk? Probably.

Should you care? Well, that's for you to decide.
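The hypothetical above fits in a few lines. The 1/100 anchor and the doubling rule are exactly the made-up numbers from the previous paragraphs; the 500-smoker study size is a further assumption, just to show how a real 1/1,600 risk can vanish into sampling noise.

```python
# Hypothetical from above: odds of cancer double with each cigar/day,
# anchored at 1/100 for five cigars/day. All numbers are invented.
odds_at_5 = 1 / 100
risks = {n: odds_at_5 / (2 ** (5 - n)) for n in range(1, 6)}
for n in sorted(risks):
    print(f"{n} cigar(s)/day: 1 in {round(1 / risks[n]):,}")

# A study of, say, 500 one-cigar smokers expects 500/1600 cases -- i.e.
# essentially zero observed events, hence "not statistically different".
expected_cases = 500 * risks[1]
print(f"expected cases among 500 one-cigar smokers: {expected_cases:.2f}")
```

A fraction of one expected case is indistinguishable from no effect at all, even though by construction the risk is real.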

Posted by: Yu-Ain Gonnano at June 17, 2013 01:12 PM

Couple of observations:
2006: 6.8% **Not Sig Diff from 2012
2010: 4.4%
2012: 6.1% **Not Sig Diff from 2006

1) What caused the reduction in 2010?
2) While 2012 is not significantly different than 2006, what caused the reversion to the 2006 level after a statistically significant reduction?
3) Why would a statistically significant increase not be "epidemic" just because it returned to a previous level?
3a) Does something not qualify as "epidemic" if it has been going on long enough?
4) What were the levels in years prior to 2006? If the long term trend (and 3 datapoints isn't one) is moving downward with occasional reversals which disappear in the next time period, then this reversal is unremarkable. If, however, the trend were monotonically downward then this reversal would be rare and bear watching carefully.

The report also notes that there were no statistically different rates for the most serious behaviors.
5) With the serious offenses being somewhat static, what explains the large variation in the "minor" behaviors?
6) While the "major" offenses are static, how does one define the level where the rates are, for lack of a better word, acceptable (0.00% being a practical impossibility)? Or, as in 3a, is normality sufficient to say that the problem is properly prioritized, and that we should concern ourselves with other, more pressing problems?

If only we had a profession whose job it was to go seek out information and ask questions of experts to provide the rest of us with context for these kinds of problems.

Posted by: Yu-Ain Gonnano at June 17, 2013 01:46 PM

One of my favorite stats jokes that I pull out when a client is fulminating about the r² being less than .999 is that we *know* how to get a perfect r² - just run a regression line through any 2 data points :p
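The joke checks out, too. Here's a quick sketch computing r² from scratch to show that a least-squares line through any two distinct points fits them perfectly, no matter what the points are.

```python
# r-squared of a simple least-squares fit, from first principles.
# Through any two distinct points the line passes exactly, so r^2 = 1.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

print(r_squared([1, 2], [3, 7]))  # -> 1.0, guaranteed for any two points
```

Add a third point that's off the line and the "perfect" fit evaporates, which is the whole point of the joke.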

I think the journalists at the WaPo have been listening in on my phone calls...

Posted by: Cassandra at June 17, 2013 02:12 PM

BOTA to Sly

Posted by: CAPT Mike at June 17, 2013 06:11 PM