The media seems to have embraced 655,000 as the number of deaths since The Lancet published a Johns Hopkins University study of mortality in Iraq. They've settled on this figure even though the researchers themselves, reflecting the inherent uncertainty in such extrapolations, said only that they were 95 percent certain the real number lay somewhere between 392,979 and 942,636 deaths.
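To see why extrapolations of this kind carry such wide error bars, here is a minimal back-of-the-envelope sketch. Every input below (the population figure, the survey period, and the interval on the excess death rate) is an illustrative assumption, not the study's actual numbers; the point is only that a gap of a few deaths per 1,000 per year becomes a gap of hundreds of thousands of deaths once it is multiplied across an entire country.

```python
# Back-of-the-envelope sketch: scaling a surveyed mortality rate up to a country.
# All values are hypothetical placeholders, NOT the figures from the Lancet study.

population = 26_000_000   # assumed Iraqi population (illustrative)
years = 3.3               # assumed length of the post-invasion period (illustrative)

# Hypothetical 95% interval on the excess death rate, in deaths per 1,000 per year.
rate_low, rate_high = 4.5, 11.0

deaths_low = rate_low / 1000 * population * years
deaths_high = rate_high / 1000 * population * years

print(f"Extrapolated excess deaths: {deaths_low:,.0f} to {deaths_high:,.0f}")
# A spread of a few deaths per 1,000 per year turns into a spread of roughly
# half a million deaths once it is multiplied by 26 million people.
```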
The usually insightful Daniel Davis of Guardian Unlimited and Tim Lambert, an Australian science blogger, have also weighed in on the matter, calling any attempt to refute the report devious hack-work, especially since the administration seems content with a far lower number, about 50,000. Davis and Lambert analyze the data with their own brand of statistical posturing about survey sampling before Davis goes on to say that “there has to be some accountability here.”
I agree. There does have to be some accountability here. And while I'll stop short of saying the researchers lied or are frauds, I will point out that their excessively broad range, a span of more than half a million deaths around the 655,000 midpoint, speaks volumes: they have no idea. In fact, I would be just as accurate, perhaps more so, in saying that there were between one and one million deaths.
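To put a number on how broad that range is, the short calculation below uses only the figures already quoted: the two published bounds and the 655,000 everyone keeps repeating.

```python
# How wide is the published 95% interval relative to the number being reported?
# Uses only the figures quoted above.

low, high, point = 392_979, 942_636, 655_000

span = high - low                   # 549,657 deaths
half_span = span / 2                # about 274,800 deaths
relative = half_span / point * 100  # about 42 percent

print(f"Interval span: {span:,} deaths")
print(f"Roughly {point:,} plus or minus {half_span:,.0f} ({relative:.0f}% either way)")
```

In other words, the interval is give or take more than 40 percent of the figure the media has embraced.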
In covering the original study, the media seems to have settled on the middle ground, coming up with the 600,000 to 655,000 range. While I have no idea whether the number is accurate (the method seems less credible than I would expect from Johns Hopkins), I do know that some members of the media have become sloppier about accepting statistical reports as newsworthy simply because they seem credible, no matter the method, and always create a buzz of controversy.
That is how this topic ties into communication. All communicators and editors will be tempted from time to time to publish a statistical report that will generate a buzz (they always do), but they should consider that 'buzz' publishing drifts away from the intent of reporting, which is, simply put, to get at the truth. It seems to me that publishing this one, given the method and given that people lie when answering such surveys, did little to do that.
What do I mean? Well, if you asked the same number of citizens whether they had a loved one, or knew someone who had a loved one, who died on 9/11, and applied the same statistical reasoning that Davis used in his post to defend the study, then I'd wager the extrapolated death toll would exceed one million. Thank goodness it did not.
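To make that thought experiment concrete, here is a toy version of the arithmetic. Every parameter (family size, circle of acquaintances, sample size) is a made-up assumption; the point is only directional: once the question counts anyone who merely knows someone who lost a loved one, each real death gets reported many times over, and a naive extrapolation from the sample carries that inflation across the whole population.

```python
# Toy sketch of the 9/11 thought experiment above. Every parameter is a made-up
# assumption chosen to show the direction of the effect, not to estimate anything.

actual_deaths = 2_977             # approximate 9/11 death toll
us_population = 300_000_000       # rough U.S. population

loved_ones_per_victim = 10        # assumed close relatives/friends per victim
acquaintances_per_loved_one = 50  # assumed people who know each of those relatives

# People who could truthfully answer "yes" to the two-step question.
truthful_yes = actual_deaths * loved_ones_per_victim * acquaintances_per_loved_one

sample_size = 1_800               # assumed number of citizens surveyed
expected_yes_in_sample = truthful_yes / us_population * sample_size  # ~9 people

# Naive extrapolation: treat each "yes" as evidence of a death and scale the
# sample proportion back up to the whole country.
naive_toll = expected_yes_in_sample / sample_size * us_population

print(f"Actual deaths:           {actual_deaths:,}")
print(f"Naive extrapolated toll: {naive_toll:,.0f}")
# Fewer than 3,000 real deaths extrapolate to roughly 1.5 million, because the
# question counts the same death once for every person who knows the family.
```

Run as written, the toy survey lands around 1.5 million, well past the one-million mark, purely because of how the question is phrased.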