Thursday, November 02, 2006

MEDIA ALERT: LANCET REPORT CO-AUTHOR RESPONDS TO QUESTIONS



As described in our October 18 Media Alert, ‘Democracy And Debate - Killing Iraq’ (http://www.medialens.org/alerts/06/061018_democracy_and_debate.php), a recent study published in The Lancet medical journal estimated that 655,000 Iraqi people have been killed as a result of the March 2003 US-UK invasion of Iraq.

The media coverage has been appalling - the words ‘Lancet’ and ‘Iraq’ have appeared in national UK newspaper articles some 30 times, many of these mentions only in passing. There has been no serious attempt to examine the Lancet’s figures or to explain how they compare with earlier findings from other studies. Anyone aspiring to understand the issue could do so only by visiting small, alternative websites, such as those run by Tim Lambert (http://scienceblogs.com/deltoid/) and Stephen Soldz (http://psychoanalystsopposewar.org/blog/2006/10/24/iraq-body-count-finds-a-task-worth-their-time/), and our own message board (http://www.medialens.org/board/).

To its credit, the BBC website has tried harder than most mainstream media to report the issue honestly. In particular, BBC world affairs correspondent Paul Reynolds - who has frequently engaged with Media Lens readers - responded to complaints by agreeing to invite questions from members of the public and to forward them to the authors of the Lancet report. On October 30, the BBC posted an edited version of answers from Les Roberts:

http://news.bbc.co.uk/1/hi/talking_point/6099020.stm

Below, we are publishing Roberts’ unedited answers. We have also added Roberts’ response to an editorial by Steven Moore in the Wall Street Journal.

1. How do you know that you are not reporting the same fatality multiple times?

For example if you were to ask people in the UK if they know anyone who has been involved in a traffic accident most would say they do. Applying your logic that means there are 60 million accidents every year.
Andrew M, London, UK

Les Roberts: That is an excellent question. To be recorded as a death in a household, the decedent had to have spent most of the nights during the three months before their death “sleeping under the same roof” with the household that was being interviewed. This may have made us undercount some deaths (soldiers killed during the 2003 invasion, for example), but it addressed your main concern: no two households could claim the same death event.

2. It seems the Lancet has been overrun by left-wing sixth formers.

The report has a flawed methodology and deceit is shown in the counting process. What is your reaction to that?
Ian, Whitwick, UK

LR: Almost every researcher who studies a health problem is opposed to that health problem. For example, few people who study measles empathize with the virus. Thus, given that war is an innately political issue, and that people examining the consequences of war are generally opposed to the war’s conception and continuation, it is not surprising that projects like these are viewed as being highly political. That does not mean that the science is any less rigorous than a cluster survey looking at measles deaths. This study used the standard approach for measuring mortality in times of war, it went through a rigorous peer-review process and it probably could have been accepted into any of the journals that cover war and public health.

The Lancet is a rather traditional medical journal with a long history and is not seen as “left-wing” in the public health and medical communities. The types of different reports (medical trials, case reports, editorials) in the Lancet have been included for scores of years. The Lancet also has a long history of reporting about the adverse effects of war, and the world is a more gentle place for it.

3. Why is it so hard for people to believe the Lancet report? I am an Iraqi and can assure you that the figure given is nearer to the truth than any given before or since.
S Kazwini, London, UK

LR: I think it is hard to accept these results for a couple of reasons. First, people do not see the bodies. But in the UK there are well over 1,000 deaths a day, and people do not see those bodies either. Secondly, people feel that all those government officials and all those reporters must be detecting a big portion of the deaths, when in actuality, during times of war, it is rare for even 20% of deaths to be detected. Finally, there has been so much media attention given to the surveillance-based numbers put out by the coalition forces, the Iraqi Government and a couple of corroborating groups that a population-based number is a dramatic contrast.
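As a rough check of that UK figure (the population and death-rate values below are approximate assumptions for illustration, not numbers from the study):

    # Approximate 2006 figures, assumed for illustration only
    uk_population = 60_000_000
    crude_death_rate = 10 / 1000          # roughly 10 deaths per 1,000 people per year
    deaths_per_day = uk_population * crude_death_rate / 365
    print(f"{deaths_per_day:.0f} deaths per day in the UK")   # ~1,600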

4. Why do you think some people are trying to rubbish your reports, which use the same technique as used in other war zones for example in Kosovo?

Another group, which uses only English-language reports - Iraq Body Count - constantly rubbishes your reports. Again, why do you think that is?
Mark Webb, Dublin, Ireland

LR: I suspect there are many different groups with differing motives.

5. Can you explain, if your figures are correct, why 920 more people were dying each day than officially recorded by the Iraqi Ministry of Health - implying huge fraud and/or incompetence on their behalf?
Dan, Scotland

LR: It is really difficult to collect death information in a war zone! In 2002, there was a terrible meningitis outbreak in Katana Health Zone in the eastern Democratic Republic of Congo (DRC). The health zone was supported by the Belgian Government and had perhaps the best disease surveillance network in the entire country. A survey by the NGO International Rescue Committee showed that only 7% of those meningitis deaths were recorded by the clinics, hospitals and government officials. Patrick Ball at Berkeley showed similar insensitivity in press reporting in Guatemala during the years of high violence in the 1980s. I do not think that very low reporting implies fraud.
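To illustrate the point about surveillance sensitivity (the daily figure and the sensitivity values below are invented for illustration, not taken from the studies):

    # If passive surveillance captures only a small fraction of deaths, official
    # counts will sit far below a survey-based estimate without any fraud.
    survey_estimate_per_day = 1000            # hypothetical survey-based excess deaths per day
    for sensitivity in (0.07, 0.20, 0.50):
        recorded = survey_estimate_per_day * sensitivity
        print(f"sensitivity {sensitivity:.0%}: {recorded:.0f} recorded, "
              f"{survey_estimate_per_day - recorded:.0f} unrecorded per day")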

6. As an analyst myself I would like to know how reliable the method itself actually is.

Les Roberts and his colleagues claim to have used the same method to estimate deaths in Iraq as is used to estimate deaths in natural disasters. Is there any evidence that the method is accurate? By this I mean a comparison of the number of actual deaths after a natural disaster with estimates of the number of deaths.
Rickard Loe, Stockholm, Sweden

LR: That is a good question. There is a little evidence of which I am aware. Note that the 2004 and 2006 studies found similar results for the pre-invasion and initial post-invasion period, which at least implies reproducibility. I led a 30-cluster mortality survey in Kalima in the DRC in 2001. The relief organization Merlin did a nutritional survey and measured mortality in the same area, with a recall period that covered part of our survey. Both were cluster surveys, Merlin used a different technique to select houses, and we obtained statistically identical results. In a couple of refugee settings, cluster surveys have produced similar estimates to grave monitoring.

In 1999, in Katana Health Zone in the Congo, I led a mortality survey where we walked a grid over the health zone and interviewed 41 clusters of 5 houses at 1 km spacings. In that survey, we estimated that 1,600 children had died of measles in the preceding half year. A couple of weeks later we did a standard immunization coverage survey (30 clusters of 7 children, selected proportional to population) that asked about measles deaths, and we found an identical result.

I suspect that Demographic Health Surveys or the UNICEF MICS surveys (which are both retrospective cluster mortality approaches) have been calibrated against census data but I do not know when or where.
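For readers unfamiliar with the method, here is a minimal sketch, with invented numbers, of how a retrospective cluster mortality survey turns household interviews into a crude death rate. It is not the authors' code, and it omits the cluster weighting and confidence-interval calculations a real analysis would include:

    # Each cluster: (deaths reported in the recall period, person-months of exposure)
    clusters = [
        (4, 40 * 7 * 14),   # e.g. 40 households of ~7 people over a 14-month recall
        (2, 40 * 7 * 14),
        (6, 40 * 7 * 14),
    ]

    deaths = sum(d for d, _ in clusters)
    person_months = sum(pm for _, pm in clusters)

    # Crude death rate, expressed per 1,000 people per year
    rate = deaths / person_months * 12 * 1000
    print(f"{rate:.1f} deaths per 1,000 per year")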

7. My understanding is that this study reports ten times more deaths attributable to the war than other studies because this is the only one to use statistical methods to make inferences about the mortality rate across the whole population.

Other studies only record verifiable deaths, which one would expect to constitute only a small part of the total number. Am I correct?
Matthew, Appleton

LR: Yes.

8. It seems to me that the timing of the publication of the 2004 and 2006 reports - in both cases shortly before a U.S. election - was a mistake.

Does Mr Roberts regret the timing of the release of the two reports or does he feel they achieved some benefit?
Mik Ado, London, UK

LR: Yes. Both were unfortunate timing. As I said at the time of the first study, I lived in fear that our Iraqi colleagues and interviewers would be killed if we had finished a survey in mid-September and it took two months for the results to get out. This notion has been widely misquoted as saying we wanted to influence the election….as if the two parties somehow had different positions on the war in Iraq. I think in Iraq, a post-election publication in 2004 would have been seen as my colleagues knowing something but keeping it hidden. It was also unfortunate that the attention span of the U.S. media is short during election seasons.

More detailed questions from Joe Emersberger

9. Lancet 2 found a pre-invasion death rate of 5.5 per 1,000 people per year, while the UN has an estimate of about 10. Isn't that evidence of inaccuracy in the study?

LR: The last census in Iraq was a decade ago and I suspect the UN number is somewhat outdated. The death rate in Jordan and Syria is about 5. Thus, I suspect that our number is valid. Note that if we are somehow under-detecting deaths, then our death toll would have to be too low, not too high, both because a) we must be missing a lot of deaths, and b) the ratio of violent to non-violent deaths is so high.

I find it very reassuring that both studies found similar pre-invasion rates, suggesting that the extra two years of recall did not result in dramatic under-reporting… a problem recorded in Zaire and Liberia in the past.
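A worked illustration of why under-detection would push the estimate down rather than up (the population-time and post-invasion rate below are rough, illustrative values, not the study's own calculation):

    population_years = 26_000_000 * 3.3       # assumed population x years of occupation
    pre_rate, post_rate = 5.5, 13.3           # deaths per 1,000 per year, assumed for illustration

    def excess(pre, post):
        return (post - pre) / 1000 * population_years

    print(f"{excess(pre_rate, post_rate):,.0f} excess deaths")

    # If the survey missed, say, 20% of deaths both before and after the invasion,
    # correcting for that raises the excess estimate rather than lowering it:
    completeness = 0.8
    print(f"{excess(pre_rate / completeness, post_rate / completeness):,.0f} excess deaths")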

10. The pre-invasion death rate you found for Iraq was lower than for many rich countries. Is it credible that a poor country like Iraq would have a lower death rate than a rich country like Australia?

LR: Yes. Jordan and Syria have death rates far below that of the UK because the population in the Middle East is so young. Over half of the population in Iraq is under 18. In the West, the elderly make up a much larger share of the population, and they die at a much higher rate.
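A simple worked example of this age-structure effect (the age bands and rates below are invented for illustration):

    # (share of population, deaths per 1,000 per year) for three broad age bands
    young_country = [(0.55, 2.5), (0.40, 4.0), (0.05, 60.0)]   # young age structure
    older_country = [(0.20, 0.5), (0.60, 2.0), (0.20, 45.0)]   # Western-style age structure

    def crude_rate(bands):
        return sum(share * rate for share, rate in bands)

    print(f"young country: {crude_rate(young_country):.1f} per 1,000 per year")   # ~6.0
    print(f"older country: {crude_rate(older_country):.1f} per 1,000 per year")   # ~10.3

With these invented numbers the younger population has worse age-specific rates in every band below old age, yet its crude death rate is lower, simply because so few of its people are elderly.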

11. A research team led by physicists Sean Gourley and Neil Johnson of Oxford University and economist Michael Spagat has asserted in an article in Science that the second Lancet study is seriously flawed due to "main street bias". Is this a valid, well-tested concept, and is it likely to have affected your work significantly?

LR: I have done (that is, designed, led, and gone to the houses with interviewers) at least 55 surveys in 17 countries since 1990, most of them retrospective mortality surveys such as this one. I have measured at different times self-selection bias, bias from the families with the most deaths leaving an area, absentee bias… but I have never heard of “main street bias.” I have measured the population density of a cluster during mortality surveys in Sierra Leone, Rwanda, the Democratic Republic of Congo, and the Republic of Congo, and in spite of the conventional wisdom that crowding is associated with more disease and death, I have never been able to detect this during these conflicts, where malaria and diarrhoea dominated the mortality profile.

We worked hard in Iraq to give every street segment an equal chance of being selected. We worked hard to give each separate house an equal chance of being selected. I do not believe that this “main street bias” arose because a) about a quarter of the clusters were in rural areas, b) main streets were roughly as likely to be selected as any others, c) most urban clusters spanned 2-3 blocks as we moved in a chain from house to house, so that the initially selected street usually did not provide the majority of the 40 households in a cluster, and d) people being shot was by far the main mechanism of death, and we believe this usually happened away from home. Realize, there would have to be both a systematic selection of one kind of street by our process and a radically different rate of death on that kind of street in order to skew our results. We see no evidence of either.
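A back-of-envelope way to see why both conditions have to hold (the shares and rates below are purely hypothetical):

    # Bias only appears if main-street households are over-sampled AND die at a
    # substantially different rate; here both effects are assumed to be modest.
    true_main_share = 0.25        # assumed fraction of households actually on main streets
    sampled_main_share = 0.35     # assumed fraction of sampled households on main streets
    rate_main, rate_other = 12.0, 10.0   # assumed deaths per 1,000 per year

    true_rate = true_main_share * rate_main + (1 - true_main_share) * rate_other
    sampled_rate = sampled_main_share * rate_main + (1 - sampled_main_share) * rate_other
    print(f"true {true_rate:.1f}, sampled {sampled_rate:.1f}, "
          f"bias {100 * (sampled_rate / true_rate - 1):.0f}%")

Even with a ten-point over-sampling of main streets and a 20% higher death rate there, the estimate moves by only about 2%; to inflate a death toll several-fold, both distortions would have to be very large.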

12. In Slate Magazine, Fred Kaplan has alleged that

"....if a household wasn't on or near a main road, it had zero chance of being chosen. And "cluster samples" cannot be seen as representative of the entire population unless they are chosen randomly." Is Kaplan's statement true?

LR: His comment about proximity to main roads is just factually wrong! As far as cluster surveys go, they are never perfect; however, they are the main way to measure death rates in this kind of setting. See the SMART initiative at www.smartindicators.org.

13. Madelyn Hicks, a psychiatrist and public health researcher at King's College London in the U.K., says she "simply cannot believe" the paper's claim that 40 consecutive houses were surveyed in a single day. Can you comment on this?

LR: During my DRC surveys I planned on interviewers each covering 20 houses a day, taking about 7 minutes per house. Most of the time in a day was spent on travel and on finding the randomly selected household. In Iraq in 2004, the interviews took about twice as long, and it usually took a two-person team about 3 hours to interview a 30-house cluster. I remember one rural cluster that took about 6 hours, and we got back after dark. Nonetheless, Dr. Hicks’s concerns are not valid, as on many days in 2004 one team interviewed two clusters.

14. A recent Science Magazine article stated that Gilbert Burnham (one of your co-authors) didn’t know how the Iraqis on the survey team conducted their work. The article also claimed that raw data was destroyed to protect the safety of interviewees. Is this true?

LR: These statements are simply not true and do not reflect anything said by Gilbert Burnham! He has submitted a letter to the editors of Science in response, which I hope they will print.

15. A UNDP study carried out a survey 13 months after the war that had a much larger sample size than both Lancet studies and found about one-third the number of deaths that your team found. Given the much larger sample size, shouldn't we assume the UNDP study was more accurate and that therefore your numbers are far too high?

LR: The UNDP study was much larger, was led by the highly revered Jon Pederson at Fafo in Norway, but was not focused on mortality. His group conducted interviews about living conditions, which averaged about 82 minutes, and recorded many things. Questions about deaths were asked, and if there were any, there were a couple of follow-up questions.

A) I suspect that Jon’s mortality estimate was not complete. I say this because the overall non-violent mortality estimate was, I am told, very low compared with our 5.0 and 5.5/1,000/year estimates for the pre-war period, which many critics (above) claim seem too low. Jon sent interviewers back to the same houses after the survey was over and asked just about deaths of children under five. The same houses reported ~50% more deaths the second time around. In our surveys, we sent medical doctors who asked primarily about deaths. Thus, I think we got more complete reporting.

B) This UNDP survey covered about 13 months after the invasion. Our first survey recorded almost twice as many violent deaths from the 13th to the 18th months after the invasion as it did during the first 12 (see figure 2 in the 2004 Lancet article). The second survey found an excess rate of 2/1,000/year over the same period, corresponding to approximately 55,000 deaths by April of 2004 (see table 3 of the 2006 Lancet article). Thus, the rates of violent death recorded by the two surveys are not so divergent.
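A rough check of that 55,000 figure (the population value below is an assumption, not from the paper):

    population = 26_000_000       # assumed Iraqi population
    excess_rate = 2 / 1000        # excess deaths per person per year
    years = 13 / 12               # roughly March 2003 to April 2004
    print(f"{population * excess_rate * years:,.0f} excess deaths")   # ~56,000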


Les Roberts Responds To Steven Moore Of The Wall Street Journal

Moore’s editorial can be read here: http://www.opinionjournal.com/editorial/feature.html?id=110009108

Distinction between criticism and fabrication regarding deaths in Iraq

I read with interest the October 18th editorial by Steven Moore reviewing our study reporting that an estimated 650,000 deaths were associated with the 2003 invasion and occupation of Iraq. I had spoken with Mr. Moore the week before when he said that he was writing something for the Wall Street Journal to put this survey in perspective. I am not surprised that we differed on the current relevance of 10-year-old census data in a country that had experienced a major war and mass exodus.

I am not surprised at his rejection of my suggestion that the references in a web report explaining the methodology for lay people and reporters were not the same as the references in our painstakingly written, peer-reviewed article. What is striking is Mr. Moore’s statement that we did not collect any demographic data, and his implication that this makes the report suspect.

This is curious because not only did I tell him that we asked about the age and gender of the living residents in the houses we visited, but Mr. Moore and I discussed, verbally and by e-mail, his need to contact the first author of the paper, Gilbert Burnham, in order to acquire this information, as I did not have the raw data. I would assume that this was simply a case of multiple misunderstandings, except that our first report in the Lancet in 2004, referenced in our article as describing the methods, states, “…interviewees were asked for the age and sex of every current household member.”

Thus, it appears Mr. Moore had not read the description of the methods in our reports. It is not important whether this fabrication that “no demographic data was collected” is the result of a subconscious need to reject the results or of intentional deception. What is important is that Mr. Moore and many others are profoundly uncomfortable that our government might have inadvertently triggered 650,000 deaths.

Most days in the US, more than 5,000 people die. We do not see the bodies. We cannot, from our household perspective, sense the fraction from violence. We rely on a functional governmental surveillance network to do that for us. No such functional network exists in Iraq. Our report suggests that, on top of the 300 deaths that must occur in Iraq each day from natural causes, there have been approximately 500 “extra” deaths each day, mostly from violence.
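As a rough check of that daily figure (the length of the period is approximate):

    excess_deaths = 655_000
    days = 40 * 30.4              # roughly March 2003 to July 2006, in days
    print(f"{excess_deaths / days:.0f} excess deaths per day")   # ~540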

Of any high-profile scientific report in recent history, ours might be the easiest to verify. If we are correct, most deaths in the morgues and graveyards of Iraq during the occupation would have been due to violence. If Mr. Bush’s “30,000 more or less” figure from last December is correct, fewer than 1 in 10 deaths has been from violence. Let us address the discomfort of Mr. Moore and millions of other Americans not by uninformed speculation about epidemiological techniques, but by having the press travel the country and tell us how people are dying in Iraq.
