

Absence of evidence of Xenotropic Murine Leukemia Virus-related virus infection in persons with Chronic Fatigue Syndrome and healthy controls in the United States

William M Switzer*, Hongwei Jia, Oliver Hohn, HaoQiang Zheng, Shaohua Tang, Anupama Shankar, Norbert Bannert, Graham Simmons, R Michael Hendry, Virginia R Falkenberg, William C Reeves and Walid Heneine

Retrovirology 2010, 7:57  doi:10.1186/1742-4690-7-57


The Lo/Alter team (FDA/NIH) have tested these samples

Tom Kindlon   (2010-08-24 16:18)  Irish ME/CFS Association - for Information, Support & Research

We now have more evidence suggesting the absence of positives in this study could be a cohort issue.

As I previously pointed out, I do not believe the samples in this study are representative of Chronic Fatigue Syndrome (CFS) patients as normally defined, particularly those who attend specialist clinics [1,2].

A paper has just been published by researchers from the FDA and NIH which found MLV-like virus gag gene sequences in 32 of 37 (86.5%) of their CFS patients, compared with only 3 of 44 (6.8%) healthy volunteer blood donors [3].

In the accompanying material [4], they mention the following:
"10. How are the differences between the CDC and FDA study results being evaluated?

Differences in the results could reflect differences in the patient populations that provided the samples. Alternatively, undefined differences in the method of sample preparation could be contributing to the discordant test results. All of the scientists involved are working collaboratively to design experiments to quickly answer this scientifically puzzling question. An independent investigator at the National Heart, Lung, and Blood Institute (NHLBI) set up a test set of 36 samples, including known positives and presumed negatives. Both the FDA/NIH and CDC labs participated in this test, and the results showed that both labs were able to detect XMRV present at low levels in blinded samples. Additionally, the CDC laboratory provided 82 samples from their published negative study to FDA, who tested the samples blindly. Initial analysis shows that the FDA test results are generally consistent with CDC, with no XMRV-positive results in the CFS samples CDC provided (34 samples were tested, 31 were negative, 3 were indeterminate)."

[1] Kindlon T. Comment: "Type of “CFS” patients in this study could on its own explain the different findings to the Lombardi et al. (2009) study"
http://www.retrovirology.com/content/7/1/57/comments#416675

[2] Kindlon T. Comment: "Correction to: Type of "CFS" patients in this study could on its own explain the different findings to the Lombardi et al. (2009) study"
http://www.retrovirology.com/content/7/1/57/comments#414725

[3] Lo S-C, et al. (2010) Detection of MLV-related virus gene sequences in blood of patients with chronic fatigue syndrome and healthy blood donors. Proc Natl Acad Sci USA, doi:10.1073/pnas.1006901107.

[4] http://www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/ucm223232.htm [Last accessed: August 24, 2010]

Competing interests

No competing interests


Methodological deficiencies in the study by Switzer et al

Gerwyn Morris   (2010-07-23 13:48)  University of Ulster



A number of people here have already commented (quite eloquently) on the inappropriateness of using the Reeves 2005 criteria to diagnose people with ME/CFS. Indeed, using selection criteria which have roughly the same chance of separating people with this illness from a generally fatigued population as tossing a coin is not the most auspicious of starts. That said, however, there are also some serious shortcomings in the methodology of the Switzer et al. paper which would make detection of the virus, if present in vivo in the people included in this study, highly unlikely, regardless of the patient criteria used.

The terms analytical sensitivity and diagnostic sensitivity, although often used interchangeably, have entirely different meanings. Analytical sensitivity establishes the lowest theoretical level of detection of the assay target. This may be determined empirically via serial dilutions.

When such an assay is applied to a population for isolation and/or diagnostic purposes, however, another kind of sensitivity becomes the dominant concern (Saah and Hoover, 1997). Diagnostic sensitivity refers to the ability to detect a target substance in a processed sample from a person known to be infected.
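To make the distinction concrete, here is a toy simulation (an illustrative Python sketch with invented numbers, not a model of any real assay): analytical sensitivity is estimated from serial dilutions of a spiked standard, while diagnostic sensitivity is the fraction of known-infected samples, with their lower and more variable in vivo target levels, that actually test positive.

```python
import random

random.seed(1)

def assay_detects(copies_per_reaction, p_per_copy=0.4):
    """Toy assay: each target copy is independently amplified/detected
    with probability p_per_copy; the assay reads positive if any copy is."""
    return any(random.random() < p_per_copy for _ in range(copies_per_reaction))

# Analytical sensitivity: serial dilutions of a spiked standard.
for copies in (100, 10, 5, 1):
    hits = sum(assay_detects(copies) for _ in range(200))
    print(f"{copies:>3} copies/reaction: detected in {hits/200:.0%} of replicates")

# Diagnostic sensitivity: known-infected samples, where the in vivo target
# level may be far below the spiked standards (toy: 0-2 copies/reaction).
infected_samples = [random.choice([0, 1, 2]) for _ in range(100)]
detected = sum(assay_detects(c) for c in infected_samples)
print(f"Diagnostic sensitivity on known positives: {detected}/100")
```

An assay can thus look excellent on dilution panels yet still miss genuinely infected people, which is why the two figures must be established separately.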


The diagnostic sensitivity of the Switzer team's assays was not established. This is despite the fact that the CDC was sent blood samples from people known to be infected with XMRV. That they chose not to use these samples to establish the diagnostic sensitivity of their assays seems counterproductive, that is, if the goal was to successfully detect the virus. This criticism applies both to their PCR assays and to their Western blot approach. Finally, the decision to send samples to the Robert Koch Institute, whose own ELISA approach also lacks established diagnostic sensitivity, served no scientific purpose.

Although important methodological problems with the Switzer et al. study are highlighted above, the following is perhaps the most significant:

Dr. Norbert Bannert, one of the co-authors of the Switzer et al. study, was also involved in a 2006 study entitled "Transspecies Transmission of the Endogenous Koala Retrovirus" (Fiebig et al., 2006). This paper concerned the detection of an endogenous gammaretrovirus in koalas that “induces leukemias and immune deficiencies associated with opportunistic infections, such as chlamydiosis.” Given the obvious similarities, its methodology should have offered some relevant lessons for those attempting to detect XMRV in patient samples.

In the Fiebig et al. study, “a new KoRV isolate was obtained from mitogen-stimulated peripheral blood mononuclear cells (PBMCs)”. This technique was used in the Lombardi et al. (2009) study but not by the CDC, despite Dr. Bannert's previous experience in successfully isolating a gammaretrovirus with it.

Furthermore, “to study the host range of KoRV, human 293 kidney cells and the human T lymphocyte lines C8166 and CEM, as well as rat and mouse fibroblasts (rat1 and NIH 3T3, respectively), were used. Provirus integration was shown by PCR”. This was the same technique that was used for dramatically increasing proviral concentration in the Lombardi et al. study before PCR, and once again not used in this CDC study. One must ask why.

Finally, in the Fiebig et al. study, the entire env gene (p15E and gp70) was amplified from the DNA of the animal from which KoRV was isolated and sequenced, and an antiserum was generated by immunizing with the recombinant ectodomain of p15E of KoRV. In the KoRV study, therefore, the antiserum was produced from an infected host. The CDC had the opportunity to use such a method rather than rely on a synthetic clone but, mysteriously, declined to use the infected blood samples they were sent by the Whittemore Peterson Institute.

Note that in the Fiebig et al. KoRV study:
“Wistar rats were inoculated with cell-free KoRV (grown either on rat1 cells or 293 cells) or with KoRV-producing rat1 cells. Eleven of 12 animals were positive for p15E-specific antibodies, and four animals showed high levels of provirus integration in PBMCs at day 21 [...] indicating a productive infection in all 11 animals. The cell-associated virus load decreased, however, and 63 days postinoculation, no provirus was detected in the PBMCs of all inoculated rats.

“Despite this, coincubation of cell-free plasma and mitogen-stimulated PBMCs from these animals with 293 cells yielded infectious virus [...]. When organs (spleen, ovary, lymph node, lung, liver, and kidney) or PBMCs from two rats [...] were analyzed for provirus integration on day 70, no KoRV sequences were detected.” (Fiebig et al., 2000)

Thus, over two months post-infection, the proviral copy number in the rats' PBMCs, as well as in organs known to be viral reservoirs, had fallen too low to be detected by PCR; yet infectious virus was still isolated when their PBMCs were activated and cultured.

The implication here is that Switzer et al. went ahead with a PCR approach in their XMRV study despite the fact that at least one of the study's authors knew about the extreme improbability of detecting proviral DNA in a host chronically infected with a gammaretrovirus. The technique used in the Fiebig et al. study was developed to isolate PERV (Tacke et al., 2000). Considering that the CDC has studied PERV extensively because of the potential dangers this virus poses to transplant patients and the blood supply, their non-adherence to proven methodology for isolating a gammaretrovirus seems cavalier. After all, the potential levels of infection of healthy controls by XMRV demonstrated in peer-reviewed studies indicate a potential medical catastrophe in the making.

Many studies report failures to detect proviral DNA in hosts known to be infected. False negatives frequently occur when primers are not properly complementary, often because of intraspecies genetic variability. Klein et al. (1999) comment that a “major mismatch” of three or four nucleotide bases causes a complete failure of PCR.
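The point about primer mismatches can be illustrated with a trivial sketch (the 20-base sequences below are hypothetical, invented purely for illustration; the threshold of three mismatches is the “major mismatch” figure from Klein et al.):

```python
def mismatches(primer, target_site):
    """Count base mismatches between a primer and its binding site."""
    assert len(primer) == len(target_site)
    return sum(a != b for a, b in zip(primer, target_site))

primer       = "ATGCCTGGACGATTCAGTCA"  # hypothetical assay primer
reference    = "ATGCCTGGACGATTCAGTCA"  # strain the primer was designed against
variant_site = "ATGCCTGAACGTTTCAGACA"  # hypothetical divergent strain

for name, site in [("reference", reference), ("variant", variant_site)]:
    n = mismatches(primer, site)
    status = "likely PCR failure" if n >= 3 else "expected to amplify"
    print(f"{name}: {n} mismatches -> {status}")
```

A primer that matches the published strain perfectly can thus fail entirely on a variant differing at only a handful of positions, which is why diagnostic sensitivity on real infected samples matters.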

Loussert-Ajaka and others reported a lack of sensitivity in detecting HIV-2 provirus because of low proviral load. In fact, "PCR negative" is a diagnostic category describing HIV infections in which the proviral titre is too low to be detected by PCR (Woodfall et al., 1992). Given the above, expecting PCR to detect a low-titre gammaretrovirus without determining the diagnostic sensitivity of the assay would appear to be, at the very least, unrealistic.

In a similar vein, two of the reasons for the failure of serological assays are genetic heterogeneity and antigenic variability (Kaplan, 2003). This heterogeneity and variability is a particular problem in assaying MuLV-class viruses. In a 1980 study by O'Donnell and Nowinski, a panel of monoclonal antibodies specific to the gp70 and p15E epitopes of the env protein in ecotropic MuLV was used in an attempt to investigate the serological properties of a dualtropic MuLV of the same strain. No antibody reactions occurred. Upon investigation, this was determined to have been caused by very minor changes in the gp70 and p15E epitopes (O'Donnell and Nowinski, 1980).

Given this information, the failure of the CDC even to attempt to establish the diagnostic sensitivity of their serological tests, when they had the means to do so, is a major departure from the scientific method and makes the results obtained quite meaningless. In fact, the decision to send the samples to another continent for one of the serology tests maximized the chances of a false-negative result, because of the possibility of different strains existing in geographically distant parts of the world.

As a final comment, the recent criticisms of the Switzer et al. paper made by Dr. Suzanne Vernon deserve mention, as they highlight further methodological difficulties with this study:

“The samples from these three study cohorts were collected using different types of tubes, each of which has a distinct way of being processed. [N]one of the blood tubes used were of the same type used in the Lombardi study. (They used tubes containing sodium heparin that are intended for use with virus isolation). The blood tubes from the 18 Georgia registry patients are designed to collect whole blood and preserve nucleic acid; it is not clear where the plasma came from for these subjects since plasma cannot be obtained using these blood tube types. So the explanation for not finding XMRV in these samples is simple – this was a study designed to not detect XMRV using a hodge-podge sample set.” (Vernon, 2010)
References:
Fiebig U, Hartmann MG, Bannert N, Kurth R, and Denner J. Transspecies Transmission of the Endogenous Koala Retrovirus. J Virol. 2006 June; 80(11): 5651–5654.
Kaplan M, Reasons for False Negative (Seronegative) Test Results in Lyme Disease. 2003.
http://www.anapsid.org/lyme/lymeseroneg.html
Klein D, Janda P, Steinborn R, Müller M, Salmons B, Günzburg WH. Proviral load determination of different feline immunodeficiency virus isolates using real-time polymerase chain reaction: influence of mismatches on quantification. Electrophoresis. 1999 Feb; 20(2): 291-9.
Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, Mikovits JA. Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science. 2009 Oct 23; 326(5952): 585-9. Epub 2009 Oct 8.
Loussert-Ajaka, Simon F, Farfara I, Houhou N, Couto-Fernandez J, Dazza MC and Brun-Vézinet F. Comparative study of single and nested PCR for the detection of proviral HIV2 DNA. INSERM. 1994 May 10.
O'Donnell PV, and Nowinski RC. Serological analysis of antigenic determinants on the env gene products of AKR dualtropic (MCF) murine leukemia viruses. Virology. 1980 Nov; 107(1): 81-88.
Saah AJ, Hoover DR. "Sensitivity" and "specificity" reconsidered: the meaning of these terms in analytical and diagnostic settings. Ann Intern Med. 1997 Jan 1; 126(1): 91-94.
Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, Heneine W. Absence of evidence of Xenotropic Murine Leukemia Virus-related virus infection in persons with Chronic Fatigue Syndrome and healthy controls in the United States. Retrovirology. 2010 Jul 1; 7(1): 57.
Tacke SJ, Kurth R, Denner J. Porcine endogenous retroviruses inhibit human immune cell function: risk for xenotransplantation? Virology. 2000 Mar 1; 268(1): 87-93.
Vernon S. Blood from a Stone. 2010 Jul 1. http://www.cfids.org/xmrv/070110study.asp
Woodfall B, Schechter MT, Le TN, Craib KJ, Cassol S, Montaner JS, O'Shaughnessy MV. Low viral load as defined by negative PCR is associated with slower progression of HIV disease. Int Conf AIDS. 1992 Jul 19-24; 8.

Competing interests

No competing interests


What patient group are they studying?

Robin Durham   (2010-07-23 13:46)  n/a

I am a patient. I was given the diagnosis 15 years ago using the original Fukuda definition (which does include tender lymphadenopathy despite what the authors of the study say). I also meet the criteria for the Canadian Consensus definition. I would have been excluded from this study because I have some of the physical findings of the Canadian definition. If I don't have CDC defined CFS, what do I have?

Competing interests

Patient


Correction to: Type of "CFS" patients in this study could on its own explain the different findings to the Lombardi et al. (2009) study

Tom Kindlon   (2010-07-08 02:47)  Irish ME/CFS Association - for Information, Support & Research

Just a numerical correction to my comments and calculations [1] about the empiric criteria patients [2] used in this study [3]. Many readers may prefer simply to jump to the last paragraph for the figures, as the details may be a bit confusing.

The calculations I outlined apply if we didn't know what percentage of the empiric criteria patients [2] satisfy the 1994 definition for Chronic Fatigue Syndrome (CFS) [4] as it is normally applied.

However, for 11 (out of 51) of the patients, we know that the percentages are a little different. This is because 11 of the patients in this study come from the Wichita phase #2 study.

(This paragraph is unfortunately a bit technical.) We are given information about these patients in the study that defines the empiric criteria (see Table 5) [2]. Between 10 and 16 of the 43 patients would satisfy the Wichita phase #1 criteria. The authors claim that the figure is 16. However, they have ignored how the exclusion for major melancholic depressive disorder was applied: 6 of the 16 patients had been diagnosed with major depressive disorder with melancholia (MDDm) between 1997 and 2000 (see Table 2). According to the 1994 paper, exclusions include "any past or current diagnosis of a major depressive disorder with psychotic or melancholic features", meaning only 10 of the 43 would satisfy the criteria. If one says Wichita phase #1 patients should satisfy the criteria of the 2003 revision [5], the criteria are slightly different, as explained in this extract: "The 1994 case definition stated that any past or current diagnosis of major depressive disorder with psychotic or melancholic features, anorexia nervosa, or bulimia permanently excluded a subject from the classification of CFS. Because these illnesses may resolve with little or no likelihood of recurrence and only active disease or disease requiring prophylactic medication would contribute to confusion with evaluation of CFS symptoms, we now recommend that if these conditions have been resolved for more than 5 years before the onset of the current chronically fatiguing illness, they should not be considered exclusionary." There would seem to be a high probability that this did not occur for all 6 of these patients. That is, it seems very unlikely that all of these patients, who we know were found to have MDDm at some point between 1997 and 2000, were diagnosed in 1997 (rather than 1998-2000), that the MDDm resolved almost immediately after they were tested, and that they were then free of it for five years before developing new symptoms in 2002 such that they satisfied the CFS criteria when tested in 2003. (The CDC team claim only current MDDm is an exclusion [2], so they did not check against the five-year rule.)

11 of the 43 Wichita phase #2 patients were used for the XMRV study. Using the figures that between 10 (23.26%) and 16 (37.21%) of these 43 patients would be described as Wichita phase #1 patients, on average between 2.56 and 4.09 of the 11 would be Wichita phase #1 patients. For the other 40 patients in the XMRV study, based on the calculations in my previous comment, 2.72 would be Wichita phase #1 patients.

[Aside: the percentage of "normal" CFS patients would be higher in Wichita phase #2 than in most empiric criteria cohorts, because it is made up of a group of individuals including patients who had previously been diagnosed with CFS (in Wichita phase #1). One cannot use the Wichita phase #2 figures to extrapolate to other empiric criteria cohorts.]

Combining these figures, it would mean that on average 5.28-6.81 of the 51 patients in the current XMRV study would be considered Wichita phase #1 patients, or the sort of patients researchers (outside the CDC) consider to be CFS patients (i.e. satisfying the criteria in [4] and/or [5]). However, as I pointed out in the last comment [1], the patients in the current XMRV study are patients from epidemiological studies and may not be as functionally impaired as one would find in specialist CFS clinics.
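For what it is worth, the combined figures above can be reproduced with a few lines of Python (a sketch using only the numbers stated in this comment and in [1]):

```python
# Numbers stated in this comment and in [1].
wichita2_total = 43               # Wichita phase #2 patients with published data
phase1_low, phase1_high = 10, 16  # of the 43 satisfying Wichita phase #1 criteria
in_xmrv_study = 11                # Wichita phase #2 patients used in the XMRV study
other_patients = 2.72             # expected phase #1-type patients among the other 40

low  = in_xmrv_study * phase1_low  / wichita2_total   # 11 x 23.26% = 2.56
high = in_xmrv_study * phase1_high / wichita2_total   # 11 x 37.21% = 4.09

print(f"Of the 11: {low:.2f}-{high:.2f} Wichita phase #1-type patients")
print(f"Of all 51: {low + other_patients:.2f}-{high + other_patients:.2f}")
```

This reproduces the 2.56-4.09 range for the 11 Wichita phase #2 patients and the 5.28-6.81 total quoted above.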

References:

[1] Kindlon T. Comment: Type of "CFS" patients in this study could on its own explain the different findings to the Lombardi et al. (2009) study. http://www.retrovirology.com/content/7/1/57/comments#416675

[2] Reeves WC, Wagner D, Nisenbaum R, Jones JF, Gurbaxani B, Solomon L, Papanicolaou DA, Unger ER, Vernon SD, Heim C. Chronic fatigue syndrome--a clinically empirical approach to its definition and study. BMC Med. 2005 Dec 15;3:19

[3] Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, Heneine W. Absence of evidence of Xenotropic Murine Leukemia Virus-related virus infection in persons with Chronic Fatigue Syndrome and healthy controls in the United States. Retrovirology. 2010 Jul 1;7(1):57.

[4] Fukuda K, Straus SE, Hickie I, Sharpe MC, Dobbins JG, Komaroff A, and the International Chronic Fatigue Study Group. The chronic fatigue syndrome: a comprehensive approach to its definition and study. Ann Intern Med 1994;121:953-9.

[5] Reeves WC, Lloyd A, Vernon SD, Klimas N, Jason LA, Bleijenberg G, Evengard B, White PD, Nisenbaum R, Unger ER, International Chronic Fatigue Syndrome Study Group: Identification of ambiguities in the 1994 chronic fatigue syndrome research case definition and recommendations for resolution. BMC Health Services Research 2003, 3:25

Competing interests

No competing interests


Type of “CFS” patients in this study could on its own explain the different findings to the Lombardi et al. (2009) study

Tom Kindlon   (2010-07-07 02:23)  Irish ME/CFS Association - for Information, Support & Research

This study [1] uses the empiric criteria [2] to define Chronic Fatigue Syndrome (CFS). There have been many criticisms of these criteria as a method of defining CFS, e.g. [3-13]. Indeed, over 2000 individuals have signed a petition asking the CDC to stop using these criteria when doing CFS research [14]. The Chronic Fatigue Syndrome Advisory Committee (CFSAC), which provides advice and recommendations on issues related to CFS to the Secretary of Health and Human Services via the Assistant Secretary for Health, has said in one of its recommendations that it rejects the criteria [15]. I am not going to try to summarise all the existing criticisms in this comment, but will instead concentrate on some numerical calculations and the conclusions one can draw from them.

The paper starts by saying that “Chronic fatigue syndrome (CFS) is a complex illness that affects between 0.5 and 2 percent of adults in the U.S. [1, 2].” The references actually give estimates of 0.422% (Chicago study) [16] and 2.54% (Georgia study, a CDC study) [17], which don’t round to 0.5 and 2 per cent (the closest approximations would be 0.4 and 3 per cent!).

The CDC also did a study which used a definition of CFS comparable to the Chicago study’s (which found a prevalence of 0.422%); it found a prevalence of 0.235% [18]. I will refer to that latter CDC study as Wichita phase #1, to distinguish those patients from the Wichita patients in this paper (who could be described as Wichita phase #2 patients), as the criteria were different – see [2].

These three random-number studies were expensive and seem to be the only studies in the US that used comparable methodology.

The major difference between the two studies, that found prevalences of less than 0.5% (Wichita phase #1 and Chicago), and the study that found a prevalence of 2.54% (Georgia) is that the Georgia study used the empiric criteria as a method of operationalizing the 1994 definition while the other two studies did not.

The difference between the two lower prevalences (i.e. 0.235% and 0.422%) (from the Chicago and Wichita phase #1) may at least partly be explained by the different exclusions that the teams used. For example, the CDC team exclude patients with an abnormal Romberg test [20].

So when examining how much the empiric criteria inflate the prevalence rates for CFS, it is probably best to compare the two CDC studies (Wichita phase #1 and Georgia), i.e. 235 per 100,000 vs 2540 per 100,000. There was one innovation in the Georgia study: individuals who did not report fatigue at the telephone screening stage were also assessed. This meant “11.5% of subjects with CFS would not have been detected in previous studies that queried participants only for fatigue.” [17] So if one wants to compare like with like, it is 235 per 100,000 vs 2248 per 100,000. In other words, those diagnosed in the first study (Wichita phase #1) would on average make up only 10.45% of those diagnosed with the empiric criteria in the Georgia study.

However, the figure of 10.45% is likely to be an overestimate, because a team of researchers has found that the empiric criteria have a sensitivity of only 65% [6]. If the Wichita phase #1 cohort was comparable, that would mean that such individuals (Wichita phase #1 CFS patients) would make up only 6.8% (= 10.45% × 0.65) of a cohort of empiric criteria CFS patients.

Translating that to the current study, where one has a cohort of 51 empiric criteria CFS patients, that would mean an average of 3.46 Wichita phase #1 patients in the study! That is to say, if one uses the way the CDC themselves used to operationalize the 1994 definition [19], only 3.46 of the 51 patients in the current study would satisfy the criteria.
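The arithmetic in the last few paragraphs can be checked with a short script (a minimal Python sketch; the figures are those quoted above, and the intermediate rounding to 10.45% mirrors the comment's own working):

```python
# Figures quoted in the comment above.
georgia_prev  = 2540   # per 100,000, empiric criteria (Georgia study [17])
wichita1_prev = 235    # per 100,000, 1994 definition as applied (Wichita phase #1)
non_fatigue   = 0.115  # share of Georgia cases missed by fatigue-only screening
sensitivity   = 0.65   # empiric criteria sensitivity reported in [6]
cohort        = 51     # empiric criteria patients in the current XMRV study

like_for_like = georgia_prev * (1 - non_fatigue)         # 2247.9 per 100,000
frac_phase1   = round(wichita1_prev / like_for_like, 4)  # 0.1045, i.e. 10.45%
frac_adjusted = frac_phase1 * sensitivity                # about 6.8%
expected      = cohort * frac_adjusted                   # about 3.46 patients

print(f"{like_for_like:.0f} per 100,000 like-for-like; "
      f"{frac_phase1:.2%} -> {frac_adjusted:.2%} -> {expected:.2f} of 51")
```

Each step matches the numbers given in the text: 2248 per 100,000, 10.45%, 6.8%, and 3.46 of 51.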

It should also be remembered that patients picked up in random number epidemiological studies may be less functionally impaired than patients who are diagnosed by physicians in specialist centres. For example, a CDC study found that a group of (Wichita phase #1) CFS patients did an average of 48 hours a week of activity (between work, chores and other activities) including spending over 25 hours a week working [21]. This is a higher level of functioning than one tends to see in secondary and tertiary care for Chronic Fatigue Syndrome patients. For example, 563 CFS patients who attended rehabilitation centres in Belgium worked an average of 6.95 hours a week before the course and 5.66 hours a week after the course [22]. Also, most of the Wichita phase #1 patients didn’t have CFS on follow-up [23].

So it is quite feasible that even fewer than 3.46 patients in the study (Wichita phase #1-type CFS patients) would be the sort of patients one would see in secondary or tertiary care. This is relevant because it could well be the case that any pathogen, such as XMRV, might have a lower prevalence in the less functionally impaired patients that one finds in random number epidemiological studies than one would find in secondary and tertiary care.

These two reasons [i.e. (i) use of empiric criteria and (ii) use of patients picked up from epidemiological studies] alone could explain the different rates of XMRV found in this study and the previous US study [24].


References:

[1] Switzer WM, Jia H, Hohn O, Zheng H, Tang S, Shankar A, Bannert N, Simmons G, Hendry RM, Falkenberg VR, Reeves WC, Heneine W. Absence of evidence of Xenotropic Murine Leukemia Virus-related virus infection in persons with Chronic Fatigue Syndrome and healthy controls in the United States. Retrovirology. 2010 Jul 1;7(1):57.

[2] Reeves WC, Wagner D, Nisenbaum R, Jones JF, Gurbaxani B, Solomon L, Papanicolaou DA, Unger ER, Vernon SD, Heim C. Chronic fatigue syndrome--a clinically empirical approach to its definition and study. BMC Med. 2005 Dec 15;3:19

[3] Jason, LA, Najar N, Porter N, Reh C. Evaluating the Centers for Disease Control's empirical chronic fatigue syndrome case definition. Journal of Disability Policy Studies 2008, doi:10.1177/1044207308325995.

[4] Kindlon T. Criteria used to define chronic fatigue syndrome questioned. Psychosom Med. 2010 Jun;72(5):506-7.

[5] Jason L. Problems with the New CDC CFS Prevalence Estimates. Website of the International Association for CFS/ME http://www.iacfsme.org/IssueswithCDCEmpiricalCaseDefinitionandPrev/tabid/105/Default.aspx

[6] Jason LA, Evans M, Brown A, Porter N, Hunnell J, Anderson V, Lerch A. Sensitivity and Specificity of the CDC Empirical Chronic Fatigue Syndrome Case Definition. Psychology, 2010, 1: 9-16, doi:10.4236/psych.2010.11002.

[7] Jason LA, Porter N, Brown M, Brown A, & Evans M. (2010). A constructive debate with the CDC on the CFS empirical case definition. Journal of Disability Policy Studies, 20, 251-256. doi: 10.1177/1044207309359515

[8] Jason, L.A., & Richman, J.A. (2007). How science can stigmatize: The case of chronic fatigue syndrome. Journal of Chronic Fatigue Syndrome, 14, 85-103. doi: 10.1080/10573320802092146

[9] Kindlon T. Various comments on: Chronic fatigue syndrome--a clinically empirical approach to its definition and study. BMC Med. 2005 Dec 15;3:19 at: http://www.biomedcentral.com/1741-7015/3/19/comments

[10] Kindlon T. Various comments on: Wagner D, Nisenbaum R, Heim C, Jones JF, Unger ER, Reeves WC. Psychometric properties of the CDC Symptom Inventory for assessment of chronic fatigue syndrome. Popul Health Metr. 2005 Jul 22;3:8.
http://www.biomedcentral.com/1741-7015/3/19/comments

[11] Kindlon T, Various comments on: Reeves WC, Jones JF, Maloney E, Heim C, Hoaglin DC, Boneva RS, Morrissey M, Devlin R. Prevalence of chronic fatigue syndrome in metropolitan, urban, and rural Georgia. Population Health Metrics 2007, 5:5 doi:10.1186/1478-7954-5-5. http://www.pophealthmetrics.com/content/5/1/5/comments

[12] LaBelle S. Comment on: Reeves WC, Jones JF, Maloney E, Heim C, Hoaglin DC, Boneva RS, Morrissey M, Devlin R. Prevalence of chronic fatigue syndrome in metropolitan, urban, and rural Georgia. Population Health Metrics 2007, 5:5 doi:10.1186/1478-7954-5-5. http://www.pophealthmetrics.com/content/5/1/5/comments

[13] Johnson C. Comment on: Reeves WC, Jones JF, Maloney E, Heim C, Hoaglin DC, Boneva RS, Morrissey M, Devlin R. Prevalence of chronic fatigue syndrome in metropolitan, urban, and rural Georgia. Population Health Metrics 2007, 5:5 doi:10.1186/1478-7954-5-5. http://www.pophealthmetrics.com/content/5/1/5/comments

[14] Petition: CDC CFS research should not involve the empirical definition (2005)
http://www.ipetitions.com/petition/empirical_defn_and_cfs_research/

[15] The Chronic Fatigue Syndrome Advisory Committee (CFSAC) recommendations to the Secretary of Health and Human Services. http://www.hhs.gov/advcomcfs/recommendations/10302009.html

[16] Jason LA, Richman JA, Rademaker AW, Jordan KM, Plioplys AV, Taylor RR, McCready W, Huang CF, Plioplys S: A community-based study of chronic fatigue syndrome. Archives of Internal Medicine 1999, 159:2129-2137.

[17] Reeves WC, Jones JF, Maloney E, Heim C, Hoaglin DC, Boneva RS, Morrissey M, Devlin R: Prevalence of chronic fatigue syndrome in metropolitan, urban, and rural Georgia. Popul Health Metr 2007, 5:5.

[18] Reyes M, Nisenbaum R, Hoaglin DC, Unger ER, Emmons C, Randall B, Stewart JA, Abbey S, Jones JF, Gantz N, Minden S, Reeves WC. Prevalence and incidence of chronic fatigue syndrome in Wichita, Kansas. Arch Intern Med 2003;163:1530–6.

[19] Fukuda K, Straus SE, Hickie I, Sharpe MC, Dobbins JG, Komaroff A, and the International Chronic Fatigue Study Group. The chronic fatigue syndrome: a comprehensive approach to its definition and study. Ann Intern Med 1994;121:953–9.

[20] Unger ER, Nisenbaum R, Moldofsky H, Cesta A, Sammut C, Reyes M, Reeves WC. Sleep assessment in a population-based study of chronic fatigue syndrome. BMC Neurol. 2004 Apr 19;4:6.

[21] Solomon L, Nisenbaum R, Reyes M, Papanicolaou DA, Reeves WC. Functional status of persons with chronic fatigue syndrome in the Wichita, Kansas, population. Health Qual Life Outcomes. 2003 Oct 3;1:48.

[22] Rapport d’evaluation (2002–2004) portant sur l’execution des conventions de re-education entre le Comite de l’assurance soins de sante (INAMI) et les Centres de reference pour le Syndrome de fatigue chronique (SFC). 2006.

[23] Nisenbaum R, Jones JF, Unger ER, Reyes M, Reeves WC. A population-based study of the clinical course of chronic fatigue syndrome. Health Qual Life Outcomes. 2003 Oct 3;1:49.

[24] Lombardi VC, Ruscetti FW, Das Gupta J, Pfost MA, Hagen KS, Peterson DL, Ruscetti SK, Bagni RK, Petrow-Sadowski C, Gold B, Dean M, Silverman RH, Mikovits JA. Detection of an infectious retrovirus, XMRV, in blood cells of patients with chronic fatigue syndrome. Science. 2009 Oct 23;326(5952):585-9. Epub 2009 Oct 8.

Competing interests

No competing interests


Questions that the CDC must answer

Paul Watton   (2010-07-07 02:21)  Patient email

There are grave doubts emerging about the methods used by the CDC to validate their ability to detect XMRV.
See Mindy Kitei's blog - http://www.cfscentral.com/ - on 3rd July.

It is also alleged elsewhere that the CDC, having received a batch of 20 independently confirmed XMRV-positive samples from the Whittemore Peterson Institute, either ignored those samples or, worse, failed to find XMRV in them using their own methods and made no mention of that difficulty in the paper that was subsequently published.
Why was this issue not addressed in the peer review process?

The CDC clearly have some important questions to answer. Their response to these questions will be very revealing as to how genuine their research efforts have actually been.

Competing interests

None


Empirically flawed?

John Mitchell jr   (2010-07-07 02:20)  n/a email

This study did not actually use the 1994 Fukuda criteria to select its cohort; it used the 2005 empirical criteria (1), which the authors refer to simply as "revised" 1994 criteria.

However, if you read the 2005 paper (1), only 6 of the 46 individuals (13%) identified as having CFS during the telephone surveillance period (1997-2000) still had CFS according to the 1994 criteria when the study took place in 2003. These patients were only included in the study after the authors added several SF-36 and MFI subscales aimed at identifying "reduced activity", including two SF-36 subscales specifically devoted to mental health, the Social Functioning and Role Emotional (RE) subscales.

The substitution of reduced activity for fatigue has been questioned before, notably by Prof. Peter White in his peer review (3) of another paper by the authors, in which he notes: "The authors state that, '..those with a score < well-population medians on the general fatigue or reduced activity scales of the MFI were considered to meet fatigue criteria of the 1994 case definition.' This means that it would be possible to meet the fatigue criterion without significant fatigue; i.e. with reduced activity alone. This is inconsistent with the international study criteria for CFS, which require: 'clinically evaluated, unexplained, persistent or relapsing chronic fatigue (of least 6 months duration) that is of new or definite onset.'"

Prof. White goes on to note that the RE subscale "specifically ask(s) about change in function 'as a result of any emotional problems'", as well as "the RE subscale [having] the lowest correlation coefficient of any of the SF36 subscale scores with any of the three measures of CFS in one of the authors’ previous studies."(2)

There are many other substantial questions about whether "reduced activity" is a valid equivalent of fatigue, since it is well known that patients with depression or anxiety report a reduction in activity without having CFS (4,5). Leonard Jason of DePaul University has done several studies on the subject, one of which showed that 38% of patients with depression (but not CFS) were incorrectly classified as having CFS when the empirical definition was used, indicating a major specificity problem with that definition (6,7).

To touch further on the issue of depression and its potential misdiagnosis as CFS: the authors have repeatedly ignored their own recommendations on the issue (8), which state that major depressive disorder (MDD) must have been resolved for 5 years before an individual can be diagnosed with CFS in a research setting. However, in this and other studies the authors have stated that only people with current MDD are excluded.

After the reduced-activity SF-36 subscales were applied, the proportion of the surveillance cohort qualifying under the CDC's new definition of CFS jumped from 13% to 40% (1). Thus, by the authors' own admission, two thirds of the patients in the 2005 study did not qualify as having CFS according to the 1994 criteria.

The possibility that the patients included in these studies were misdiagnosed with CFS is reinforced by the fact that in the current study at least one patient scored 100 out of 100 on the Physical Functioning subscale, meaning that they answered "No, not limited at all" to every question on that subscale. At least one patient therefore had no limitations whatsoever due to physical problems, a startling finding for a disease which, in the authors' own words, "results in substantial reduction in previous levels of occupational, educational, social, or personal activities." Unfortunately, Role Emotional subscale scores were not provided.

As a result of this definitional reformulation, prevalence rates jumped between six- and tenfold virtually overnight.

All of which is interesting, of course, but perhaps a more important issue is the 0% prevalence that many of the negative XMRV studies have come up with. The CDC chose to cite an unpublished conference presentation suggesting a prevalence of 0.1% rather than several published (and unpublished) studies that have found prevalence rates between 2% and 6%.

Also, while several studies in prostate cancer (PC) have reported a significant association between PC and XMRV (as noted in this paper), it is important to note that the CDC and its German collaborators were two of the groups that did not find such an association (9,10). Perhaps in the future the CDC should consider working with groups that have demonstrated an ability to find XMRV as well as groups that have demonstrated an ability not to find it.

References

1. Reeves et al. 'Chronic Fatigue Syndrome – A clinically empirical approach to its definition and study' BMC Medicine 2005, 3:19

2. Wagner et al. 'Psychometric properties of the CDC symptom inventory for the assessment of chronic fatigue syndrome' Population Health Metrics 2005, 3:8

3. http://www.biomedcentral.com/imedia/1083914155124266_comment.pdf

4. Jerstad et al. 'Prospective reciprocal relations between physical activity and depression in female adolescents' J Consult Clin Psychol. 2010 Apr;78(2):268-72.

5. Strohle 'Physical activity, exercise, depression and anxiety disorders' J Neural Transm (2009) 116:777–784

6. Jason et al. 'Evaluating the Centers for Disease Control’s Empirical Chronic Fatigue Syndrome Case Definition' Journal of Disability Policy Studies September 2009 vol. 20 no. 2 93-100

7. Jason et al. 'A Constructive Debate With the CDC on the Empirical Case Definition of Chronic Fatigue Syndrome' Journal of Disability Policy Studies 2010; 20; 251

8. Reeves et al. 'Identification of ambiguities in the 1994 chronic fatigue syndrome research case definition and recommendations for resolution' BMC Health Services Research 2003, 3:25

9. Hohn et al. 'Lack of evidence for xenotropic murine leukemia virus-related virus(XMRV) in German prostate cancer patients' Retrovirology. 2009 Oct 16;6:92.

10. Switzer et al. 'Prevalence of Xenotropic Murine Leukemia Virus in Prostate Cancer' presented at 2010 CROI http://www.retroconference.org/2010/Abstracts/37160.htm

Competing interests

CFS patient


Yet another definition used

Kelly Latta   (2010-07-04 00:18)  medical writer email

The CDC paper enters yet another definition for CFS patients into the XMRV race.

The "revised" 1994 CDC definition may be a colloquial reference to the rarely used empirical definition, described in the 2005 paper published by Dr. William Reeves.

Lombardi et al., which originally found XMRV in CFS patients, used the original 1994 Fukuda definition.

A key element that may be missing is severity. One of the cardinal CFS symptoms is post-exertional malaise that is unrelieved by rest, is not the result of ongoing exertion, and lasts more than 24 hours, as required by the 2003 Canadian Consensus definition.

Severity is of course what differentiates vague symptoms commonly found in the general population from pathological symptoms.

It should also be noted that only three of the 51 cases were apparently of acute onset, and these would be the patients most likely to show signs of viral infection. Unfortunately, none of the papers published thus far tells us what, if any, other viruses were also found in the patient cohort.

The Switzer paper also states, "...The physical findings in persons meeting the Canadian definition may signal the presence of a neurologic condition considered exclusionary for CFS..."

This is a very confusing statement, since the WHO classifies CFS in the ICD-10, along with myalgic encephalomyelitis and post-viral fatigue syndrome, exclusively as a brain (neurological) disease under G93.3. This can be verified by checking the alphabetical index of the 2006 edition of the ICD-10, found on the WHO website.

Are they referring to yet another neurological disease other than ME/CFS?


References
Switzer, W. M. et al. Retrovirology doi:10.1186/1742-4690-7-57 (2010).
Lombardi, V. C. et al. Science 326, 585-589 (2009).
Reeves, W. C. et al. BMC Med doi:10.1186/1741-7015-3-19 (2005).

Competing interests

None


Use of Reeves criteria, inter alia, renders this study patently invalid

Justin Reilly   (2010-07-03 11:59)  _ email

Quote from paper:

The 1994 International CFS case definition and the Canadian Consensus Criteria are different and do not necessarily identify similar groups of ill persons. Most notably, the Canadian Criteria include multiple abnormal physical findings such as spatial instability, ataxia, muscle weakness and fasciculation, restless leg syndrome, and tender lymphadenopathy. The physical findings in persons meeting the Canadian definition may signal the presence of a neurologic condition considered exclusionary for CFS and thus the XMRV positive persons in the Lombardi et al. study may represent a clinical subset of patients [11].

The Canadian Criteria are the only valid definition of ME/CFIDS. The authors attempt to reframe "CFS" as tired people by using the Reeves definition exclusively and by saying that neurologic disease and signs are inconsistent with "CFS". They also approve of the Oxford definition used in the Dutch study, incorrectly stating that the Dutch and UK cohorts were well characterized while the WPI's was not. They are explicitly insisting that the Oxford and Reeves definitions are valid and that the Canadian Consensus Criteria are invalid!

Reeves and Oxford definitions are patently invalid. Oxford defines tired people and Reeves defines low functioning people.

Heneine was the person who contributed the most error to the CDC's failed DeFreitas 'replication attempt'.

This study is totally incredible on its face for the above reasons, and because, had Reeves and Heneine published findings different from those they did, they would have been acting directly against their personal interests.

Conversely, the fact that the NIH (and, to a lesser extent, the FDA) would be acting against their interests by publishing positive data showing a connection between ME/AIDS-X and XMRV makes the NIH/FDA study even more credible than it would otherwise be.

The study, in summary, is facially invalid.

Competing interests

Patient

