If it's just for the public, I find the paper on PubMed and just use those details, like this:
van Lent M, IntHout J, Out HJ. Differences between information in registries and articles did not influence publication acceptance. J Clin Epidemiol. 2015 Sep;68(9):1059-67. doi: 10.1016/j.jclinepi.2014.11.019. Epub 2014 Nov 29. PMID 25542517. http://www.jclinepi.com/article/S0895-4356(14)00497-1/fulltext
A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes
The 39% figure derives from the team's subjective assessments of success or failure (see graphic, 'Reliability test'). Another method assessed whether a statistically significant effect could be found, and produced an even bleaker result. Whereas 97% of the original studies found a significant effect, only 36% of replication studies found significant results.
The team also found that the average size of the effects found in the replicated studies was only half that reported in the original studies.
John Ioannidis, an epidemiologist at Stanford University in California, says that the true replication-failure rate could exceed 80%, even higher than Nosek's study suggests. This is because the Reproducibility Project targeted work in highly respected journals, the original scientists worked closely with the replicators, and replicating teams generally opted for papers employing relatively easy methods — all things that should have made replication easier.
But they will have a harder time shrugging off the latest study, says Gelman. “This is empirical evidence, not a theoretical argument. The value of this project is that hopefully people will be less confident about their claims.”
Nosek believes that other scientific fields are likely to have much in common with psychology.
The Editor-in-Chief invites timely articles to be published as minireviews. Minireviews are intended to be short and focused on a contemporary topic. Invited minireviews consist of approximately 6,000 words of text and 25-30 scientific references, and must contain one figure highlighting the theme of the article, complete with an explanatory figure legend.
If you want a medical journal format, or you're not sure what you'll need yet, www.citethisforme.com is good. Paste the journal article title into the main search box (select the Add Journal tab), choose the article you want and add it to your list, and so on for each reference. At the end, pick your referencing format for the whole list (you can pick by journal name, e.g., BMJ), then to create the list choose Download > Copy and Paste. You can do this for several formats if you're not sure which one you'll use, and you can do it all without signing up.
For psychology journals, I use Google Scholar, scholar.google.com (shock horror). Paste the title of the article into the search box, then click the "cite" link beneath the article. It gives you three formatting options, including APA style (but not medical journal styles). You just select, copy and paste into your doc. It makes occasional errors, like not capitalising all words in the journal title or leaving out the final page number, but they're easy to spot.
Google Scholar doesn't do DOIs. If I need them, I get them in one go for my whole reference list from Crossref: http://www.crossref.org/simpleTextQuery. You paste your whole reference list into the box, and it automatically adds the DOI to the end of each reference.
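If you'd rather script that DOI lookup, Crossref also has a public REST API at api.crossref.org; here's a minimal sketch of the same idea. The endpoint and the query.bibliographic parameter are Crossref's, but the helper names are my own, and you'd want to check the current API docs before relying on this:

```python
# Sketch: look up a DOI for a free-text reference via the Crossref REST API.
# The api.crossref.org/works endpoint and "query.bibliographic" parameter are
# Crossref's public API; function names here are just illustrative.
import json
import urllib.parse
import urllib.request


def build_crossref_query(reference, rows=1):
    """Build a Crossref bibliographic query URL for one free-text reference."""
    params = urllib.parse.urlencode({
        "query.bibliographic": reference,
        "rows": rows,
    })
    return "https://api.crossref.org/works?" + params


def lookup_doi(reference):
    """Return the best-matching DOI for a reference string, or None.

    Note: this makes a live network request; Crossref asks polite users to
    identify themselves (e.g. a mailto in the User-Agent), omitted here.
    """
    with urllib.request.urlopen(build_crossref_query(reference)) as resp:
        items = json.load(resp)["message"]["items"]
    return items[0]["DOI"] if items else None
```

To mirror the simpleTextQuery workflow, you'd loop `lookup_doi` over each line of your reference list and append the result, keeping in mind that the top match for a mangled reference isn't always the right paper.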
But these disparate results don’t mean that studies can’t inch us toward truth. “On the one hand, our study shows that results are heavily reliant on analytic choices,” Uhlmann told me. “On the other hand, it also suggests there’s a there there. It’s hard to look at that data and say there’s no bias against dark-skinned players.” Similarly, most of the permutations you could test in the study of politics and the economy produced, at best, only weak effects, which suggests that if there’s a relationship between the number of Democrats or Republicans in office and the economy, it’s not a strong one.
He and some colleagues looked back at papers their journal had already published. “We had to go back about 17 papers before we found one without an error,” he told me. His journal isn’t alone — similar problems have turned up, he said, in anesthesia, pain, pediatrics and numerous other types of journals.
“Science is great, but it’s low-yield,” Fang told me. “Most experiments fail. That doesn’t mean the challenge isn’t worth it, but we can’t expect every dollar to turn a positive result. Most of the things you try don’t work out — that’s just the nature of the process.” Rather than merely avoiding failure, we need to court truth.
Can the APC be discounted or waived?
A number of institutions have signed up to an open access membership scheme with us, which means that you may be eligible to have your APC covered by your institution, or entitled to a discount on the APC (details on how to confirm your eligibility will be sent to you by email on acceptance of your article). View list of current partners and further information.
If you are publishing in one of our Open Select journals and are based at a U.K. institution, you may be entitled to a reduced APC of £450 as part of our NESLI APC allowance scheme. Please speak to your librarian to find out more.
Authors are eligible to apply for a full or partial waiver of the appropriate article publishing charge (APC) if they are based in countries classified by Research4Life. Authors based in a Band A country can apply for a full waiver; authors based in a Band B country can apply for a 50% discount.
Waivers may also be granted in exceptional cases. If you are not based in a country classified by Research4Life (see above) and are requesting a waiver, you should request this on submission of your article. Your request should include details of the affiliation and country of residence of all authors, details of where the research was conducted, and confirmation of all research grant funding received (including details of any funding body which has stipulated mandatory open access publication).
Please note that in order to guarantee peer review integrity, the waiver process is not managed by journal academic editors.
The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena.
Immunological mistaken identity
New reports of narcolepsy increased after the vaccination campaign against the 2009 A(H1N1) influenza pandemic in some countries but not others. Now Ahmed et al. examine differences between the vaccines used and find a potential mechanistic explanation for the vaccine-specific effect. They found a peptide in influenza nucleoprotein A that shared protein residues with human hypocretin receptor 2, which has been linked to narcolepsy. The vaccine used in unaffected countries contained less influenza nucleoprotein. Indeed, patients with putative vaccine-associated narcolepsy produced antibodies that cross-reacted to both the influenza and the hypocretin receptor 2 epitopes. Although these data do not demonstrate causation, they provide a possible explanation for the association of this particular influenza vaccination with increased reports of narcolepsy.
The sleep disorder narcolepsy is linked to the HLA-DQB1*0602 haplotype and dysregulation of the hypocretin ligand-hypocretin receptor pathway. Narcolepsy was associated with Pandemrix vaccination (an adjuvanted, influenza pandemic vaccine) and also with infection by influenza virus during the 2009 A(H1N1) influenza pandemic. In contrast, very few cases were reported after Focetria vaccination (a differently manufactured adjuvanted influenza pandemic vaccine). We hypothesized that differences between these vaccines (which are derived from inactivated influenza viral proteins) explain the association of narcolepsy with Pandemrix-vaccinated subjects. A mimic peptide was identified from a surface-exposed region of influenza nucleoprotein A that shared protein residues in common with a fragment of the first extracellular domain of hypocretin receptor 2. A significant proportion of sera from HLA-DQB1*0602 haplotype-positive narcoleptic Finnish patients with a history of Pandemrix vaccination (vaccine-associated narcolepsy) contained antibodies to hypocretin receptor 2 compared to sera from nonnarcoleptic individuals with either 2009 A(H1N1) pandemic influenza infection or history of Focetria vaccination. Antibodies from vaccine-associated narcolepsy sera cross-reacted with both influenza nucleoprotein and hypocretin receptor 2, which was demonstrated by competitive binding using 21-mer peptide (containing the identified nucleoprotein mimic) and 55-mer recombinant peptide (first extracellular domain of hypocretin receptor 2) on cell lines expressing human hypocretin receptor 2. Mass spectrometry indicated that relative to Pandemrix, Focetria contained 72.7% less influenza nucleoprotein. In accord, no durable antibody responses to nucleoprotein were detected in sera from Focetria-vaccinated nonnarcoleptic subjects. Thus, differences in vaccine nucleoprotein content and respective immune response may explain the narcolepsy association with Pandemrix.
63 (11%) were excluded as they no longer had chronic fatigue at the time of the search, or had been diagnosed with other conditions explaining their symptoms, such as hypothyroidism, hyperparathyroidism, hydrocephaly, cancer, psychiatric conditions, Crohn's disease and rheumatoid arthritis
I put this over on the other thread, but thought I'd back it up with a pic, showing how seven patients - 25% of patients - seem to have done spectacularly well at the end of the trial (my compilation of three figures, and highlighting). This seems way too good to be mere placebo response and is the main reason I'm so taken with the results of this study.
Note that although this is an open access paper, all the figures are copyright of the authors. You can see the originals here; specific details of which figures I used are below.
PEM in the IOM report (2)
The IOM team clearly carried out an exhaustive literature search (hard work since there isn't that much focusing on PEM, and many of the findings come from studies with other primary aims). For example:
PEM exacerbates a patient’s baseline symptoms and, in addition to fatigue and functional impairment, may result in
flu-like symptoms (e.g., sore throat, tender lymph nodes, feverishness) (VanNess et al., 2010);
pain (e.g., headaches, generalized muscle/joint aches) (Meeus et al., 2014; Van Oosterwijck et al., 2010);
cognitive dysfunction (e.g., difficulty with comprehension, impaired short-term memory, prolonged processing time) (LaManca et al., 1998; Ocon et al., 2012; VanNess et al., 2010);
nausea/ gastrointestinal discomfort; weakness/instability; light-headedness/vertigo; sensory changes (e.g., tingling skin, increased sensitivity to noise) (VanNess et al., 2010);
sleep disturbances (e.g., trouble falling or staying asleep, hypersomnia, unrefreshing sleep) (Davenport et al., 2011a);
and difficulty recovering capacity after physical exertion (Davenport et al., 2011a,b).
While it's a big symptom list, I like the Canadian Criteria approach of focusing on how it's each patient's characteristic symptom cluster that flares with PEM.
Canadian Criteria said:
Post-Exertional Malaise and/or Fatigue: There is an inappropriate loss of physical and mental stamina, rapid muscular and cognitive fatigability, post-exertional malaise and/or fatigue and/or pain and a tendency for other associated symptoms within the patient's cluster of symptoms to worsen.
This Van Ness study Postexertional malaise in women with chronic fatigue syndrome (n=25) has good tracking of symptoms flaring after a maximal exercise test vs healthy controls, as does this one by Jo Nijs/Van Oosterwijck (n=22) after submaximal exercise. (Would be nice to have bigger studies and I hope that will happen in future.)
But I particularly like the Lights' work looking at gene expression after moderate exercise where they also tracked PEM/fatigue, especially as they used an MS comparison group: the differences with MS are marked. I based the graph below on the original data, but simplified for readability (and because I think copyright restrictions may prevent me reproducing the original).
The lower of each pair of lines is for mental fatigue, the upper is physical fatigue; pain (not shown) followed a similar pattern but at a lower level than mental fatigue. Scores are 0-100, self-rated.
Differences in metabolite-detecting, adrenergic, and immune gene expression following moderate exercise in chronic fatigue syndrome, multiple sclerosis and healthy controls (White 2012)
(view original graph)
Maybe IOM weren't allowed to reproduce the graph either but they cited the study numerous times re PEM.
PEM after cognitive exertion
The best evidence for PEM comes after physical exertion, maximal or moderate. The situation after cognitive exertion is mixed, according to research covered by the IOM. A Cockshell & Mathias study found that after a 2-hour neurocognitive battery of tests, controls recovered fully after 7 hours on average, compared with 57 hours for CFS patients. Not everyone finds such effects, though this might be because people have different PEM thresholds. While a maximal exercise test is likely to push all to PEM, and even a 'moderate' one (70% max heart rate) is likely to affect most, there isn't such an obvious cognitive challenge. At the CMRC conference last autumn, Andrew Lloyd said his group were going to use a driving simulator as a more intensive cognitive challenge - will be interesting to see how this pans out.
PEM in other illnesses
Depending on how it's defined, up to 19% of healthy controls recorded PEM, though this falls to 2-7% if stricter criteria are used. By contrast, it's very high for CFS patients, even for those defined by Fukuda. One study found 19-20% of depressed patients recorded PEM; another found 64%, but as the report said, it's not clear how this was measured. A Komaroff study from 1996 found PEM in 52% of MS patients, which is why I like the White graph above carefully tracking patients after an exercise challenge.
Objective measures of PEM
There's a great summary of this from Julie Rehmeyer in her New York Times Op-Ed that's a lot easier to read than the IOM report:
Unfortunately, no one test can reliably distinguish patients who have chronic fatigue syndrome from those who don’t. The closest thing to a reliable, objective test is a two-day exercise-to-exhaustion challenge on a stationary bike. Sick patients of all varieties may poop out quickly on Day 1 but whatever they do, they can generally repeat it the next day. Not C.F.S. patients; their performance tanks. Physiological measures ensure that the results can’t be faked, and so far, researchers haven’t seen similar results in any other illness. But large studies haven’t been done. The test also has a big problem. It can leave patients much sicker for months.
Oh, did I mention that's my blog Julie kindly links to?
(New Exercise Study Brings Both Illumination and Questions)
I'm pretty sure this section of the IOM report was written by Betsy Keller, who I interviewed for the blog, so same info in a slightly more digestible form.
The IOM concludes re CPET:
By contrast, a single CPET may be insufficient to document the abnormal response of ME/CFS patients to exercise (Keller et al., 2014; Snell et al., 2013). Although some ME/CFS subjects show very low VO2max results on a single CPET, others may show results similar to or only slightly lower than those of healthy sedentary controls (Cook et al., 2012; De Becker et al., 2000; Farquhar et al., 2002; Inbar et al., 2001; Sargent et al., 2002; VanNess et al., 2007). Thus, the functional capacity of a patient may be erroneously overestimated and decreased values attributed only to deconditioning. Repeating the CPET will guard against such misperceptions given that deconditioned but healthy persons are able to replicate their results, even if low, on the second CPET.
PEM in the IOM report (p78- )
My take on the full IOM report section; apologies if I have repeated points made before, as I don't have time to re-read the full thread.
First, some highlights of the report's findings on PEM, which is the primary symptom of the SEID case definition - then a closer look at the evidence they cite:
Closer look follows.