In a series of ten posts, I have reviewed ten major planks in the Wessely School’s psychosomatic approach to ME/CFS. This St Patrick’s Day ‘end of term report’ gives my grades as an independent assessor of the School’s performance to date. The bulk of the assessed work was completed prior to the COVID-19 pandemic. However, an adjustment to the final piece of work, which moves it from a U to a 1, compensates for the possible detrimental effects of the pandemic.
For more than three decades, the Wessely School has searched for empirical support for its psychosomatic approach to CFS. That search has been in vain. I show here, here and here that the theoretical assumptions of the Wessely approach lack support and have collapsed.
The drive to show that CBT and GET are effective treatments, exemplified by the PACE trial, has been a key part of the failure. An independent review by NICE suggests that GET is unsafe and that CBT is only weakly supported and, quite likely, no more than a placebo effect.
Facing this mountain of invalidation, the Wessely School has made the most basic error any scientist can make: converting an inconclusive association into a conclusion of causation.
Correlation does not equal causation
Everybody knows it. It is drummed into people’s heads from the very beginning.
Yet, as a journal editor I have discovered that it is the most basic and common error by authors in psychology and healthcare, no matter how experienced the investigator. This common mistake can throw a mantle of doubt over a publication or even an entire research programme.
The fundamental distinction between correlation and causation is legendary. It is taught in first-year medical and psychology classes all over the world. Yet the distinction can elude even the most seasoned researchers in Psychology, Psychiatry and kindred fields.
An introduction to the topic for 14-16 year old students studying Health is here.
A video for Biology GCSE students about the topic is available below.
An often cited example of the correlation=causation mistake concerns the polio epidemics in the US and Europe during the 1940s and 50s in the pre-vaccination period. Polio was crippling thousands of people, mostly children (and still is in some parts of the world). Polio epidemics occurred during summer and autumn. People eat more ice cream during summer and autumn. So for a while, children were warned not to eat ice cream or they would get polio.
Correlation is an association between two variables. In the late 1940s the polio rate (Y) and ice cream sales (X) could show a close correlation but eating ice creams did not cause polio:
X (ice cream eating) -/> Y (polio rate increase)
Causation is a cause-and-effect relationship between two variables. In the US in 1949, hot weather (Z) led more people to use public swimming pools and more people to eat ice cream (X). The hot weather (Z) thus caused both the rise in ice cream eating (X) and the rise in polio (Y), because people swam in non-chlorinated pools more frequently:
The same kind of erroneous logic occurs every day in science, even among some of the most experienced researchers, who are often strongly influenced by confirmation bias.
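The confounding at work here can be illustrated with a toy simulation (the numbers and variable names below are invented for illustration, not drawn from any polio dataset): two variables that share a common cause Z will correlate strongly even though neither causes the other.

```python
import random
import statistics

def pearson(a, b):
    # Sample Pearson correlation coefficient
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / ((len(a) - 1) * statistics.stdev(a) * statistics.stdev(b))

random.seed(42)
# Z: weekly temperature (the confounder)
z = [random.gauss(20, 8) for _ in range(500)]
# X: ice cream sales depend on temperature, not on polio
x = [zi * 2.0 + random.gauss(0, 5) for zi in z]
# Y: polio incidence also depends on temperature (via swimming), not on X
y = [zi * 0.5 + random.gauss(0, 2) for zi in z]

print(round(pearson(x, y), 2))  # strong positive correlation, yet X does not cause Y
```

No matter how large the sample or how strong the correlation, no analysis of X and Y alone can reveal that Z is doing the causal work; only a design that controls for Z (or an experiment) can do that.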
In October 2020, Adamson, Ali, Santhouse, Wessely and Chalder published a study in the Journal of the Royal Society of Medicine that purported to demonstrate that CBT ‘led to’ significant improvements in CFS patients. The authors had reached the conclusion that CBT caused improvements but the evidence warranted no such thing. What exactly did the authors do in their study?
The authors’ aim was to examine the effectiveness of CBT for CFS in a naturalistic setting and examine what factors, if any, predicted outcome. Note that ME is not mentioned because patients with ME were not included in the study. Nor should they have been because CBT could not possibly have helped them.
They analysed patients’ self-reported ‘symptomology’ over the course of treatment and at three-month follow-up. They also explored baseline factors associated with improvement at follow-up.
Setting and Participants
Data were available for 995 patients receiving CBT for CFS at an outpatient, specialist clinic in the UK.
Main outcome measures
Patients were assessed throughout their treatment using self-report measures including the Chalder Fatigue Scale, 36-item Short Form Health Survey, Hospital Anxiety and Depression Scale and Global Improvement and Satisfaction. Note, these are all self-reported, subjective outcome measures.
“Patients’ fatigue, physical functioning and social adjustment scores significantly improved over the duration of treatment with medium to large effect sizes (|d| = 0.45–0.91). Furthermore, 85% of patients self-reported that they felt an improvement in their fatigue at follow-up and 90% were satisfied with their treatment. None of the regression models convincingly predicted improvement in outcomes with the best model being (R2 = 0.137).”
As stated in the Abstract, the Conclusion implies, but does not categorically state, a causal role for the CBT intervention. However, in the main body of the article the authors draw a conclusion that treats the CBT treatment as causal in a manner that is unwarranted. They make the fundamental correlation-equals-causation error.
In a well-argued paper, Brian Hughes and David Tuller (2021) demonstrate that Adamson et al.’s (2020) conclusions are “misplaced and unwarranted.” They had submitted their critique to the Journal of the Royal Society of Medicine but the Editor did not accept it. Hughes and Tuller made a preprint available online and submitted it to the Journal of Health Psychology where it was reviewed and accepted and will shortly appear online. Here I quote from the Abstract:
“[Adamson et al.] interpret their data as revealing significant improvements following cognitive behavioural therapy in a large sample of patients with chronic fatigue syndrome and chronic fatigue. Overall, the research is hampered by several fundamental methodological limitations that are not acknowledged sufficiently, or at all, by the authors. These include: (a) sampling ambiguity; (b) weak measurement; (c) survivor bias; (d) missing data; and (e) lack of a control group. In particular, the study is critically hampered by sample attrition, rendering the presentation of statements in the Abstract misleading with regard to points of fact, and, in our view, urgently requiring a formal published correction. In light of the fact that the paper was approved by multiple peer-reviewers and editors, we reflect on what its publication can teach us about the nature of contemporary scientific publication practices.”
A Few Details
In their paper, Tuller and Hughes point out that the Adamson et al. study and paper:
“are both problematic in several critical respects. For example, the Abstract – the section of the paper most likely to be read by clinicians – contains a crucial error in the way the data are described, and requires urgent correction.” They point out that a conspicuous controversy is overlooked. Adamson et al. write that the intervention is “based on a model which assumes that certain triggers such as a virus and/or stress trigger symptoms of fatigue. Subsequently symptoms are perpetuated inadvertently by unhelpful cognitive and behavioural responses” (p. 396). Treatment involves, among other elements, “addressing unhelpful beliefs which may be interfering with helpful changes” (p. 396).
The theory of unhelpful beliefs was laid out in a 1989 paper by the Wessely team that included two of the Adamson et al. paper’s authors (Wessely and Chalder). Recent posts here, here, and here show that the theory lacks any scientific support, leaving it broken.
This fact was brushed under the carpet and simply not mentioned in the Adamson et al. paper.
Tuller and Hughes report that Adamson et al. are similarly selective in their discussion of the literature on CBT. After scrutiny of 172 CBT outcomes, the redrafted NICE guidance makes it perfectly clear that all of the research is of either “low” or “very low” quality. According to NICE, not one claim for CBT efficacy was supported by any evidence exceeding the “low quality” threshold.
To quote Hughes and Tuller, the research reviewed by Adamson et al.:
“is hampered by several fundamental methodological limitations that are not acknowledged sufficiently, or at all, by the authors. These include: (a) sampling ambiguity; (b) weak measurement; (c) survivor bias; (d) missing data; and (e) lack of a control group. Given these issues, in our view, the findings reported by Adamson et al. are unreliable because they are very seriously inflated.”
I consider here the last point only for its relevance to cause and effect.
Lack of a control group
Causality can never be established without a control group or a control condition. Adamson et al. did not include a control group and so their data cannot possibly support an inference about causality.
Yet Adamson et al. write:
“The cognitive behavioural therapy intervention led to significant improvements in patients’ self-reported fatigue, physical functioning and social adjustment” (p. 400).
This direct statement of causality is unjustifiable and, most likely, plain wrong.
The authors realise this – or were made to realise it by the editor or reviewers – because they state:
“the lack of a control condition limits us from drawing any causal inferences, as we cannot be certain that the improvements seen are due to cognitive behavioural therapy alone and not any other extraneous variables” (p. 401).
As Brian Hughes and David Tuller (2021) point out, this statement includes another assertion of causality which is also self-contradictory: “In one sentence, therefore, the authors draw a causal inference while denying the possibility of being able to do just that given their study design.”
Ironically, this kind of assertion is what some psychiatrists used to call ‘schizophrenogenic’. Not a bad descriptor in this case. It is also a little piece of ‘doublethink’, in which the reader is expected to accept two mutually contradictory beliefs as correct simultaneously.
The Adamson et al. study does not and will never warrant the conclusion that CBT “led to” improvements in CFS symptoms.
The draft NICE guidance establishes that the evidence in support of CBT for pwCFS is marginal. It is likely to be nothing more than a placebo effect.
To quote Tuller and Hughes, “the authors have provided a partial dataset suggesting that some of their participants self-reported modest increases in subjective assessments of well-being …These changes in scores might well have happened whether or not CBT had been administered.”
The flight of Adamson et al. into the illegitimate correlation-equals-causation error is possibly a sign of desperation. When nothing is working, there is little option but to make it up as you go along.
The house of cards that is the Wessely School is fast tumbling down, and not before time.
Hans Eysenck’s False Claims Began in the 1950s and 60s
Evidence from Joachim Funke of the Psychologisches Institut, Universität Heidelberg shows that Hans Eysenck’s scholarly output was untrustworthy from the very beginning of his career. In the 1950s and 60s Eysenck positioned himself as the ‘enfant terrible’ of psychoanalysis. Eysenck claimed the evidence in support of the therapy was exceedingly poor or non-existent. Critics pointed out that Eysenck had misrepresented the research literature using incorrect statistics and biased summaries. The paper by A. Dührssen and E. Jorswieck states:
The data published by Eysenck do not concur with the original ones of Fenichel, Alexander, Jones and Knight. According to Eysenck psychoanalysts obtained 43% positive results, yet the authors published results that were 80% positive.
In their paper, A. Dührssen and E. Jorswieck attempted to correct the scientific record. However, the correction was less impactful than Hans Eysenck’s inflammatory diatribes. The full reference is as follows:
Dührssen, A., & Jorswieck, E. (1962). Zur Korrektur von EYSENCKs Berichterstattung über psychoanalytische Behandlungsergebnisse. Acta Psychotherapeutica et Psychosomatica, 329-342.
The publication contains a summary in English, which is copied below.
The publications of Eysenck concerning psychoanalytic literature have been studied and checked. The data published by Eysenck do not concur with the original ones of Fenichel, Alexander, Jones and Knight. Since Eysenck himself stated that he had to evaluate the original data according to certain view points in order to get comparable results, his data have been rechecked according to his method. It became apparent that even then other percentages in all details resulted as Eysenck had published. Even the number of finished cases, published by the authors, did not concur with number published by Eysenck. Especially significant is the difference between Eysenck’s and the original data concerning positive therapeutic results of psychoanalysis. According to Eysenck psychoanalysts have 43% positive results, the authors published 80% positive.
It has all gone very quiet on the retractions front. Apparently, the relevant editors and publishers couldn’t care less, such is the poor state of governance in academic publishing. To quote Joachim Funke, the ‘whole mess’ started very early in his career. Sadly, elements of Hans Eysenck’s mythology live on to this very day.
Alexander, F.: Critical evaluation of therapeutic results. In: Five-year report 1932-1937, p. 30-40 (Institute for Psychoanalysis, Chicago).
Eysenck, H.J.: The effects of psychotherapy: An evaluation. J. consulting Psychol. 1952, 16:5, 319 ff.
Eysenck, H. J.: Woran krankt die Psychoanalyse? Monat 88: 18 (1955).
Eysenck, H.J.: Wege und Abwege der Psychologie. In: Rowohlts Deutsche Enzyklopädie, p. 108-110 (Rowohlt, Hamburg 1956).
Eysenck, H.J.: The effects of psychotherapy. In: Handbook of abnormal psychology, p. 697ff. (New York, 1961).
Fenichel, O.: Statistischer Bericht über die therapeutische Tätigkeit 1920-1930. In: 10 Jahre Berliner Psychoanalytisches Institut, p. 13-19 (Wien, 1930).
Jones, E.: Report of the clinic work 1926-1936. In: The London clinic of psycho-analysis. Decennial Report, p. 10-14 (London, 1936).
Knight, R.P.: Evaluation of the results of psychoanalytic therapy. Amer. J. Psych. 1941: 434ff.
Most visual illusions are produced using carefully contrived drawings or gadgets to fool the visual system into thinking impossible things. Recently, waiting at a train station, I encountered a real-life Ponzo illusion.
The traditional form of the Ponzo illusion is produced by drawing a pair of receding railway lines. The context suggests different depths in the drawing. An object towards the top of the drawing appears larger than an identical object near the bottom of the drawing. Using a principle of size constancy, the visual system estimates the size of any object as its retinal size multiplied by the assumed distance. Thus, the ‘most distant’ of the two identical yellow lines appears to be longer.
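The size-constancy rule can be made concrete with a small calculation (the sizes and distances below are illustrative assumptions, not measurements from the station): an object subtending a fixed retinal angle is perceived as larger in direct proportion to its assumed distance.

```python
import math

def retinal_angle(size_m, distance_m):
    # Visual angle (radians) subtended by an object of a given physical size
    return 2 * math.atan(size_m / (2 * distance_m))

def perceived_size(angle_rad, assumed_distance_m):
    # Size-constancy rule: perceived size scales with retinal angle x assumed distance
    return 2 * assumed_distance_m * math.tan(angle_rad / 2)

# A 10 cm rectangle viewed on glass 1 m away subtends a fixed retinal angle...
angle = retinal_angle(0.10, 1.0)

# ...but if the brain assumes the rectangle sits 7 m away (e.g. on the
# opposite platform), it is perceived as seven times its physical size.
print(round(perceived_size(angle, 7.0), 2))  # 0.7 m
```

The same fixed retinal image thus yields very different perceived sizes depending solely on where the brain locates the object in depth, which is exactly the mechanism the Ponzo illusion exploits.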
The setting of this new Ponzo illusion is a railway station situated at Vitrolles Airport, Marseille (see photo below). The station has glass panelled shelters on the platforms on each side. The glass panel at the front of each shelter displays two rows of grey rectangles. Apart from their decorative function, one assumes that these rows of rectangles are intended to help prevent people from walking into the glass panel as they move in and around the shelter. The photo below shows the arrangement of the two rows of rectangles on the shelter.
The stimuli for the illusion consist of rectangles slightly longer than a credit card, approximately 10.0 cm long x 1.5 cm wide, with a separation of about 3.0 cm between successive rectangles. The plate glass is about 5 mm thick and is marked with rectangles on both sides in perfect alignment, creating a 3-D effect that lends a false sense of solidity to the rectangles. This ‘3-D look’ may strengthen the Ponzo effect illustrated below.
The illusion is demonstrated in the photo below. Two people sitting directly in front of the shelter are waiting for a train. The upper set of rectangles appears as a set of columns positioned along the railway lines approximately 7 metres in front of the two passengers, with an apparent height of around 2-3 metres. The lower set of rectangles is perceived at its correct location and size on the plate glass window, behind the two passengers. The lower set is actually physically smaller, owing to the camera angle, but the illusion exaggerates the size difference enormously.
Further illustration of the effect indicates how the brain scales the stimuli to the context. When the rectangles are projected onto the opposite platform they appear huge – almost as high as the lamp post of around 5 metres.
When the rectangles are projected onto the nearby platform, however, they appear proportionately smaller (1.0-1.5 metres).
Owing to the camera angles, the actual size of the rectangles in the upper picture is larger (5-10%) than in the lower picture, but nowhere near the illusory ‘expansion’ that takes place when they are projected by the brain to the opposite platform.
Blocking the Distance Cues
The magnitude of the Ponzo illusion became somewhat indeterminate when the distance cues were fortuitously blocked by a passing freight train. In this case the rectangles were ‘drawn into’ the scale of the passing wagons, stretching in size beyond their appearance when the wagons were absent.
The Ponzo illusion can be most easily explained in terms of linear perspective. The rectangles look longer when they are projected to the distance of the opposite platform because the brain automatically interprets them as being further away, so we see them as longer. An object located farther away would have to be larger than a nearby object to produce a retinal image of the same size.
The more visual cues surrounding the rectangles, the more powerful the illusion. The passing freight train obliterated some of the distance cues, so the size of the rectangles became more difficult to assess.
A new post explores the possibility that the illusion described above may be more than another form of the Ponzo Illusion.
Peter Gøtzsche’s Expulsion Triggers Mass Resignation
The Board of a prestigious scientific organisation, The Cochrane Collaboration, recently suffered a mass resignation. This post documents the reasons why, using the words of the organisation itself. The board has been reduced from 13 to 6 members, following a vote to expel a founding member for the first time in its 25-year existence.
On 14 September, Peter Gøtzsche, director of Cochrane’s Nordic Centre and a member of its governing board, posted a statement on the centre’s website. This announced that he had been expelled as a member of the Cochrane Collaboration, after a vote by 6 of 13 of the board’s members.
A further four elected members of the board — which also has appointed members — stepped down in protest. To maintain a balance between appointed and elected members, the board also asked two appointed members to resign.
Gøtzsche claims no justification was given for his expulsion except that he was accused by the board of bringing the organization into “disrepute”. The organization — which carries out systematic reviews of health-care interventions — told Nature it had received “numerous complaints” about Gøtzsche after the publication earlier this year of a critique he co-authored, entitled ‘The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias’ and published in the BMJ Evidence-Based Medicine.
“Cochrane is for anyone interested in using high-quality information to make health decisions. Whether you are a doctor or nurse, patient or carer, researcher or funder, Cochrane evidence provides a powerful tool to enhance your healthcare knowledge and decision making.
Cochrane’s 11,000 members and over 35,000 supporters come from more than 130 countries, worldwide. Our volunteers and contributors are researchers, health professionals, patients, carers, and people passionate about improving health outcomes for everyone, everywhere. Our global independent network gathers and summarizes the best evidence from research to help you make informed choices about treatment and we have been doing this for 25 years.
We do not accept commercial or conflicted funding. This is vital for us to generate authoritative and reliable information, working freely, unconstrained by commercial and financial interests.
Our Strategy to 2020 aims to put Cochrane evidence at the heart of health decision-making all over the world.”
The Strategy to 2020 has hit a stumbling block. GOAL 4 is or was: “Building an effective sustainable organization.”
“To be a diverse, inclusive, and transparent international organization that effectively harnesses the enthusiasm and skills of our contributors, is guided by our principles, governed accountably, managed efficiently, and makes optimal use of its resources.”
In light of the torpedo the shambolic Governing Board has fired at its own organisation, the expulsion of Peter Gøtzsche, Goal 4 of the Strategy now reads like an ill-timed joke. The statement that spells the end of Cochrane is quoted below.
The Cochrane website is currently as bizarre as can be. A screenshot taken on the morning of 27 September 2018 shows an announcement of Peter Gøtzsche’s expulsion immediately followed by the announcement of the 25th Anniversary event to celebrate Peter’s Nordic Cochrane Centre and the foundation of the Cochrane Collaboration:
These are extraordinary times and we find ourselves in an extraordinary situation. Your Board is always happy to answer questions about our decisions, and today is no different. We want to explain how we got here today. This wasn’t our original plan because we wanted to behave fairly and with integrity, in a process that respected the privacy of an individual, whilst taking place over a number of days. Days, which unfortunately span this special Colloquium.
This is about the behaviour of one individual. There has been a lengthy investigation into repeated bad behaviour over many years. It is exceptionally unusual for a Board to have to do such an investigation.
Last Thursday, the Board took a decision which divided the Board. Subsequently, four Board members chose to resign. At the same time, others contributed to a public and media campaign of misinformation.
We recognize that the last 24 hours have been exceptionally difficult and as a result, we as a Board have decided to share with you information about the decision that was made, the process by which it was made, and where we are now, in order to act in the best interests of Cochrane.
We now want to put before you as much evidence as we can, so you know what is going on. We cannot tell you everything. All of you will understand why individuals have a right to privacy and confidentiality. We ask that you all respect this, because we may not be able to tell you everything, for legal reasons and reasons of privacy.
By way of background, we are a global organization which operates under British law because we were founded as a UK charity. Our mission is to benefit the public. We are governed by our Articles of Association.
As the Board, we are in fact the employers of the Cochrane staff. All our staff, and our members, have the right to do their work without harassment and personal attacks. We are living in a world where behaviours that cause pain and misery to people, are being ‘called out’. This Board wants to be clear that while we are Trustees of this organization, we will have a “zero tolerance” policy for repeated, seriously bad behaviour. There is a critical need for ALL organizations to look after their staff and members; once repeated, seriously bad behaviour had been recognized, doing nothing was NOT an option.
So, here are the facts as we are able to report them. We may be able to tell you more later, we may not. Time will tell.
This Board decision is not about freedom of speech.
It is not about scientific debate.
It is not about tolerance of dissent.
It is not about someone being unable to criticize a Cochrane Review.
It is about a long-term pattern of behaviour that we say is totally, and utterly, at variance with the principles and governance of the Cochrane Collaboration. This is about integrity, accountability and leadership.
In March this year, we received three complaints about an individual. These were not the first complaints that had ever been received. In fact, the earliest recorded goes back to 2003. Many have been dealt with over the years. Many disputes have arisen. Formal letters have been exchanged. Promises have been made. And broken. Some disputes have been resolved, some have not.
It was clear to the Co-Chairs that the Board had to reach a decision about these most recent complaints. The individual then made serious allegations against one of the Senior Management Team and shared those with the Board. We seemed to be in an impossible situation. How could the Board now reach a decision about the complaints in a fair way? How could we fulfil our responsibilities as employers of the Senior Management Team? Or alternatively, act to admonish that member of the Senior Management Team if they had done wrong?
With guidance from a Trustee with extensive experience of complaints, we proposed asking a totally independent person to undertake a review. The report was to be confidential to the Board.
After failing to get agreement from the individual to an independent review, we then sought legal advice on behalf of Cochrane. We asked the lawyers, what should a Charity such as Cochrane do in this situation? We were advised that various legal consequences flowed from the events – the complaints and the accusations – and that Cochrane should take them seriously.
We asked the lawyers to take particular note of Cochrane’s commitment to transparency. They noted that, but also stressed the importance of confidentiality.
They advised that an independent review was both a sensible and proportionate response.
At the Governing Board Teleconference on 13th June 2018, all Board members read the letter from our lawyers. The lawyers stated that given the serious legal concerns about this matter they strongly recommended an independent review by a very senior lawyer. The Board approved a motion to accept the lawyer’s advice and establish the independent review.
Our lawyers identified a senior independent lawyer (QC) and he was instructed on 2nd July 2018. As part of the process, he invited written submissions from both individuals concerned. He invited both to be interviewed. The lawyer was asked to work to a deadline of the Board Meeting on Thursday last week, 13th September. And, we did in fact receive his preliminary report in time for that meeting. The report completely exonerated the member of the Senior Management Team but did not exonerate the other individual.
Whilst the review was underway, and as a completely separate matter, a paper was published in the journal BMJ-EBM co-authored by the individual concerned on July 27th 2018. The publication of this paper has proved controversial. As a result, the Board received a number of letters of complaint. Each was sent to the individual to allow a written response. In order to avoid any misunderstanding, the Board want you to be clear that this was a matter that arrived very late in this whole process.
So, at the Board Meeting on Thursday September 13th, the trustees reviewed the lawyer’s report of his independent review, and all the material related to the recently published paper. After they had reviewed and discussed this at length, the Trustees exercised their judgement, and looking across a broad range of behaviours, the Board came to a decision to invoke Article 5.2.1. relating to termination of membership. This was not unanimous.
As a result, Article 5.3 was triggered, and the member has been invited to make a written response within seven days.
At this point in time, this person remains a member of the Cochrane Collaboration. We are waiting for the process to be completed. We will report back to you about the outcome as soon as we are able to.
Let us repeat, this is an extremely rare and unusual thing to do. We hope never to have to do this again.
Cochrane Governing Board Edited (without prejudice): 19th September 2018
Critique by Keith Geraghty and Special Issue Editorial in the Journal of Health Psychology (July 31, 2017)
I reproduce here my Editorial from the Special Issue of the Journal of Health Psychology on the PACE Trial. The issue contained an incisive critique of the trial by Dr. Keith J Geraghty (pictured). Keith Geraghty’s landmark paper, ‘PACE-Gate’: When clinical trial evidence meets open data access’, sparked a response by the PACE trial team, and a stream of commentaries, creating a storm of controversy.
The Times carried a report about a ‘mass resignation’ when three pro-PACE editorial board members resigned. The Daily Mail took a similar line, an unnecessary distraction from the main story. The PACE Trial debate continued among researchers, patient organisations, in Parliament, and on 20 September 2017 NICE announced that it would begin a review of its guidance on the diagnosis and treatment of CFS/ME.
In February 2018 Carol Monaghan (MP, Glasgow North West) organised a Parliamentary debate when she predicted that “when the full details of the trial become known, it will be considered one of the biggest medical scandals of the 21st century.”
Over a hundred academics, patient groups, lawyers, and politicians have signed an open letter to the Lancet calling on the journal to commission an independent reanalysis of the data from the PACE trial, which it published in 2011. Better yet, there ought to be a full retraction.
Following is the text of the Special Issue Editorial.
We are proud that this issue marks a special contribution by the Journal of Health Psychology to the literature concerning interventions to manage adaptation to chronic health problems. The PACE Trial debate reveals deeply embedded differences between critics and investigators. It reveals an unwillingness of the co-principal investigators of the PACE trial to engage in authentic discussion and debate. It leads one to question the wisdom of such a large investment from the public purse (£5 million) on what is a textbook example of a poorly done trial.
The Journal of Health Psychology received a submission in the form of a critical review of one of the largest psychotherapy trials ever done, the PACE Trial. PACE was a trial of therapies for patients with myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS), a trial that has been associated with a great deal of controversy (Geraghty, 2016). Following publication of the critical paper by Keith Geraghty (2016), the PACE Trial investigators responded with an Open Peer Commentary paper (White et al., 2017). The review and response were sent to more than 40 experts on both sides of the debate for commentaries.
The resulting collection is rich and varied in the perspectives it offers from a neglected point of view. Many of the commentators should be applauded for their courage, resilience and ‘insider’ understanding of experience with ME/CFS.
The Editorial Board wants to go on record that the PACE Trial investigators and their supporters were given numerous opportunities to participate, even extending the possibility of appeals and re-reviews when they would not normally be offered. That they failed to respond appropriately is disappointing.
Commentaries were invited from an equal number of individuals on both sides of the debate (about 20 from each side of the debate). Many more submissions arrived from the PACE Trial critics than from the pro-PACE side of the debate. All submissions were peer reviewed and judged on merit.
The PACE Trial investigators’ defence of the trial was in a template format that failed to engage with critics. Before submitting their reply, Professors Peter White, Trudie Chalder and Michael Sharpe wrote to me as co-principal investigators of the PACE trial to seek a retraction of sections of Geraghty’s paper, a declaration of conflicts of interest (COI) by Keith Geraghty on the grounds that he suffers from ME/CFS, and publication of their response without peer review (White et al., 4 November 2016, email to David F Marks). All three requests were refused.
On the question of COI, the PACE authors themselves appear to hold strong allegiances to cognitive behavioural therapy (CBT) and graded exercise therapy (GET) – treatments they developed for ME/CFS. Stark COI have been exposed by the commentaries, including the PACE authors' own double role as advisers to the UK Government Department for Work and Pensions (DWP), a sponsor of PACE, while simultaneously advising large insurance companies that have gone on record about the potential financial losses from ME/CFS being deemed a long-term physical illness. In a further twist to the debate, undeclared COI of Petrie and Weinman (2017) were alleged (Lubet, 2017). Professors Weinman and Petrie adamantly deny that their work as advisers to Atlantis Healthcare represents a COI.
After the online publication of several critical Commentaries, Professors White, Sharpe, Chalder and 16 co-authors were offered a further opportunity to respond to their critics in the round but they chose not to do so.
After peer review, authors were invited to revise their manuscripts in response to reviewer feedback, and many produced multiple drafts. The outcome is a set of robust papers that should stand the test of time and shed significant new light on what went wrong with a trial that has had such high significance for treatment protocols. It is disappointing that the hitherto dominant side of the debate refused to participate.
Unfortunately, across the pro-PACE group of authors there was a consistent pattern of resistance to the debate. After receiving critical reviews, the pro-PACE authors chose to make only cosmetic changes or not to revise their manuscripts in any way whatsoever. They appeared unwilling to enter into the spirit of scientific debate. They acted with a sense of entitlement not to have to respond to criticism. Two pro-PACE authors even showed disdain for ME/CFS patients, stating: "We have no wish to get into debates with patients." In another instance, three pro-PACE authors attempted to subvert the journal's policy on COI by recommending reviewers who were strongly conflicted, forcing rejection of their paper.
The dearth of pro-PACE manuscripts to start with (five submissions), their poor quality, the intransigence of authors to revise and the unavoidable rejection of three pro-PACE manuscripts led to an imbalance in papers between the two sides. However, this editor was loath to compromise standards by publishing unsound pieces, in spite of pressure to go ahead and publish from people who should know better.
ME/CFS research has been poorly served by the PACE Trial and a fresh new approach to treatment is clearly warranted. On the basis of this Special Issue, readers can make up their own minds about the scientific merits and demerits of the PACE Trial. It is to be hoped that the debate will provide a more rational basis for evidence-based improvements to the care pathway for hundreds of thousands of patients.
Geraghty, KJ (2016) 'PACE-Gate': When clinical trial evidence meets open data access. Journal of Health Psychology 22(9): 1106–1112.
Lubet, S (2017) Defense of the PACE trial is based on argumentation fallacies. Journal of Health Psychology 22(9): 1201–1205.
Petrie, K, Weinman, J (2017) The PACE trial: It's time to broaden perceptions and move on. Journal of Health Psychology 22(9): 1198–1200.
White, PD, Chalder, T, Sharpe, M (2017) Response to the editorial by Dr Geraghty. Journal of Health Psychology 22(9): 1113–1117.
The Editorial has been abridged and the photograph of Dr. Keith Geraghty added.
Experiences of life disruption, threat, distress, or adversity can lead to positively evaluated “growth” (Tedeschi and Calhoun, 1995). It has been observed for centuries that benefit finding and posttraumatic growth (PTG) can follow the occurrence of traumatic events including accidents, warfare, death of a loved one, and cancer diagnosis and treatment (Stanton, 2010).
Benefit finding and growth represent a fundamental restorative principle of homeostasis that is continually active towards the achievement of stability, equilibrium and well-being. Adaptation to any life-threatening illness, such as cancer, is facilitated by homeostasis systems that include the drive to find meaning, exert mastery or control over the experience, and bolster self-esteem. Growth and benefit-finding are frequently reported by cancer survivors as they gain awareness of their illness, its treatment and prognosis.
Measurement of PTG
The theoretical model of PTG proposed by Tedeschi and Calhoun suggests growth occurs in different ways. Developing new relationships, finding new appreciation for life, new meanings in life, discovering personal strength, experiencing spiritual change, and realizing new opportunities are all possibilities. The experiences of benefit finding and growth are undeniable. The methods and measurements used for their study, however, raise more questions than answers.
Among cancer populations, reported prevalence rates of perceived PTG range from 53 to 90% and vary according to the type of cancer, time since diagnosis, heterogeneity and ethnicity of the sample, choice of measurement, and many personal factors (Coroiu et al., 2016). Posttraumatic growth is measured using scales such as "The Posttraumatic Growth Inventory" (PTGI), a 21-item measure of positive change following a traumatic or stressful event (Tedeschi and Calhoun, 1996). Respondents rate the degree to which positive change has occurred in their life "as a result of having cancer." A total PTGI score and five subscale scores (New Possibilities, Relating to Others, Personal Strength, Spiritual Change, and Appreciation of Life) are calculated.
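To make the scoring procedure concrete, a PTGI-style total and subscale calculation can be sketched in a few lines. The item-to-subscale mapping below is illustrative only (it is not guaranteed to match the published PTGI scoring key), and ratings are assumed to run from 0 (no change) to 5 (a very great degree of change):

```python
# Illustrative PTGI-style scoring: 21 items rated 0-5, grouped into the
# five subscales named above. The item-to-subscale mapping here is an
# assumption for demonstration, not the published scoring key.
SUBSCALES = {
    "Relating to Others":   [6, 8, 9, 15, 16, 20, 21],
    "New Possibilities":    [3, 7, 11, 14, 17],
    "Personal Strength":    [4, 10, 12, 19],
    "Spiritual Change":     [5, 18],
    "Appreciation of Life": [1, 2, 13],
}

def score_ptgi(responses):
    """responses: dict mapping item number (1-21) to a 0-5 rating.

    Returns (total score, dict of subscale scores).
    """
    if set(responses) != set(range(1, 22)):
        raise ValueError("expected ratings for items 1-21")
    subscale_scores = {
        name: sum(responses[i] for i in items)
        for name, items in SUBSCALES.items()
    }
    return sum(subscale_scores.values()), subscale_scores

# A respondent rating every item 3 scores 21 x 3 = 63 in total
total, by_subscale = score_ptgi({i: 3 for i in range(1, 22)})
print(total)  # 63
```

The point of the sketch is that every number entering the arithmetic is a retrospective self-rating of change; nothing in the scoring validates the ratings themselves, which is precisely the critics' objection below.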
What the Critics Say
Critics have been less than enthusiastic about measuring PTG in this manner. James Coyne and Howard Tennen (2010) argue that: "Every PTG scale asks participants to rate how much they have changed on each scale item as the result of the crisis they faced. Thus, a respondent must: (a) evaluate her/his current standing on the dimension described in the item, e.g., a sense of closeness to others; (b) recall her/his previous standing on the same dimension; (c) compare the current and previous standings; (d) assess the degree of change; and (e) determine how much of that change can be attributed to the stressful encounter. Psychological science, which purportedly guides positive psychology, tells us that people cannot accurately generate or manipulate the information required to faithfully report trauma- or stress-related growth (or to report benefits) that results from threatening encounters… The psychological literature demonstrates consistently that people are unable to recollect personal change accurately" (Coyne and Tennen, 2010, p. 23).
The five steps (a) to (e) are certainly a tall order, and it seems highly doubtful that anybody could achieve them with any accuracy. It seems naïve to analyse numbers that research participants place on PTGI scales as though they are valid indices of 'post-traumatic growth' when no attempt has been made to validate these measures. In spite of these criticisms, many studies have been conducted using the PTGI scale.
Quite rightly, Coyne and Tennen (2010) have damned the flawed methods and measures concerning PTG: “We are at a loss to explain why positive psychology investigators continue to endorse the flawed conceptualization and measurement of personal growth following adversity. Despite Peterson’s …warning that the credibility of positive psychology’s claim to science demands close attention to the evidence, post-traumatic growth—a construct that has now generated hundreds of articles—continues to be studied with flawed methods and a disregard for the evidence generated by psychological science. It is this same pattern of disregard that has encouraged extravagant claims regarding the health benefits of positive psychological states among individuals living with cancer” (p. 24).
As long as psychologists use shoddy methods, invalid measures and draw quack conclusions, they will not be taken seriously by outsiders.
Rarely in the history of clinical medicine have doctors and patients been placed so bitterly at loggerheads. The dispute had been a long time coming. Thirty years ago, a few psychiatrists and psychologists offered a hypothesis based on a Psychological Theory in which ME/CFS is constructed as a psychosocial illness. According to their theory, ME/CFS patients have “dysfunctional beliefs” that their symptoms are caused by an organic disease. The ‘Dysfunctional Belief Theory’ (DBT) assumes that no underlying pathology is causing the symptoms; patients are being ‘hypervigilant to normal bodily sensations‘ (Wessely et al., 1989; Wessely et al., 1991).
The Psychological Theory assumes that the physical symptoms of ME/CFS are the result of ‘deconditioning’ or ‘dysregulation’ caused by sedentary behaviour, accompanied by disrupted sleep cycles and stress. Counteracting deconditioning involves normalising sleep cycles, reducing anxiety levels and increasing physical exertion. To put it bluntly, the DBT asserts that ME/CFS is ‘all in the mind’. Small wonder that patient groups have been expressing anger and resentment in their droves.
‘Top-down research’ uses a hierarchy of personnel, duties and skill-sets. The person at the top sets the agenda and the underlings do the work. The structure is a bit like the social hierarchy of ancient Egypt. Unless carefully managed, this top-down approach risks creating a self-fulfilling prophecy from confirmation biases at multiple levels. At the top of the research pyramid sits the ‘Pharaoh’, Regius Professor Sir Simon Wessely KB, MA, BM BCh, MSc, MD, FRCP, FRCPsych, F Med Sci, FKC, Knight of the Realm, President of the Royal Society of Medicine, and originator of the DBT. The principal investigators (PIs) for the PACE Trial, Professors White, Chalder and Sharpe, are themselves advocates of the DBT. The PIs all have or had connections both to the Department for Work and Pensions and to insurance companies. The objective of the PACE Trial was to demonstrate that two treatments based on the DBT, cognitive behavioural therapy (CBT) and graded exercise therapy (GET), help ME/CFS patients to recover. There was zero chance the PACE researchers would fail to obtain the results they wanted.
Groupthink, Conflicts and Manipulation
The PACE Trial team were operating within a closed system, or groupthink, in which they ‘know’ their theory is correct. With every twist and turn, no matter what the actual data show, the investigators are able to confirm their theory. The process is well known in Psychology. It is a self-indulgent process of subjective validation and confirmation bias. Groupthink occurs when a group makes faulty decisions because group pressures lead to a deterioration of "mental efficiency, reality testing, and moral judgment" (Janis, 1972). Given this context, we can see reasons to question the investigators’ impartiality, with many potential conflicts of interest (Lubet, 2017). Furthermore, critical analysis suggests that the PACE investigators manipulated protocols midway through the trial, selected confirming data and omitted disconfirming data, and published biased reports of findings, creating a catalogue of errors.
‘Travesty of Science’
The PACE Trial has been termed a ‘travesty of science’ while sufferers of ME/CFS continue to be offered unhelpful or harmful treatments and are basically being told to ‘pull themselves together’. One commentator has asserted that the situation for ME patients in the UK is: “The 3 Ts – Travesty of Science; Tragedy for Patients and Tantamount to Fraud” (Professor Malcolm Hooper, quoted by Williams, 2017). Serious errors in the design, the protocol and procedures of the PACE Trial are evident. The catalogue of errors is summarised below. The PACE Trial was loaded towards finding significant treatment effects.
A Catalogue of Errors
The claimed benefits of GET and CBT for patient recovery are entirely spurious. The explanation lies in a sequence of serious errors in the design, the changed protocol and procedures of the PACE Trial. The investigators neglected or bypassed accepted scientific procedures for an RCT, as follows:
Ethical issue: Applying for ethical approval and funding for a long-term trial when the PIs already knew that CBT effects on ME/CFS were short-lived.
On 3rd November 2000, Sharpe confirmed: “There is a tendency for the difference between those receiving CBT and those receiving the comparison treatment to diminish with time due to a tendency to relapse in the former” (www.cfs.inform/dk). Wessely stated in 2001 that CBT is “not remotely curative” and that: “These interventions are not the answer to CFS” (Editorial: JAMA 19th September 2001:286:11) (Williams, 2016).
Ethical issue: Failure to declare conflicts of interest to Joint Trial Steering Committee.
Undeclared conflicts of interest by the three PIs in the Minutes of the Joint Trial Steering Committee and Data Monitoring Committee held on 27th September 2004.
Ethical issue: Failure to obtain fully informed consent after non-disclosure of conflicts of interest.
Failing to declare their vested financial interests to PACE participants, in particular, that they worked for the PHI industry, advising claims handlers that no payments should be made until applicants had undergone CBT and GET.
Use of their own discredited “Oxford” criteria for entry to the trial.
Patients with ME would have been screened out of the PACE Trial even though ME/CFS has been classified by the WHO as a neurological disease since 1969 (ICD-10 G93.3).
Inadequate outcome measures: using only subjective outcome measures.
The original protocol included the collection of actigraphy data as an objective outcome measure. However, after the Trial started, the decision was taken that no post-intervention actigraphy data should be obtained.
Changing the primary outcomes of the trial after receiving the raw data.
Altering outcome measures mid-trial in a manner which gave improved outcomes.
Changing entry criteria midway through the trial.
Altering the inclusion criteria for trial entry after the main outcome measures were lowered so that some participants (13%) met recovery criteria at the trial entry point.
The statistical analysis plan was published two years after selective results had been published.
The re-definition of "recovery" was not specified in the statistical analysis plan.
Sending participants newsletters promoting one treatment arm over another, thus contaminating the trial.
Lack of comparable placebo/control groups, with inexperienced occupational therapists providing the control treatment while experienced therapists provided CBT.
Repeatedly informing participants in the GET and CBT groups that the therapies could help them get better.
Giving patients in the CBT and GET arms more sessions than those in the control group.
Allowing therapists from different arms to communicate with each other about how patients were doing.
Lack of transparency
Blocking release of the raw data for five years, preventing independent analysis by external experts.
Blocking release of the raw data for five years and preventing independent analysis by external experts was tantamount to a cover-up of the true findings. An editorial by Keith Geraghty (2016) was entitled ‘PACE-Gate’. ME/CFS patient associations were rightly suspicious of the recovery claims concerning the GET arm of the trial because their own experiences of intense fatigue after ordinary levels of activity were inconsistent with the recoveries reported by the PACE investigators. For many sufferers, even moderate exercise results in long ‘wipe-outs’ in which they are almost immobilized by muscle weakness and joint pain. In the US, post-exertional relapse has been recognized as the defining criterion of the illness by the Centers for Disease Control, the National Institutes of Health and the Institute of Medicine. For the PACE investigators, however, the announced recovery results validated their conviction that psychotherapy and exercise provided the key to reversing ME/CFS.
Alem Matthees Obtains Data Release
When Alem Matthees, an ME/CFS patient, sought the original data under the Freedom of Information Act and a British Freedom of Information tribunal ordered the PACE team to disclose their raw data, some of the data were re-analysed according to the original protocols. The legal costs of the tribunal at which QMUL were forced to release the data, against their strenuous objections, were over £245,000. The re-analysis of the PACE Trial data revealed that the so-called "recovery" under CBT and GET all but disappeared (Wilshire, Kindlon, Matthees and McGrath, 2016). The recovery rate for CBT fell to seven percent and the rate for GET fell to four percent, which were statistically indistinguishable from the three percent rate for the untreated controls. Graded exercise and CBT are still being routinely prescribed for ME/CFS in the UK despite patient reports that the treatments can cause intolerable pain and relapse. The analysis of the PACE Trial by independent critics has revealed a catalogue of errors and provides an object lesson in how not to conduct a scientific trial. The trial can be useful to instructors in research design and methodology for that purpose.
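The claim that the re-analysed rates are statistically indistinguishable can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the arm sizes (roughly 160 per arm) and recovery counts are assumptions chosen to reproduce the rounded percentages quoted above, not the exact figures from the re-analysis:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test; returns the z statistic."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Assumed counts matching the rounded rates in the text:
# ~7% (11/160) for CBT, ~4% (6/160) for GET, ~3% (5/160) for controls
z_cbt = two_proportion_z(11, 160, 5, 160)
z_get = two_proportion_z(6, 160, 5, 160)

# |z| < 1.96 means no significant difference at the 5% level
print(abs(z_cbt) < 1.96, abs(z_get) < 1.96)  # True True
```

With differences this small, neither comparison comes close to the conventional 1.96 threshold, which is the sense in which the re-analysed recovery rates are indistinguishable from the control rate.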
Following the re-analyses of the PACE Trial, the DBT is dead in the water. There is an urgent need for new theoretical approaches and scientifically-based treatments for ME/CFS patients. Meanwhile, there is repair work to be done to rebuild patient trust in the medical profession after this misplaced attempt to apply the Psychological Theory to the unexplained syndrome of ME/CFS. The envelope theory of Jason et al. (2009) proposes that people with ME/CFS need to balance their perceived and expended energy levels and provides one way forward, pending further research.
Ultimately, patients, doctors and psychologists are waiting for an organic account of ME/CFS competent to explain the symptoms and to open the door to effective treatments. Patients have a right to nothing less.
Psychology is full of theories, not ‘General Theories’, but ‘Mini-Theories’ or ‘Models’. Most Mini-Theories/Models are wrong. Unfortunately these incorrect theories and models often persist in everyday practice. This happens because Psychologists are reluctant to give up their theories. These incorrect theories then act like ‘mass delusions’, which can have consequences for others, especially students and patients.
Academic Psychology suffers from ‘delusions of grandeur’. It is as if an entire academic discipline is manifesting a chronic disorder – a kind of ‘Scientific Psychosis’. Psychologists claim that Psychology is a Science but there is no objective evidence to support it. In fact, the evidence suggests the exact opposite.
The ability to ape proper science is not in doubt. Laboratories, experiments and grants, thousands of journals, books, institutes and universities all espouse Psychology as a Science. Many psychologists even wear white lab coats and poke around in animals’ brains. The ability to mimic genuine scientists like Physicists or Biologists, however, does not make Psychology a science. It actually makes a mockery of science.
There are many reasons why this is the case. I mention here two:
1) Psychology does not meet even the most essential criterion for an authentic science – quantitative measurement along ratio scales.
2) Unlike all the true natural sciences, Psychology lacks a general theory, that is, a single theory held by the majority of scientists working in the field.
The shared belief of the vast majority of psychologists that they are scientists, when all of the evidence suggests that this can’t be true, is a form of professional ‘mass hysteria’. Psychologists share a belief system of scientific delusion, thought disorder and conceptual confusion. They then impose their beliefs, not only on one another, but on their students and their patients.
Students and Patients
Many students and patients are having none of it. They refuse to be suckered in by the claim. But they have to be courageous enough to come out of the closet and say it. If they dare to say it in an essay or exam, then they’d better be prepared for a grade C, D, E or F.
On a few rare occasions, established psychologists have expressed their doubts about the scientific credentials of Psychology. For example, Jan Smedslund wrote about: “Why Psychology Cannot be an Empirical Science.” There is increasing evidence that many patients are skeptical about Psychology also.
Folie à deux (“madness of two”) occurs when delusional beliefs are transmitted from one individual to another. When one dominant person imposes their delusional beliefs on another, it is folie imposée. In this case, the second person probably would never have become deluded if left to themselves. The second person is expected ultimately to reject the delusion of the first person, due to disproof of the delusional assumptions, and protest. This protest, however, will fall upon deaf ears.
The situation I describe is far from hypothetical. It exists day in, day out, for millions of patients. One particular patient group comprises those labeled with ‘Medically Unexplained Symptoms’ (MUS). Within this group is a particular set of patients with Myalgic Encephalomyelitis (“ME”) and/or Chronic Fatigue Syndrome (“CFS”).
Delusional thinking certainly can hurt and embarrass the individuals having the delusion (Psychologists and Psychiatrists). It can also be imposed upon others, for example, people in their care (Patients). To the help-seeking Patient, the Psychologist (or Psychiatrist) is an expert who follows the rules of Science. The Science informs the aetiology, diagnosis, and treatment of the Patient.
Treating Patients with ME/CFS
I consider here how many psychologists in the UK treat people labeled with ME/CFS. This treatment comes with the full backing of NICE (currently under review).
Psychological treatment for patients labeled with ME/CFS is based on a Psychological Theory of the illness. This theory is highly contested and has caused major controversies that have divided Patients from Psychologists and Psychiatrists.
The main Psychological Theory of ME/CFS asserts that ‘maladaptive’ cognitions and behaviours perpetuate the fatigue and impairment of individuals with ME/CFS (Wessely, David, Butler and Chalder, 1989). These authors represent the two main professions concerned with psychological illness, Psychology and Psychiatry. They state: “It is essential to agree jointly on an acceptable model, because people need to understand their illness. The cognitive–behavioural model …can explain the continuation of symptoms in many patients.” This is where the imposition of the therapist’s model comes in. “The process is therefore a transfer of responsibility from the doctor, in terms of his duty to diagnose, to the patient, confirming his or her duty to participate in the process of rehabilitation in collaboration with the doctor, physiotherapist, family and others.” (p. 26).
The Psychological Theory is contested, however, by many scientists, patients and patient organisations, who hold that the symptoms have an organic basis, i.e. a Physical Theory.
Vercoulen et al. (1998) developed a model of ME/CFS based on the Psychological Theory. However, Song and Jason (2005) suggested that the Psychological Theory was inaccurate for individuals with ME/CFS. In spite of the evidence against it, the Psychological Theory continues as the basis for cognitive behavioural and graded exercise therapies (GET) offered to individuals with ME/CFS. One reason for the continued use of an unsupported Psychological Theory is the PACE Trial, a lesson in how not to do proper science. Like most research, this trial was organised by a team and, in this case, the majority of principal investigators were Psychiatrists. This trial has been described as “one of the biggest medical scandals of the 21st century.”
New Approach Needed
In spite of the lack of empirical support, the Psychological Theory of ME/CFS lives on. ME/CFS patients are subjected to CBT and GET. Patients and patient organisations protest about the treatments and are opposed to the Psychological Theory. Perhaps Psychologists need to turn the Psychological Theory of unhelpful beliefs upon themselves. If ME/CFS has a physical (e.g. immunological) cause, then once the cause has been established, patients will have the chance of an effective treatment and decent care and support.
The problems that exist for Psychologists’ treatment of patients with MUS and ME/CFS exist more generally across the discipline. A totally new approach is necessary. Instead of tinkering with the problems at a cosmetic level by papering over the cracks, there is a need for root-and-branch change of a radical kind. The measurement problem must be addressed and there is a need for a general theory. A new General Theory of Behaviour takes a step in that direction.
Here I introduce a powerful new explanation of the obesity ‘epidemic’. I reveal some surprising but brutal truths about the condition. For example, obesity is unavoidable for the majority of people in contemporary living conditions. Without radical change, the ‘epidemic’ will get much, much worse.
Obesity an ‘Epidemic’?
Notice I put the word ‘epidemic’ in single quote marks. This is because the word can only really be applied to infectious diseases. Obesity is not a disease. It’s not infectious. Obesity is a bodily condition of being overweight. It is defined loosely as having a body mass index (BMI) above 30. This places people at increased risk for a variety of chronic conditions. Unpleasant things like type 2 diabetes, cardiovascular diseases, cancer and obstructive sleep apnea. [As a scientific measure the BMI is a bit of a joke, by the way, but we’ll leave that for another post.]
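For readers unfamiliar with it, the BMI arithmetic is trivial: weight in kilograms divided by the square of height in metres. A minimal sketch using the conventional WHO cut-offs (the cut-off values are standard; the example figures are invented):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Conventional WHO cut-offs; the loose definition of obesity used
    # in the text is simply BMI above 30
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    if value >= 18.5:
        return "normal"
    return "underweight"

# An invented example: a 95 kg person 1.75 m tall has a BMI of about 31
print(round(bmi(95, 1.75), 1), bmi_category(bmi(95, 1.75)))  # 31.0 obese
```

The crudeness is obvious from the sketch: the index knows nothing about body composition, muscle mass or fat distribution, which is one reason the BMI is, as I put it above, a bit of a joke as a scientific measure.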
Two billion people alive today are overweight or living with obesity. There is no sign that the obesity ‘epidemic’ is slowing down or that medical science understands the problem. A universal feature of living beings called ‘homeostasis’ is linked to obesity. Its disruption, dyshomeostasis, is a contributory cause of overweight and obesity.
Obesity is an unavoidable human response to contemporary conditions of living. ‘Blaming and shaming’ individual sufferers is oppressive and is part of the problem, not part of the solution. Blame and shame make matters far, far worse. Only by reversing this form of prejudice, and the chronically stressful living conditions of hundreds of millions of people, is there any hope that we can stop the ‘epidemic’.
This book is not for the faint-hearted. It cuts through the ‘shock-horror’ narrative of obesity with brutal truths about the serious and intransigent nature of obesity. Once the causes are fully understood, the obesity epidemic can be stopped. And about time too! This book is a step towards that goal.