The Journal of Health Psychology has published a new paper on “Research misconduct complaints and institutional logics: The case of Hans Eysenck and the British Psychological Society” [October 28, 2020].
The paper provides an analysis of the reasons Hans J Eysenck’s misconduct has not been fully investigated by the BPS.
The authors, Russell Craig, Anthony Pelosi, Dennis Tourish, urge the BPS to investigate this complaint afresh. They also support calls for the establishment of an independent National Research Integrity Ombudsperson to deal more effectively with allegations of research misconduct.
This paper is on Open Access and should be widely read.
I write this blog as a long-term investigator into psychology and the paranormal. This post concerns a saga of intellectual dishonesty by the late Cambridge University psychologist, Carl Sargent, and his mentor, Professor Hans J Eysenck, of King’s College London. A diary of events weaves a dark story that many wish the world would forget, but the story needs to be told. The parties in this story displayed gullibility, bias and wilful deceit. One of them (CS) was forced to leave his academic post and seek another career. Many years after his death, the other (HJE) stands accused of producing ‘unsafe’ publications on an industrial scale.
“Hans Jürgen Eysenck was born in Berlin in 1916 and died in London in 1997. He was a British psychologist of German origin known for his work on personality, the heritability of intelligence and behavioral therapies, and for his critiques of psychoanalysis. At the time of his death, Eysenck was the most frequently cited living psychologist in English-language scientific journals.”
Hans J Eysenck’s intellectual honesty was recently the focus of renewed controversy after Anthony Pelosi exposed a series of impossible findings published by Eysenck in the field of health psychology (see here, here and here). Twenty-six of Eysenck’s publications were recently deemed “unsafe” in an investigation by King’s College London, and many others are also suspect.
Richard Smith (personal communication) astutely remarked as follows: “When forensic accountants detect fraud they assume that everything else from that person may well be fraudulent. Scientists tend to do the opposite–assuming that everything is OK until proved to be fraudulent. But as proving fraud is hard lots of highly questionable material remains untouched.”
Smith continues: “I think of the example of R K Chandra, who was eventually found guilty not only of research fraud but also of financial and business fraud. His first paper established to be fraudulent was in 1989. Why, I ask myself, would you start being honest after you’d practised fraud–yet hundreds of his papers are left unremarked, including unfortunately some that have been shown to be fraudulent.”
A reliable source and long-time colleague of Eysenck’s states: “Eysenck was a mendacious charlatan. I base that not so much on his published fiction but his denial of the link between smoking and cancer was pernicious. His espousal of the beliefs of the John Birch Society was egregious…a grant had to be withdrawn and several researchers dismissed.”
A profile of Hans Eysenck, based on his biography by Rod Buchanan and on his books with Carl Sargent, provides insights into Eysenck’s intellectual values as a scientist and scholar. There were four books with Sargent, all with Eysenck as first author.
The collaboration between the two authors began in the early 1980s in Sargent’s heyday at Cambridge and continued until 1996.
A Distorted Account of Parapsychology
These four books present a distorted and strongly biased view that psychic powers are scientifically proven. The evidence suggests exactly the opposite (Marks, 2020).
Eysenck’s and Sargent’s naivety and credulity are everywhere apparent. They present a one-sided view of the scientific evidence on psi and affect the naive stance that fraud and trickery do not need to be considered. David Nias and Geoffrey Dean (1986) summarised their criticisms of the Eysenck/Sargent books thus: “the failure of Eysenck and Sargent’s books to cover trickery and credulity is a serious deficiency” (p. 368).
In my opinion, these books are among the most distorted and misleading accounts of parapsychological phenomena ever published by academic psychologists. The four books are a total disgrace and how Eysenck had the gall to put his name to them – perhaps only to build his reputation as the fearless contrarian – is beyond imagination.
In addition to the terrible scholarship, there is convincing evidence of scientific fraud by Sargent. How much Hans Eysenck knew about this, we will never know exactly because Eysenck requested that his papers be destroyed after his death. However, Blackmore’s report on Sargent’s fraud became public knowledge several years into the collaboration and years before the third and fourth books with Eysenck were published.
If Sargent had kept his trickery hidden from Eysenck then Eysenck could have been an innocent party. In a partnership built over 14+ years, surely there would have been a conversation that included a question of the kind, ‘Oh, I hear you left Cambridge, why was that?’ If, as seems likely, Sargent ‘fessed up’ by admitting the occurrence of some kind of experimental ‘error’, then Eysenck could have been party to covering up Sargent’s deceit. Did Eysenck imagine nobody would notice? Or perhaps he simply did not care. After all, that great Cambridge genius, Isaac Newton, had done the same kind of thing, and Eysenck saw no problem with a bit of data fudging. According to Eysenck, as he had stated in one of his many pot-boilers, a ‘genius’ does whatever is necessary to prove their theories.
“[Sargent’s Ganzfeld] research was providing dramatically positive results for ESP in the GF and mine was not, so the idea was for me to learn from his methods in the hope of achieving similarly good results …. After watching several trials and studying the procedures carefully, I concluded that CS’s experimental protocols were so well designed that the spectacular results I saw must either be evidence for ESP or for fraud. I then took various simple precautions and observed further trials during which it became clear that CS had deliberately violated his own protocols and in one trial had almost certainly cheated. I waited several years for him to respond to my claims and eventually they were published along with his denial. (Harley & Matthews, 1987; Sargent, 1987).”
Sargent’s Career Temporarily Blossoms with Eysenck
In this period, Sargent developed his career in parapsychology at Cambridge with Blackmore’s ‘cheating’ report brushed under the carpet.
1980: Sargent writes a monograph, Exploring Psi in the Ganzfeld. Parapsychological Monographs No 17.
Sargent, C. L., Harley, T. A., Lane, J. and Radcliffe, K. publish: ‘Ganzfeld psi optimization in relation to session duration’, Research in Parapsychology 1980, 82-84.
1981: Sargent and Matthews publish ‘Ganzfeld GESP performance in variable duration testing’. Journal of Parapsychology 1981, 159-160
1982: Eysenck and Sargent (1982) publish their first book together, Explaining the unexplained: mysteries of the paranormal. Weidenfeld and Nicolson; First Edition.
1983: Eysenck and Sargent publish their second book, Know Your Own PSI-Q.
Then the Inevitable Downfall
1984: The Parapsychological Association Council asked Martin Johnson to head a committee to investigate Susan Blackmore’s (SB’s) accusation of fraud by Sargent. My book, Psychology and the Paranormal, describes what happened next:
The Parapsychological Association (PA) invited CS to provide an account of the ‘errors’ that SB had reported, but he declined to offer any explanation. The PA President, Stanley Krippner, wrote to CS at four different addresses, but still received no reply. The PA’s ‘Sargent Case Report’ dated 10 December 1986 found that, in spite of strong reservations about CS’s randomisation technique, there was insufficient evidence that CS had used unethical procedures.
CS was ‘reproved’ for failing to respond to the PA’s request for information. CS had allowed his PA membership to lapse through non-payment of dues, and he was informed that, should he wish to renew his membership, his application would be considered with ‘extreme prejudice’, i.e. CS would be unlikely to be re-admitted as a member.
The final report of this committee reprimanded Sargent for failing to respond to their request for information within a reasonable time.
1985: Sargent leaves Cambridge University and the parapsychology field [stated in the 2nd edition of Explaining the unexplained: mysteries of the paranormal, 1993].
At some point, Sargent moves into full-time authoring of game-books.
1987: Susan Blackmore’s 1979 report is finally published: ‘A Report of a Visit to Carl Sargent’s Laboratory’, Journal of the Society for Psychical Research, 54, 186-198.
1993: Undeterred by the report of cheating, Eysenck and Sargent publish their third book, Explaining the unexplained: mysteries of the paranormal. (2nd ed.)
1996: Eysenck and Sargent publish their fourth book, Are You Psychic?: Tests & Games to Measure Your Powers (1996), a revised version of ‘Know your own Psi-Q’.
The Hidden Truth
Two editions of the book by H. J. Eysenck and Sargent (1982, 1993) raise questions about how much Eysenck knew of the fraud accusations against Sargent in Blackmore’s SPR report of 1979. In the 1982 edition of the first book, the procedural problems with Sargent’s GF research are not even mentioned. In the 1993 edition, the authors refer to ‘spirited exchanges on GF research’ between Blackmore, and Sargent and Harley (p. 189).
However, the Ganzfeld evidence of psi is described by them as ‘very, very powerful indeed’. They do not mention the accusations of fraud, CS’s departure from Cambridge University, or his repeated non-cooperation with the Parapsychological Association enquiry.
I obtained an update from Susan Blackmore on her current thinking about her 30-year-old allegation of fraud by CS and on psi research more generally, which I reproduce below. Here are Susan Blackmore’s answers to a few specific questions:
Do you think, in the light of everything that has come to light, CS committed fraud at Cambridge? (Ideally, a yes or a no).
Yes, at least on one specific trial.
Do you think CS knowingly deceived anybody (including possibly himself) or was he simply a victim of confirmation bias/subjective validation?
Is there anything else you would like to say about research on psi?
In the light of my decades of research on psi, and especially because of my experiences with the GF, I now believe that the possibility of psi existing is vanishingly small, though not zero. I am glad other people continue to study the subject because it would be so important to science if psi did exist. But for myself, I think doing any further psi research would be a complete waste of time. I would not expect to find any phenomena to study, let alone any that could lead us to an explanatory theory. I may yet be proved wrong of course. (Blackmore, personal communication, 1 August 2019)
Summary of facts and conclusions
A consistent pattern of data manipulation in Hans Eysenck’s and at least two collaborators’ research practice is evident over several decades. Yet only recently have journals found it necessary to retract 14 of Hans Eysenck’s papers and to publish 71 expressions of concern. One paper of concern was published by the Proceedings of the Royal Society of Medicine in 1946.
A reliable source accused Eysenck of cheating with his data analyses in the 1960s and other colleagues and PhD students publicly critiqued Eysenck’s laboratory methods.
In the late 1970s/early 1980s, Eysenck formed a long-term collaboration with a Cambridge academic, Carl Sargent, in spite of the fact that Sargent had been accused of fraud in 1979. Eysenck and Sargent’s joint publications, with Eysenck as senior author, appeared over the period 1982–1996.
In 1992 and 1993, Anthony Pelosi, Louis Appleby and others raised serious questions about publications by Eysenck with R Grossarth-Maticek (Pelosi, AJ, Appleby, L (1992) Psychological influences on cancer and ischaemic heart disease. British Medical Journal 304: 1295–1298; Pelosi, AJ, Appleby, L (1993) Personality and fatal diseases. British Medical Journal 306: 1666–1667). The authorities failed to respond.
Anthony Pelosi (2019) again voiced his concerns. This author’s editorial appealing to King’s College London to open an enquiry finally led to concrete action. 25 publications by H J Eysenck and R Grossarth-Maticek have been deemed by KCL to be unsafe.
As suspicions strengthened over a 75-year period from the mid-1940s, torpor and complacency in the academic system enabled research malpractice to continue, not only Eysenck’s and Sargent’s, but across the board.
The currently available systems for regulating research integrity and malpractice are an abject failure. A totally new approach is required. An independent National Research Integrity Ombudsperson needs to be established to significantly improve the governance of academic research.
The replication crisis in science begins with faked data. I discuss here a well-known recent case, that of Hans J Eysenck. An enquiry at King’s College London and scientific journals concluded that multiple publications by Hans J Eysenck are ‘unsafe’ and must be retracted. These recent events suggest that the entire edifice of Eysenck’s work warrants re-examination. In this post I examine some early experimental research by Eysenck and his students at the Institute of Psychiatry during the 1950s and 60s.
Hans J Eysenck was a chameleon-figure in the science of psychology. Eysenck doctored data from the very beginning of his theorising. Time and again HJE proved that he was a master of camouflage. I examine here some historically significant data that HJE used to promote his biological theory of personality, data that were used by HJE in a misleading way to promote his theories.
The evidence suggests that HJE massaged data to give them more ‘scientific’ appeal. HJE’s biological theory had predicted that introverts would condition more quickly than extraverts. The original data were collected by Cyril M Franks who had worked for his PhD under Eysenck’s supervision at the Institute of Psychiatry, London. Even Franks would later turn upon the master for his misleading methodology and data analysis. However, HJE dismissed and vehemently attacked all of his critics, claiming they were wrong, foolhardy and unreasonable.
HJE used a series of questionable research practices (QRPs) that raised many eyebrows, including among insiders at the Institute of Psychiatry. Eysenck’s theory of personality became the subject of scathing criticism. Chapter 5 of Playing With Fire by Rod Buchanan provides the full details.
My personal skepticism about HJE began as an undergraduate student when a lecturer, Vernon Hamilton, told me that HJE had ‘cheated’ with his data – see Hamilton’s critique here. Other telling criticism was published by Storms and Sigal here and in another article with Franks: see Sigal, Star and Franks here.
In spite of all of the controversy, which he seemed to rather enjoy, HJE became one of the most influential psychologists of all time. His Nature paper has been cited 6331 times.
In light of the recent exposure of Eysenck as a person who carried out serial publication fraud, it is informative to take a close look at Cyril Franks’ PhD research that in HJE’s creative accounting became a foundation stone of HJE’s first theory of personality.
EYSENCK’S DOCTORED CURVES
An almost perfect set of findings, one might assume – too good to be true even. My detailed scrutiny suggests that this was indeed the case. When one examines the data HJE used to generate these two curves, we see anything but smoothly increasing scores.
Franks tested a hypothesis attributed to Pavlov: “Neurotics of the dysthymic type form conditioned reflexes rapidly, and these reflexes are difficult to extinguish; neurotics of the hysteric type form conditioned reflexes slowly, and these reflexes are easy to extinguish”. Franks chose data from 20 dysthymic patients (having rejected data from 8 others), 20 hysteric patients (having rejected data from 7 others), and 20 non-patients, tested “…in a specially constructed soundproof conditioning laboratory”. The results for the dysthymic and hysteric groups were as follows:
Not unreasonably, Franks concluded that dysthymics give significantly more CR’s than hysterics. Buoyed by his initial success, Franks carried out another study to examine the factor of extraversion/introversion in the same eye-blink conditioning task. In this instance, Franks hypothesised, following HJE’s theory, that the introverts would condition more quickly than the extraverts.
Franks’ 1957 data again show the rates of classical conditioning in eye-blink responses in this case for 15 introverts and 15 extraverts. According to Eysenck’s theory, the former group should show more rapid conditioning than the latter. The maximum score was 15.
EYSENCK COMBINED DATA FROM FRANKS’ TWO STUDIES
HJE combined the data from Franks’ two studies in a rather creative and unconventional manner. He combined data from the group of introvert non-patients with the patients diagnosed with dysthymia, and data from the group of extravert non-patients with the patients categorised as hysterics. The data from the two Franks studies were a hotchpotch that needs untangling.
1) Eysenck combined the data from the extraverts with the data from the patients classified as hysterics, and the data from the introverts with those collected from the dysthymics. This rather odd amalgam smoothed out many of the jagged edges in the two data sets.
2) There was no justification for assuming that the CR rates began at zero, because all four groups had minimum scores well above zero. This fact was pointed out by Vernon Hamilton. Yet HJE doctored the data to look this way by imposing curves that started at a zero origin.
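The effect of imposing a zero origin can be illustrated with a minimal sketch. The numbers below are hypothetical, not Franks’ actual scores: when responses start from an above-zero baseline, forcing the fitted line through the origin both inflates the fitting error and hides the baseline that Hamilton pointed to.

```python
import numpy as np

# Hypothetical eye-blink conditioning scores (NOT Franks' data): a clear
# above-zero baseline of ~3 plus steady growth across trial blocks.
trials = np.arange(1, 11, dtype=float)
noise = np.array([0.2, -0.3, 0.1, 0.4, -0.2, 0.3, -0.1, 0.2, -0.4, 0.1])
scores = 3.0 + 0.8 * trials + noise

# Fit forced through the origin (y = a*x), as a zero starting point imposes
a0 = (trials @ scores) / (trials @ trials)
sse_origin = np.sum((scores - a0 * trials) ** 2)

# Ordinary least-squares fit with a free intercept (y = a*x + b)
a, b = np.polyfit(trials, scores, 1)
sse_free = np.sum((scores - (a * trials + b)) ** 2)

print(round(b, 2))            # recovered baseline is well above zero
print(sse_free < sse_origin)  # forcing the origin inflates the fitting error
```

With data like these, the free-intercept fit recovers the non-zero starting point, whereas the through-origin fit can only disguise it at the cost of a worse fit.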
The next figure shows the data after they had been combined, groups D and I together, and groups H and E together. I show the combined data with HJE’s smooth curves and the data points as HJE reported them.
EYSENCK ACHIEVED THE LOOK HE WANTED USING CHILDISHLY SIMPLE METHODS
HJE’s 4-step approach to a successful scientific outcome proceeded as follows:
First, HJE combined data from 4 different groups to create two new groups even though there was no scientific basis for doing so.
Second, although HJE’s and my computations of the combined data points show a fair degree of consistency, HJE appears to have ‘adjusted’ a few data points that didn’t fit the curve.
Third, HJE gave his smoothed curves zero starting points, contrary to the actual data, which indicate above-zero baseline scores. HJE attempted to disguise the fact that the groups had radically different, non-zero starting points.
Fourth, HJE ignored the fact that two lines with identical slope fitted the data equally well.
Using these devices, HJE promoted the data as respectable science fit for publication in the most reputable journals. This analysis suggests something rather different – that HJE was an out-and-out charlatan.
The left panel of the diagram below shows HJE’s published curves with his cunningly averaged data-points converted to percentages (small dots), and the same averaged data-points obtained by this author (DFM; o’s and x’s). In most cases, HJE’s and DFM’s data points coincide. In at least 4 instances, however, HJE’s points lie closer to the theoretical curve than the correct figures would suggest.
The right-hand panel shows the fit to the same two data sets using linear plots with identical slope. Neither of the fitted functions look anywhere near perfect, but there is no prima facie case for preferring the curvilinear to the linear fit.
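The parallel-lines alternative can be checked with a toy calculation. Again, the numbers are hypothetical, not the Franks/Eysenck data: two groups rising at an identical rate from different non-zero baselines are accounted for by two straight lines sharing one slope, with no curvature required.

```python
import numpy as np

# Hypothetical mean CR scores for two groups (NOT the Franks/Eysenck data):
# both rise at the same rate (slope 0.5) from different non-zero baselines.
x = np.arange(1, 9, dtype=float)
intro = 2.0 + 0.5 * x + np.array([0.1, -0.1, 0.0, 0.2, -0.2, 0.1, 0.0, -0.1])
extra = 1.0 + 0.5 * x + np.array([-0.1, 0.1, 0.2, -0.2, 0.0, 0.1, -0.1, 0.0])

# Pooled common-slope estimate: centre x, then regress both groups jointly
xc = x - x.mean()
slope = (xc @ intro + xc @ extra) / (2.0 * (xc @ xc))
b_intro = intro.mean() - slope * x.mean()
b_extra = extra.mean() - slope * x.mean()

# Residual error of the two parallel lines
sse = (np.sum((intro - (slope * x + b_intro)) ** 2)
       + np.sum((extra - (slope * x + b_extra)) ** 2))

print(round(slope, 2))              # close to the true common slope of 0.5
print(round(b_intro - b_extra, 2))  # constant vertical offset between groups
```

The shared-slope model fits such data with only a constant offset separating the groups, which is exactly the null alternative that HJE's curvilinear presentation avoided.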
HJE constructed a curvilinear association between eye-blink classical conditioning rates and questionnaire measures of extraversion-introversion. These curves were artificially doctored to suggest that introverts conditioned more quickly than extraverts, as HJE’s theory had predicted. By combining data that did not belong together, HJE was able to smooth the data sets, which when considered separately did not fit the predictions quite so well. HJE avoided a feasible alternative (null) hypothesis that the two groups produced identical rates of conditioning. In so doing, HJE helped to establish his first biological theory of personality. This was not only bad science, it was faked science, the work of a chameleon.
A flash-back to 1975 when Uri Geller came to town. Super-psychic or super-charlatan? Who to Believe?
On the one hand, a scientific report published in Nature verifying Geller’s psychic abilities under supposedly cheat-proof conditions, and on the other, a highly speculative but critical attack published simultaneously in New Scientist. While Targ and Puthoff’s reply … would seem to invalidate his explanations of their significant results, it is difficult to disregard the doubts Hanlon has raised about the “circus atmosphere” that he believes surrounded the SRI experiments. Frankly, at the end of 1974, we were puzzled and confused. Weighing all the evidence available at that time, it seemed impossible to decide whether Geller was a genuine psychic or an ingenious and highly skilled hoaxer. Clearly, the Geller effect had to be taken seriously as, in either case, there would be much of interest to learn about the mechanics of psychic performance. Clearly, what was needed was more experimentation.
Our first live encounter with Geller was accidental. On 23 March 1975 Geller arrived in New Zealand from Australia to begin a series of four “lecture-demonstrations” of his psychic powers. To facilitate communications, I ( David Marks) checked into the same hotel as Geller in the Dominion capital, Wellington. Hopefully we could obtain a sufficient level of cooperation to complete a series of laboratory tests. I left a letter for Geller at the hotel reception inviting his participation in some experiments.
I had been told by Geller’s local agent, Bruce Warwick, that Geller was due to arrive on the ten o’clock plane, and so an arrangement was made to talk with Geller the next morning after a press conference. At eight o’clock on the evening of the 23rd, I went down to dinner in the almost empty hotel restaurant. At about nine o’clock a party of noisy, flamboyant people sat down at the table next to me in the quiet dining room. From their accents, some were obviously Americans, others Australians, and others sounded like Americanized Israelis.
Suddenly, as I idly scanned their faces, to my utter amazement, I saw Uri Geller. Apparently, he had materialized himself into New Zealand prior to the aircraft’s arrival! He was sitting with his back to me, not more than ten feet away, opposite a woman with blond hair who spoke loudly and clearly with a distinct American accent.
Geller’s Faux Pas
Although I was dying to meet Geller, my first reaction was to leave, as the last thing I wanted to do was invade Geller’s privacy. However, it was I who had been there first, and they had sat next to me, not vice versa, so I decided to finish my dinner and then leave.
To this day I can still hardly believe what took place in the next few moments. The American woman (whom I knew later to be Miss Solveig Clark, one of Geller’s personal assistants) asked Geller in a clear and distinctive voice whether he had “read the letter from Dr. Marks.” Like most other people, I find it hard not to tune in to a conversation when my name is mentioned. I heard Geller reply: “Keep that guy away from me; he’ll pick up the signals (sic).”
No words can describe how I felt at that moment. What signals? Could these be the signals described in the New Scientist? Who was Geller’s female confidante? Was Puharich there, or Shipi Shtrang? Although I couldn’t answer all these questions, Geller had already told me more than I ever imagined would be possible. Yet Geller was blissfully ignorant of this major faux pas. I couldn’t help feeling that if Geller were truly psychic, he’d certainly have sensed my presence and avoided giving away trade secrets!
No question about it, from that moment I sensed that Uri Geller was nothing more than a not-so-clever trickster.
Anybody can bend a spoon, as long as you have a firm grip. Try it without touching, however, and it’s a very different story.
Geller successfully conned pretty much the whole world into believing he had special powers.
We called for an enquiry (Marks, 2019). H J Eysenck’s ex-employer, the Institute of Psychiatry in Denmark Hill, is now a part of King’s College London (KCL).
The enquiry at KCL concluded that 25 publications were unsafe. However, the enquiry report remains unpublished and incomplete.
KCL reviewed publications written by Eysenck with his collaborator Ronald Grossarth-Maticek. The enquiry failed to investigate 36 other bogus items based on exactly the same data collected by Eysenck’s collaborator.
The KCL enquiry must be properly completed to include the entire set of 61 bogus publications.
The Eysenck affair makes a strong case for a National Research Integrity Ombudsperson.
Eysenck’s research had been conducted with a German sociologist, Ronald Grossarth-Maticek, while claiming affiliation to Eysenck’s employer, the Institute of Psychiatry, now part of King’s College London (KCL). In a survey of Eysenck’s publications about fatal illness and personality, I identified a provisional total of 61 that exceeded any reasonable boundary of scientific credibility. Based on Pelosi’s case and my review of these dubious publications, I called for an investigation by KCL into these publications (Marks, 2019).
On 3rd December 2018 I sent a pre-publication copy of Anthony Pelosi’s review and my editorial to the Principal of KCL, Professor Edward Byrne. On 13th December 2018, I received a reply informing me that a considered response would follow a KCL review.
FOUR MONTH DELAY
On 25th June 2019 I was informed that KCL had completed its enquiry to examine publications authored by Professor Hans Eysenck with Professor Ronald Grossarth-Maticek. Professor Byrne said that KCL had contacted the University of Heidelberg where Professor Grossarth-Maticek is associated. Professor Byrne confirmed that the enquiry had found “a number of papers” to be questionable and that KCL would be writing to the editors of the relevant journals to inform them. I requested a copy of the enquiry report but heard nothing more until October 4th 2019 when I received the enquiry report dated ‘May 2019’.
The reason for the 4-month delay is unclear.
According to the enquiry report, the Principal had asked the Institute of Psychiatry, Psychology & Neuroscience (IoPPN) to set up a committee to examine publications authored by Professor Hans Eysenck with Professor Ronald Grossarth-Maticek.
Why only the publications co-authored with Grossarth-Maticek? The reason for this limitation in the scope of the enquiry is not given.
The enquiry committee expressed its concerns about the Eysenck and Grossarth-Maticek papers in the following terms:
“The concerns are based on two issues. First, the validity of the datasets, in terms of recruitment of participants, administration of measures, reliability of outcome ascertainment, biases in data collection, absence of relevant covariates, and selection of cases analysed in each article. Second, the implausibility of the results presented, many of which show effect sizes virtually unknown in medical science. For example, the relative risk of dying of cancer for individuals with ‘cancer-prone’ personality compared with healthy personality was over 100, while the risk of cancer mortality was reduced 80% by bibliotherapy. These findings are incompatible with modern clinical science and the understanding of disease processes.”
“The Committee shared the concerns made by the critics of this body of work. We have come to the conclusion that we consider the published results of studies that included the results of the analyses of data collected as part of the intervention or observational studies to be unsafe and that the editors of the journals should be informed of our decision. We have highlighted 26 papers (Appendix 1) which were published in 11 journals which are still in existence.”
SHIFTING THE BLAME
As noted, the KCL enquiry was based on the publications Eysenck co-authored with Ronald Grossarth-Maticek. Was this a manoeuvre designed to try and shift the blame away from Eysenck towards Grossarth-Maticek?
If so, it failed.
Any implication that Eysenck was a hapless victim of a dishonest act of data manipulation by Grossarth-Maticek is inconsistent with the evidence. After all, a large subset of 36 single-authored publications by the great man himself were based, partly or entirely, on the same body of research data as the co-authored publications.
There are numerous publications both with and without his collaborator, and many of them cover exactly the same material. Multiple publication of the same material is definitely not part of the normally recognised process of academic publication.
There is indubitable evidence in Eysenck’s multiple publications of the Questionable Writing Practice of self-plagiarism.
OTHER EXAMPLES OF EYSENCK’S FRAUD
The KCL enquiry failed to identify the full extent of Eysenck’s fraud. Its enquiry must be extended to examine the ‘safety’ of Eysenck’s 36 bogus single-authored publications. It should also examine Eysenck’s multiple publications covering the same ground for evidence of self-plagiarism.
On no fewer than 20 co-authored papers, Grossarth-Maticek was falsely shown as being affiliated to the Institute of Psychiatry. This fraud can be laid squarely at Eysenck’s door. Grossarth-Maticek could not have asserted this false affiliation without the deliberate connivance of Eysenck.
Recent personal communications with Ronald Grossarth-Maticek indicate that Grossarth-Maticek does not have even a minimal command of English. It can be reasonably assumed that Grossarth-Maticek was 100% reliant on Eysenck to produce the English language versions of the 25 unsafe papers.
TOTAL BODY OF BOGUS WORK MUST BE CONSIDERED
In total, Eysenck was responsible for no fewer than 61 publications using the bogus data sets. This total body of 61 publications includes more than 40 peer-reviewed journal articles, 10 book chapters and two books, each in three editions.
The proper course for editors and publishers is to retract all 61 publications.
To quote James Heathers: “Eysenck would eclipse Diederik Stapel (58) as the most retracted psychologist in history, a scarcely believable legacy for someone who was at one time the most cited psychologist on the planet” (Heathers, 2019).
INABILITY TO PROPERLY INVESTIGATE FRAUD INSIDE ACADEMIC INSTITUTIONS
There are strong reasons to doubt the capability of KCL and other academic institutions to properly and fully investigate academic misconduct because of the obvious conflict of interest. In a previous case when I brought a complaint of academic misconduct to KCL, the institution failed to follow its procedures for investigating the complaint. In the Eysenck case, it has investigated less than half of the publications pertaining to the complaint.
All of which leads one to conclude that there is an urgent need to establish a National Research Integrity Ombudsperson to investigate allegations of academic misconduct.
The need for an independent UK body to promote good governance, management and conduct of academic, scientific and medical research could never be stronger than in the present situation. The Eysenck affair requires the full attention of the institutions that govern scientific practice.
The only professional body for psychologists, the British Psychological Society, washed its hands of the problem by passing the entire responsibility to KCL.
This is not an issue about a single individual’s alleged misconduct, or about a single institution; it is about the integrity of science. Without a genuine ability to assure governance, quality and integrity, science is a failure unto itself, to reason and to ethics.
As James Heathers points out:
the question is, does anybody have the will to do anything about it?
During the 1980s and 1990s, Hans J Eysenck conducted a programme of research into the causes, prevention and treatment of fatal diseases in collaboration with one of his protégés, Ronald Grossarth-Maticek. This led to what must be the most astonishing series of findings ever published in the peer-reviewed scientific literature, with effect sizes that have never otherwise been encountered in biomedical research. This article outlines just some of these reported findings and signposts readers to extremely serious scientific and ethical criticisms that were published almost three decades ago. Confidential internal documents that have become available as a result of litigation against tobacco companies provide additional insights into this work. It is suggested that this research programme has led to one of the worst scientific scandals of all time. A call is made for a long overdue formal inquiry.
The University of London (UL) is a complex, federal institution including University College London (UCL), the LSE, King’s College London and the London Business School. The University is the world’s oldest provider of academic awards through distance and flexible learning, dating back to 1858. The UL website proudly announces that it: “has been shortlisted for the International Impact Award at the 2018 Times Higher Education Awards, known as the ‘Oscars’ for higher education.”
The academic context of an institution of the size and complexity of the UL is one of intense external and internal competition. These colleges compete fiercely for resources on a national and international stage. Many of them do exceedingly well. They are obsessed by their positions in various public league tables. For example, the Times Higher Education (2018) World University Rankings for 2019 place Imperial College, UCL, LSE and King’s at 9th, 14th, 26th and 38th places respectively in a table of 1250 universities. These rankings matter and the only game in town is to move up the table. Oxford and Cambridge are in first and second place, with Stanford, MIT, CalTech and the Ivy League universities not far behind.
Within UL itself, there is intense rivalry between the member colleges, the Medical Schools, the Schools and departments within those colleges, research groups and units within departments, and finally, between individual academics. The white heat of competition needs to be directly observed or experienced to be believed. Academics at every level are under huge pressure to obtain research funding and to publish peer-reviewed papers in high-impact journals that raise the perceived status of their schools and departments, and to do all of this as quickly as possible. As a consequence, simply to stay in the race, any method that produces outstanding results will be tried and tested. Unfortunately, from time to time, this inevitably means that academics resort to fraudulent practices.
This always does harm; it harms patients, biomedicine and science. It also harms the reputations of the individuals concerned and their institutions. For this reason, information about scientific misconduct seldom finds its way into public arenas, yet it is a notable part of ‘behind the scenes’ academic history. In “Scientific misconduct and the myth of self-correction in science”, Stroebe, Postmes and Spears (2012) discuss 40 cases of fraud that occurred between 1974 and 2012. The majority occurred in Biomedicine and the only two UK cases were at UL. Academic institutions prefer to keep scientific fraud committed by their employees behind closed doors. Then with the inevitable leaks, news of ‘scandals’ creates headlines in the mainstream media. This means that academic responses to fraud are driven by scandals. To quote Richard Smith (2006): “They accumulate to a point where the scientific community can no longer ignore them and `something has to be done’. Usually this process is excruciatingly slow.”
There have been several cases of proven scientific misconduct involving fabrication and fraud at esteemed colleges within London University, which has been blighted by a high proportion of ‘celebrity’ fraud cases, a few of which are summarised below.
UNIVERSITY COLLEGE LONDON – BURT SCANDAL
Sir Cyril Burt at University College London claimed a child’s intelligence is mainly inherited and social circumstances play only a minor role. Burt was a eugenicist and he fabricated data in a manner that suggested the genetic theories of intelligence were confirmed. Burt’s research formed the basis of education policy from the 1920s until Burt died in 1971. Soon afterwards evidence of fraud began to seep out, as if from a leaky bucket.
Notable exposures came from Leon Kamin (1974) in his book, The science and politics of IQ, and from Oliver Gillie (1976, October 24), who reported in the Sunday Times that “Crucial data was faked by eminent psychologist”.
Burt was alleged to have invented results, assistants and authors to fit his theory that intelligence has primarily a genetic basis. It is widely accepted today that Burt was a fraudster although he still has defenders.
ROYAL FREE HOSPITAL – WAKEFIELD SCANDAL
A fraudulent article in The Lancet falsely linked the MMR vaccine to autism. The publicity about this scared large numbers of parents. Dr. Andrew J Wakefield and a team (1998) at the Royal Free Hospital and School of Medicine, UL, falsified their findings. This resulted in a substantial drop in vaccinations causing unnecessary deaths among thousands of unprotected children (e.g., Braunstein, 2012; Deere, 2012). In spite of significant public and scientific concerns, the Wakefield paper was not retracted until February 2010, 12 years after the original publication. The paper received 1330 citations in the 12-year period prior to retraction and 1260 citations since the retraction. The false evidence that MMR vaccine causes autism is widely cited to the present day, and the paper forms the backbone of an international anti-vaxxing campaign which Wakefield leads from Austin, Texas (Glenza, 2018).
ST GEORGE’S MEDICAL SCHOOL – PEARCE SCANDAL
Dr. Malcolm Pearce of St George’s Medical School, UL, claimed that a 29-year-old woman had given birth to a healthy baby after he had successfully relocated a five-week-old ectopic foetus into her womb (Pearce et al., 1994). The report excited worldwide interest and offered hope to thousands of women whose pregnancies start outside the uterus and end in miscarriage. However, Dr Pearce’s patient records had been tampered with, colleagues knew nothing of this astonishing procedure, and the mother could not be tracked down. Pearce had falsified his evidence. The GMC ruled that fraud had occurred and struck his name from the register. His fraud actually ended two careers.
BIRKBECK COLLEGE AND UCL SCANDAL
Turner (2018) describes “a major research scandal, after an inquiry found that scientific papers were doctored over an eleven year period.” Professor David Latchman, Master of Birkbeck College and one of the country’s top geneticists, was accused of “recklessness” for allowing research fraud to take place at UCL’s Institute of Child Health. The report states that UCL launched a formal investigation after a whistleblower alleged fraud in dozens of papers published by the Institute.
It is alleged that a panel of experts found that two scientists, Dr Anastasis Stephanou and Dr Tiziano Scarabelli, were guilty of research misconduct by manipulating images in seven published papers. Professor Latchman, a former Dean of the Institute, is cited as an author on all seven of the papers. In a paper published in the Journal of the American College of Cardiology, the panel said there was “clear evidence” of cloning, where parts of an image were copied and pasted elsewhere.
KING’S COLLEGE LONDON – PETERS AND BANERJEE SCANDAL
Another college in UL tainted by fraud is King’s College. According to the King’s College website (https://www.kcl.ac.uk/lsm/about/history/index.aspx) the College was founded in 1829 as a university college “in the tradition of the Church of England”. The first King’s Professor to gain prominence as a fraudster was Professor Timothy Peters, professor of clinical biochemistry at King’s College School of Medicine and Dentistry, who was found guilty of serious professional misconduct in 2001. He was given a severe reprimand by the General Medical Council (GMC) for failing to take action over falsified research published by a junior doctor he was supervising (Dyer, 2001).
Professor Peters had been the research supervisor of Dr Anjan Banerjee, a junior doctor at King’s College Hospital between 1988 and 1991. Dr Banerjee, aged 41, was suspended from practice by the GMC for 12 months in December 2000 for publishing fraudulent research (BMJ 2000;321:1429). When the GMC suspended him, he had already been suspended from his job as consultant surgeon at the Royal Halifax Infirmary as a result of unconnected allegations concerning financial fraud, and he resigned after the GMC suspension. In spite of everything, Dr Banerjee was awarded fellowships at three Royal Colleges and also the MBE! Nice work, if you can get it.
KING’S COLLEGE LONDON – HANS J EYSENCK AND R GROSSARTH-MATICEK SCANDAL
Hans Eysenck did his doctorate at UCL under the supervision of Cyril Burt (see section above about the Burt Scandal).
My Open Letter to the President of King’s College London, Professor Edward Byrne, draws attention to the 30-year-old scandal concerning the dodgy data, impossible claims and dirty tobacco money that are the foundation of multiple dubious publications by Professor H J Eysenck and R Grossarth-Maticek. An investigation by KCL of these events is long overdue and a report of a review by KCL is currently awaited. Watch this space…
[Photo: Professor Hans J Eysenck and Ronald Grossarth-Maticek]
UNIVERSITY COLLEGE LONDON – AHLUWALIA SCANDAL
The Ahluwalia scandal is described in detail by Dr Geoff. It involved multiple acts of fraud. Jatinder Ahluwalia was obviously a very shrewd operator. In spite of getting found out on more than one occasion, Ahluwalia was able to gain employment in several prestigious institutions including Cambridge University, Imperial College London, UCL and the University of East London.
These cases indicate the relative ease with which the academic fraudster can accomplish fame and fortune at some of the most prestigious institutions in the land. The extremely poor record of the authorities at colleges in London University in discovering and calling out fraud is something to behold.
Critique by Keith Geraghty and Special Issue Editorial in the Journal of Health Psychology (July 31, 2017)
I reproduce here my Editorial from the Special Issue of the Journal of Health Psychology on the PACE Trial. The issue contained an incisive critique of the trial by Dr. Keith J Geraghty (pictured). Keith Geraghty’s landmark paper, ‘PACE-Gate’: When clinical trial evidence meets open data access’, sparked a response by the PACE trial team, and a stream of commentaries, creating a storm of controversy.
The Times carried a report about a ‘mass resignation’ when three pro-PACE editorial board members resigned. The Daily Mail took a similar line, an unnecessary distraction from the main story. The PACE Trial debate continued among researchers, patient organisations, in Parliament, and on 20 September 2017 NICE announced that it would begin a review of its guidance on the diagnosis and treatment of CFS/ME.
In February 2018 Carol Monaghan (MP, Glasgow North West) organised a Parliamentary debate when she predicted that “when the full details of the trial become known, it will be considered one of the biggest medical scandals of the 21st century.”
Over a hundred academics, patient groups, lawyers, and politicians have signed an open letter to the Lancet calling on the journal to commission an independent reanalysis of the data from the PACE trial, which it published in 2011. Better yet, there ought to be a full retraction.
Following is the text of the Special Issue Editorial.
We are proud that this issue marks a special contribution by the Journal of Health Psychology to the literature concerning interventions to manage adaptation to chronic health problems. The PACE Trial debate reveals deeply embedded differences between critics and investigators. It reveals an unwillingness of the co-principal investigators of the PACE trial to engage in authentic discussion and debate. It leads one to question the wisdom of such a large investment from the public purse (£5million) on what is a textbook example of a poorly done trial.
The Journal of Health Psychology received a submission in the form of a critical review of one of the largest psychotherapy trials ever done, the PACE Trial. PACE was a trial of therapies for patients with myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS), a trial that has been associated with a great deal of controversy (Geraghty, 2016). Following publication of the critical paper by Keith Geraghty (2016), the PACE Trial investigators responded with an Open Peer Commentary paper (White et al., 2017). The review and response were sent to more than 40 experts on both sides of the debate for commentaries.
The resulting collection is rich and varied in the perspectives it offers from a neglected point of view. Many of the commentators should be applauded for their courage, resilience and ‘insider’ understanding of experience with ME/CFS.
The Editorial Board wants to go on record that the PACE Trial investigators and their supporters were given numerous opportunities to participate, even extending the possibility of appeals and re-reviews when they would not normally be offered. That they failed to respond appropriately is disappointing.
Commentaries were invited from an equal number of individuals on both sides of the debate (about 20 from each side of the debate). Many more submissions arrived from the PACE Trial critics than from the pro-PACE side of the debate. All submissions were peer reviewed and judged on merit.
The PACE Trial investigators’ defence of the trial was in a template format that failed to engage with critics. Before submitting their reply, Professors Peter White, Trudie Chalder and Michael Sharpe wrote to me as co-principal investigators of the PACE trial to seek a retraction of sections of Geraghty’s paper, a declaration of conflicts of interest (COI) by Keith Geraghty on the grounds that he suffers from ME/CFS, and publication of their response without peer review (White et al., 4 November 2016, email to David F Marks). All three requests were refused.
On the question of COI, the PACE authors themselves appear to hold strong allegiances to cognitive behavioural therapy (CBT) and graded exercise therapy (GET), treatments they developed for ME/CFS. Stark COI have been exposed by the commentaries, including the PACE authors’ own double role as advisers to the UK Government Department of Work and Pensions (DWP), a sponsor of PACE, while simultaneously advising large insurance companies that have gone on record about the potential financial losses if ME/CFS were deemed a long-term physical illness. In a further twist to the debate, undeclared COI of Petrie and Weinman (2017) were alleged (Lubet, 2017). Professors Weinman and Petrie adamantly deny that their work as advisers to Atlantis Healthcare represents a COI.
After the online publication of several critical Commentaries, Professors White, Sharpe, Chalder and 16 co-authors were offered a further opportunity to respond to their critics in the round but they chose not to do so.
After peer review, authors were invited to revise their manuscripts in response to reviewer feedback and many produced multiple drafts. The outcome is a set of robust papers that should stand the test of time and shed significant new light on what went wrong with a trial that has had such high significance for treatment protocols. It is disappointing that the hitherto more dominant side of the debate refused to participate.
Unfortunately, across the pro-PACE group of authors there was a consistent pattern of resistance to the debate. After receiving critical reviews, the pro-PACE authors chose to make only cosmetic changes or not to revise their manuscripts in any way whatsoever. They appeared unwilling to enter into the spirit of scientific debate. They acted with a sense of entitlement not to have to respond to criticism. Two pro-PACE authors even showed disdain for ME/CFS patients, stating: “We have no wish to get into debates with patients.” In another instance, three pro-PACE authors attempted to subvert the journal’s policy on COI by recommending reviewers who were strongly conflicted, forcing rejection of their paper.
The dearth of pro-PACE manuscripts to start with (five submissions), their poor quality, the intransigence of the authors to revise and the unavoidable rejection of three pro-PACE manuscripts led to an imbalance in papers between the two sides. However, this editor was loath to compromise standards by publishing unsound pieces, in spite of pressure to go ahead and publish from people who should know better.
ME/CFS research has been poorly served by the PACE Trial and a fresh new approach to treatment is clearly warranted. On the basis of this Special Issue, readers can make up their own minds about the scientific merits and demerits of the PACE Trial. It is to be hoped that the debate will provide a more rational basis for evidence-based improvements to the care pathway for hundreds of thousands of patients.
Geraghty, KJ (2016) ‘PACE-Gate’: When clinical trial evidence meets open data access. Journal of Health Psychology 22(9): 1106–1112.
Lubet, S (2017) Defense of the PACE trial is based on argumentation fallacies. Journal of Health Psychology 22(9): 1201–1205.
Petrie, K, Weinman, J (2017) The PACE trial: It’s time to broaden perceptions and move on. Journal of Health Psychology 22(9): 1198–1200.
White, PD, Chalder, T, Sharpe, M (2017) Response to the editorial by Dr Geraghty. Journal of Health Psychology 22(9): 1113–1117.
The Editorial has been abridged and the photograph of Dr. Keith Geraghty added.
We discuss here the chequered history of the claims by Psychologists and others about the links between personality and illness, particularly heart disease and cancer. The research has been marred by dirty money and allegations of fraud.
Speculation about ‘Type A’ and ‘Type B’ personalities and coronary heart disease (CHD) has existed for at least 70 years. The distinction between the two personalities was introduced in the mid-1950s by the cardiologists Meyer Friedman and Ray Rosenman, who later popularised it in their book Type A Behavior and Your Heart (1974). Their ideas can be traced to Franz Alexander, one of the ‘fathers’ of psychosomatic medicine.
The Type A personality is described as follows: highly competitive and achievement oriented, not prepared to suffer fools gladly, always in a hurry and unable to bear delays and queues, hostile and aggressive, inclined to read, eat and drive very fast, and constantly thinking what to do next, even when supposedly listening to someone else. Type A was thought to be at greater risk of CHD.
The Type B personality is described as: relaxed, laid back, lethargic, even-tempered, amiable and philosophical about life, relatively slow in speech and action, and generally having enough time for everyone and everything.
The Type A personality corresponds to Galen’s choleric temperament, and the Type B to the phlegmatic. It is well known that men are at greater risk of CHD than women.
The key pioneering study of Type A personality and CHD was the Western Collaborative Group Study (WCGS). Over 3,000 Californian men, aged 39 to 59, were followed up initially over a period of eight-and-a-half years, and later for 22 years or more. At the eight-and-a-half-year follow-up, Type As were twice as likely as Type Bs to suffer from subsequent CHD: 7% developed some signs of CHD, and two-thirds of these were Type As. This increased risk remained even when other risk factors, such as blood pressure and cigarette smoking, were statistically controlled.
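As a quick sanity check, the reported figures are internally consistent. This is a minimal sketch, not WCGS data: the equal group sizes are my assumption for illustration (the actual A/B split was roughly even), but under it, two-thirds of cases falling in the Type A group does imply roughly double the risk:

```python
# Back-of-envelope check of the WCGS figures, assuming (for illustration
# only) equal-sized Type A and Type B groups of ~1,500 men each.
n_per_group = 1500
total_cases = round(0.07 * 2 * n_per_group)   # ~7% of ~3,000 men developed CHD signs
cases_a = round(total_cases * 2 / 3)          # two-thirds of the cases were Type A
cases_b = total_cases - cases_a

risk_a = cases_a / n_per_group                # CHD rate among Type As
risk_b = cases_b / n_per_group                # CHD rate among Type Bs
relative_risk = risk_a / risk_b
print(f"Relative risk (Type A vs Type B): {relative_risk:.1f}")
```

With 210 cases split 140/70 across equal groups, the relative risk comes out at exactly 2, matching the "twice as likely" claim.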
Similar results were obtained in another large-scale study in Framingham, Massachusetts. This time the sample contained both men and women. By the early 1980s, it was confidently asserted that Type A characteristics were as much a risk factor for heart disease as high blood pressure, high cholesterol levels and even smoking.
Failure to Replicate
Later research failed to support these early findings. When Ragland and Brand (1988) conducted a 22-year follow-up of the WCGS, using CHD mortality as the crucially important measure, they failed to find any consistent evidence of an association.
Further research continued up to the late 1980s, yielding few positive findings. Reviewing this evidence, Myrtek (2001) suggests that the modest number of positive findings that did exist were the result of over-reliance on angina as the measure of CHD. Considering studies that adopted hard criteria, including mortality, Myrtek concludes that Type A personality is not a risk factor for CHD.
Enter the Tobacco Industry
With such disappointing results, why did Type A obtain so much publicity over more than 40 years? The reason is in part connected with the involvement of the US tobacco industry.
Mark Petticrew et al. (2012) analysed material lodged at the Legacy Tobacco Documents Library. This is a vast collection of documents that the companies were obliged to make public following litigation in 1998. These documents show that, for over 40 years from the 1950s, the industry heavily funded research into links between personality, CHD and cancer. The industry was hoping to demonstrate that personality variables were associated with cigarette smoking.
Any such links would undermine the alleged causal links between smoking and disease. Thus, for example, if it could be shown that Type A personalities were both more likely to smoke than Type Bs, and more likely to develop CHD, then it could be argued that smoking might be just an innocent background variable.
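The argument sketched above is the classic confounding pattern, and a toy simulation makes it concrete. Nothing here comes from the tobacco documents; all probabilities are invented for illustration:

```python
import random

# Toy model of the industry's hoped-for argument: personality drives BOTH
# smoking and CHD, so smoking and CHD correlate even though, in this model,
# smoking has no causal effect on CHD whatsoever.
random.seed(42)

n = 100_000
smokers = nonsmokers = smokers_chd = nonsmokers_chd = 0

for _ in range(n):
    type_a = random.random() < 0.5                        # half the population is Type A
    smokes = random.random() < (0.6 if type_a else 0.2)   # Type As smoke more
    chd = random.random() < (0.10 if type_a else 0.05)    # Type As get more CHD,
                                                          # independently of smoking
    if smokes:
        smokers += 1
        smokers_chd += chd
    else:
        nonsmokers += 1
        nonsmokers_chd += chd

rate_smokers = smokers_chd / smokers
rate_nonsmokers = nonsmokers_chd / nonsmokers
print(f"CHD rate, smokers:     {rate_smokers:.3f}")
print(f"CHD rate, non-smokers: {rate_nonsmokers:.3f}")
```

The smokers show the higher CHD rate purely because personality is a common cause of both, which is exactly the "innocent background variable" defence the industry hoped to mount.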
The Philip Morris company funded the Meyer Friedman Institute, led by Meyer Friedman, the originator of Type A research. The research aimed to show that Type A personalities could be changed by interventions, thereby presumably reducing proneness to CHD even if they continued to smoke.
Petticrew et al. show that, while most Type A–CHD studies were not funded by the tobacco industry, most of the positive results were tobacco-funded. As has been pointed out in many areas of science, positive findings invariably get a great deal more publicity than negative findings and rebuttals.
Hans J Eysenck
The late H J Eysenck was one of the most controversial psychologists who ever lived. Generations of UK psychology students had to study his books as gospel.
The German-born, British psychologist worked at the Institute of Psychiatry, University of London. He did his PhD under Sir Cyril Burt, who was later shown to have fabricated both data and research assistants to support his eugenic theory of intelligence (Kamin, 1974, The science and politics of IQ).
Eysenck used the tobacco industry as a source of funding for his research on psychological theories of personality. According to Pringle (1996), Eysenck received nearly £800,000 to support his research on personality and cancer. Eysenck’s results were a spectacular exception to the general run of negative findings in this field. Eysenck (1988) claimed that personality variables are much more strongly related to death from cancer than even cigarette smoking.
One of my lecturers when I was an undergraduate had worked for Eysenck as a research assistant for a year. It had seemed clear to him that data massaging was required before Eysenck’s studies were submitted for publication. Data manipulation, or even worse, outright fraud, has surfaced in a major re-analysis of Eysenck’s work on tobacco and personality.
Two of Eysenck’s papers, with Ronald Grossarth-Maticek (pictured above), based in Crvenka, Serbia, claimed to have identified personality types that increase the risk of cancer by about 120 times and heart disease by about 25 times (Grossarth-Maticek and Eysenck, 1991; Eysenck and Grossarth-Maticek, 1991). They also claimed to have tested a new method of psychological treatment that could reduce the death rate for disease prone personalities over the next 13 years from 80% to 32%. These claims are too good to be true.
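Simple arithmetic shows just how extreme the treatment claim is. The figures below are the claimed ones, reproduced only to gauge their implausibility:

```python
# Implied effect size of the claimed psychological treatment (figures as
# claimed by Grossarth-Maticek and Eysenck, used here only to show scale).
death_rate_untreated = 0.80   # claimed 13-year mortality without treatment
death_rate_treated = 0.32     # claimed 13-year mortality with treatment

absolute_risk_reduction = death_rate_untreated - death_rate_treated
number_needed_to_treat = 1 / absolute_risk_reduction   # patients per death averted
relative_risk = death_rate_treated / death_rate_untreated

print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:.1f}")
print(f"Relative risk:           {relative_risk:.2f}")
```

An absolute risk reduction of 48 percentage points implies roughly one death averted for every two patients treated; number-needed-to-treat values for accepted life-saving medical interventions are typically in the tens or hundreds.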
These extraordinary claims were not received favourably by others in this field. Fox (1988) dismissed earlier reports by Eysenck and Grossarth-Maticek as ‘simply unbelievable’ and the 1991 papers were subjected to devastating critiques by Pelosi and Appleby (1992, 1993) and Amelang, Schmidt-Rathjens and Matthews (1996). The ‘cancer prone personality’ was not clearly described and seems to have been an odd amalgam of emotional distance and excessive dependence.
A Case of Fraud?
After pointing out a large number of errors, omissions, obscurities and implausible data, in a manner reminiscent of Leon Kamin’s analysis of Burt’s twin IQ data, Pelosi and Appleby comment:
It is unfortunate that Eysenck and Grossarth-Maticek omit the most basic information that might explain why their findings are so different from all the others in this field. The methods are either not given or are described so generally that they remain obscure on even the most important points . . . Also essential details are missing from the results, and the analyses used are often inappropriate.
(Pelosi and Appleby, 1992: 1297).
They never used the word “fraud”. They didn’t need to. For an update of this story, see this post
Rarely in the history of clinical medicine have doctors and patients been placed so bitterly at loggerheads. The dispute had been a long time coming. Thirty years ago, a few psychiatrists and psychologists offered a hypothesis based on a Psychological Theory in which ME/CFS is constructed as a psychosocial illness. According to their theory, ME/CFS patients have “dysfunctional beliefs” that their symptoms are caused by an organic disease. The ‘Dysfunctional Belief Theory’ (DBT) assumes that no underlying pathology is causing the symptoms; patients are being ‘hypervigilant to normal bodily sensations‘ (Wessely et al., 1989; Wessely et al., 1991).
The Psychological Theory assumes that the physical symptoms of ME/CFS are the result of ‘deconditioning’ or ‘dysregulation’ caused by sedentary behaviour, accompanied by disrupted sleep cycles and stress. Counteracting deconditioning involves normalising sleep cycles, reducing anxiety levels and increasing physical exertion. To put it bluntly, the DBT asserts that ME/CFS is ‘all in the mind’. Small wonder that patient groups have been expressing anger and resentment in their droves.
‘Top-down research’ uses a hierarchy of personnel, duties and skill-sets. The person at the top sets the agenda and the underlings do the work. The structure is a bit like the social hierarchy of ancient Egypt. Unless carefully managed, this top-down approach risks creating a self-fulfilling prophecy from confirmation biases at multiple levels. At the top of the research pyramid sits the ‘Pharaoh’, Regius Professor Sir Simon Wessely KB, MA, BM BCh, MSc, MD, FRCP, FRCPsych, F Med Sci, FKC, Knight of the Realm, President of the Royal College of Medicine, and originator of the DBT. The principal investigators (PIs) for the PACE Trial, Professors White, Chalder and Sharpe, are themselves advocates of the DBT. The PIs all have or had connections both to the Department of Work and Pensions and to insurance companies. The objective of the PACE Trial was to demonstrate that two treatments based on the DBT, cognitive behavioural therapy (CBT) and graded exercise therapy (GET), help ME/CFS patients to recover. There was zero chance the PACE researchers would fail to obtain the results they wanted.
Groupthink, Conflicts and Manipulation
The PACE Trial team were operating within a closed system, or groupthink, in which they ‘know’ their theory is correct. With every twist and turn, no matter what the actual data show, the investigators are able to confirm their theory. The process is well known in Psychology. It is a self-indulgent process of subjective validation and confirmation bias. Groupthink occurs when a group makes faulty decisions because group pressures lead to a deterioration of “mental efficiency, reality testing, and moral judgment” (Janis, 1972). Given this context, we can see reasons to question the investigators’ impartiality, with many potential conflicts of interest (Lubet, 2017). Furthermore, critical analysis suggests that the PACE investigators manipulated protocols midway through the trial, selected confirming data and omitted disconfirming data, and published biased reports of findings, creating a catalogue of errors.
‘Travesty of Science’
The PACE Trial has been termed a ‘travesty of science’ while sufferers of ME/CFS continue to be offered unhelpful or harmful treatments and are basically being told to ‘pull themselves together’. One commentator has asserted that the situation for ME patients in the UK is: “The 3 Ts – Travesty of Science; Tragedy for Patients and Tantamount to Fraud” (Professor Malcolm Hooper, quoted by Williams, 2017). Serious errors in the design, the protocol and procedures of the PACE Trial are evident. The catalogue of errors is summarised below. The PACE Trial was loaded towards finding significant treatment effects.
A Catalogue of Errors
The claimed benefits of GET and CBT for patient recovery are entirely spurious. The explanation lies in a sequence of serious errors in the design, the changed protocol and the procedures of the PACE Trial. The investigators neglected or bypassed accepted scientific procedures for an RCT, as follows:
1. Ethical issue: applying for ethical approval and funding for a long-term trial when the PIs already knew that CBT effects on ME/CFS were short-lived. On 3rd November 2000, Sharpe confirmed: "There is a tendency for the difference between those receiving CBT and those receiving the comparison treatment to diminish with time due to a tendency to relapse in the former" (www.cfs.inform/dk). Wessely stated in 2001 that CBT is "not remotely curative" and that "These interventions are not the answer to CFS" (Editorial: JAMA, 19th September 2001, 286:11) (Williams, 2016).
2. Ethical issue: failure to declare conflicts of interest to the Joint Trial Steering Committee. The conflicts of interest of the three PIs went undeclared in the Minutes of the Joint Trial Steering Committee and Data Monitoring Committee held on 27th September 2004.
3. Ethical issue: failure to obtain fully informed consent, owing to non-disclosure of conflicts of interest. The PIs failed to declare their vested financial interests to PACE participants, in particular that they worked for the PHI industry, advising claims handlers that no payments should be made until applicants had undergone CBT and GET.
4. Use of the PIs' own discredited "Oxford" criteria for entry to the trial. Patients with ME would have been screened out of the PACE Trial, even though ME/CFS has been classified by the WHO as a neurological disease since 1969 (ICD-10 G93.3).
5. Inadequate outcome measures: using only subjective outcome measures. The original protocol included the collection of actigraphy data as an objective outcome measure but, after the Trial started, the decision was taken that no post-intervention actigraphy data should be obtained.
6. Changing the primary outcomes of the trial after receiving the raw data: outcome measures were altered mid-trial in a manner which gave improved outcomes.
7. Changing entry criteria midway through the trial: the inclusion criteria for trial entry were altered after the main outcome measures were lowered, so that some participants (13%) already met the recovery criteria at the trial entry point.
8. Publishing the statistical analysis plan two years after selective results had been published.
9. Failing to specify the re-definition of "recovery" in the statistical analysis plan.
10. Sending participants newsletters promoting one treatment arm over another, thus contaminating the trial.
11. Lack of comparable placebo/control groups, with inexperienced occupational therapists providing the control treatment while experienced therapists provided CBT.
12. Repeatedly informing participants in the GET and CBT groups that the therapies could help them get better.
13. Giving patients in the CBT and GET arms more sessions than patients in the control group received.
14. Allowing therapists from different arms to communicate with each other about how patients were doing.
15. Lack of transparency: blocking release of the raw data for five years, preventing independent analysis by external experts.
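The recovery-criteria problem in the catalogue above can be made concrete. A minimal sketch, assuming the widely reported SF-36 physical-function cut-offs (trial entry at a score of 65 or below, revised "recovery" at 60 or above; these figures are stated here as assumptions, not taken from the trial protocol itself), shows why some participants could count as recovered on the day they entered:

```python
# SF-36 physical function is scored 0-100 (higher = better functioning).
# The cut-offs below are the widely reported ones and are assumptions here.
ENTRY_MAX = 65      # score <= 65: disabled enough to enter the trial
RECOVERY_MIN = 60   # revised threshold: score >= 60 counted towards "recovery"

# Any score satisfying both criteria simultaneously:
overlap = [score for score in range(0, 101)
           if score <= ENTRY_MAX and score >= RECOVERY_MIN]
print(overlap)  # scores 60 to 65 meet the entry and recovery criteria at once
```

With these thresholds, a participant scoring between 60 and 65 was simultaneously ill enough to enter the trial and well enough to be classed as "recovered", which is how 13% of participants could meet recovery criteria at baseline.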
Blocking release of the raw data for five years, preventing independent analysis by external experts, was tantamount to a cover-up of the true findings. An editorial by Keith Geraghty (2016) was entitled 'PACE-Gate'. ME/CFS patient associations were rightly suspicious of the recovery claims concerning the GET arm of the trial because their own experience of intense fatigue after ordinary levels of activity was inconsistent with those claims. For many sufferers, even moderate exercise results in long 'wipe-outs' in which they are almost immobilized by muscle weakness and joint pain. In the US, post-exertional relapse has been recognized as the defining criterion of the illness by the Centers for Disease Control, the National Institutes of Health and the Institute of Medicine. For the PACE investigators, however, the announced recovery results validated their conviction that psychotherapy and exercise provided the key to reversing ME/CFS.
Alem Matthees Obtains Data Release
When Alem Matthees, an ME/CFS patient, sought the original data under the Freedom of Information Act, a British Freedom of Information tribunal ordered the PACE team to disclose their raw data, and some of the data were re-analysed according to the original protocols. The legal costs of the tribunal at which QMUL were forced to release the data, against their strenuous objections, were over £245,000. The re-analysis of the PACE Trial data revealed that the so-called "recovery" under CBT and GET all but disappeared (Wilshire, Kindlon, Matthees and McGrath, 2016). The recovery rate for CBT fell to seven percent and the rate for GET fell to four percent, figures statistically indistinguishable from the three percent rate for the untreated controls. Graded exercise and CBT are still being routinely prescribed for ME/CFS in the UK despite patient reports that the treatments can cause intolerable pain and relapse. The analysis of the PACE Trial by independent critics has revealed a catalogue of errors and provides an object lesson in how not to conduct a scientific trial; the trial can be useful to instructors in research design and methodology for that purpose.
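The claim that the re-analysed rates are "statistically indistinguishable" can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the arm size of 160 is an assumed round figure, and the recovered counts are back-calculated from the reported seven and three percent rates rather than taken from the trial data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test; returns the z statistic
    and the two-sided p-value under the normal approximation."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Assumed arm size of 160; ~7% recovery under CBT vs ~3% in untreated controls
z, p = two_proportion_z(11, 160, 5, 160)
print(round(z, 2), round(p, 3))  # p well above 0.05: no significant difference
```

Even the largest reported gap (seven versus three percent) fails to reach conventional significance at these sample sizes, which is the substance of the "statistically indistinguishable" conclusion.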
Following the re-analyses of the PACE Trial, the DBT is dead in the water. There is an urgent need for new theoretical approaches and scientifically based treatments for ME/CFS patients. Meanwhile, there is repair work to be done to rebuild patient trust in the medical profession after this misplaced attempt to apply psychological theory to the unexplained syndrome of ME/CFS. The envelope theory of Jason et al. (2009), which proposes that people with ME/CFS need to balance their perceived and expended energy levels, provides one way forward, pending further research.
Ultimately, patients, doctors and psychologists are waiting for an organic account of ME/CFS competent to explain the symptoms and to open the door to effective treatments. Patients have a right to nothing less.