A New Ponzo Illusion

Most visual illusions are produced using carefully contrived drawings or gadgets that fool the visual system into perceiving impossible things. Recently, while waiting at a train station, I encountered a real-life Ponzo illusion.

The Illusion

The traditional form of the Ponzo illusion is produced by drawing a pair of receding railway lines. The context suggests different depths in the drawing: an object towards the top of the drawing appears larger than an identical object near the bottom. Applying the principle of size constancy, the visual system estimates the size of any object as its retinal size multiplied by its assumed distance. Thus the apparently more distant of the two identical yellow lines appears longer.

Ponzo_illusion

The Setting

The setting of this new Ponzo illusion is the railway station at Vitrolles Airport, Marseille (see photo below). The station has glass-panelled shelters on the platforms on each side. The glass panel at the front of each shelter displays two rows of grey rectangles. Apart from their decorative function, one assumes that these rows of rectangles are intended to help prevent people from walking into the glass panel as they move in and around the shelter. The photo below shows the arrangement of the two rows of rectangles on the shelter.

IMG-9407.jpg

The Stimuli

The stimuli for the illusion consist of rectangles slightly longer than a credit card, approximately 10.0 cm long × 1.5 cm wide, with a separation of about 3.0 cm between successive rectangles. The plate-glass window is about 5 mm thick and is marked with rectangles on both sides of the glass in perfect alignment, so that a 3-D effect is created, lending a false sense of solidity to the rectangles. This ‘3-D look’ may strengthen the Ponzo effect illustrated below.

IMG-9426.JPG

The New Illusion

The illusion is demonstrated in the photo below. Two people sitting directly in front of the shelter are waiting for a train. The upper set of rectangles appears as a set of columns positioned along the railway lines approximately 7 metres in front of the two passengers; at that apparent distance, the rectangles seem to be around 2–3 metres high. The lower set of rectangles is perceived at its correct location and size on the plate-glass window, behind the two passengers. The lower set is actually physically smaller, owing to the camera angle, but the illusion exaggerates the size difference enormously.

IMG-9431.JPG

Further illustration of the effect indicates how the brain scales the stimuli to the context. When the rectangles are projected onto the opposite platform they appear huge – almost as high as the lamp post of around 5 metres.

When the rectangles are projected onto the nearby platform, however, they appear proportionately smaller (1.0-1.5 metres).

IMG-9388z

IMG-9389.JPG

Owing to the camera angles, the actual size of the rectangles in the upper picture is larger (by 5–10%) than in the lower picture, but nowhere near the illusory ‘expansion’ that takes place when they are projected by the brain onto the opposite platform.

Blocking the Distance Cues

The magnitude of the Ponzo illusion becomes somewhat indeterminate when the distance cues are fortuitously blocked by a passing freight train. In this case the rectangles are ‘drawn into’ the scale of the passing wagons, stretching in size beyond their appearance when the wagons are not there.

IMG-9410.jpg

Explanation

The Ponzo illusion is most easily explained in terms of linear perspective. The rectangles look longer when they are projected to the distance of the opposite platform because the brain automatically interprets them as being further away: an object located farther away would have to be larger than a nearby object to produce a retinal image of the same size.
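This size-constancy reasoning can be put in arithmetic form: the perceived size is roughly the size an object would need to have, at the assumed distance, to produce the observed retinal image. A minimal sketch in Python (the distances used are illustrative guesses, not measurements from the scene):

```python
import math

def retinal_angle_deg(physical_size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(physical_size_m / (2 * distance_m)))

def perceived_size_m(angle_deg: float, assumed_distance_m: float) -> float:
    """Size-constancy scaling: the size an object would need to be at the
    assumed distance to subtend the same visual angle."""
    return 2 * assumed_distance_m * math.tan(math.radians(angle_deg) / 2)

# A 10 cm rectangle on glass roughly 1 m from the viewer (assumed values)...
angle = retinal_angle_deg(0.10, 1.0)

# ...mentally projected onto the opposite platform roughly 7 m away.
apparent = perceived_size_m(angle, 7.0)
print(f"{apparent:.2f} m")  # prints 0.70 m: the same retinal image implies a 7x larger object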

PONZO.png

The more visual cues surrounding the two lines, the more powerful the illusion. The passing freight train obliterated some of the distance cues, and so the length of the lines was more difficult to assess.

Cochrane Catastrophe

Peter Gøtzsche’s Expulsion Triggers Mass Resignation

The Board of a prestigious scientific organisation, the Cochrane Collaboration, recently suffered a mass resignation. This post documents the reasons why, using the words of the organisation itself. The board has been reduced from 13 to 6 members, following a vote to expel a founding member, the first expulsion in the organisation's 25-year existence.

On 14 September, Peter Gøtzsche, director of Cochrane's Nordic Centre and a member of its Governing Board, posted a statement on the centre's website announcing that he had been expelled as a member of the Cochrane Collaboration after a vote by 6 of the board's 13 members.

A further four elected members of the board — which also has appointed members — stepped down in protest. To maintain a balance between appointed and elected members, the board also asked two appointed members to resign.

Gøtzsche claims no justification was given for his expulsion beyond the board's accusation that he had brought the organization into “disrepute”. The organization, which carries out systematic reviews of health-care interventions, told Nature it had received “numerous complaints” about Gøtzsche after the publication earlier this year of a critique he co-authored, entitled ‘The Cochrane HPV vaccine review was incomplete and ignored important evidence of bias’, published in BMJ Evidence-Based Medicine.

Who or What is Cochrane?

I quote from the Cochrane website:

“Cochrane is for anyone interested in using high-quality information to make health decisions. Whether you are a doctor or nurse, patient or carer, researcher or funder, Cochrane evidence provides a powerful tool to enhance your healthcare knowledge and decision making.

Cochrane’s 11,000 members and over 35,000 supporters come from more than 130 countries, worldwide. Our volunteers and contributors are researchers, health professionals, patients, carers, and people passionate about improving health outcomes for everyone, everywhere. Our global independent network gathers and summarizes the best evidence from research to help you make informed choices about treatment and we have been doing this for 25 years.

We do not accept commercial or conflicted funding. This is vital for us to generate authoritative and reliable information, working freely, unconstrained by commercial and financial interests.

Our Strategy to 2020 aims to put Cochrane evidence at the heart of health decision-making all over the world.”

The Strategy to 2020 has hit a stumbling block. GOAL 4 is or was: “Building an effective sustainable organization.”

“To be a diverse, inclusive, and transparent international organization that effectively harnesses the enthusiasm and skills of our contributors, is guided by our principles, governed accountably, managed efficiently, and makes optimal use of its resources.”

In light of the torpedo the shambolic Governing Board has fired at its own organisation, the expulsion of Peter Gøtzsche, Goal 4 of the Strategy now reads like an ill-timed joke. The statement that spells the end of Cochrane is quoted below.

Bizarre Situation

The Cochrane website is currently as bizarre as can be. A screenshot taken this morning (2018-09-27 at 07:05) shows an announcement of Peter Gøtzsche's expulsion immediately followed by the announcement of the 25th Anniversary event to celebrate Peter's Nordic Cochrane Centre and the foundation of the Cochrane Collaboration:

Screen Shot 2018-09-27 at 07.05.55

Statement from Cochrane’s Governing Board

Statement made by the Governing Board at Cochrane’s 2018 Annual General Meeting, 17th September, at the Edinburgh Cochrane Colloquium

“Dear Cochrane members,

These are extraordinary times and we find ourselves in an extraordinary situation. Your Board is always happy to answer questions about our decisions, and today is no different. We want to explain how we got here today. This wasn’t our original plan because we wanted to behave fairly and with integrity, in a process that respected the privacy of an individual, whilst taking place over a number of days. Days, which unfortunately span this special Colloquium.

This is about the behaviour of one individual. There has been a lengthy investigation into repeated bad behaviour over many years. It is exceptionally unusual for a Board to have to do such an investigation.

Last Thursday, the Board took a decision which divided the Board. Subsequently, four Board members chose to resign. At the same time, others contributed to a public and media campaign of misinformation.

We recognize that the last 24 hours have been exceptionally difficult and as a result, we as a Board have decided to share with you information about the decision that was made, the process by which it was made, and where we are now, in order to act in the best interests of Cochrane.

We now want to put before you as much evidence as we can, so you know what is going on. We cannot tell you everything. All of you will understand why individuals have a right to privacy and confidentiality. We ask that you all respect this, because we may not be able to tell you everything, for legal reasons and reasons of privacy.

By way of background, we are a global organization which operates under British law because we were founded as a UK charity. Our mission is to benefit the public. We are governed by our Articles of Association.

As the Board, we are in fact the employers of the Cochrane staff. All our staff, and our members, have the right to do their work without harassment and personal attacks. We are living in a world where behaviours that cause pain and misery to people, are being ‘called out’. This Board wants to be clear that while we are Trustees of this organization, we will have a “zero tolerance” policy for repeated, seriously bad behaviour. There is a critical need for ALL organizations to look after their staff and members; once repeated, seriously bad behaviour had been recognized, doing nothing was NOT an option.

So, here are the facts as we are able to report them. We may be able to tell you more later, we may not. Time will tell.

This Board decision is not about freedom of speech.
It is not about scientific debate.
It is not about tolerance of dissent.
It is not about someone being unable to criticize a Cochrane Review.

It is about a long-term pattern of behaviour that we say is totally, and utterly, at variance with the principles and governance of the Cochrane Collaboration. This is about integrity, accountability and leadership.

In March this year, we received three complaints about an individual. These were not the first complaints that had ever been received. In fact, the earliest recorded goes back to 2003. Many have been dealt with over the years. Many disputes have arisen. Formal letters have been exchanged. Promises have been made. And broken. Some disputes have been resolved, some have not.

It was clear to the Co-Chairs that the Board had to reach a decision about these most recent complaints. The individual then made serious allegations against one of the Senior Management Team and shared those with the Board. We seemed to be in an impossible situation. How could the Board now reach a decision about the complaints in a fair way? How could we fulfil our responsibilities as employers of the Senior Management Team? Or alternatively, act to admonish that member of the Senior Management Team if they had done wrong?

With guidance from a Trustee with extensive experience of complaints, we proposed asking a totally independent person to undertake a review. The report was to be confidential to the Board.

After failing to get agreement from the individual to an independent review, we then sought legal advice on behalf of Cochrane. We asked the lawyers, what should a Charity such as Cochrane do in this situation? We were advised that various legal consequences flowed from the events – the complaints and the accusations – and that Cochrane should take them seriously.

We asked the lawyers to take particular note of Cochrane’s commitment to transparency. They noted that, but also stressed the importance of confidentiality.

They advised that an independent review was both a sensible and proportionate response.

At the Governing Board Teleconference on 13th June 2018, all Board members read the letter from our lawyers. The lawyers stated that given the serious legal concerns about this matter they strongly recommended an independent review by a very senior lawyer. The Board approved a motion to accept the lawyer’s advice and establish the independent review.

Our lawyers identified a senior independent lawyer (QC) and he was instructed on 2nd July 2018. As part of the process, he invited written submissions from both individuals concerned. He invited both to be interviewed. The lawyer was asked to work to a deadline of the Board Meeting on Thursday last week, 13th September. And, we did in fact receive his preliminary report in time for that meeting. The report completely exonerated the member of the Senior Management Team but did not exonerate the other individual.

Whilst the review was underway, and as a completely separate matter, a paper was published in the journal BMJ-EBM co-authored by the individual concerned on July 27th 2018. The publication of this paper has proved controversial. As a result, the Board received a number of letters of complaint. Each was sent to the individual to allow a written response. In order to avoid any misunderstanding, the Board want you to be clear that this was a matter that arrived very late in this whole process.

So, at the Board Meeting on Thursday September 13th, the trustees reviewed the lawyer’s report of his independent review, and all the material related to the recently published paper. After they had reviewed and discussed this at length, the Trustees exercised their judgement, and looking across a broad range of behaviours, the Board came to a decision to invoke Article 5.2.1. relating to termination of membership. This was not unanimous.

As a result, Article 5.3 was triggered, and the member has been invited to make a written response within seven days.

At this point in time, this person remains a member of the Cochrane Collaboration. We are waiting for the process to be completed. We will report back to you about the outcome as soon as we are able to.

Let us repeat, this is an extremely rare and unusual thing to do. We hope never to have to do this again.

Cochrane Governing Board
Edited (without prejudice): 19th September 2018

Wednesday, September 19, 2018″


Special issue on the PACE Trial

We are proud that this issue marks a special contribution by the Journal of Health Psychology to the literature concerning interventions to manage adaptation to chronic health problems. The PACE Trial debate reveals deeply embedded differences between critics and investigators. It reveals an unwillingness of the co-principal investigators of the PACE trial to engage in authentic discussion and debate. It leads one to question the wisdom of such a large investment from the public purse (£5 million) on what is a textbook example of a poorly done trial.

The Journal of Health Psychology received a submission in the form of a critical review of one of the largest psychotherapy trials ever done, the PACE Trial. PACE was a trial of therapies for patients with myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS), a trial that has been associated with a great deal of controversy (Geraghty, 2016). Following publication of the critical paper by Keith Geraghty (2016), the PACE Trial investigators responded with an Open Peer Commentary paper (White et al., 2017). The review and response were sent to more than 40 experts on both sides of the debate for commentaries.

The resulting collection is rich and varied in the perspectives it offers from a neglected point of view. Many of the commentators should be applauded for their courage, resilience and ‘insider’ understanding of experience with ME/CFS.

The Editorial Board wants to go on record that the PACE Trial investigators and their supporters were given numerous opportunities to participate, even extending the possibility of appeals and re-reviews when they would not normally be offered. That they failed to respond appropriately is disappointing.

Commentaries were invited from an equal number of individuals on both sides of the debate (about 20 from each side of the debate). Many more submissions arrived from the PACE Trial critics than from the pro-PACE side of the debate. All submissions were peer reviewed and judged on merit.

The PACE Trial investigators’ defence of the trial was in a template format that failed to engage with critics. Before submitting their reply, Professors Peter White, Trudie Chalder and Michael Sharpe wrote to me as co-principal investigators of the PACE trial to seek a retraction of sections of Geraghty’s paper, a declaration of conflicts of interest (COI) by Keith Geraghty on the grounds that he suffers from ME/CFS, and publication of their response without peer review (White et al., 4 November 2016, email to David F Marks). All three requests were refused.

On the question of COI, the PACE authors themselves appear to hold strong allegiances to cognitive behavioural therapy (CBT) and graded exercise therapy (GET), treatments they developed for ME/CFS. Stark COIs have been exposed by the commentaries, including the PACE authors' double role as advisers to the UK Government Department for Work and Pensions (DWP), a sponsor of PACE, while simultaneously advising large insurance companies that have gone on record about the potential financial losses from ME/CFS being deemed a long-term physical illness. In a further twist to the debate, undeclared COIs of Petrie and Weinman (2017) were alleged (Lubet, 2017). Professors Weinman and Petrie adamantly deny that their work as advisers to Atlantis Healthcare represents a COI.

After the online publication of several critical Commentaries, Professors White, Sharpe, Chalder and 16 co-authors were offered a further opportunity to respond to their critics in the round but they chose not to do so.

After peer review, authors were invited to revise their manuscripts in response to reviewer feedback, and many produced multiple drafts. The outcome is a set of robust papers that should stand the test of time and shed significant new light on what went wrong with the PACE Trial, a trial of such high significance for treatment protocols. It is disappointing that the more dominant side of the debate refused to participate.

Unfortunately, across the pro-PACE group of authors there was a consistent pattern of resistance to the debate. After receiving critical reviews, the pro-PACE authors chose to make only cosmetic changes or not to revise their manuscripts at all. They appeared unwilling to enter into the spirit of scientific debate and acted with a sense of entitlement not to have to respond to criticism. Two pro-PACE authors even showed disdain for ME/CFS patients, stating: “We have no wish to get into debates with patients.” In another instance, three pro-PACE authors attempted to subvert the journal's policy on COI by recommending reviewers who were strongly conflicted, forcing rejection of their paper.

The dearth of pro-PACE manuscripts to start with (five submissions), their poor quality, the intransigence of authors to revise and the unavoidable rejection of three pro-PACE manuscripts led to an imbalance in papers between the two sides. However, this editor was loath to compromise standards by publishing unsound pieces, in spite of pressure to go ahead and publish from people who should know better.


ME/CFS research has been poorly served by the PACE Trial and a fresh new approach to treatment is clearly warranted. On the basis of this Special Issue, readers can make up their own minds about the scientific merits and demerits of the PACE Trial. It is to be hoped that the debate will provide a more rational basis for evidence-based improvements to the care pathway for hundreds of thousands of patients.

References

Geraghty, KJ (2016) ‘PACE-Gate’: When clinical trial evidence meets open data access. Journal of Health Psychology 22(9): 1106–1112.
Lubet, S (2017) Defense of the PACE trial is based on argumentation fallacies. Journal of Health Psychology 22(9): 1201–1205.
Petrie, K, Weinman, J (2017) The PACE trial: It’s time to broaden perceptions and move on. Journal of Health Psychology 22(9): 1198–1200.
White, PD, Chalder, T, Sharpe, M (2017) Response to the editorial by Dr Geraghty. Journal of Health Psychology 22(9): 1113–1117.

The Editorial has been abridged and the photograph of Dr. Keith Geraghty added.

Post-Traumatic Growth


Experiences of life disruption, threat, distress, or adversity can lead to positively evaluated “growth” (Tedeschi and Calhoun, 1995). It has been observed for centuries that benefit finding and posttraumatic growth (PTG) can follow the occurrence of traumatic events including accidents, warfare, death of a loved one, and cancer diagnosis and treatment (Stanton, 2010).

Benefit finding and growth represent a fundamental restorative principle of homeostasis that is continually active towards the achievement of stability, equilibrium and well-being. Adaptation to any life-threatening illness, such as cancer, is facilitated by homeostasis systems that include the drive to find meaning, exert mastery or control over the experience, and bolster self-esteem. Growth and benefit-finding are frequently reported by cancer survivors as they gain awareness of their illness, its treatment and prognosis.

Measurement of PTG

The theoretical model of PTG proposed by Tedeschi and Calhoun suggests growth occurs in different ways.  Developing new relationships, finding new appreciation for life, new meanings in life, discovering personal strength, experiencing spiritual change, and realizing new opportunities are all possibilities. The experiences of benefit finding and growth are undeniable. The methods and measurements used for their study, however, raise more questions than answers.

Among cancer populations, reported prevalence rates of perceived PTG range from 53% to 90% and vary according to the type of cancer, time since diagnosis, heterogeneity and ethnicity of the sample, choice of measurement, and many personal factors (Coroiu et al., 2016). Posttraumatic growth is measured using scales such as the Posttraumatic Growth Inventory (PTGI), a 21-item measure of positive change following a traumatic or stressful event (Tedeschi and Calhoun, 1996). Respondents rate the degree to which positive change has occurred in their life “as a result of having cancer.” A total PTGI score and five subscale scores (New Possibilities, Relating to Others, Personal Strength, Spiritual Change, and Appreciation of Life) are then calculated.
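As a concrete illustration, the scoring just described can be sketched in a few lines. This is a hedged sketch, not an official implementation: the 0–5 rating range and the 7/5/4/2/3 item counts follow the published inventory, but the exact item-to-subscale assignments shown here should be verified against Tedeschi and Calhoun (1996).

```python
from typing import Dict, List

# Item counts per subscale follow the published factor structure (7/5/4/2/3);
# the specific item numbers assigned here are illustrative, not the official key.
SUBSCALE_ITEMS: Dict[str, List[int]] = {
    "Relating to Others":   [6, 8, 9, 15, 16, 20, 21],
    "New Possibilities":    [3, 7, 11, 14, 17],
    "Personal Strength":    [4, 10, 12, 19],
    "Spiritual Change":     [5, 18],
    "Appreciation of Life": [1, 2, 13],
}

def score_ptgi(responses: Dict[int, int]) -> Dict[str, int]:
    """Sum 0-5 ratings for the 21 items into a total and five subscale scores."""
    if set(responses) != set(range(1, 22)):
        raise ValueError("expected ratings for items 1-21")
    if any(not 0 <= r <= 5 for r in responses.values()):
        raise ValueError("ratings must be on the 0-5 scale")
    scores = {name: sum(responses[i] for i in items)
              for name, items in SUBSCALE_ITEMS.items()}
    scores["Total"] = sum(responses.values())
    return scores

# Example: a respondent rating every item 3 ("moderate degree")
print(score_ptgi({i: 3 for i in range(1, 22)})["Total"])  # prints 63
```

Note that the arithmetic itself is trivial; the critics' objection, discussed next, is to what the summed retrospective ratings are supposed to measure.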

What the Critics Say

Critics have been less than enthusiastic about measuring PGI in this manner. James Coyne and Howard Tennen (2010) argue that: “Every PTG scale asks participants to rate how much they have changed on each scale item as the result of the crisis they faced. Thus, a respondent must: (a) evaluate her/his current standing on the dimension described in the item, e.g., a sense of closeness to others; (b) recall her/his previous standing on the same dimension; (c) compare the current and previous standings; (d) assess the degree of change; and (e) determine how much of that change can be attributed to the stressful encounter. Psychological science, which purportedly guides positive psychology, tells us that people cannot accurately generate or manipulate the information required to faithfully report trauma- or stress-related growth (or to report benefits) that results from threatening encounters…The psychological literature demonstrates consistently that people are unable to recollect personal change accurately” (Coyne and Tennen, 2010, p. 23).

The five steps (a)–(e) are certainly a tall order, and it seems highly doubtful that anybody could carry them out with any accuracy. It seems naïve to analyse the numbers that research participants place on PTGI scales as though they were valid indices of ‘post-traumatic growth’ when no attempt has been made to validate these measures. In spite of these criticisms, many studies have been conducted using the PTGI.

Quack Science 

Quite rightly, Coyne and Tennen (2010) have damned the flawed methods and measures concerning PTG: “We are at a loss to explain why positive psychology investigators continue to endorse the flawed conceptualization and measurement of personal growth following adversity. Despite Peterson’s …warning that the credibility of positive psychology’s claim to science demands close attention to the evidence, post-traumatic growth—a construct that has now generated hundreds of articles—continues to be studied with flawed methods and a disregard for the evidence generated by psychological science. It is this same pattern of disregard that has encouraged extravagant claims regarding the health benefits of positive psychological states among individuals living with cancer” (p. 24).

As long as psychologists use shoddy methods, invalid measures and draw quack conclusions, they will not be taken seriously by outsiders.

Based on a section of: David F Marks et al. (2018) Health Psychology. Theory, Research & Practice (5th ed.) SAGE Publications Ltd.

The PACE Trial: A Catalogue of Errors

What was the PACE Trial?

Rarely in the history of clinical medicine have doctors and patients been placed so bitterly at loggerheads. The dispute had been a long time coming. Thirty years ago, a few psychiatrists and psychologists offered a hypothesis based on a Psychological Theory in which ME/CFS is constructed as a psychosocial illness. According to their theory, ME/CFS patients have “dysfunctional beliefs” that their symptoms are caused by an organic disease. The ‘Dysfunctional Belief Theory’ (DBT) assumes that no underlying pathology is causing the symptoms; patients are merely ‘hypervigilant to normal bodily sensations’ (Wessely et al., 1989; Wessely et al., 1991).

The Psychological Theory assumes that the physical symptoms of ME/CFS are the result of ‘deconditioning’ or ‘dysregulation’ caused by sedentary behaviour, accompanied by disrupted sleep cycles and stress. Counteracting deconditioning involves normalising sleep cycles, reducing anxiety levels and increasing physical exertion. To put it bluntly, the DBT asserts that ME/CFS is ‘all in the mind’.  Small wonder that patient groups have been expressing anger and resentment in their droves.

Top-Down Research

‘Top-down research’ uses a hierarchy of personnel, duties and skill-sets. The person at the top sets the agenda and the underlings do the work. The structure is a bit like the social hierarchy of ancient Egypt. Unless carefully managed, this top-down approach risks creating a self-fulfilling prophecy from confirmation biases at multiple levels. At the top of the research pyramid sits the ‘Pharaoh’, Regius Professor Sir Simon Wessely KB, MA, BM BCh, MSc, MD, FRCP, FRCPsych, F Med Sci, FKC, Knight of the Realm, President of the Royal Society of Medicine, and originator of the DBT. The principal investigators (PIs) for the PACE Trial, Professors White, Chalder and Sharpe, are themselves advocates of the DBT. The PIs all have or had connections both to the Department for Work and Pensions and to insurance companies. The objective of the PACE Trial was to demonstrate that two treatments based on the DBT, cognitive behavioural therapy (CBT) and graded exercise therapy (GET), help ME/CFS patients to recover. There was zero chance that the PACE researchers would fail to obtain the results they wanted.

Groupthink, Conflicts and Manipulation

The PACE Trial team were operating within a closed system, or groupthink, in which they ‘know’ their theory is correct. With every twist and turn, no matter what the actual data show, the investigators are able to confirm their theory. The process is well known in Psychology: a self-indulgent process of subjective validation and confirmation bias. Groupthink occurs when a group makes faulty decisions because group pressures lead to a deterioration of “mental efficiency, reality testing, and moral judgment” (Janis, 1972). Given this context, we can see reasons to question the investigators' impartiality, with many potential conflicts of interest (Lubet, 2017). Furthermore, critical analysis suggests that the PACE investigators manipulated protocols midway through the trial, selected confirming data and omitted disconfirming data, and published biased reports of findings, creating a catalogue of errors.

‘Travesty of Science’

The PACE Trial has been termed a ‘travesty of science’ while sufferers of ME/CFS continue to be offered unhelpful or harmful treatments and are basically being told to ‘pull themselves together’. One commentator has asserted that the situation for ME patients in the UK is: “The 3 Ts – Travesty of Science; Tragedy for Patients; and Tantamount to Fraud” (Professor Malcolm Hooper, quoted by Williams, 2017). Serious errors in the design, the protocol and the procedures of the PACE Trial are evident; the catalogue of errors is summarised below. The PACE Trial was loaded towards finding significant treatment effects.

A Catalogue of Errors

The claimed benefits of GET and CBT for patient recovery are entirely spurious. The explanation lies in a sequence of serious errors in the design, the changed protocol and the procedures of the PACE Trial. The investigators neglected or bypassed accepted scientific procedures for an RCT, as follows:

Error 1 – Ethical issue. Applying for ethical approval and funding for a long-term trial when the PIs already knew that CBT effects on ME/CFS were short-lived. On 3rd November 2000, Sharpe confirmed: “There is a tendency for the difference between those receiving CBT and those receiving the comparison treatment to diminish with time due to a tendency to relapse in the former” (www.cfs.inform/dk). Wessely stated in 2001 that CBT is “not remotely curative” and that: “These interventions are not the answer to CFS” (Editorial: JAMA 19th September 2001:286:11) (Williams, 2016).

Error 2 – Ethical issue. Failure to declare conflicts of interest to the Joint Trial Steering Committee: the three PIs' conflicts of interest are undeclared in the Minutes of the Joint Trial Steering Committee and Data Monitoring Committee held on 27th September 2004.

Error 3 – Ethical issue. Failure to obtain fully informed consent after non-disclosure of conflicts of interest: the PIs failed to declare their vested financial interests to PACE participants, in particular that they worked for the PHI industry, advising claims handlers that no payments should be made until applicants had undergone CBT and GET.

Error 4 – Entry criteria. Use of the PIs' own discredited “Oxford” criteria for entry to the trial: patients with ME would have been screened out of the PACE Trial, even though ME/CFS has been classified by the WHO as a neurological disease since 1969 (ICD-10 G93.3).

Error 5 – Inadequate outcome measures. Using only subjective outcome measures: the original protocol included the collection of actigraphy data as an objective outcome measure, but after the trial started the decision was taken that no post-intervention actigraphy data should be obtained.

Error 6 – Changed outcomes. Changing the primary outcomes of the trial after receiving the raw data: outcome measures were altered mid-trial in a manner which gave improved outcomes.

Error 7 – Changed entry criteria. Changing entry criteria midway through the trial: the inclusion criteria were altered after the main outcome measures were lowered, so that some participants (13%) already met recovery criteria at the trial entry point.

Error 8 – Delayed statistical analysis plan. The statistical analysis plan was published two years after selective results had been published, and the re-definition of “recovery” was not specified in it.

Error 9 – Inadequate control. Sending participants newsletters promoting one treatment arm over another, thus contaminating the trial.

Error 10 – Inadequate control. Lack of comparable placebo/control groups: inexperienced occupational therapists provided the control treatment while experienced therapists provided CBT.

Error 11 – Inadequate control. Repeatedly informing participants in the GET and CBT groups that the therapies could help them get better.

Error 12 – Inadequate control. Giving patients in the CBT and GET arms more sessions than those in the control group.

Error 13 – Inadequate control. Allowing therapists from different arms to communicate with each other about how patients were doing.

Error 14 – Lack of transparency. Blocking release of the raw data for five years, preventing independent analysis by external experts.

Cover-Up

Blocking release of the raw data for five years and preventing independent analysis by external experts was tantamount to a cover-up of the true findings. An editorial by Keith Geraghty (2016) was entitled ‘PACE-Gate’. ME/CFS patient associations were rightly suspicious of the recovery claims concerning the GET arm of the trial because of their own experiences of intense fatigue after ordinary levels of activity which were inconsistent with the recovery claims of the PACE Trial reports. For many sufferers, even moderate exercise results in long ‘wipe-outs’ in which they are almost immobilized by muscle weakness and joint pain. In the US, post-exertional relapse has been recognized as the defining criterion of the illness by the Centers for Disease Control, the National Institutes of Health and the Institute of Medicine. For the PACE investigators, however, the announced recovery results validated their conviction that psychotherapy and exercise provided the key to reversing ME/CFS.

Alem Matthees Obtains Data Release

Alem Matthees, an ME/CFS patient, sought the original data under the Freedom of Information Act, and a British Freedom of Information tribunal ordered the PACE team to disclose their raw data. Some of the data were then re-analysed according to the original protocol. The legal costs of the tribunal, at which QMUL was forced to release the data against its strenuous objections, were over £245,000. The re-analysis of the PACE Trial data revealed that the so-called “recovery” under CBT and GET all but disappeared (Carolyn Wilshire, Tom Kindlon, Alem Matthees and Simon McGrath, 2016). The recovery rate for CBT fell to seven per cent and the rate for GET fell to four per cent, figures statistically indistinguishable from the three per cent rate for the untreated controls. Graded exercise and CBT are still routinely prescribed for ME/CFS in the UK despite patient reports that the treatments can cause intolerable pain and relapse. The analysis of the PACE Trial by independent critics has revealed a catalogue of errors and provides an object lesson in how not to conduct a scientific trial; instructors in research design and methodology may find it useful for exactly that purpose.

Following the re-analyses of the PACE Trial, the DBT is dead in the water. There is an urgent need for new theoretical approaches and scientifically-based treatments for ME/CFS patients. Meanwhile, there is repair work to be done to rebuild patient trust in the medical profession after this misplaced attempt to apply the Psychological Theory to the unexplained syndrome of ME/CFS. The envelope theory of Jason et al. (2009) proposes that people with ME/CFS need to balance their perceived and expended energy levels and provides one way forward, pending further research.

Ultimately, patients, doctors and psychologists are waiting for an organic account of ME/CFS competent to explain the symptoms and to open the door to effective treatments. Patients have a right to nothing less.

An extract from: David F Marks et al. (2018) Health Psychology. Theory, Research & Practice (5th ed.) SAGE Publications Ltd.

Psychology – Science or Delusion?


‘Mass Delusion’

Psychology is full of theories, not ‘General Theories’, but ‘Mini-Theories’ or ‘Models’.  Most Mini-Theories/Models are wrong.  Unfortunately these incorrect theories and models often persist in everyday practice. This happens because Psychologists are reluctant to give up their theories. These incorrect theories then act like ‘mass delusions’, which can have consequences for others, especially students and patients.

Academic Psychology suffers from ‘delusions of grandeur’. It is as if an entire academic discipline is manifesting a chronic disorder – a kind of  ‘Scientific Psychosis’.   Psychologists claim that Psychology is a Science but there is no objective evidence to support it.  In fact, the evidence suggests the exact opposite.

Aping Science

The ability to ape proper science is not in doubt. Laboratories, experiments and grants, thousands of journals, books, institutes and universities all espouse Psychology as a Science.  Many psychologists even wear white lab coats and poke around in animals’ brains. The ability to mimic genuine scientists like Physicists or Biologists, however, does not make Psychology a science. It actually makes a mockery of science.

There are many reasons why this is the case. I mention here two:

1) Psychology does not meet even the most essential criterion for an authentic science – quantitative measurement along ratio scales.

2) Unlike all the true natural sciences, Psychology lacks a general theory – a theory held by the majority of scientists working in the field.

The shared belief of the vast majority of psychologists that they are scientists, when all of the evidence suggests that this can’t be true,  is a form of professional ‘mass hysteria’.  Psychologists share a belief system of scientific delusion, thought disorder and conceptual confusion. They then impose their beliefs, not only on one another, but on their students and their patients.

Students and Patients

Many students and patients are having none of it.  They refuse to be suckered in by the claim.  But they have to be courageous enough to come out of the closet and say it. If they dare to say it in an essay or exam, then they’d better be prepared for a grade C, D, E or F.

Researchers have found that “medical students think their psychology lectures are ‘soft and fluffy’; students think psychology is less important than the other natural sciences; children rate psychological questions as easier than chemistry or biology questions; and expert testimony supporting an insanity defence is seen as less convincing when delivered by a psychologist than a psychiatrist.”

On a few rare occasions, established psychologists have expressed their doubts about the scientific credentials of Psychology. For example, Jan Smedslund wrote about: “Why Psychology Cannot be an Empirical Science.” There is increasing evidence that many patients are skeptical about Psychology also.


Folie Imposée

Folie à deux (“madness of two”) occurs when delusional beliefs are transmitted from one individual to another.  When one dominant person imposes their delusional beliefs on another, it is folie imposée. In this case, the second person probably would never have become deluded if left to themselves. The second person is expected ultimately to reject the delusion of the first person, due to disproof of the delusional assumptions, and protest. This protest, however, will fall upon deaf ears.

The situation I describe is far from hypothetical.  It exists day in, day out, for millions of patients. One particular patient group are those labeled with ‘Medically Unexplained Symptoms’ (MUS).  Within this group is a particular group of patients with Myalgic Encephalomyelitis (“ME”) and/or Chronic Fatigue Syndrome (“CFS”).

Delusional thinking certainly can hurt and embarrass the individuals having the delusion (Psychologists and Psychiatrists). It can also be imposed upon others, for example, people in their care (Patients). To the help-seeking Patient, the Psychologist (or Psychiatrist) is an expert who follows the rules of Science. The Science informs the aetiology, diagnosis, and treatment of the Patient.

Treating Patients with ME/CFS

I consider here how many psychologists in the UK treat people labeled with ME/CFS. This treatment comes with the full backing of NICE (currently under review).

Psychological treatment for patients labeled with ME/CFS is based on a Psychological Theory of the illness. This theory is highly contested and has caused major controversies that have divided Patients from Psychologists and Psychiatrists.

The main Psychological Theory of ME/CFS asserts that ‘maladaptive’ cognitions and behaviours perpetuate the fatigue and impairment of individuals with ME/CFS (Wessely, David, Butler and Chalder, 1989). These authors represent the two main professions concerned with psychological illness, Psychology and Psychiatry. They state: “It is essential to agree jointly on an acceptable model, because people need to understand their illness. The cognitive-behavioural model…can explain the continuation of symptoms in many patients.” This is where the imposition of the therapist’s model comes in: “The process is therefore a transfer of responsibility from the doctor, in terms of his duty to diagnose, to the patient, confirming his or her duty to participate in the process of rehabilitation in collaboration with the doctor, physiotherapist, family and others” (p. 26).

The Psychological Theory is contested by many scientists, patients and patient organisations, who maintain that the symptoms have an organic basis, i.e. a Physical Theory.

Vercoulen et al. (1998) developed a model of ME/CFS based on the Psychological Theory. However, Song and Jason (2005) showed that the Psychological Theory was inaccurate for individuals with ME/CFS. In spite of the evidence against it, the Psychological Theory continues as the basis for the cognitive behavioural and graded exercise therapies (CBT and GET) offered to individuals with ME/CFS. One reason for the continued use of an unsupported Psychological Theory is the PACE Trial, a lesson in how not to do proper science. Like most research, this trial was organised by a team and, in this case, the majority of principal investigators were Psychiatrists. The trial has been described as “one of the biggest medical scandals of the 21st century.”

New Approach Needed

In spite of the lack of empirical support, the Psychological Theory of ME/CFS lives on. ME/CFS patients are subjected to CBT and GET.  Patients and patient organisations protest about the treatments and are opposed to the Psychological Theory.  Perhaps Psychologists need to turn the Psychological Theory of unhelpful beliefs upon themselves.  If  ME/CFS has a physical (e.g. immunological) cause, then once the cause has been established, patients will have the chance of an effective treatment and decent care and  support.

The problems that exist for Psychologists’ treatment of patients with MUS and ME/CFS exist more generally across the discipline. A totally new approach is necessary.  Instead of tinkering with the problems at a cosmetic level by papering over the cracks, there is a need for root-and-branch change of a radical kind. The measurement problem must be addressed and there is a need for a general theory.   A new General Theory of Behaviour takes a step in that direction.

Stopping the Obesity ‘Epidemic’

Purpose of Post

Here I introduce a powerful new explanation of the obesity ‘epidemic’. I reveal some surprising but brutal truths about the condition. For example, obesity is unavoidable for the majority of people in contemporary living conditions. Without radical change, the ‘epidemic’ will get much, much worse.

Obesity an ‘Epidemic’?

Notice I put the word ‘epidemic’ in single quote marks. This is because the word can only really be applied to infectious diseases. Obesity is not a disease, and it is not infectious. Obesity is a bodily condition of being overweight, defined loosely as having a body mass index (BMI) above 30. This places people at increased risk for a variety of chronic conditions – unpleasant things like type 2 diabetes, cardiovascular diseases, cancer and obstructive sleep apnea. [As a scientific measure the BMI is a bit of a joke, by the way, but we’ll leave that for another post.]

The Problem

Two billion people alive today are overweight or living with obesity. There is no sign that the obesity epidemic is slowing down or that medical science has an understanding of the problem. A universal feature of living beings called ‘homeostasis’ is linked to obesity. Its disruption, dyshomeostasis, is a contributory cause of overweight and obesity.

The Solution

Obesity is an unavoidable human response to contemporary conditions of living. ‘Blaming and shaming’ individual sufferers is oppressive and is a part of the problem, not part of the solution. Blame and shame makes matters far, far worse. Only by reversing this form of prejudice, and the chronically stressful living conditions of hundreds of millions of people, is there any hope that we can stop the ‘epidemic’.

Take-home Message

This book is not for the faint-hearted. It cuts through the ‘shock-horror’ narrative of obesity with brutal truths about the serious and intransigent nature of obesity. Once the causes are fully understood, the obesity epidemic can be stopped. And about time too! This book is a step towards that goal.

Grab a Free Copy Now

The book is available as a Kindle edition, as a free e-book here, at iBooks, or it can be read freely here or here. So there’s really no excuse for not getting hold of a copy!

The Persistence of Error

There is an embarrassing, unanswered question about theories and models in Psychology that is screaming to be answered. If the evidence in support of Psychology’s models and theories is so meagre and feeble, how have they survived for such a long time?

The scientific method is intended to be a fail-safe procedure for abandoning disconfirmed hypotheses and progressing with hypotheses that appear not to be disconfirmed. The psychologists who dream up these theories and test them claim to be scientists, so what the heck is going on?

One reason that theories and models become semi-permanent features of textbooks and degree programmes is that simple rules at the very heart of science are persistently broken. If a theory is tested and found wanting, then one of two things happens: either (1) the theory is revised and retested or (2) the theory is abandoned. The history of science suggests that (1) is far more frequent than (2). Investigators become attached to the theories and models that they are working with, not to mention their careers, and they invest significant amounts of time, energy and funds in them, and are loath to give them up, a bit like a worn-out but comfortable armchair.

We’ve all been there – seen it, done it, even have the T-shirt:


Nothing dishonest is happening in most such cases, simply an unwitting bias to confirm one’s theoretical predilections. This is the well-known confirmation bias studied by, yes, you guessed it, psychologists (e.g. Nickerson, 1998).

The process of theory or model testing is illustrated in the diagram. The diagram shows how the research process insulates theories and models against negative results, leading to the persistence of error over many decades. Continuous cycles of revisions and extensions following meagre or negative results protect the model from its ultimate abandonment until every possible amendment and extension has been tested and tried and found to be wanting.

What textbooks don’t tell you: the persistence of error – the manner in which a model or theory is ‘insulated’ against negative results

Several protective measures are available to insulate investigators from ‘negative’ results:
(1) Amend the model and test it again, a process that can be repeated indefinitely.
(2) Test and retest the model, ignoring the ‘bad’ results, until some positive results appear purely by chance (a Type I error).
(3) Carry out some ‘statistical wizardry’ to concoct a more favourable-looking outcome.
(4) Do nothing, i.e. do not publish the findings, and/or:
(5) Look for another theory or model to test and start all over again!
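Protective measure (2) is easy to quantify. A minimal Python sketch (the significance threshold of 0.05 and the 20 retests are illustrative choices, not figures from any particular study): under a true null hypothesis, p-values are uniformly distributed, so retesting long enough yields a ‘significant’ result by chance alone.

```python
import random

random.seed(1)

# Under a true null hypothesis, p-values are uniformly distributed, so a
# uniform random draw stands in for each test's p-value. Retesting often
# enough produces a 'significant' result by chance (a Type I error).
runs = 10_000
retests = 20
alpha = 0.05
hits = sum(1 for _ in range(runs)
           if any(random.random() < alpha for _ in range(retests)))
print(round(hits / runs, 2))  # close to 1 - 0.95**20, i.e. about 0.64
```

With 20 retests of a true null, roughly two investigators out of three will eventually obtain a publishable ‘positive’ result.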

Besides all of these issues, there is increasing evidence of lack of replication, selective publication of positive findings, and outright fraud in psychological research, all of which militate against authentic separation of fact from fantasy (Yong, 2012).

Little attention has been paid to the cultural, socio-political and economic conditions that create the context for individual health experience and behaviour (Marks, 1996). Thousands of studies have accumulated in an evidence base showing that the socio-cognitive approach provides inadequate theories of behaviour change. Any theory that neglects the complex cognitive, emotional and behavioural conditions that influence human choices is unlikely to be fit for purpose. Furthermore, health psychology theories are disconnected from the known cultural, socio-political and community contexts of health behaviour (Marks, 2002). Slowly but surely these issues are becoming more widely recognized across the discipline and, at some point in the future, could become mainstream.

As we have seen, critics of the socio-cognitive approach have suggested that SCMs are tautological and irrefutable (Geir Smedslund, 2000). If this is true, then no matter how many studies are carried out to investigate a social cognitive theory, there will be no genuine progress in understanding.

Weinstein (1993: 324) summarized the state of health behaviour research as follows: ‘despite a large empirical literature, there is still no consensus that certain models of health behaviour are more accurate than others, that certain variables are more influential than others, or that certain behaviours or situations are understood better than others.’ Unfortunately, there has been little improvement since then. The individual-level approach to health interventions focuses on theoretical models, piloting, testing and running randomized controlled trials to demonstrate efficacy.

It has been estimated that the time from conception to funding and completing the process of demonstrated effectiveness can take at least 17 years (Clark, 2008). Meta-analyses, reviewed here, suggest that the ‘proof of the pudding’ in the form of truly effective individual-level interventions is yet to materialize.  Alternative approaches for the creation of interventions for at-risk communities and population groups are needed. A fresh approach requires a general theory of behaviour that encompasses human intentionality, desire and purpose within an ontology of change.

Psychology as a Natural Science. Part I: Measurement

I wished, by treating Psychology like a natural science, to help her to become one.

William James

The Problem

For more than a century, Psychologists have struggled to make their discipline a ‘proper science’. From introspection, to behaviorism and then to cognitivism, Psychology has fallen somewhat awkwardly between the biological and social sciences. Suffering existential doubt, and always looking over their shoulders, Psychologists never quite found a place of comfort at the high table of Science. Three issues have contributed to this liminal status: measurement, theory and paradigm.

In this article, I discuss measurement in Academic Psychology. The branch of Academic Psychology that is usually held up to be the most ‘scientific’ is Psychometrics, otherwise known as ‘Psychological Measurement’. Bizarrely, it is also the largest thorn in the side of Academic Psychology considered as a science. I explain some of the reasons for this curious state of affairs below.

S. S. Stevens – “Mass Delusion”

Attributes of the physical world are measured quantitatively. Attributes of the psychological world are more ‘sticky’ to deal with. For good reason, psychologists are unable to measure many of the most interesting psychological attributes in any direct and objective manner. Unfortunately, measurement in Psychology is an ‘Emperor’s clothes’ story.  The early years as an infant science were spent paddling at the shallow end of the pool with attempts to make psychophysics and ability testing the showcases of a new quantitative science. But it was all downhill from there on.

In spite of limited successes, Psychology’s ‘measurement problem’ has never been satisfactorily resolved. S.S. Stevens’ Handbook of Experimental Psychology (1951) invoked ‘operationism’ as a potential solution and, since that time, Psychologists have assumed as an act of faith that measurement is the assignment of numbers to attributes according to rules. Sadly, Stevens’ solution is a mass delusion, a sleight of mind.

Joel Michell: “Thought Disorder”

Among his many in-depth writings about Psychological measurement, Joel Michell (1997) summarized the situation thus: “…establishing quantitative science involves two research tasks: the scientific one of showing that the relevant attribute is quantitative; and the instrumental one of constructing procedures for numerically estimating magnitudes. From Fechner onwards, the dominant tradition in quantitative Psychology ignored this task. Stevens’ definition rationalized this neglect. The widespread acceptance of this definition within Psychology made this neglect systemic, with the consequence that the implications of contemporary research in measurement theory for undertaking the scientific task are not appreciated…when the ideological support structures of a science sustain serious blind spots like this, then that science is in the grip of some kind of thought disorder.” (Michell, 1997).

A ‘kind of thought disorder’ – strong words, but true.

It is apparent that numbers can be readily allocated to attributes using a non-random rule (the operational definition of measurement) that would generate ‘measurements’ that are not quantitatively meaningful. For example, numerals can be allocated to colours: red = 1, blue = 2, green = 3, etc. The rule used to allocate the numbers is clearly not random, and the allocation therefore counts as measurement, according to Stevens. However, it would be patent nonsense to assert that ‘green is 3 × red’ or that ‘blue is 2 × red’, or that ‘green minus blue equals red’. Intervals and ratios cannot be inferred from a simple ordering of scores along a scale. Yet this is how psychological measurement is usually carried out.
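The colour example can be run directly. A minimal Python sketch using the hypothetical codes above shows that Stevens-style number assignment licenses arithmetic that is patently meaningless:

```python
# Stevens-style 'measurement': numerals allocated to colours by a
# non-random rule. On the operational definition this counts as
# measurement, yet arithmetic on the codes is meaningless.
codes = {"red": 1, "blue": 2, "green": 3}

print(codes["green"] / codes["red"])     # 3.0 -- but green is not '3 x red'
print(codes["green"] - codes["blue"])    # 1 -- yet 'green minus blue' is not red
print(sum(codes.values()) / len(codes))  # 2.0 -- the 'average colour' is blue?
```

The numbers behave perfectly well; it is their interpretation as quantities that is nonsense, which is exactly the trap of treating ordinal or nominal codes as measurements.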

Stevens’ oxymoronic approach aimed to circumvent the requirement that only quantitative attributes can be measured in spite of the self-evident fact that psychological constructs such as subjective well-being are nothing like physical variables (Michell, 1999, Measurement in Psychology). However, positivist psychometricians blithely treat qualitative psychological constructs as if they are quantitative in nature and as amenable to measurement as physical characteristics without ever demonstrating so. For more than 60 years many psychologists have lived in a make-believe world where ‘measurement’ consists of numbers allocated to stimuli on ordinal or Likert-type scales. This feature alone cuts off at its roots the claim that Psychology is a quantitative science on a par with the natural sciences.

Measurement can be defined as the estimation of the magnitude of a quantitative attribute relative to a unit (Michell, 2003). Before quantification can happen, it is first necessary to obtain evidence that the relevant attribute is quantitative in structure. This has rarely, if ever, been carried out in Psychology. Unfortunately, it is arguably the case that the definition of measurement within Psychology since Stevens’ (1951) operationism is incorrect and Psychologists’ claims about being able to measure psychological attributes can be questioned (Michell, 1999, 2002). Contrary to common beliefs within the discipline, psychological attributes may not actually be quantitative at all, and hence not amenable to coherent numerical measurement and statistical analyses that make unwarranted assumptions about the numbers collected as data.

Psychometric Myth

Psychometricians often make the precarious assumption that ordinal scales constitute a valid description of underlying quantitative attributes, i.e. that psychological attributes are measurable on interval scales. Otherwise there can be no basis for quantitative measurement across large domains of the discipline. Michell (2012) argued that: “the most plausible hypothesis is that the kinds of attributes psychometricians aspire to measure are merely ordinal attributes with impure differences of degree, a feature logically incompatible with quantitative structure. If so, psychometrics is built upon a myth” (p. 255).

This view is supported by Klaas Sijtsma (2012) who argues that the real measurement problem in Psychology is the absence of well-developed theories about psychological attributes and a lack of any evidence to support the assumption that psychological attributes are continuous and quantitative in nature.

Scientific Psychosis

A person with delusions of grandeur can be labeled as suffering from psychosis. But what if a whole discipline has delusions of grandeur? In this case the term ‘Scientific Psychosis’ would not seem inappropriate.

Using ordinal data as if they are interval or ratio scale data leads to incorrect inferences and false conclusions. Using totals and averages requires data to be on an interval scale. Performing parametric analyses on ordinal data can produce biased estimates of variances, covariances, and correlations and spurious interaction effects.
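One way to see the danger is that ordinal scores are unique only up to order-preserving recodings, while means are not invariant under them. A small Python sketch with invented data (the `squash` recoding is purely illustrative):

```python
# Two invented groups rated on the same 5-point ordinal scale.
group_a = [1, 2, 2, 3, 5]
group_b = [3, 3, 3, 3, 3]

def squash(x):
    # An order-preserving recoding of the same scale: the rank order
    # of every response is unchanged.
    return {1: 1, 2: 2, 3: 3, 4: 4, 5: 9}[x]

def mean(xs):
    return sum(xs) / len(xs)

# The direction of the mean difference flips under the recoding, so a
# parametric conclusion depends on an arbitrary choice of numerals.
print(mean(group_a) > mean(group_b))                       # False: 2.6 vs 3.0
print(mean([squash(x) for x in group_a]) > mean(group_b))  # True: 3.4 vs 3.0
```

Since both number assignments respect the ordinal information equally well, any conclusion that changes between them cannot be a fact about the attribute being ‘measured’.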

Yet these practices are regular, everyday occurrences in Academic Psychology. I am not talking about first-year undergraduate lab classes; I am talking about people at all levels, including illustrious professors at Harvard, Yale, Princeton, Oxford and Cambridge. They not only regularly break the basic rules of measurement themselves on a wholesale basis, they negligently train their students to do the same.

If the received wisdom about measurement in Academic Psychology is characterised as mass delusional, thought disordered and confused, we have a serious problem, a very serious problem. And the problem seems to be getting worse. We can quite justifiably call this syndrome: ‘Scientific Psychosis’.

Thurstone: Ratio Scaling

To be consistent with Psychology’s claim to be a science, psychologists must use measures that preserve the requirements of a ratio scale, namely, that there are meaningful ratios between measurements. For example, if you have a cold and took three paracetamol tablets today and four yesterday, you could say that today’s frequency was ¾ or .75 of yesterday’s. Measuring objects on a known scale and comparing the measurements works well for properties for which scales of measurement exist. L. L. Thurstone (1927) used the method of pair comparisons to derive scale values for any set of stimulus objects with the Law of Comparative Judgement, which states:

S₁ − S₂ = z₁₂ √(σ₁² + σ₂² − 2rσ₁σ₂)

where S₁ and S₂ are the psychological scale values of the two stimuli, z₁₂ is the normal deviate corresponding to the proportion of judgements ‘stimulus 1 greater than stimulus 2’, σ₁ and σ₂ are the discriminal dispersions of the stimuli, and r is the correlation between them.
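Under the simplifying assumptions of Thurstone’s Case V (equal, uncorrelated discriminal dispersions), scale values can be computed from a matrix of preference proportions. A minimal Python sketch with invented proportions for three items:

```python
from statistics import NormalDist

def thurstone_case_v(p):
    """Thurstone Case V scaling: p[i][j] is the proportion of judges
    preferring item j over item i. Each item's scale value is the mean
    z-score of its column over the other items (equal, uncorrelated
    discriminal dispersions assumed)."""
    z = NormalDist().inv_cdf
    n = len(p)
    raw = [sum(z(p[i][j]) for i in range(n) if i != j) / (n - 1)
           for j in range(n)]
    base = min(raw)  # anchor the lowest-valued item at zero
    return [round(s - base, 3) for s in raw]

# Invented preference proportions for three items A, B and C:
p = [
    [0.50, 0.70, 0.90],  # B is preferred to A 70% of the time, C 90%
    [0.30, 0.50, 0.80],
    [0.10, 0.20, 0.50],
]
print(thurstone_case_v(p))  # [0.0, 0.744, 1.965]
```

The output is an interval scale recovered purely from pairwise choice proportions, which is why Thurstone’s method occupies a special place in psychological measurement.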

In his ‘Analytic Hierarchy Process’ (AHP), Saaty (2008) also uses direct comparisons between pairs of objects to establish measurements for intangible properties that have no scales of measurement. The value derived for each element depends on what other elements are in the set. Relative scales are derived by making pairwise comparisons using numerical judgements from an absolute scale (Saaty’s fundamental scale of 1–9). Measurements that represent comparisons define a cardinal scale of absolute numbers that is stronger than a ratio scale.
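In the AHP, the priority weights are the normalized principal eigenvector of the pairwise comparison matrix. A stdlib-only Python sketch using power iteration, with invented judgements:

```python
def ahp_priorities(m, iters=100):
    """Priority weights for a Saaty pairwise comparison matrix m, where
    m[i][j] says how many times more important item i is than item j
    (so m[j][i] = 1/m[i][j]). The priorities are the normalized
    principal eigenvector, found here by power iteration."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    return [round(x, 3) for x in w]

# Invented judgements: A is 3x as important as B and 5x as important
# as C; B is 2x as important as C.
m = [
    [1,     3,   5],
    [1/3,   1,   2],
    [1/5, 1/2,   1],
]
print(ahp_priorities(m))  # roughly [0.648, 0.23, 0.122]
```

Note that the matrix above is slightly inconsistent (3 × 2 ≠ 5); the eigenvector method tolerates such inconsistency, which is precisely its appeal for intangible judgements.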

Intuitive measurement is something that we take for granted in everyday life. However, the way intuitive measurement works may be far from intuitive. Consider how we are able to estimate and compare the magnitudes of objects, even when we have never actually seen them. For example, how do we compare the sizes of animals such as lions and hippos and judge which is larger or which is smaller? One theory of this process that appears to be especially accurate is described below.

Reference Point Theory

One theory of the estimation and comparison of magnitudes assumes there are implicit minimal and maximal reference points at the extreme ends of the distribution. As a special case of the Law of Comparative Judgement, the theory assumes that stimulus objects are represented by distributions with variances that increase with distance from the reference point contained in the question (Marks, 1972).

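The core idea can be sketched in a few lines of Python: if the spread of each magnitude’s internal representation grows with its distance from the reference point implied by the question, a d′-like discriminability measure reproduces the semantic congruity effect. The constant `k` and the 0–100 magnitudes are purely illustrative, not parameters from the original study:

```python
import math

def discriminability(m1, m2, ref, k=0.2):
    """d'-like discriminability of two magnitudes whose internal
    representations have spreads proportional to their distance from
    the reference point implied by the question."""
    s1 = k * abs(m1 - ref)
    s2 = k * abs(m2 - ref)
    return abs(m1 - m2) / math.sqrt(s1 ** 2 + s2 ** 2)

# Magnitudes on an illustrative 0-100 scale.
small_pair, large_pair = (10, 20), (80, 90)

# 'Which is smaller?' implies the minimal reference point (0): the
# small pair is the more discriminable.
print(discriminability(*small_pair, ref=0) > discriminability(*large_pair, ref=0))      # True
# 'Which is larger?' implies the maximal reference point (100): now
# the large pair is the more discriminable.
print(discriminability(*small_pair, ref=100) < discriminability(*large_pair, ref=100))  # True
```

Pairs near the question’s reference point are represented more precisely and are therefore compared more easily, which is the pattern the congruity effect describes.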

This photo from 1969 shows the author and ‘subject’ with the basic apparatus and stimuli from Experiments 7 and 8 of the author’s doctoral research at Sheffield University, ‘An Investigation of Subjective Probability Judgements’.

Keith J Holyoak

In 2014, Reference Point Theory received strong empirical support from a team at UCLA under the leadership of Keith J Holyoak.  Keith is not only a Distinguished Professor but he is Editor of Psychological Review.  Chen, Lu and Holyoak (2014) present a model of how magnitudes can be acquired and compared based on BARTlet, a simpler version of ‘Bayesian Analogy with Relational Transformations’ (BART, Lu, Chen, & Holyoak, 2012). The authors concluded that Reference Point Theory provided the best fit to their data:

“BARTlet provides a computational realization of a qualitative hypothesis proposed four decades ago by Marks (1972)…The reference-point hypothesis implies that the congruity effect results from differences in the discriminability of magnitudes represented in working memory, rather than a bias in encoding (e.g., Marschark & Paivio, 1979) or a linguistic influence (Banks et al., 1975). BARTlet provides a well-specified mechanism by which reference points can alter discriminability in direct judgments of discriminability (Holyoak & Mah, 1982) as well as speeded tasks” (p. 46).


As well as being a Distinguished Professor at UCLA, and editing Psychological Review, Keith J Holyoak is also a poet and translator of classical Chinese poetry.  Kudos!

“The greatest scientists are artists as well.” (Albert Einstein).
