Abstract
Objective To investigate whether and which negative diagnosis-related experiences of patients newly diagnosed with colorectal cancer (CRC) are associated with a poorer overall rating of care, to help prioritise interventions.
Methods A secondary data analysis was conducted using the English National Cancer Patient Experience Survey 2018. Analysis was restricted to responses by patients with CRC diagnosed within 12 months of survey, through pathways other than population screening. Nine diagnosis-related questions were selected (six objective and three feelings-based). The primary analysis used multivariable logistic regression to predict poorer overall care rating from negative experience responses to the six objective questions, adjusted for confounders. The sensitivity analysis additionally included the three feelings-based questions. Predictors of poorer overall rating significant at p<0.01 were retained in the final models.
Results 4069 CRC patient survey responses were analysed. In the primary analysis, negative experiences were reported by between 4% (‘Enough information about diagnostic test’) and 21% (‘Given written information about your cancer type’) of respondents. In multivariable analysis, all six objective questions were predictive of poorer overall rating, with ORs ranging from 1.6 to 3.5. In the multivariable sensitivity analysis, eight of nine negative experiences were predictive.
Conclusion Negative experiences reported on diagnosis-related questions were almost always associated with a higher likelihood of a poorer overall care rating. To reduce negative diagnostic experiences, the most apt interventions to incorporate into workflows may be informing patients to bring someone to their diagnosis consultation and routine provision of tumour-specific information relevant to patient circumstances.
- CANCER
- COLORECTAL CANCER
- HEALTH SERVICE RESEARCH
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
WHAT IS ALREADY KNOWN ON THIS TOPIC
Colorectal cancer is among the most commonly diagnosed cancers, accounting for about 10% of cancer diagnoses reported globally. Diagnosis-related experiences are particularly salient to cancer patients’ perceptions of care quality. Patient experience data can be used to guide interventions to improve care.
WHAT THIS STUDY ADDS
Between 4% and 21% of respondents reported negative diagnostic experiences on the six questions included in the primary analysis. Negative responses to each of the six objective questions were significantly predictive of poorer overall care rating in multivariable analysis. In a sensitivity analysis, eight of nine negative experiences were predictive, comprising five of the six objective questions and all three feelings-based questions. The majority of the negative experiences that strongly predicted poorer overall care rating related to patients’ unmet information needs.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
The significance of negative diagnosis-related patient experiences as predictors of poor overall care rating points to key areas for intervention in care quality. Given that the majority of the stronger predictors of poor overall care rating were related to provider information provision and communication, intervention directed towards any of these experiences may improve care rating.
Introduction
In 2020, the International Agency for Research on Cancer estimated that cancers of the colon and rectum represent about 10% of reported cancers and 9% of cancer deaths worldwide.1 This makes colorectal cancer (CRC) the third highest routinely registrable cancer by incidence and second highest by mortality. There are over 40 000 new CRC cases every year in the UK.2 Five-year CRC survival has more than doubled in the last 40 years,2 largely due to screening programmes and treatment advances.
In 2000, the UK National Health Service (NHS) Cancer Plan acknowledged poor patient experience as a central problem in care provision.3 In 2012, the National Clinical Guideline Centre provided clinical guidance with the goal of improving patient experience, updated most recently in 2021.4 Patient-reported experience measures (PREMs) provide insight into patient perspectives while receiving care; PREMs focus on the impact of care processes on patients.5 Studies have shown strong associations between patient experiences and outcomes.6 7
The English Cancer Patient Experience Survey (CPES) has been conducted on behalf of NHS England since 2010. The survey aims to monitor national cancer care processes and to drive quality improvement through patient experience. The design and implementation of the CPES is overseen by its Cancer Patient Experience Advisory Group, which consists of patients, professionals, voluntary sector representatives, academics and patient survey experts.8 Analyses of other surveys of cancer care have identified elements of patient experience during the diagnosis period that could be improved,9 10 while strong associations have been found in the 2016 CPES data between overall rating and the meeting of information needs during the diagnostic phase.11 Our recent evaluation of CRC patient responses from the 2017 CPES found a consistent trend for positive diagnosis experiences to increase with age, and to be higher in male patients.12 To extend our evaluation, we aimed to identify whether aspects of patient experience leading up to diagnosis are associated with poorer overall care ratings by patients with CRC. We further aimed to help prioritise which diagnosis-related aspects of care should be targeted for improvement.
Materials and methods
The CPES is commissioned annually by the NHS to survey patients with cancer aged 16 years and over who have been discharged from a hospital following an inpatient or outpatient treatment related to cancer. The CPES 2018 surveyed patients who had treatment between April and June 2018; publicly available anonymised data were accessed.13 The CPES records patient experiences using 69 questions about diagnosis, care support, treatment and follow-up. Also recorded are: age group (16–24, 25–34, 35–44, 45–54, 55–64, 65–74, 75–84 and 85+), gender, ethnic group (British mixed ethnicity vs white British), Index of Multiple Deprivation (IMD) quintile rank and International Classification of Diseases-10 cancer type. Age group was recoded to four groups (<55, 55–64, 65–74, 75+ years).
Analysis was restricted to respondents with a primary diagnosis of colon or rectal cancer (International Classification of Diseases for Oncology codes C18–20). To reduce the potential for recall bias, respondents were restricted to those with a recent diagnosis by excluding patients who: were sampled for a previous year’s survey (2015, 2016 or 2017); had their diagnostic test more than 12 months before the survey (as elicited from the response to Question 4 (Q4)); and were being treated for their cancer for longer than 12 months (Q60). Since routine screening was only available to people aged 60–75 years in 2018, patients diagnosed via screening (Q1) were also excluded to ensure relatively homogenous diagnostic experiences. Patients identified through screening are known to have a different staging profile, and the publicly available CPES data do not include information on disease staging. Respondents with a missing overall care rating response (Q59) were also excluded.
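For illustration, the cohort restriction can be expressed as a short filtering step. The sketch below (Python/pandas) uses hypothetical file and column names rather than the actual CPES variable names, which would need to be mapped from the accompanying technical documentation.

```python
import pandas as pd

# Minimal sketch of the cohort restriction described above. All names below
# are hypothetical stand-ins for the variables in the public CPES 2018 extract.
cpes = pd.read_csv("cpes_2018.csv")  # hypothetical file name

crc = cpes[cpes["icd10_site"].isin(["C18", "C19", "C20"])]      # colorectal cancers only
crc = crc[~crc["sampled_2015_2017"].astype(bool)]               # not sampled in earlier surveys
crc = crc[crc["q4_test_within_12m"].astype(bool)]               # diagnostic test within 12 months
crc = crc[crc["q60_treated_within_12m"].astype(bool)]           # treatment within 12 months
crc = crc[crc["q1_route"] != "screening"]                       # exclude screen-detected patients
cohort = crc.dropna(subset=["q59_overall_rating"])              # require an overall care rating

print(len(cohort))  # 4069 in the published analysis
```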
Diagnostic experiences and overall care rating
The first 11 questions of the survey measure experiences related to the diagnosis period. Two questions were excluded: Q4, which was used to define the study population; and Q3, which explored patient-related delay and was therefore not relevant to our aim of identifying actionable items under provider control. Responses to the remaining nine questions were categorised as positive, negative or uninformative (eg, ‘Don’t know/can’t remember’) experiences, in line with NHS scoring and reporting standards as indicated in the technical documents accompanying the data (table 1). For this study, a response category assigned an NHS score of ‘0’ was denoted a negative experience, whereas a score of ‘1’ signified a positive experience.13 Unscored NHS response categories were deemed uninformative. For example, Q1 asked, ‘Before you were told you needed to go to hospital about cancer, how many times did you see your GP (family doctor) about the health problem caused by cancer?’ The NHS assigned a score of ‘1’ to ‘I saw my GP once’ and ‘I saw my GP twice’; these were positive experiences. The NHS scored ‘I saw my GP 3 or 4 times’ and ‘I saw my GP 5 or more times’ as ‘0’, indicating a negative experience. No scores were assigned to ‘Don’t know/can’t remember’, hence these were uninformative responses.
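The scoring scheme can be made concrete with a minimal sketch, again using Q1 as the worked example; the response labels below are paraphrased, and the authoritative mapping is the NHS scoring given in the technical documents.

```python
# Worked example of the NHS-based experience coding for Q1; response labels
# are paraphrased from the questionnaire.
Q1_CODING = {
    "I saw my GP once": "positive",                   # NHS score 1
    "I saw my GP twice": "positive",                  # NHS score 1
    "I saw my GP 3 or 4 times": "negative",           # NHS score 0
    "I saw my GP 5 or more times": "negative",        # NHS score 0
    "Don't know / can't remember": "uninformative",   # unscored
}


def code_experience(response: str, coding: dict) -> str:
    """Map a raw survey response to positive/negative/uninformative."""
    return coding.get(response, "uninformative")
```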
Three of the nine questions (Q2, Q6 and Q9) were excluded from the primary analysis as they asked about how the patient ‘felt’ rather than what happened. We chose to focus the primary analysis on processes of care because feelings may be on the causal pathway between the objective experiences and the overall rating and thus, when modelled statistically, may cloud our understanding of which processes need to be targeted. This approach is also in accordance with other literature analysing CPES data.14 15 For example, Q2 asked, ‘How do you feel about the length of time you had to wait before your first appointment with a hospital doctor?’ These questions were included in a secondary sensitivity analysis to explore whether they enhance our understanding of the interplay between processes and feeling.
The dependent variable was patients’ poorer overall rating of their cancer care. This was determined by using responses to Q59, ‘Overall, how would you rate your care?’, on a rating scale of 0–10 (from very poor to very good). The scale was dichotomised in line with a recent analysis of the 2015 CPES data,16 with a rating of less than 8 out of 10 defined as a poorer overall rating.
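A toy illustration of this dichotomisation, assuming Q59 is held as an integer rating from 0 to 10:

```python
import pandas as pd

# Hypothetical Q59 responses on the 0-10 scale; ratings below 8 are flagged
# as a poorer overall care rating, in line with the cut-point above.
ratings = pd.Series([10, 9, 8, 7, 3])
poorer_rating = (ratings < 8).astype(int)   # -> 0, 0, 0, 1, 1
```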
Statistical analysis
Multivariable logistic regression was used to predict a poorer overall care rating. Two levels were defined, reflecting a conceptual hierarchy, and modelled accordingly: sociodemographic (Level 1) and experiential (Level 2).17 In the Level 1 analysis, sociodemographic characteristics were modelled until all retained variables were statistically significant. In Level 2, the final Level 1 variables were forced into the model, with Level 2 variables retained only if statistically significant. For all models, the area under the receiver operating characteristic curve (AUC) was measured as an indicator of model discriminatory power. The Hosmer-Lemeshow test was used to assess goodness of fit, with a non-significant test result indicating adequate fit; as the sample size was large (~4000), the number of groups used for assessing goodness of fit was increased in line with recommendations to standardise the power of the Hosmer-Lemeshow test.18
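As a rough guide to how such a model and its fit statistics could be computed, the following sketch uses statsmodels and scikit-learn with hypothetical data-frame and column names; the Hosmer-Lemeshow helper is a standard textbook implementation, not the exact code used for the published analysis.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from sklearn.metrics import roc_auc_score


def fit_logit(df: pd.DataFrame, outcome: str, predictors: list[str]):
    """Fit a logistic regression and return the fitted model and its ROC AUC."""
    X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True).astype(float))
    model = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    auc = roc_auc_score(df[outcome], model.predict(X))
    return model, auc


def hosmer_lemeshow(y_true, y_prob, groups: int = 10):
    """Hosmer-Lemeshow goodness-of-fit test; `groups` can be increased for
    large samples, as recommended in the reference cited above."""
    hl = pd.DataFrame({"y": y_true, "p": y_prob})
    hl["group"] = pd.qcut(hl["p"], groups, labels=False, duplicates="drop")
    obs = hl.groupby("group")["y"].sum()     # observed events per group
    exp = hl.groupby("group")["p"].sum()     # expected events per group
    n = hl.groupby("group")["y"].count()     # group sizes
    chi2 = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    dof = hl["group"].nunique() - 2
    return chi2, stats.chi2.sf(chi2, dof)
```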
Level 1: sociodemographic characteristics. Univariable models were run for each sociodemographic characteristic (ie, age group, gender, ethnicity, IMD and cancer type). Variables significant at p<0.20 were assessed in multivariable analysis. In multivariable analysis, only variables significant at p<0.01 were retained (final Level 1 model).
Level 2: primary analysis. The six objective questions were modelled as dummy variables comparing ‘negative’ and ‘uninformative’ experiences with the referent group of ‘positive’ experience (see table 1); as the focus was negative experience, both the overall question and the dummy variable for negative experience had to meet the statistical thresholds. Level 2 variables that screened positive in univariable analysis (p<0.20) were added to the final Level 1 model by forward selection until all retained Level 2 variables were statistically significant at p<0.01.
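A simplified sketch of the dummy coding and forward-selection step is shown below; it reuses the hypothetical fit_logit() helper from the previous sketch and omits the re-checking of previously entered variables that a full forward-modelling procedure would involve.

```python
import pandas as pd

EXPERIENCE_LEVELS = ["positive", "negative", "uninformative"]  # "positive" is the referent


def forward_select(df, outcome, level1_vars, candidate_questions, alpha=0.01):
    """Add screened-in Level 2 questions to the forced Level 1 variables,
    retaining a question only if its 'negative' dummy is significant at alpha."""
    # Ordering the categories ensures get_dummies(drop_first=True) inside
    # fit_logit() drops "positive", leaving <question>_negative and
    # <question>_uninformative dummies for each experience question.
    for q in candidate_questions:
        df[q] = pd.Categorical(df[q], categories=EXPERIENCE_LEVELS)

    retained = []
    for q in candidate_questions:  # in practice, ordered from univariable screening (p<0.20)
        model, _ = fit_logit(df, outcome, level1_vars + retained + [q])
        if model.pvalues.get(f"{q}_negative", 1.0) < alpha:
            retained.append(q)
    return retained
```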
Level 2: sensitivity analysis. A secondary analysis was carried out by adding the three ‘feelings’ questions (Q2, Q6 and Q9) in the Level 2 multivariable analysis. This analysis followed the same procedure as the primary analysis.
Results
The CPES 2018 received a total of 73 817 patient responses (response rate 64%) for inpatient episodes or day cases between April and June 2018. As shown in online supplemental eFigure 1, 7646 of the 73 817 respondents had CRC. Of these, 4971 patients were not previously sampled in 2015–2017 and had been diagnosed and treated for CRC within the past 12 months. From these, 763 patients diagnosed via screening and 139 patients with missing overall care rating responses were excluded, leaving 4069 patients in the final sample; 498 (12.2%) of these respondents had a poorer overall care rating.
Figure 1 illustrates the distribution of missing, uninformative and negative responses to the questions included in the primary analysis. The lowest proportion of negative experiences related to whether patients were given all the information they needed about their diagnostic test (Q5), with only 4% reporting that they did not have this information. Negative responses to the remaining five questions ranged from 16% for Q8 (told they could bring a family member or friend when told they had cancer) to 21% for Q11 (given written information about their cancer type).
Table 2 shows the results for the primary and secondary analyses. Of the available demographic characteristics, only age group and gender were significant in Level 1 analysis; older patients were less likely to rate their care poorly, and females more likely to do so.
The central columns of table 2 show that a negative experience on each of the six diagnosis-related questions was significantly associated with a poorer rating for overall care in multivariable analysis, after controlling for age group and gender. The highest adjusted OR of a poorer overall care rating was found for those who were not given information regarding their diagnostic test (Q5), 3.50 (95% CI 2.38, 5.14). Intermediate ORs were found for number of pre-referral general practitioner (GP) visits (Q1), understood explanation of test results (Q7), told could bring family/friend to explanation (Q8) and given written information about their cancer (Q11). Respondents who did not understand the explanation of their diagnosis (Q10) had the lowest adjusted OR of 1.64 (95% CI 1.26, 2.14) for a poorer overall care rating. The full model had an AUC of 0.77 (95% CI 0.75, 0.79), and the fit was adequate (χ²=80 on 78 df; p=0.43).
Figure 2 illustrates the proportion with poorer overall care rating for each total number of negative responses on the six diagnosis-related questions in the primary analysis. Of respondents with no negative experiences reported, only 4% reported a poorer overall care rating. This increased with the number of negative experiences; 27% of the respondents with a negative experience on any three out of six questions rated their overall care poorly.
The right-hand columns of table 2 display ORs from the sensitivity analysis, which included three additional questions reflecting patient ‘feelings’. Five of the six initial questions remained significant predictors of poorer overall rating, but the number of pre-referral GP visits (Q1) was no longer significant. All three ‘feelings’ questions were significant predictors of poorer overall rating, with ORs of 2.06 (95% CI 1.58, 2.70) for Q2 (feelings about the wait for referral), 1.93 (95% CI 1.45, 2.59) for Q6 (feelings about the wait for the diagnostic test) and 1.56 (95% CI 1.18, 2.06) for Q9 (feelings about the way they were told about their cancer). Again, a lack of information about the diagnostic test (Q5) had the highest OR for a poorer overall care rating (2.92; 95% CI 1.96, 4.35). The extended model had only a marginally higher AUC of 0.80 (95% CI 0.77, 0.82); the fit was again adequate (χ²=78 on 75 df; p=0.39). The correlation between the number of GP visits (Q1) and feelings about time to referral (Q2) was 0.35, explaining the removal of Q1 from the model. The other two sets of questions were not so closely related conceptually. Having all the information needed before the test (Q5) was not strongly related to feelings about the length of time to test results (Q6; r=0.17), leading to the retention of both questions in the sensitivity analysis. Feelings about the way you were told (Q9) were more strongly correlated with being told the results in a way you could understand (Q7; r=0.29) than with being told you could bring a family member (Q8; r=0.24); all three questions remained significant in the sensitivity analysis.
Discussion
This study of 4069 respondents to the CPES 2018 found that all six diagnosis-related experiences were independently and strongly associated with patients’ overall care rating, after controlling for age group and gender, with ORs ranging from 1.6 to 3.5. The primary analysis suggests that intervening to improve any of these diagnosis-related experiences could affect patients’ perception of the overall experience. This was further supported by the sensitivity analysis, in which all three feelings-mediated questions were statistically significant in the final model. Many of the negative diagnosis-related experiences relate to information provision.
The results are summarised graphically in figure 3, with questions grouped as primary care (Q1), information provision (Q5 and Q11), provider communication (Q7, Q8 and Q10) and patient feelings (Q2, Q6 and Q9). While improved service provision should improve how patients feel about services, it was hypothesised that feelings about an experience lie on the causal pathway between the experience and a low overall care rating; including them could potentially exclude more objective variables from the model. The sensitivity analysis confirms this to some extent, as the question about the number of pre-referral GP visits (Q1) was no longer statistically significant, reflecting its overlap with the question about how respondents felt about the length of the wait for hospital referral (Q2).
Timely primary care referrals are key to early-stage diagnosis and contribute to treatment success. The primary analysis findings suggest that having three or more GP visits before hospital referral is associated with a higher likelihood of poorer care rating (OR=1.8). Recent focus on the increasing incidence of early-onset CRC has highlighted this issue, with both researchers19 and patients20 21 expressing the concern that delays in primary care recognition and referral are partly responsible for later stages of disease presentation. It is acknowledged that ambiguity of symptoms plays a large part in diagnostic delay,19 22 23 but also recognised that adherence to updated clinical guidelines, responding to increasing incidence in younger age groups, can lead to improvements.24 Primary care providers potentially play a pivotal role in increasing patient uptake of population screening opportunities.25
An analysis of 2016 CPES questions found that meeting information needs in the pretreatment and post-treatment periods was most strongly associated with higher average care ratings.11 Inadequate provision of information about the CRC diagnosis and treatment journey has been reported, with 24% of patients expressing a desire for more information.26 We found that relatively few respondents (4%) reported inadequate information about their diagnostic tests, which is encouraging. By contrast, 21% either did not receive written information about the type of cancer they had (15%) or found it difficult to understand (6%), a situation that invites a prompt system response. Around 18–19% reported difficulty understanding explanations of either their test results or their diagnoses.
It is plausible that this one in five with negative experiences includes a disproportionate share of people with physical or learning disabilities, or those with limited English, whose needs are inadequately addressed. The NHS specifically provides guidance that these special needs should be addressed,4 but perception of quality care of patients with CRC has, for example, been shown elsewhere to be lower for those with limited English.27 There is evidence that non-white ethnic groups and individuals living in socioeconomically deprived areas are less likely to respond to CPES surveys28; to the extent that they are more likely to experience communication difficulties, the rates of negative experiences relating to understanding may be underestimated. Individual providers need to check understanding during these key moments, while local service systems need to identify the target groups who have particular difficulties with understanding and trial interventions to improve these aspects of service provision.
Finally, the revelation of a cancer diagnosis can cause emotional overload for patients. It is unfortunate, then, that 16% were either not told they could bring someone to that appointment (14%) or were told by phone or letter (2%). It is also notable that, in the sensitivity analysis, this question was independently predictive of a poorer overall care rating despite the simultaneous inclusion of a question about how the respondent felt about the way they were told they had cancer, which was more closely correlated with the questions about being told about the diagnosis in a way the patient could understand. As with the provision of written information, informing someone that they can be accompanied to the consultation would appear to be a relatively easy addition to the workflow.
A strength of the present study is that it investigated the care experiences of a homogeneous population: patients with CRC diagnosed and treated in the preceding 12 months. Our results confirm those of an earlier investigation of predictors of overall care rating, which encompassed multiple cancer types and the entire treatment journey and identified several key predictors, including time to first hospital visit and receiving written information about the diagnosis.16 Our results extend this work by showing that several other diagnostic experiences are also important independent predictors for patients with CRC diagnosed through pathways other than screening. Clearly, a price paid for this focus is that we lack information about the experience of patients identified via screening; we excluded these patients because they are known to have a different disease-staging profile, and we lack the data to control for this in analysis.
A limitation of the study is the lack of information available on potential confounders. For example, ethnicity (limited to ‘White British’ vs all others) was examined as a potential predictor but was not retained because it was not significant; an ideal analysis would have examined the impact of English proficiency, information that is not available in the CPES. In addition, this analysis is restricted to CPES respondents. We have no information about patients with CRC who did not respond to the 2018 CPES survey, or how they would rate their care; an analysis of earlier data on lung cancer suggests that CPES respondents tend to be younger, relatively socioeconomically advantaged and more likely to have received anticancer treatment.29
Finally, while this analysis of routinely collected data identifies potential candidates for intervention, much can be learnt from other research approaches. For example, randomised intervention studies addressing possible shortcomings could confirm the modifiability of these experiences and the relationship between such changes and overall care rating. Such trials could be accompanied by qualitative studies (such as interviews or focus groups) that provide a more nuanced understanding of patient experience and its relationship to care rating.
Conclusion
This study demonstrated the importance of diagnosis-related experiences in shaping overall ratings of care by patients with CRC diagnosed through pathways other than routine screening. The primary analysis particularly highlights the number of GP visits patients had to make before being referred for a CRC diagnosis; timely referral is a vital component of timely diagnosis, especially for those who either fall outside the routine screening age brackets or do not participate in routine screening. Another crucial theme emerging from the evidence is unmet information needs. This study provides policy makers and relevant stakeholders with possible areas for intervention to improve the quality of diagnosis-related experiences.
Data availability statement
Data are available in a public, open access repository.
Ethics statements
Patient consent for publication
References
Supplementary materials
Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
Footnotes
Contributors SSO and GA developed the study and design. SSO conducted the analysis and wrote the first draft manuscript in consultation with GA and KL. All authors contributed to and approved of the final draft. SSO is responsible for the overall content.
Funding This study was supported by the Centre of Research Excellence in Implementation Science in Oncology, National Health and Medical Research Council of Australia (NHMRC grant number 1135048), which is administered by the Australian Institute of Health Innovation, Macquarie University.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.