Improvement of psychometric properties of a scale measuring inpatient satisfaction with care: a better response rate and a reduction of the ceiling effect

Abstract

Background

The objective was to solve two problems of an already validated scale measuring inpatient opinion on care: 1) a high non-response rate for some items due to the "not applicable" response option and 2) a skewed score distribution with a high ceiling effect.

Methods

The EQS-H scale ("échelle de qualité des soins en hospitalisation") comprised 26 items and 2 sub-scales of 13 items each, 'quality of medical information' (MI) and 'relationships with staff and daily routine' (RS). Three studies were conducted: a first mono-centre study (n = 552, response rate = 83.4%, self-completion of the scale the day before discharge) to construct a shorter version of the scale by removing items with high non-response rates while retaining those needed to ensure good internal validity (construct, convergent and divergent) and reliability; a second mono-centre study (n = 1246, response rate = 77.9%, self-completion of the scale before discharge) to confirm the psychometric properties of the new version; a third multi-centre national study (n = 886, response rate = 41.7%, self-completion at home 15 days after discharge) to test a new response pattern in order to reduce the ceiling effect.

Results

Six items having a non-response rate >20% were deleted, increasing the rate of exhaustive response to all items from 15% to 48%. Factor analysis supported the removal of 4 more items to ensure good internal validity and reliability of the new version. These good results (initial variance explained: 43%; Cronbach's α: 0.80 (MI) and 0.81 (RS)) were confirmed by the second study. The new response format produced a normalisation of the 2 scores with a large decrease in the ceiling effect (from 25% to 4% for the MI subscale and from 61% to 8% for RS). Psychometric properties of the final version were excellent: the 2 subscales (8 items each) explained 66% of the variance in principal component analysis, and Cronbach's α was 0.92 (MI) and 0.93 (RS) respectively.

Conclusion

The new version of the EQS-H has better psychometric properties than the previous one. Rates of missing values are lower, and the score distribution is normalized. An English version of this scale, focused on the quality of medical information delivered and on relationships with staff, already exists and could be useful for conducting cross-cultural studies of health care service quality.

Background

The assessment of satisfaction with care among hospitalized patients is increasingly recognized as a major component of quality management. Continuous quality improvement, comparison of hospital performances, and demands for accountability are some of the reasons that lead hospitals to measure patient satisfaction. Numerous studies on patient needs and expectations have been conducted [1–7]. According to Fitzpatrick [8] and others, patient satisfaction is a component of healthcare quality which reflects healthcare professionals' ability to meet their patients' needs and expectations. Donabedian [9] showed that the measurement of patient satisfaction is also part of the care provision process and, as such, enables identification of dysfunction in the organization of care and evaluation of efforts to improve quality. Several authors have found a relationship between patient satisfaction and clinical results [10–12]. Other studies, however, such as those by Barlesi [13] or Vingerhoets [14], have not shown any impact of the results of patient satisfaction surveys on the improvement of healthcare delivery. The question of whether results derived from satisfaction measures can be used as a tool to improve clinical performance has not been fully resolved, but authors agree that measurement is beneficial to patients (who are then viewed as partners in the care process) [15] and also to professionals (as a confirmation of their professional skills) [16]. Finally, patient satisfaction is now one of the most common dimensions of performance on hospital dashboards. Patient satisfaction questionnaires have proliferated over the last decades as tools to measure health care from the patients' perspective. Nevertheless, in most cases, surveys have been criticized for their lack of a conceptual framework and lack of valid and reliable instruments [17].

In France, measuring satisfaction has been mandatory since 1996 and several questionnaires have been developed over the last ten years [18–24]. Among the existing French scales, the EQS-H scale ('Echelle de Qualité des Soins en Hospitalisation') is used to assess inpatient satisfaction with medical information and relationships with staff. The validation of the first version of the scale was published in 1999 [19, 20]. Although the EQS-H is convenient to use from the point of view of both hospitals and patients, its psychometric properties are compromised by a high rate of 'did not apply to me' responses (NA), analyzed as missing data, and by a skewed score distribution.

The main objective of this work was to optimize the psychometric properties of the scale by deleting items having a high rate of NA responses to increase scale stability, and by reducing the ceiling effect to improve the item response distribution. Overall, the aim was to make the questionnaire valid, reliable, easy to complete by all inpatients and suitable for quality of care improvement management.

Methods

The research design consisted of 3 studies

  • Study A (scale shortening) to select items to be deleted on the basis of NA rates and of a psychometric analysis combining Principal Component Analysis (PCA), convergent and discriminant validity, and reliability evaluated by Cronbach's α coefficients.

  • Study B (replication phase) to confirm the psychometric properties of the new version of the EQS-H.

  • Study C to test a new response pattern designed to reduce ceiling effect.

Samples and study design (described in Table 1)

Table 1 Description of the designs of the studies

Studies A and B were mono-centre surveys carried out under the same conditions in April 2002 and April 2003 at the teaching hospital of Nantes (France).

Study C was a multi-centre national survey conducted in October 2004 in 12 volunteering short-stay hospitals (teaching, general and private) taking part in an international performance assessment project co-ordinated by the World Health Organisation's pan-Europe project in Barcelona (PATH project: Performance Assessment Tools for Hospitals) [25]. Twenty hospital performance quality indicators were selected in several fields and a standardized evaluation of inpatient satisfaction was performed to assess the 'patient centeredness' dimension of the performance model.

Questionnaire

The questionnaire used in Study A was the initial 26-item EQS-H comprising 2 sub-scales: "quality of medical information" (MI) (13 items) and "relationships with staff and daily routine" (RS) (13 items) [19]. In the validation study, the variance explained by the 2 factors was 42.3%, and Cronbach's α was 0.88 for the MI subscale and 0.87 for the RS subscale. Each item was rated from 1 (not at all) to 4 (absolutely), and a "NA" response was entered into analyses as a missing value. Only 15% of the patients responded exhaustively to all items in the validation study. The items of each sub-scale were summed, and the sums were then rescaled to cover a range from 0 to 100 (the highest score reflecting the greatest satisfaction). Patient scores could be computed when at least half of the items plus one were completed.
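
The scoring rule above can be made concrete with a short sketch (Python; not the authors' code, which relied on SPSS). It treats 'NA' answers as missing, refuses to compute a score unless at least half of the items plus one are answered, and rescales to 0-100; scoring partially completed sub-scales from the mean of the answered items is an assumption, as the paper does not specify how partial responses were rescaled.

```python
from typing import Optional, Sequence


def subscale_score(items: Sequence[Optional[int]],
                   min_rating: int = 1,
                   max_rating: int = 4) -> Optional[float]:
    """Rescale a sub-scale of Likert items to 0-100 (higher = more satisfied).

    'NA' answers are passed as None and treated as missing. A score is only
    computed when at least half of the items plus one are answered, as stated
    for the EQS-H; using the mean of the answered items for partially
    completed sub-scales is an assumption, not a rule given in the paper.
    """
    answered = [x for x in items if x is not None]
    if len(answered) < len(items) // 2 + 1:
        return None  # too many missing values: no score
    mean = sum(answered) / len(answered)
    return (mean - min_rating) / (max_rating - min_rating) * 100


# Example: a 13-item sub-scale with two 'NA' answers
responses = [4, 3, 4, None, 4, 4, 3, 4, None, 4, 3, 4, 4]
print(subscale_score(responses))  # about 90.9
```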

The questionnaire used in Study B was the short version of the EQS-H (16 items).

The questionnaire used in study C was the 16-item EQS-H questionnaire constructed from studies A and B, with the response choice pattern modified from the previous 4-point format to a 5-point scale with 3 positive choices (excellent, very good, good) and 2 negative choices (moderate, poor). This format is considered to be the best way to avoid a ceiling effect, often highlighted in satisfaction questionnaires [26, 27].

In line with previous studies and the literature [19, 24], socio-demographic, medical and hospital-stay characteristics in relation to patients' 16-item EQS-H scores were explored: gender, age, mode of admission, perceived health status compared to admission, perceived health status compared to people of the same age, and satisfaction with life in general.

Statistical analysis

Study A: items with a 'NA' response rate higher than 20% were removed from the scale. An exploratory Principal Component Analysis (PCA) using a Varimax rotation on the correlation matrix was performed on the remaining 20 items. The number of factors was determined using the scree plot. Two criteria were used to attribute each item to one of the factors. First, a substantial loading on one principal component: like other authors [18], we chose coefficients >0.60, although the values generally accepted for loadings are >0.40 [28]. Second, if an item loaded across several factors, it was attributed to the factor for which it maximized internal consistency, measured by Cronbach's α. This strategy enabled the removal of several items for which neither a sufficient loading on the principal components nor an adequate Cronbach's α could be obtained, yielding a robust, shorter two-factor solution. The homogeneity of the dimensions was assessed using convergent validity (correlations between items within a sub-scale greater than 0.40) and discriminant validity (correlations of items in one sub-scale with items in the other sub-scale less than 0.40) [29]. Correction for overlap was performed.
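
As an illustration of these steps (the authors used SPSS and SPAD; the data and code below are purely illustrative), a minimal Python sketch of principal component loadings extracted from the correlation matrix, a varimax rotation, and Cronbach's α might look as follows.

```python
import numpy as np


def pca_loadings(items: np.ndarray, n_components: int) -> np.ndarray:
    """Unrotated component loadings from the items' correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1][:n_components]  # largest eigenvalues first
    return eigvec[:, order] * np.sqrt(eigval[order])


def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Varimax rotation of a loading matrix (standard SVD-based algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        new_objective = s.sum()
        if new_objective < objective * (1 + tol):
            break
        objective = new_objective
    return loadings @ rotation


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) matrix of complete responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)


# Illustrative use on simulated 4-point responses (complete cases only)
rng = np.random.default_rng(0)
data = rng.integers(1, 5, size=(500, 20)).astype(float)
rotated = varimax(pca_loadings(data, n_components=2))
strong = np.abs(rotated).max(axis=1) > 0.60   # loading criterion used in Study A
print(strong.sum(), cronbach_alpha(data[:, :10]))
```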

Study B: in order to confirm the internal validity and reliability of the 16-item EQS-H, we carried out a confirmatory PCA and convergent and discriminant validity analyses, calculated Cronbach's α, and computed floor and ceiling effects.

Study C: the new format (5-point scale) of the 16-item EQS-H was first compared to the initial response scale in terms of psychometric properties, mean scores, and floor and ceiling effects. A confirmatory two-factor PCA was performed using a Varimax rotation on the correlation matrix. Two criteria were used to attribute each item to one of the factors: a substantial loading (>0.60) on one principal component, or, if an item loaded across several factors, attribution to the factor for which it maximized internal consistency as measured by Cronbach's α. Convergent and discriminant validities were obtained, correcting for overlap.

Structural Equation Modelling (SEM) was performed to confirm the factorial structure. SEM is a generalization of linear regression and factor analysis models [30, 31]. These models allow the simultaneous estimation of several multiple linear regressions. Variables in the regressions can be observed or latent. The latent variables are considered to be underlying common factors that explain the pattern of correlations observed in the group of observed variables [32]. Several statistical indices enable verification of model fit and selection of the best-suited model. Since this statistical technique can prove unstable, it is recommended that several indices be used in order to choose the model that maximises certain criteria. The main indicators used for this are the RMSEA (Steiger's Root Mean Square Error of Approximation), the fit being considered good if <0.1 and very good if <0.05, the NFI (Bentler and Bonett's Normed Fit Index), considered good if >0.95, and the GFI (Goodness of Fit Index), considered good if >0.85 [33, 34].
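
For orientation, the two fit indices that can be computed directly from chi-square statistics are sketched below in Python; the chi-square values are hypothetical, the GFI additionally requires the sample and model-implied covariance matrices and is omitted, and the actual modelling in this study was done with SAS PROC CALIS.

```python
import math


def rmsea(chi2: float, df: int, n: int) -> float:
    """Steiger's Root Mean Square Error of Approximation (point estimate),
    from the model chi-square, its degrees of freedom and the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))


def nfi(chi2_model: float, chi2_null: float) -> float:
    """Bentler-Bonett Normed Fit Index: relative improvement of the tested
    model over the independence (null) model."""
    return (chi2_null - chi2_model) / chi2_null


# Hypothetical chi-square values for a fitted model and its null model
print(rmsea(chi2=350.0, df=101, n=793))          # good if < 0.1, very good if < 0.05
print(nfi(chi2_model=350.0, chi2_null=9000.0))   # good if > 0.95
```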

Finally, a general multivariate linear model was used to adjust the 16-item EQS-H global score for socio-demographic variables.
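
A minimal sketch of such an adjustment model, assuming a hypothetical analysis file and illustrative column names (the authors' analysis used SPSS), could look like this in Python with statsmodels.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per respondent, with the 0-100 global EQS-H
# score and the adjustment variables listed earlier (column names are
# illustrative, not the authors').
df = pd.read_csv("eqsh_study_c.csv")

model = smf.ols(
    "eqsh_global ~ C(gender) + C(age_group) + C(admission_mode)"
    " + C(health_vs_same_age) + C(health_vs_admission) + C(life_satisfaction)",
    data=df,
).fit()
print(model.summary())  # adjusted effects of each covariate on the global score
```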

All study analyses were conducted using SPSS software (version 11) and SPAD. SEM was performed using SAS 8.2 and the PROC CALIS procedure.

Results

Scale shortening procedure (Study A)

Six items had a frequency of 'NA' responses higher than 20%. Four of them were related to patient autonomy: help for psychological problems ('NA' rate = 23.0%), help with meals (48.6%), help with washing (33.9%) and help with going to the toilet (38.0%); 2 items concerned patients' relatives: involvement in information sessions with relatives (46.6%) and information given to relatives (33.3%). These 6 items were the first to be removed in order to decrease the rate of missing values.

An exploratory PCA on the remaining 20 items identified 2 dimensions based on the scree plot. The first two eigenvalues were 5.03 for the first component and 2.68 for the second. Four more items were removed: one item related to obtaining answers from doctors loaded on both factors (0.36 and 0.38 respectively) and did not maximize the Cronbach's α coefficient; one item concerning involvement in discharge from hospital was correlated with the 'MI' subscale in the 26-item EQS-H but had a low loading on the new 'MI' subscale (0.26) and did not maximize the Cronbach's α coefficient. Lastly, Cronbach's α increased from 0.72 to 0.76 in the 'RS' subscale when two items concerning the bedside behaviour of staff and doctors were eliminated.

Finally, the short EQS-H scale comprised 16 items. Forty-eight percent of patients answered all items exhaustively (versus 15% with the initial 26-item EQS-H scale). An exploratory PCA showed a robust two-factor solution, 'MI' (9 items) and 'RS' (7 items), accounting for 42.1% of the variance (Table 2). Cronbach's α coefficients were 0.83 and 0.82 respectively. Moreover, correlations between items within a given subscale were all higher than 0.40, and correlations between items and those of the other sub-scale were lower than 0.40.

Replication phase (Study B)

A confirmatory PCA on the two factors confirmed all the results obtained. The two-factor solution, 'MI' (9 items) and 'RS' (7 items), accounted for 42.9% of the variance (Table 2). All items had a very good loading on their own factor except the first item of the second factor (0.38) ('I could identify the doctor in charge of me') (Table 2). Cronbach's α coefficients were close to those obtained in Study A (0.80 and 0.81 respectively). Convergent and discriminant validities confirmed the consistency of the 2 sub-scales.

Table 2 Results of PCA using varimax rotation in the 3 studies

However, the ceiling effect remained high, at 24.7% for the 'MI' dimension and 61.2% for the 'RS' dimension (Table 3).

Table 3 Comparison of the psychometric properties of the 2 formats of the 16-item EQS-H (Study B – 4-point Likert scale and Study C – 5-point Likert scale)

Testing a new response pattern (Study C)

Floor and ceiling effects

The new 5-point response format yielded a very marked decrease in the ceiling effect, accompanied by a normalisation of the scores (Table 3). Mean scores were 59.2 (SD = 21.0) for 'MI' and 69.0 (SD = 19.8) for 'RS', close to the medians (59.4 and 68.8). Skewness values were between -0.48 and -0.05. Only 4.3% ('MI') and 8.4% ('RS') of patients still obtained a score of 100.
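
These distribution summaries (mean, SD, median, skewness, floor and ceiling effects) can be reproduced with a few lines of code; the sketch below is illustrative and assumes a NumPy array of 0-100 sub-scale scores.

```python
import numpy as np
from scipy.stats import skew


def score_distribution(scores: np.ndarray) -> dict:
    """Summarise a 0-100 sub-scale score: central tendency, spread, skewness
    and floor/ceiling effects (share of respondents at the extreme scores)."""
    return {
        "mean": scores.mean(),
        "sd": scores.std(ddof=1),
        "median": float(np.median(scores)),
        "skewness": float(skew(scores)),
        "floor_pct": 100 * float(np.mean(scores == 0)),
        "ceiling_pct": 100 * float(np.mean(scores == 100)),
    }


# Illustrative call on a vector of 'MI' sub-scale scores
mi_scores = np.array([100, 75, 81.25, 50, 68.75, 100, 87.5, 62.5])
print(score_distribution(mi_scores))
```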

Psychometric properties of the final scale

The confirmatory two-factor PCA rotated using the Varimax procedure accounted for 65.5% of the variance on the first 2 principal components (54.5% for the first factor, 'RS' (8 items), eigenvalue 8.73; 11.0% for the second, 'MI' (8 items), eigenvalue 1.77) (Tables 2 and 3). One item, on the ability to recognize the doctor in charge, loaded on the 'MI' factor, in contrast to the previous result. Cronbach's α coefficients were excellent: 0.92 for 'MI', 0.93 for 'RS' and 0.95 for the 16-item EQS-H scale overall. Convergent and discriminant validity were good: all items had a correlation >0.40 with their own subscale, and correlations between items and those of the other sub-scale were lower than 0.40. The inter-subscale correlation was 0.67 (Table 3).

Structural Equation Modelling confirmed the existence of 2 latent factors ('MI' and 'RS'), but the best characteristics were obtained with a hierarchical model including the 2 latent factors and a global satisfaction latent factor bringing the 16 items together (Figure 1). Goodness of fit was very good, with RMSEA = 0.063, NFI = 0.954 and GFI = 0.943. All structural coefficients were significant (p < 0.001).

Figure 1. Structural equation model of the new version of the EQS-H (N = 793). Short names of items are given in Table 2.

The 16-item EQS-H overall score was associated with several adjustment variables in a general multivariate linear model. Scores were significantly higher for males (p = 0.019), for older patients up to 65 years (p = 0.002), for those who thought they had better health than people of the same age (p < 0.001), for those who thought they had better health status compared to admission day (p < 0.0001) and for patients who were more satisfied with their life in general (p < 0.0001).

Discussion

These three studies, carried out on large samples of subjects, made it possible to improve significantly the psychometric properties of the previously validated inpatient satisfaction scale EQS-H. The validation process demonstrated the high added value of reducing and modifying the questionnaire: the new form appears to be valid and reliable, and to contribute to the unbiased subjective evaluation of inpatient-reported outcomes.

The shortening and validation strategies presented here follow most of the recommendations of 'good practice' for satisfaction scale validation [11, 35]. To begin with, our strategy was to promote higher response rates. Six items were initially removed from the scale on account of a high rate of 'NA' responses. The EQS-H is an in-patient global satisfaction questionnaire, which should be applicable to most patients admitted to hospital units, whatever their autonomy. Severely impaired autonomy concerns only a few patients, so items relating to this aspect may not be relevant to the large majority of subjects. Reducing the length of the questionnaire, which involved some of these items, tripled the rate of exhaustive completion. Low response rates, while entailing loss of data, can also introduce bias into survey findings, because non-respondents may differ from respondents in ways that affect their evaluation of different aspects of care. As recommended by Coste [35], the development of the short EQS-H scale comprised two successive phases: the shortening process itself, performed in Study A, and the validation process, conducted independently on another large sample of subjects (Study B). The replication phase strongly confirmed our findings.

Secondly, the objective was to reduce the ceiling effect highlighted in the initial EQS-H scale in order to normalize the distribution curve. As suggested by Streiner and Ware [26, 36], we modified response choices from a 4-point scale to a 5-point scale with 3 positive choices and no neutral (median) response choice. Patients who took part in the studies were generally highly satisfied with the quality of care [37], and these modifications in the response format provide better sensitivity. The normalization of the distribution also made it possible to improve the statistical validity of comparisons and to obtain better results in satisfaction score modelling when adjustment variables are tested. Following the response pattern alterations, the ceiling effect disappeared.

Finally, the new EQS-H questionnaire is a self-report instrument comprising 16 items covering two very important domains of patient satisfaction, 'Quality of medical information' (8 items) and 'Relationship with staff' (8 items). These two factors relate to interpersonal aspects of care, which are both predictors of patient opinion on care [38]. Donabedian emphasized that "the interpersonal process is the vehicle by which technical care is implemented and on which its success depends" [39]. There is consistent evidence across settings that the most important health service factor affecting satisfaction is the patient-practitioner relationship, including the practitioner's primary role in information provision [4, 27]. Patient information has become crucial in health care because it enables patients to take part, freely and with full understanding, in medical decisions and the resulting care provision.

Our results support good content, construct and concurrent validity for the new version of the measure. The new version of the EQS-H demonstrated excellent internal consistency (over 0.90). Items had strong loadings on the two factors identified by PCA, which accounted for more than 65% of the variance. Convergent and discriminant validity were good. Concurrent validity was excellent. The socio-demographic variables related to scores are those usually described in the literature. To confirm our results, Structural Equation Modelling was performed, and it strongly supports the possibility of calculating a global satisfaction score. The fact that high correlations exist between all these items and factors is not surprising, and helps to explain why the item related to the identification of the doctor correlates highly with the MI dimension in the first two studies and with the RS dimension in the last.

Nevertheless, this work has several limitations. Half of the questionnaires still present more than one missing value. Information concerning relatives is no longer explored. Professional help received in daily routine is limited to 2 items instead of 6. The 2 remaining items related to patient autonomy are the most important ones (pain relief and help with daily routine). However, depending on the patient samples studied, these items could be kept in the questionnaire but not taken into account in the scoring. The response rate for the postal study was around 40%, and the representativeness of the sample could be biased by the loss of data from non-respondents, although most authors accept this response rate and consider that non-respondents are generally shown not to differ significantly from respondents in terms of satisfaction scores [20].

Conclusion

This work emphasizes the need to check and refine the psychometric properties of previously developed questionnaires. The EQS-H is a well-known scale, often used to assess inpatient satisfaction with the quality of medical and nursing care within hospitals. Its items are clinically relevant in the hospital setting, and promoting its use in different inpatient clinical settings is already planned in our hospital in order to increase the usefulness of the tool for clinicians. After a summary of the results was issued to both teams involved, highlighting priorities for improvement efforts, hospital staff identified areas for further investigation, and substantial improvements were noted in several units concerning issues such as privacy, pain and the amount of patient information. However, the diffusion of the questionnaire does need to be backed up by a communication campaign, because results from patient satisfaction surveys aiming to improve healthcare delivery are still frequently under-used by healthcare teams and not widely publicised [40]. The actual impact of corrective actions on inpatient satisfaction has not been demonstrated consistently [41, 42]. Taking concrete action for improvement, for instance circulating informative documents or establishing the traceability of the information chain, seems easier than actually changing behaviours among healthcare professionals [42].

Finally, given the instrument's good psychometric properties as revealed in this study, further work is needed to confirm the excellent validity and reliability obtained. Complementary analyses using item response models to study the difficulty of items and their homogeneity in relation to the rest of the questionnaire could be useful, as could differential item functioning analyses, so as to study scores not solely as overall averages but also according to sub-groups, for instance healthcare departments, medical specialities or case-mixes. In addition, the dimensions explored by the EQS-H are not limited to the French healthcare system, and further scale validation in other countries and cultures is required, since it would facilitate cross-cultural studies of health care service quality. English, Spanish and Italian versions of the EQS-H satisfaction scale are already available (see Additional file 1 for the free-access English version of the questionnaire).

References

  1. Ross C, Steward C, Sinacore J: A comparative study of seven measures of patient satisfaction. Medical Care. 1995, 33: 372-377. 10.1097/00005650-199504000-00006.
  2. Hendriks A, Oort F, Vrielink M, Smets E: Reliability and validity of the Satisfaction with Hospital Care Questionnaire. International Journal for Quality in Health Care. 2002, 14: 471-482. 10.1093/intqhc/14.6.471.
  3. Pettersen K, Veenstra M, Guldvog B, Kolstad A: The Patient Experiences Questionnaire: development, validity and reliability. International Journal for Quality in Health Care. 2004, 16 (6): 453-463. 10.1093/intqhc/mzh074.
  4. Larson C, Nelson E, Gustafson D, Batalden P: The relationship between meeting patients' information needs and their satisfaction with hospital care and general health status outcomes. International Journal for Quality in Health Care. 1996, 8 (5): 447-456. 10.1093/intqhc/8.5.447.
  5. Jenkinson C, Coulter A, Bruster S, Richards N, Chandola T: Patients' experiences and satisfaction with health care: results of a questionnaire study of specific aspects of care. Qual Saf Health Care. 2002, 11: 335-339. 10.1136/qhc.11.4.335.
  6. Hendricks A, Vrielink M, Smets E, Van Es S, De Haes J: Improving the assessment of (in)patients' satisfaction with hospital care. Medical Care. 2001, 39 (3): 270-283. 10.1097/00005650-200103000-00007.
  7. Gonzalez N, Quintana J, Bilbao A, Escobar A, Aizpuru F, Thompson A, Esteban C, San Sebastian J, De la Sierra E: Development and validation of an in-patient satisfaction questionnaire. International Journal for Quality in Health Care. 2005, 17 (6): 465-472. 10.1093/intqhc/mzi067.
  8. Fitzpatrick R: Surveys of patient satisfaction: I – Important general considerations. British Medical Journal. 1991, 302: 887-889.
  9. Donabedian A: Evaluating the quality of medical care. Milbank Memorial Fund Quarterly. 1966, 44: 166-206. 10.2307/3348969.
  10. Carr-Hill RA: The measurement of patient satisfaction. J Public Health Med. 1992, 14 (3): 236-249.
  11. Sitzia J, Wood N: Patient satisfaction: a review of issues and concepts. Social Science & Medicine. 1997, 45: 1829-1843. 10.1016/S0277-9536(97)00128-7.
  12. Kane RL, Maciejewski M, Finch M: The relationship of patient satisfaction with care and clinical outcomes. Medical Care. 1997, 35: 714-730. 10.1097/00005650-199707000-00005.
  13. Barlesi F, Boyer L, Doddoli C, Antoniotti S, Thomas P, Auquier P: The place of patient satisfaction in quality assessment of lung cancer thoracic surgery. Chest. 2005, 128 (5): 3475-3481. 10.1378/chest.128.5.3475.
  14. Vingerhoets E, Wensing M, Grol R: Feedback of patients' evaluations of general practice care: a randomised trial. Qual Health Care. 2001, 10 (4): 224-228. 10.1136/qhc.0100224.
  15. Vuori H: Patient satisfaction – does it matter?. Qual Assur Health Care. 1991, 3 (3): 183-189.
  16. Hearnshaw H, Baker R, Cooper A, Eccles M, Soper J: The costs and benefits of asking patients for their opinions about general practice. Fam Pract. 1996, 13 (1): 52-58. 10.1093/fampra/13.1.52.
  17. Sitzia J: How valid and reliable are patient satisfaction data? An analysis of 195 studies. International Journal for Quality in Health Care. 1999, 11 (4): 319-328. 10.1093/intqhc/11.4.319.
  18. Labarere J, Francois P, Auquier P, Robert C, Fourny M: Development of a French inpatient satisfaction questionnaire. International Journal for Quality in Health Care. 2001, 13 (2): 99-108. 10.1093/intqhc/13.2.99.
  19. Salomon L, Gasquet I, Mesbah M, Ravaud P: Construction of a scale measuring inpatient's opinion on quality of care. International Journal for Quality in Health Care. 1999, 11: 507-516. 10.1093/intqhc/11.6.507.
  20. Gasquet I, Falissard B, Ravaud P: Impact of reminders and method of questionnaire distribution on patient response to mail-back satisfaction survey. Journal of Clinical Epidemiology. 2001, 54: 1174-1180. 10.1016/S0895-4356(01)00387-0.
  21. Pourin C, Tricaud-Vialle S, Barberger-Gateau P: Validation d'un questionnaire de satisfaction des patients hospitalisés. Journal d'Economie Médicale. 2003, 21 (3): 167-181.
  22. Antoniotti S, Simeoni M, Clément A, Sapin C, Auquier P: Evaluer la satisfaction des patients hospitalisés : conceptualiser un indicateur quantitatif. Revue Epidémiologie & Santé Publique. 2000, 48 (S3): 82.
  23. Brédart A, Razavi D, Robertson C, Brognone S, Fonzo D, Petit J, De Haes J: Timing of patient satisfaction assessment: effect on questionnaire acceptability, completeness of data, reliability and variability of scores. Patient Education and Counseling. 2002, 46: 131-136. 10.1016/S0738-3991(01)00152-5.
  24. Nguyen Thi PL, Briançon S, Empereur F, Guillemin F: Factors determining inpatient satisfaction with care. Social Science & Medicine. 2002, 54: 493-504. 10.1016/S0277-9536(01)00045-4.
  25. Veillard J, Champagne F, Klazinga N, Kazandjian V, Arah O, Guisset A: A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. International Journal for Quality in Health Care. 2005, 17 (6): 487-496. 10.1093/intqhc/mzi072.
  26. Ware JE, Hays RD: Methods for measuring patient satisfaction with specific medical encounters. Med Care. 1988, 26 (4): 393-404. 10.1097/00005650-198804000-00008.
  27. Crow R, Gage H, Hampson S, Hart J, Kimber A, Storey L, Thomas H: The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technology Assessment. 2002, 6: 1-245.
  28. Nunnally JC: Psychometric Theory. 1978, New York: McGraw-Hill, 2nd edition.
  29. Campbell DT, Fiske DW: Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin. 1959, 56: 81-105. 10.1037/h0046016.
  30. Falissard B: Mesurer la subjectivité en santé: perspective méthodologique et statistique. 2001, Collection Evaluation et Statistique, Masson.
  31. Loehlin J: Latent variable models: an introduction to factor, path, and structural equation analysis. 2004, Mahwah, NJ: Lawrence Erlbaum Associates, 4th edition.
  32. Crowley SL, Fan X: Structural equation modeling: basic concepts and applications in personality assessment research. J Pers Assess. 1997, 68 (3): 508-531. 10.1207/s15327752jpa6803_4.
  33. Bentler PM: Comparative fit indices in structural models. Psychol Bull. 1990, 88: 588-606. 10.1037/0033-2909.88.3.588.
  34. Anderson JC, Gerbing DW: The effect of sampling error on convergence, improper solutions, and goodness-of-fit indices for maximum likelihood confirmatory factor analysis. Psychometrika. 1984, 49 (2): 155-173. 10.1007/BF02294170.
  35. Coste J, Guillemin F, Pouchot J, Fermanian J: Methodological approaches to shortening composite measurement scales. J Clin Epidemiol. 1997, 50 (3): 247-252. 10.1016/S0895-4356(96)00363-0.
  36. Streiner D, Norman G: Scaling responses. Health Measurement Scales: A practical guide to their development and use. 1995, Oxford: Oxford University Press, 39-52, 2nd edition.
  37. Williams SJ, Calnan M: Convergence and divergence: assessing criteria of consumer satisfaction across general practice, dental and hospital care settings. Social Science & Medicine. 1991, 33: 707-716. 10.1016/0277-9536(91)90025-8.
  38. Cheng S, Yang M, Chiang T: Patient satisfaction with and recommendation of a hospital: effects of interpersonal and technical aspects of hospital care. International Journal for Quality in Health Care. 2003, 15 (4): 345-355. 10.1093/intqhc/mzg045.
  39. Donabedian A: The quality of care: how can it be assessed?. JAMA. 1988, 260: 1743-1748. 10.1001/jama.260.12.1743.
  40. Boyer L, Francois P, Doutre E, Weil G, Labarere J: Perception and use of the results of patient satisfaction surveys by care providers in a French teaching hospital. Int J Qual Health Care. 2006, 18 (5): 359-364. 10.1093/intqhc/mzl029.
  41. Draper M, Cohen P, Buchan H: Seeking consumer views: what use are results of hospital patient satisfaction surveys?. Int J Qual Health Care. 2001, 13 (6): 463-468. 10.1093/intqhc/13.6.463.
  42. Jorde R, Nordoy A: Improvement in clinical work through feedback: intervention study. BMJ. 1999, 318 (7200): 1738-1739.


Acknowledgements

The authors would like to thank the departments of the University Hospital of Nantes for their participation: Internal Medicine (Pr Planchon), Emergency Care (Pr Potel), Rheumatology (Pr Maugars), Cardiology (Pr Godin), Dermatology (Pr Stalder), Oncology (Pr Dabouis), Urology (Pr Glémain), Abdominal Surgery (Pr Visset), Orthopaedic Surgery (Pr Gouin), Traumatology (Pr Letenneur), Ophthalmology (Pr Vabres), Obstetrics and Gynaecology (Pr Philippe).

They also thank the hospitals involved in the PATH project: the University Hospitals of Nantes, Rennes, Rouen and St Louis (AP-HP); the hospitals of Aix-en-Provence, La Roche sur Yon, Le Mans and Ploërmel; and Clinique de la Dhuys (Bagnolet), Polyclinique Francheville (Périgueux), Groupe St Augustin (Malestroit) and Polyclinique St Côme (Compiègne). PATH was partially supported by a grant from the MoH (DREES).

Author information

Corresponding author

Correspondence to Leïla Moret.

Electronic supplementary material


Additional File 1: EQS-H questionnaire: English version. The file presents the English free access version of the questionnaire (DOC 28 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Moret, L., Nguyen, JM., Pillet, N. et al. Improvement of psychometric properties of a scale measuring inpatient satisfaction with care: a better response rate and a reduction of the ceiling effect. BMC Health Serv Res 7, 197 (2007). https://doi.org/10.1186/1472-6963-7-197
