Reliability of the Canadian emergency department triage and acuity scale: interrater agreement

Ann Emerg Med. 1999 Aug;34(2):155-9. doi: 10.1016/s0196-0644(99)70223-4.

Abstract

Study objective: To determine the interobserver reliability of the Canadian Emergency Department Triage and Acuity Scale (CTAS).

Methods: Ten physicians and 10 nurses were randomly selected to review and assign a triage level to 50 ED case summaries containing the presenting complaint, mode of arrival, vital signs, and a verbatim triage note. The rate of agreement within and between groups of raters was determined using kappa statistics. One-way, two-way, and combined analyses of variance (ANOVA) were used to quantify reliability coefficients for intraclass and interclass correlations.
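The multi-rater chance-corrected agreement reported in this study can be illustrated with Fleiss' kappa, the standard generalization of Cohen's kappa to more than two raters. The sketch below is a minimal implementation; the count matrix is invented for illustration and is not the study's data (the study had 20 raters, 50 cases, and 5 CTAS levels).

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a (cases x categories) count matrix, where
    ratings[i, j] = number of raters assigning case i to triage level j.
    Assumes the same number of raters rated every case."""
    n_cases, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Per-case observed agreement: pairs of raters who agreed, over all pairs
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Expected chance agreement from the marginal category proportions
    p_j = ratings.sum(axis=0) / (n_cases * n_raters)
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 5 cases, 5 triage levels, 20 raters per case
counts = np.array([
    [18,  2,  0,  0,  0],
    [ 0, 17,  3,  0,  0],
    [ 0,  1, 19,  0,  0],
    [ 0,  0,  2, 18,  0],
    [ 0,  0,  0,  3, 17],
])
print(round(fleiss_kappa(counts), 3))  # → 0.745
```

Perfect agreement (every rater picking the same level for each case) yields kappa = 1, and agreement no better than chance yields kappa near 0, which is why values around .80, as reported here, are conventionally read as substantial-to-excellent agreement.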

Results: The overall chance-corrected agreement kappa for all observers was .80 (95% confidence interval [CI] .79 to .81), and the probability of agreement between 2 random observers on a random case was .539. For nurses alone, kappa = .84 (95% CI .83 to .85, P = .598), and for doctors alone, kappa = .83 (95% CI .81 to .85, P = .566). The one-way, two-way, and combined ANOVAs showed that the reliability coefficients (84%) for both nurses and physicians were similar to the kappa values. A combined ANOVA showed a .2-point difference, with physicians assigning a higher triage level.

Conclusion: The high rate of interobserver agreement has important implications for case mix comparisons and suggests that this scale is understood and interpreted in a similar fashion by nurses and physicians.

MeSH terms

  • Analysis of Variance
  • Canada
  • Emergency Service, Hospital / standards*
  • Evaluation Studies as Topic
  • Humans
  • Observer Variation
  • Reproducibility of Results
  • Triage / standards*
  • Triage / statistics & numerical data