A systematic review of stroke recognition instruments in hospital and prehospital settings
  1. Matthew Rudd1,2,
  2. Deborah Buck1,
  3. Gary A Ford3,
  4. Christopher I Price1,2

  1. Stroke Research Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
  2. Northumbria Healthcare NHS Foundation Trust, Wansbeck General Hospital, Northumberland, UK
  3. Division of Medical Sciences, Oxford University, Oxford, UK

  Correspondence to Dr Matthew Rudd, Stroke Research Group, Institute of Neuroscience and Newcastle University Institute for Ageing, Newcastle University, 3-4 Claremont Terrace, Newcastle upon Tyne NE2 4AE, UK; matthew.rudd{at}


Background We undertook a systematic review of all published stroke identification instruments to describe their performance characteristics when used prospectively in any clinical setting.

Methods A search strategy was applied to Medline and Embase for material published prior to 10 August 2015. Two authors independently screened titles and, where necessary, abstracts. Data, including clinical setting and reported sensitivity, specificity, positive predictive value and negative predictive value, were extracted independently by two reviewers.

Results 5622 references were screened by title and/or abstract. 18 papers and 3 conference abstracts were included after full-text review. 7 instruments were identified: Face Arm Speech Test (FAST), Recognition of Stroke in the Emergency Room (ROSIER), Los Angeles Prehospital Stroke Screen (LAPSS), Melbourne Ambulance Stroke Scale (MASS), Ontario Prehospital Stroke Screening tool (OPSS), Medic Prehospital Assessment for Code Stroke (MedPACS) and Cincinnati Prehospital Stroke Scale (CPSS). Cohort sizes varied between 50 and 1225 individuals, with 17.5% to 92% subsequently receiving a stroke diagnosis. Sensitivity and specificity for the same instrument varied across clinical settings. Studies varied in quality, scoring 13–31/36 points on a modified Standards for Reporting of Diagnostic Accuracy (STARD) checklist. There was considerable variation in the detail reported about patient demographics, characteristics of false-negative patients and service context. Prevalence of instrument-detectable stroke varied between cohorts and over time. CPSS and the similar FAST test generally report the highest sensitivity, while more complex instruments such as LAPSS report higher specificity at the cost of lower detection rates.

Conclusions Available data do not allow a strong recommendation to be made about the superiority of any single stroke recognition instrument. Choice of instrument depends on the intended purpose and on the consequences of a false-negative or false-positive result.
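The trade-off described above, with CPSS/FAST favouring sensitivity and LAPSS favouring specificity, follows directly from the standard 2×2 definitions of the four accuracy measures extracted in this review. A minimal sketch, using invented illustrative counts rather than data from any included study:

```python
# Diagnostic accuracy measures from a 2x2 table of instrument result
# versus final stroke diagnosis. All counts here are hypothetical.

def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute standard 2x2 diagnostic accuracy measures.

    tp: instrument positive, stroke confirmed
    fp: instrument positive, stroke excluded
    fn: instrument negative, stroke confirmed
    tn: instrument negative, stroke excluded
    """
    return {
        "sensitivity": tp / (tp + fn),  # proportion of strokes detected
        "specificity": tn / (tn + fp),  # proportion of non-strokes ruled out
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative cohort: 100 confirmed strokes and 100 mimics
print(accuracy_measures(tp=80, fp=15, fn=20, tn=85))
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of stroke in the screened cohort, which is one reason the same instrument can appear to perform differently across prehospital and emergency department settings.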



  • Contributors MR, CIP and GAF had the idea for the project. MR and DB designed and executed the search strategy, before independently reviewing hits and extracting data. CIP adjudicated where differences occurred. MR drafted the initial version of the manuscript, with DB composing tables. All authors contributed to subsequent redrafting of the manuscript for intellectual content.

  • Funding MR was funded by a Teaching and Research Fellowship from Northumbria Healthcare NHS Foundation Trust. GAF is supported by an NIHR Senior Investigator Award.

  • Competing interests GAF has been paid lecture fees for attending and speaking at workshops held by Boehringer Ingelheim. His institution has received research funding for stroke-related activities from Boehringer Ingelheim and grant assistance towards administrative expenses for coordination of Safe Implementation of Treatments for Stroke in the UK. GAF has also received funding from Lundbeck A/S in relation to participation in the steering committee for DIAS 3 and 4.

  • Provenance and peer review Not commissioned; externally peer reviewed.