Again, “keeper” studies can be identified using handy Rapid Critical Appraisal checklists consisting of a set of simple but important questions. Below are sample questions developed for use with quantitative studies that are applicable to most appraisal situations. (It’s important to note that qualitative evidence, if it’s relevant to the clinical question, should not be dismissed.)

  1. Why was the study done? Make sure the study is directly relevant to the clinical question.
  2. What is the sample size? Size can and should vary according to the nature of the study. Since determining valid minimum sample size in a single study can be difficult, taking into account multiple studies is beneficial. The answer to this question alone should not remove a study from the appraisal process.
  3. Are the instruments that measure the study’s variables clearly defined and reliable? Make sure the variables were consistently applied throughout the study and that they measured what the researchers said they were going to measure.
  4. How were the data analyzed? Make sure that any statistics are relevant to the clinical question.
  5. Were there any unusual events during the study? If the sample size changed, for example, determine whether that has ramifications if you wish to replicate the study.
  6. How do the results fit in with previous research in this area? Make sure the study builds on other studies of a similar nature.
  7. What are the implications of the research for clinical practice? Ask whether the study addresses a relevant and important clinical issue.
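For a team appraising many studies at once, the seven questions above can be tracked as one record per study. The sketch below is illustrative only; the field names and the simple “keep” rule are our own, not part of any published RCA instrument:

```python
from dataclasses import dataclass


@dataclass
class RapidAppraisal:
    """One study's answers to the seven screening questions."""
    citation: str
    purpose_relevant: bool      # Q1: directly relevant to the clinical question?
    sample_size: int            # Q2: recorded, but never disqualifying on its own
    measures_reliable: bool     # Q3: variables clearly defined and reliably measured?
    analysis_relevant: bool     # Q4: statistics relevant to the clinical question?
    unusual_events: str         # Q5: e.g., attrition or protocol changes
    builds_on_prior_work: bool  # Q6: fits with previous research in the area?
    practice_implications: str  # Q7: relevance to clinical practice


def keep_for_synthesis(a: RapidAppraisal) -> bool:
    """A study stays in the appraisal if it is relevant and soundly measured.
    Note that sample_size alone never removes a study (per question 2)."""
    return a.purpose_relevant and a.measures_reliable and a.analysis_relevant
```

A record like this also gives the team a consistent place to note unusual events and practice implications while the study is still fresh in mind.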


By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN; Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN; Susan B. Stillwell, DNP, RN, CNE; and Kathleen M. Williamson, PhD, RN

In July’s evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital’s expert EBP mentor, and Chen M., Rebecca’s nurse colleague, collected the evidence to answer their clinical question: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” As part of their rapid critical appraisal (RCA) of the 15 potential “keeper” studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL
Carlos explains that typically an RCA is conducted along with an RCA checklist that’s specific to the research design of the study being evaluated, and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other, more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a “good” research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they’re very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible.
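The distinction can be made concrete in a few lines of Python; the population size, sample size, and group labels below are invented purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# A hypothetical accessible population of 500 patients.
population = [f"patient_{i:03d}" for i in range(500)]

# Random SAMPLING: draw subjects from the population at random,
# so that the population is fairly represented by the sample.
sample = random.sample(population, k=40)

# Random ASSIGNMENT (randomization): allocate the enrolled subjects
# to study arms purely by chance.
assignments = {subject: random.choice(["intervention", "control"])
               for subject in sample}
```

Sampling decides who gets *into* the study; assignment decides which arm each enrolled subject ends up in, which is why the two are easy to confuse but serve different purposes.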

In September’s evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital’s expert EBP mentor, and Chen M., Rebecca’s nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question, “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?”, and determined that they were all “keepers.” The team now begins the process of evaluating and synthesizing the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don’t necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time.

STARTING THE EVALUATION
Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process, when they found and filled in the essential elements of the 15 studies and projects (see “Critical Appraisal of the Evidence: Part I,” July). Now each takes a stack of the “keeper” studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group’s clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a “Notes” section within the “Appraisal: Worth to Practice” column would be a good place to record the nuances of an article, their impressions of it, and any tips (such as what worked in calling an RRT) that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

The team members discuss the evolving patterns as they complete the table. The three systematic
Critical Appraisal of the Evidence: Part III
The process of synthesis: seeing similarities and differences across the body of evidence.

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation’s Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates


In May’s evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital’s expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database’s own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed, six of the 79 studies found in CINAHL, and the one study found in the Cochrane Database of Systematic Reviews, because these best answered the clinical question.
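As a rough illustration of how such a search is composed, the keywords and synonyms can be joined into a single boolean query. The syntax below is generic OR-style, not the exact query language of PubMed or CINAHL:

```python
# Keywords and synonyms from the team's search. Combining them with OR
# retrieves any record that matches at least one term; quoting keeps
# multiword phrases intact.
keywords = ["ICU", "rapid response team", "cardiac arrest",
            "unplanned ICU admissions"]
synonyms = ["failure to rescue", "never events", "medical emergency teams",
            "rapid response systems", "code blue"]

query = " OR ".join(f'"{term}"' for term in keywords + synonyms)
```

In practice the librarian would also map each term to the database’s controlled vocabulary, as described above, before running the search.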

As a final step, at Lynne’s recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each study they retained, looking for any relevant studies they hadn’t found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.
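A quick sanity check on the arithmetic (source labels abbreviated for brevity):

```python
# Studies retained at each stage of the search, per the figures above.
retained = {
    "PubMed": 18,       # all 18 studies found were kept
    "CINAHL": 6,        # 6 of the 79 studies found
    "Cochrane": 1,      # the single systematic review found
    "hand search": 1,   # one more via the ancestry method
}

total = sum(retained.values())  # 26 studies move on to appraisal
```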

RAPID CRITICAL APPRAISAL
The next time Rebecca and Carlos meet, they discuss the next step in the EBP process: critically appraising the 26 studies. They obtain copies of the studies by printing those that are immediately available as full text through a library subscription or those flagged as “free full text” by a database or journal’s Web site. Others are available through interlibrary loan, when another hospital library shares its articles with Rebecca and Carlos’s hospital library.

Carlos explains to Rebecca that the purpose of critical appraisal isn’t solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine
• its level of evidence.
• how well it was conducted.
• how useful it is to practice.

Once they determine which studies are “keepers,” Rebecca and Carlos will move on to the final steps