Translating evidence-based recommendations for pressure ulcer prevention (PUP) into practice remains difficult for various reasons, including the perceived quality, validity, and usability of the research or of the guideline itself. Additional stakeholder input and testing were required following an evidence-based PUP algorithm’s development and face validation testing. Wound care experts attending a national wound care conference and a regional wound ostomy continence nursing (WOCN) conference, as well as graduates of a WOCN program, were invited to participate in a mixed-methods quantitative survey with qualitative components, which the Institutional Review Board approved. After providing written informed consent, participants were asked to comment on and rate the relevance and appropriateness of each of the 26 algorithm decision points/steps using standard content validation study procedures. All responses were kept confidential. The content validity index (CVI), descriptive summary statistics, and mean relevance/appropriateness scores were calculated, and transcribed qualitative comments underwent thematic analysis. Of the 553 invited wound care experts, 79 (average age 52.9 years, SD 10.1; range 23-73) agreed to participate and completed the study (a 14% response rate). The majority (67, 85%) were female, registered (49, 62%) or advanced practice (12, 15%) nurses with more than ten years of experience (77, 92%). Other health professionals included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists. Almost all (75, 95%) had received formal wound care education. The average score for the entire algorithm/all decision points (N = 1,912) was 3.72 on a Likert-type scale of 1 (not relevant/appropriate) to 4 (very relevant/appropriate), with an overall CVI of 0.94 (out of 1).
The only decision point/step recommendation with a CVI of 0.70 was the recommendation to provide medical-grade sheepskin for patients at high risk of friction/shear. Many constructive and substantive suggestions were received for minor changes, such as color, flow, and algorithm orientation. With the minor modifications, the high overall and individual item rating scores and the CVI support the validity and appropriateness of the PUP algorithm. Generic recommendations facilitate individualization, and future research should focus on construct validation testing.
Since the Institute of Medicine (IOM) document Crossing the Quality Chasm1 highlighted the imperative for patients to receive care based on the best available scientific knowledge and for care not to vary illogically from clinician to clinician or from place to place, efforts to reduce barriers to implementing evidence-based protocols of patient care have continued unabated. Many review and observational studies have shown that implementing pressure ulcer prevention (PUP) care protocols, including standardization of PU-specific interventions and documentation, reduces PU prevalence in acute care,2,3 long-term care,3-5 and home care populations.6 However, knowledge translation of PU best practice recommendations into practice remains a challenge for a variety of reasons.2 The perceived quality and usability of the research or the guidelines themselves are significant barriers to implementation. The processes used to develop guidelines have varied considerably. Most are created by committees or panels without being evaluated or tested by stakeholders, resulting in sometimes contradictory recommendations.7 In a review8 of 5 wound care guidelines developed by and for physicians, many were found to be difficult for clinicians unfamiliar with guideline appraisal to evaluate, and ratings for many aspects of guideline development were low. End-user concerns about the relevance or validity of guidelines can be addressed when they are developed using a process more closely aligned with recent IOM guidelines, such as soliciting input from relevant stakeholders through a rigorous, established process to establish validity.7 Usability improves when large amounts of information can be captured in a step-by-step process or algorithm.9
A PUP algorithm was recently developed based on a systematic review of the literature and face validation by wound care experts.10 Following a valid and reliable on-admission risk assessment, the end user is directed toward modifiable risk factors included in the Braden Scale score and interventions designed to address them (see Figure 1).
The authors created a 1-page algorithm with 26 distinct steps/decision points using a systematic review of published evidence from 2007 to 2013 and the Strength of Recommendation Taxonomy (SORT). Each step/recommendation was assigned study quality ratings for all identified publications and a resulting recommendation strength. Face validation was performed among 12 wound care experts as part of the external review and validation process, following the process reported by Lynn12 and Waltz and Bausell,13 in which experts rated content for validity on a 4-point Likert scale (1 = not relevant/appropriate, 4 = very relevant/appropriate). The overall mean score of the algorithm was 3.6 (SD 0.8), with a content validity index (CVI) of 0.89 (out of a possible 1), indicating strong content validity. Qualitative feedback was analyzed, and minor changes were made to the algorithm. However, additional stakeholder input and testing were required to further establish the algorithm’s content validity. This prospective, descriptive study aimed to examine the algorithm’s content validity with a larger sample of interdisciplinary wound care experts.
Methods
Design. A mixed-methods, quantitative survey design with qualitative components was used to obtain content validation data for the PUP algorithm. Holy Family University’s (Philadelphia, PA) Institutional Review Board (IRB) approved the study.
Setting and sample. Wound care providers were invited to participate in the study using convenience sampling methods. Sample inclusion criteria were relatively broad to encourage participation from a diverse range of providers. Among the criteria were the following:
A licensed health care professional or clinical researcher with wound care experience.
Fluent in English.
Willing to review the algorithm and provide input while maintaining confidentiality.
Wound care background entailed substantial (>5 years) experience and formal wound care education, preferably with wound care board certification. Participants received no monetary compensation. Participants included health care clinicians attending a national interdisciplinary conference, a regional conference for wound, ostomy, and continence nurses (WOCN), and graduates of a WOCN program. Data were collected over six months. A total of 553 medical professionals were invited to participate. Conference attendees were invited by email before the conference or by posting or personal invitation during the conference. Graduates of a WOCN program received an invitation via email; if they agreed to participate, they were emailed a copy of the consent form, and the algorithm and study instrument were mailed to them. To ensure the confidentiality of the actual survey responses, the consent form could be returned via email. Volunteers were asked to return the completed instruments to the researchers via regular mail within one week, but incoming mail was monitored for six months.
Ethical considerations. Study volunteers were asked to read a consent form and provide written informed consent per IRB-approved procedures. The algorithm surveys were distributed or mailed after signed consent forms were collected. Participants were given one week to return consent forms. There was no way to identify participants or link consent forms to data forms. All consent forms and survey responses were collected and stored in the first author’s locked file cabinet.
Instrumentation. The data collection survey included the following:
A paper-pencil instrument with an 18-item demographic data form.
A content validation questionnaire with 26 statements matched to each of the PUP algorithm’s 26 decision points/steps.
A final segment with two open-ended questions asked for overall comments about the algorithm and the research process.
Participants were asked to read the statements related to each of the decision points/steps and rate their level of agreement with the relevance (appropriateness) of each item in the content validation survey. Per Lynn12 and Waltz and Bausell,13 a 4-point rating scale was used: 4 = very relevant/appropriate; 3 = relevant but requires minor changes; 2 = unable to assess relevance without revision; 1 = not relevant/appropriate. Participants were asked to add written comments about omissions, suggest changes to improve clarity/succinctness, present an alternative, and provide literature references for each statement.
Data collection procedures. Participants at the national meeting were directed to an adjacent room at a specific time by signage. Following a brief oral introduction about the history of the algorithm’s development and the study’s purpose, participants signed and returned the consent forms and received the algorithm and the survey response form. After reviewing the algorithm, participants provided validation ratings and narrative comments. All participants completed and returned the survey before leaving the room. Attendees at the regional WOC nursing conference and WOC nursing program graduates received the same written introduction along with the consent form, algorithm, and data collection instrument; they were asked to complete and return them during the meeting or by regular mail. The data collection procedure took about 45 minutes. The time it took mail-in respondents to complete the study could not be verified, but no comments about excessive time were received.
Data analysis. All variables were coded and entered into SPSS® Version 19.0 (IBM, New York, NY) for analysis. Descriptive summary statistics were computed for each demographic variable. Mean scores and the CVI were computed for each of the 26 algorithm components and for the entire algorithm. The CVI was calculated by categorizing content as either very relevant/relevant (ratings 3 and 4) or not relevant/unable to assess relevance (ratings 1 and 2); the CVI is the proportion of items rated 3 or 4 (scale 0 to 1.0), with a score greater than 0.70 indicating validity.14,15
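The CVI computation described above is straightforward to reproduce. The following is a minimal sketch of that calculation; the function name and the ratings shown are hypothetical illustrations, not study data.

```python
# Sketch of the CVI calculation: items rated 3 or 4 on the 4-point
# scale count as relevant/appropriate; the item CVI is the proportion
# of raters giving such scores, and the overall CVI averages the
# item-level values across all decision points.

def item_cvi(ratings):
    """Proportion of raters scoring an item 3 or 4 (scale 0 to 1.0)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# Hypothetical expert ratings for two decision points.
items = {
    "admission_assessment": [4, 4, 3, 4, 4],  # item CVI = 1.0
    "sheepskin":            [2, 3, 4, 2, 3],  # item CVI = 0.6 (below 0.70 cutoff)
}

overall_cvi = sum(item_cvi(r) for r in items.values()) / len(items)
print(round(overall_cvi, 2))  # 0.8
```

Under this scheme, an item with 9 ratings of 3 or 4 out of 10 raters yields an item CVI of 0.90, comfortably above the 0.70 validity threshold used in the study.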
Using qualitative data reduction techniques, qualitative comments on individual decision statements/steps and on overall processes were transcribed and thematically analyzed. Because the comments about individual decision steps were substantive, frequency counts were used to identify the most frequently cited themes.
Results
Demographics. Of the 553 providers invited, 79 agreed to participate and completed the study (a 14% response rate). Every common wound care-related discipline was represented.
Most participants (67, 85%) were female, with an average age of 52.9 years (SD 10.1; range 23-73), and 95% worked in the United States. Most were registered nurses (49, 62%) or advanced practice nurses (12, 15%). Other health professionals included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists (see Table 1). The majority of participants (61, 77%) spoke only English and received their health care education in the United States (71, 90%); 70 (89%) had a baccalaureate degree or higher. Almost all participants (75, 95%) had received formal wound care education, and more than 75% were board-certified in wound care. Most participants had extensive experience in health care; 77 (92%) had ten or more years of experience. Geographically, the majority of participants (53, 68%) practiced in urban (30, 38%) or suburban (46, 58%) settings; only 13 (17%) worked in rural areas. Fifty-eight (nearly 75%) saw more than ten patients per week who had or were at risk for PUs. PUs (67, 89%), lower extremity ulcers (61, 77%), and diabetic foot ulcers (42, 53%) were the three most commonly managed wound types (see Table 1).
Quantitative analysis. The average item relevance/appropriateness score calculated across the entire algorithm/all decision points (N = 1,912) was 3.72, with an overall CVI of 0.94 (out of 1). The only recommendation with a CVI of 0.70 was to provide medical-grade sheepskin to patients with activity/mobility limitations at high risk for friction/shear (see Table 2). Otherwise, the quantitative data analysis supports the algorithm components and their inherent decision processes.
Qualitative analysis. The PUP algorithm elicited both positive and negative responses from respondents (see Tables 3 and 4). The qualitative analysis of overall comments yielded themes (see Table 3) concerning algorithm strengths such as simplicity, good use of color, necessity, and flexibility. Other potential applications, including patient education, quality improvement, individualization, and alignment with other guidelines or algorithms, were also identified. Opposing themes included complexity, color confusion, omission of components, support surface clarification, education issues, and problems with specifics such as directions.
Because all participants provided written comments, a more quantitative approach to data analysis was required; it was decided to generate frequency counts by item to identify the most pressing themes (see Table 4 for items with a frequency count >5). Four items received the most feedback. Twelve study participants indicated that the word usually should be removed from the “within 24 hours” statement for admission assessment and documentation. Participants also indicated that the timing and content of staff, caregiver, and patient education needed clarification. Twenty-nine participants disagreed with the recommendation to use medical-grade sheepskin. Finally, 15 participants suggested that patients with suboptimal nutritional status should be advised to seek a dietitian consultation.
An earlier publication described the PUP algorithm’s development history through systematic review and face validation.10 The current study provided data on content validation and an overview of strengths and areas for improvement. The rating scores (average score of 3.72 out of 4) and CVI (0.94 out of 1) of the PUP algorithm for use in adults were high, indicating that the components were appropriate for the instrument’s purpose. Only one practice recommendation received a low score: using medical-grade sheepskin for patients with activity/mobility limitations and a high risk of friction or shear. This recommendation also received a low appropriateness score in the previous face validation study.10 Ironically, this was one of the few recommendations with an overall A recommendation strength, based on the findings of several high-quality, primarily Australian, studies. Study participants were concerned about this recommendation because most practitioners in the United States are unfamiliar with medical-grade sheepskin and may have used only synthetic sheepskins. Because the quality of the research underlying this recommendation is high (A-strength level of evidence) and the algorithm is applicable in other countries, the recommendation was retained in smaller print and with an “if available” footnote.
Algorithms, by definition, are ideal for specifying appropriate management strategies, communicating complex series of conditional statements, and assisting in translating research into clinical practice, but they are not exhaustive.9 The qualitative comments from this and other content validation studies16 indicate a constant tension between clinicians’ need for simple, easy-to-follow directions and their desire for more details and guidance: study participants were pleased with the simple steps and were interested in algorithm pocket guides, but would have preferred more details for several action steps.
Several minor algorithm changes were implemented after a thorough review of all quantitative and qualitative results. Concerns about information flow, design, and colors focused primarily on the admission assessment of current or recent history of limited mobility, which was preceded by the step “Not at risk and intact skin.” The latter was removed because it provided no helpful information and caused confusion (see Figure 1). This also allowed for a color change in the decision step/point, making it easier to identify as part of an admission assessment decision point.
Nineteen participants indicated that an admission assessment should be conducted within 24 hours, not “usually within 24 hours,” as initially stated. Because time recommendations in the literature and the opinions of face validation study participants varied, the original algorithm version did not carry a firm time designation; hence the “usually within 24 hours” wording.10 However, because participants in the current study were less ambiguous about this statement, and because a risk assessment “at admission” has been shown to reduce the incidence of PUs17 and is now widely recommended in all health care facilities,18,19 the word usually was removed during the final algorithm revision.
Concerns about the timing of education were addressed by moving it to the top left corner as a visual reminder that risk and skin assessment education should come first. Finally, the colors were standardized, and the box shapes were modified to conform to standards9,20 (see Figure 1).
Regarding the need for more details and directions (e.g., type of moisturizer or high-quality foam, or the need to obtain a dietary consult), it is important to note that the algorithm is deliberately generic. Because changing the algorithm’s information would jeopardize its validity, facilities interested in incorporating more specific evidence-based recommendations are encouraged to review the published evidence underlying the recommendations for further refinement to suit their care protocols.10
Rycroft-Malone et al21 reported tension between the standardization demanded by evidence-based practice and individualized decision-making after completing a case study using ethnographic methods to examine decision-making in nursing. Incorporating nurses’ decision-making processes into the context of the work environment may be necessary for the uptake of protocols and guidelines. Giving nursing staff the option to personalize specific evidence-based intervention recommendations (such as types of high-density foam and protective barrier creams) may facilitate algorithm adoption and help standardize care.
Verbal comments from study participants were generally positive, particularly regarding the emphasis on and organization of modifiable risk factors. These observations echo the findings of a recent consensus study22 aimed at developing a theoretical model for identifying the etiological factors of PUs. The authors concluded that the local approach to risk reduction should be determined by production mechanisms (for example, those addressing pressure, friction, shear, and moisture) and modifiable etiological factors (less-than-optimal nutritional status).
To the authors’ knowledge, the algorithm is the first targeting PUP in adults. It is strongly evidence-based: the decision points were based on the best available evidence,10 and content validation ratings by stakeholders supported their appropriateness. Only the PU clinical practice guideline developed by the Association for the Advancement of Wound Care,23 which includes PUP recommendations, has also been formally content-validated. Furthermore, qualitative comments were generally positive and supportive, and the few negative comments were used to refine the algorithm’s structure to support its best usage.
The PUP algorithm was designed with adults in mind and cannot be recommended for use in pediatric or neonatal populations. For the same reason, device-related PUs, a significant issue in pediatric skin care, are not addressed. Suspected deep tissue injury (DTI) was also excluded because its evidence base is limited, although a DTI is currently considered a PU. Although the previously reported evidence and face validation, combined with the current content validation study results, are essential steps in building evidentiary support for the algorithm’s use, more research is needed to test its utility and construct validity.
Another study limitation is the low response rate (14%), which could introduce nonresponse bias. In Shih and Fan’s24 meta-analysis, reported survey response rates to regular mail and email invitations ranged from 7% to 89%, averaging around 40%, and email reminders were less effective than regular mail reminders. The findings of a randomized, controlled survey25 indicate that physician response rates are lower than nurse response rates. The current study’s primary recruitment method was email with email reminders, but other study design factors preclude comparisons of response rates. Potential participants at the national meeting had to be available during data collection times and interested in participating, and not everyone who registered and was invited attended the entire conference.
Furthermore, data collection took place during regular conference session hours; as a result, prospective study participants had to forgo an educational session in order to participate.
Similarly, many attendees expressed interest at the regional meeting, but time was limited, and the survey may have been misplaced among other meeting-related paperwork; more consent forms (16) were collected at the meeting than completed mail-in surveys were returned. More research is needed to determine the optimal method for collecting this type of study data.
A PUP algorithm was developed and face-validated following a rigorous systematic review.10 The current content validation study, which included 79 wound care experts, yielded results similar to those of the face validation. All but one of the 26 steps/items had a high CVI (overall 0.94), indicating that the PUP algorithm is valid and appropriate with minor modifications. To the authors’ knowledge, this is the first PUP algorithm based on systematic review, face validation, and formal content validation. Construct validation should be the focus of future research.