
CASE STUDY: Pain Management & the Opioid Crisis

Software:

Articulate Storyline 360
Adobe Photoshop CC 2018
Audacity
TechSmith Camtasia
TechSmith Snagit
Lucidchart
xapiapps
Smartsheet
Visme

Services:

Job role personalization
Project management
Branching scenarios
Audio narration
Kirkpatrick Levels 1-4
Custom interactions
Infographics
Video scenario scripting & production
Learner experience design
 

Project Details

Organization: Nebraska Medicine  |  Healthcare  |  2018

Project Type: Curriculum  |  Videos  |  Modules

Brief Background: The Centers for Medicare and Medicaid Services (CMS) updated the pain management questions on the HCAHPS survey. HCAHPS is a government patient satisfaction survey, required of all hospitals, that gives voice to the patient and affects government reimbursement to hospitals. The reworded questions shifted the emphasis away from always having pain under control and always prescribing medication to treat it, and toward setting expectations with patients about how much pain they should reasonably expect to feel following their procedure and toward the use of non-opioid and non-pharmacological interventions.

Project Goal: Change how patient care providers thought about, discussed, and treated pain management and improve the hospital's pain scores on the HCAHPS survey.

Phase 1: I was not part of this phase, which focused on immediate communication of the upcoming changes and expectations.

Phase 2, Formal Training: This phase was all about the curriculum: designing and developing training that would utilize adult learning principles and motivation theories. The two target audiences needed the same foundational and motivational information, but their interventions were completely separate. Note: While I designed and developed the formal nursing training and part of the formal provider training, I left the project before the final formal training was launched.

Phase 3, Evaluation: In progress; partial participation. For this phase, I worked on designing evaluation methods that would align with Bloom's Taxonomy and Kirkpatrick Levels 1-4. While I was part of the design and development of this phase, training was still ongoing, and Levels 3 and 4 wouldn't be analyzed for over a year, at which time I would no longer be involved in the project.

Primary Obstacles: The scope of this project targeted the majority of staff (approximately 7,000 employees) and would span at least the next year. The training would come on the heels of large compliance initiatives, which had led to learning fatigue. Leadership was monitoring specific, measurable outcomes for positive improvement.

 
Most of all, if training failed to change
behaviors, the ones most affected by its
failure wouldn't be the organization.

It would be the patients.
 

Title, Instructions, & Navigation Page Examples

 

Design & Development Solutions

Micro-learning Videos: The curriculum begins with a series of micro-learning videos that target specific intrinsic and extrinsic motivations and draw on Victor Vroom's expectancy theory of motivation. The videos run 2-5 minutes and were produced in conjunction with the marketing department.

Some examples of the videos used:

  • an interview with a patient who tells her story about how her pain wasn't managed;
  • an interview with a physician who scores highly on the HCAHPS pain questions, describing the process he uses;
  • an acted simulation of that physician and a "patient" demonstrating his process; and
  • an acted simulation of a nurse and a "patient" demonstrating the nurse's responsibility in setting expectations and personal goals and using non-pharmacological interventions.

Branched Personalized Modules: Following the videos, learners choose a branch of modules depending on their job title. The modules provide a clearer understanding of the issues and objectives, current behavior vs. desired behavior, practical step-by-step how-tos, and details that answer the questions, "What's in it for me? Why does this matter to me?"
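
The branching itself was presumably built with Storyline triggers and variables rather than code, so the following is only an illustrative Python sketch of the role-to-branch idea; the role names and module IDs are hypothetical.

    # Illustrative sketch of job-title branching; role names and module IDs
    # are made up, and the real routing lived in the authoring tool.
    BRANCHES = {
        "nurse": ["foundations", "setting_expectations", "nonpharm_interventions"],
        "provider": ["foundations", "provider_conversation_model"],
    }

    def modules_for(job_title: str) -> list[str]:
        # Unmapped roles still receive the shared foundational content.
        return BRANCHES.get(job_title, ["foundations"])

    print(modules_for("nurse"))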

Infographic Job-Aids: A one-page printable, visual breakdown of the most important process and behavior takeaways.

There is no end to education. It is not that you read a book, pass an examination, and finish with education. The whole of life, from the moment you are born to the moment you die, is a process of learning.
— Jiddu Krishnamurti
 

Animation Examples


Evaluation Details

Kirkpatrick Levels 1-4 Evaluations: This was the first time a large training intervention would be tracked and measured over a long period of time. I developed the Level 1 and Level 2 evaluations alongside the education itself, while the Level 3 and Level 4 designs would be rolled out over 12 months. The results of most of the evaluations would be collected within the LMS.
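
Given that xapiapps appears in the toolset, one plausible mechanism for collecting results like these is xAPI statements sent to a learning record store. Below is a minimal Python sketch of a single statement; the endpoint, credentials, learner, activity ID, and score are all placeholders, not the project's actual configuration.

    import requests

    # Placeholder LRS endpoint and credentials (assumptions for illustration).
    LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
    LRS_AUTH = ("lrs_user", "lrs_password")

    # A minimal xAPI statement recording one survey response.
    statement = {
        "actor": {"objectType": "Agent", "mbox": "mailto:learner@example.com"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/responded",
            "display": {"en-US": "responded"},
        },
        "object": {
            "id": "https://example.com/pain-mgmt/level1/pre-survey/q1",
            "objectType": "Activity",
            "definition": {"name": {"en-US": "Pre-survey: self-assessed competency, Q1"}},
        },
        "result": {"score": {"raw": 3, "min": 1, "max": 5}},
    }

    response = requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},
    )
    response.raise_for_status()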

LEVEL 1: REACTION. A short pre/post survey would launch automatically in the LMS when learners began the curriculum. It centered on learners' assessment of their own competency in the areas covered by the education. The post-survey would ask corresponding questions to measure change in understanding and self-assessment.
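
As a rough sketch of how that pre/post change might be summarized once both surveys are collected (the learner IDs, survey item, and 1-5 scale below are illustrative assumptions, not the actual survey schema):

    from statistics import mean

    # Hypothetical 1-5 self-ratings on one survey item, keyed by learner ID.
    pre_ratings = {"rn_001": 2, "rn_002": 3, "md_001": 4}
    post_ratings = {"rn_001": 4, "rn_002": 4, "md_001": 5}

    # Positive values mean learners rated their competency higher afterward.
    changes = [post_ratings[k] - pre_ratings[k] for k in pre_ratings]
    print(f"Mean self-assessed change: {mean(changes):+.2f} points on a 5-point scale")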

LEVEL 2: LEARNING. FORMATIVE knowledge checks were layered within the modules and appeared in different forms. Some were straightforward multiple-choice questions presented as drag-and-drop interactions, but there were also critical thinking questions that drew correlations between the behavior expectations in the education and the organization's core Mission, Vision, and Values. Most of the questions gave immediate feedback, while others weren't directly graded. See EXAMPLE A.

  • The learner is asked to describe what pain is in their own words. There is no right or wrong answer, just an essay box where they describe it in emotional, physical, academic, medical, or personal terms. When they click submit, they are given a series of official definitions from evidence-based research groups. They can then compare what they wrote with the official industry definitions.

A SUMMATIVE knowledge check is given at the end that ties everything together and concentrates on the behavior changes learners must make in their everyday working lives, rather than rote memorization of facts.

LEVEL 3: BEHAVIOR. Partial participation; consulted on design and delivery. Behavior change would be evaluated at 3, 6, 9, and 12 months. The evaluation would take the form of personal assessments and knowledge checks texted to learners through a mobile app.
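
For illustration only, that 3/6/9/12-month cadence could be scheduled per learner roughly like this (the completion date is a placeholder, and months are approximated as 30 days):

    from datetime import date, timedelta

    # Placeholder training-completion date for one learner.
    completed_training = date(2018, 6, 15)

    for months in (3, 6, 9, 12):
        due = completed_training + timedelta(days=30 * months)
        print(f"{months:>2}-month behavior check-in due ~{due.isoformat()}")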

LEVEL 4: RESULTS. Partial participation; consulted on design and delivery. Results would be tracked and measured against the CMS HCAHPS scores throughout the year. A timeline of previous scores, question updates by CMS, and Phase 1 and Phase 2 implementation would be lined up with the scores to measure changes and correlate them to phases of training. A desired score improvement would be decided by leadership.
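
A small sketch of the kind of timeline alignment described above; every date, score, and milestone here is a made-up placeholder rather than a real HCAHPS result.

    from datetime import date

    # Hypothetical quarterly HCAHPS pain-dimension scores and project milestones.
    timeline = [
        (date(2017, 10, 1), 68.0, "baseline, before CMS question update"),
        (date(2018, 1, 1), 67.5, "CMS question wording change in effect"),
        (date(2018, 4, 1), 69.0, "Phase 1 communication complete"),
        (date(2018, 7, 1), 71.5, "Phase 2 formal training underway"),
    ]

    previous = None
    for day, score, milestone in timeline:
        delta = "" if previous is None else f" ({score - previous:+.1f})"
        print(f"{day.isoformat()}  {score:5.1f}{delta}  {milestone}")
        previous = score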

 

Evaluation Examples

Formative Knowledge Check

EXAMPLE A: Essay question

Formative Knowledge Check

EXAMPLE B: Yes/No based on a real-life scenario

Formative Knowledge Check

EXAMPLE C: Yes/No based on a real-life scenario


Personal Reflection

It is rare to tackle a learning project where you can easily see the measures of success, the outcome, and the steps to get there. This was a massive project that covered more than a year's worth of work and still wasn't quite done when I moved on to new opportunities. Still, this project was a sincere pleasure to work on. Let me explain why.

The project made me very aware of how powerful data can be in learning. I believe there's a misunderstanding that evaluations matter mainly to prove the legitimacy of training efforts and their worth in time and cost. I don't really agree with this. I think that evaluations provide blueprints for what works and why, and feed our own intrinsic motivations. There's a theory called the Pygmalion effect that basically states that positive expectations drive people to create positive outcomes, while negative expectations drive negative outcomes. If we have actual proof of positive learning outcomes, we are more likely to repeat what worked and reinvent what didn't in order to keep achieving those outcomes.

Having no measurable proof of learning outcomes creates a sort of Schrödinger's learning outcome: it's both positive and negative. Or neither. Either way, instructional designers then have nothing really motivating them to design and develop one way or the other. The result? We become machines of learning output. Order takers. Quantity over quality. Creating something, anything, just to check it off the list and move on to the next project.

I believe in creating better learning experiences than that. In business terms, the ROI of that model of design and development stinks. We can do better.

More important than the chance to give the training team a gold star for successful numbers, evaluations tell us which learners need additional help and in what areas. Guaranteeing a highly reliable, confident, comfortable, well-trained team should always be the real goal, not an idealized one.

 
 
