#BeyondTheMean

What is Implementation Fidelity?

Updated: Jan 23


Welcome to #BeyondTheMean! Check out this post to see what this blog is all about.

Implementation fidelity is possibly one of the most overused and misunderstood phrases in education. Every year, sometimes three or four times a year, schools and districts across the nation implement new and flashy interventions designed to enhance instruction and improve educational outcomes. At the end of a long, beleaguered workshop, the faculty is reminded of the importance of implementing the new intervention with fidelity – right before the corporate trainer packs their bags and returns home.


So, what is implementation fidelity and why does it matter? In short, implementation fidelity means that you are performing a task the way it was designed to be performed. When we seek to implement a new activity in our schools, implementation fidelity helps to increase the odds that the intervention will have the desired effect. It is a key component of continuous improvement as it helps us to reach our goals faster.


Implementation fidelity matters for a couple of reasons. First and foremost, a well implemented intervention helps our kiddos. Isn’t that why we are all here in the first place? We want to build schools that are spaces where instruction is rigorous, and relationships run deep. If we aren’t taking time to ensure the fidelity of our implementation, we aren’t helping our kids. It’s like throwing darts at a dartboard in a dark room. Some of them will stick, but most of them will fall by the wayside.


Implementation fidelity also helps ensure that we get the biggest bang for our buck. Schools in the United States are notoriously underfunded. They simply don’t have pennies to waste. So, if you are implementing a new intervention in your school or district with a six-figure price tag – it is worth your time and effort to ensure that everyone is properly trained and that fidelity is monitored throughout the implementation of your new activity.


Before you begin to implement a new strategy, you should take time to clearly define what implementation fidelity will look like. In my experience, this is the step that most schools miss. I should be able to walk into any classroom implementing the strategy and ask the teacher “what does fidelity of this strategy look like here?” Each teacher should be able to give me the same answer. Defining fidelity is not always an easy task. If you are purchasing an intervention, the salesperson should be able to clearly define what fidelity will look like in your school. If they can’t, run.



If you are implementing a strategy on your own, you should allow the research on that strategy to inform your definition of fidelity. Teaching strategies are nebulous and while good teaching has many core principles, it can look very different in every building. Spend some time with the research literature and consider what fidelity will look like in your school. For a step-by-step process to examine research literature, check out this blog post.


You want to make sure that you are using research conducted in schools like yours and with kids like yours when defining fidelity. If you are working in a rural elementary school, popular strategies like reciprocal teaching are going to look different in your classrooms than in the classrooms of an urban high school. Consider how the strategy was implemented in the research you are reading, and ask yourself these five questions:


  1. What did the teachers in this study actually do?

  2. What support did the teachers get before, during, and after implementation?

  3. What was the primary outcome of this intervention?

  4. Were there any secondary effects of this intervention that we need to be aware of?

  5. What is a realistic outcome if we implement this strategy in our school?

Once you have defined what fidelity means for your school, you should consider how you will monitor fidelity throughout the implementation period. Monitoring plans must be carefully designed and shared with the entire faculty. This ensures that everyone knows what is expected of them and increases the likelihood of success for your new intervention. Here are a few things to keep in mind as you design your monitoring protocols:

  • Create a checklist: Your implementation monitoring should begin and end with a checklist of essential tasks. Since you have already taken time to define what fidelity looks like, make a bulleted list of those essential components and physically check them off when you make your rounds.

  • Stick to a schedule: You must monitor implementation on a regular basis. That doesn’t mean you should visit Mr. Martin’s class every Tuesday at 10:00; a predictable routine will skew your sample. But you should have a schedule to ensure that every teacher working with the intervention is observed at regular intervals.



  • Schedule post walk-through meetings: Monitoring processes don’t work if they don’t come with feedback. Make sure you schedule time to sit with your teachers, share your checklist, and have a conversation about the intervention. Give your teachers time to think and respond. This is a professional, collegial conversation – not a summative evaluation. If something isn’t working, your teachers will be the first to know and you need to give them space to make you aware of it.

  • Keep your data: As you monitor, make sure that you are taking time to log the results of your walk-throughs on a spreadsheet. This way you can see change over time. If implementation is strong, you can start to scale back your monitoring and direct your attention to new processes. If the implementation is weakening over time, then you need to do some root cause analysis and think about how you can support your staff and improve fidelity.
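To make the logging idea concrete, here is a minimal sketch of what a walk-through log could look like as data. Everything in it is hypothetical: the checklist components, the teacher name, and the dates are placeholders for whatever fidelity definition your school actually wrote. A real log would live in a spreadsheet; this just shows how checking components off and scoring them over time lets a trend surface.

```python
from datetime import date

# Hypothetical checklist: the essential components your school defined as "fidelity."
CHECKLIST = {"posted_learning_target", "guided_practice", "student_dialogue", "exit_ticket"}

# Each walk-through entry: (date, teacher, components actually observed).
log = [
    (date(2024, 9, 10), "Martin", {"posted_learning_target", "guided_practice",
                                   "student_dialogue", "exit_ticket"}),
    (date(2024, 10, 8), "Martin", {"posted_learning_target", "guided_practice"}),
    (date(2024, 11, 12), "Martin", {"posted_learning_target"}),
]

def fidelity_score(observed: set) -> float:
    """Fraction of the essential components seen during one walk-through."""
    return len(observed & CHECKLIST) / len(CHECKLIST)

def trend(teacher: str) -> list:
    """Fidelity scores for one teacher in date order, so change over time is visible."""
    entries = sorted((d, obs) for d, t, obs in log if t == teacher)
    return [fidelity_score(obs) for _, obs in entries]

scores = trend("Martin")
if len(scores) >= 2 and scores[-1] < scores[0]:
    # A weakening trend is the signal to start root cause analysis.
    print(f"Fidelity weakening for Martin: {scores}")
```

The same arithmetic works in any spreadsheet: one row per walk-through, one column per checklist component, and a percentage column that you can chart over the year.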

Remember that implementation monitoring should be part of your continuous improvement processes, not part of your formal evaluation system. This is a common pitfall. For continuous improvement processes to thrive and lead to lasting change, you must take steps to ensure that monitoring processes exist in a space of trust and collaboration. That doesn’t mean that they lack accountability – but they should include room for evolution, learning, feedback, and change.


After spending a year monitoring fidelity, make sure you take some time to monitor the impact of the intervention as well. For a quick analysis of your data, consider uploading your spreadsheet into my free Distribution Analysis Tool. This tool, and the four others found in the Repository, will instantly and automatically summarize your data so that you can make data-informed decisions without spending hours in front of your computer.


If you found this post helpful, consider sharing it with a friend! Use the social share buttons at the bottom of this page to share this post with your network. Good luck on your journey, friends, and let me know how I can help!