Monitoring Behavior Interventions

Updated: Jul 16

Welcome to #BeyondTheMean! Check out this post to see what this blog is all about.

Behavior events can be a powerful indicator when measuring the health of an educational system. Educational research – and frankly common sense – tells us that if students are not in the classroom due to exclusionary behavior responses, they cannot learn. Similarly, students cannot learn if behavior events are disrupting the learning environment or creating an environment that doesn’t feel safe or welcoming. In an effort to curb these negative influences, educators at every level work to implement behavioral interventions for students at high risk of negative outcomes.

This post isn’t about behavioral interventions. Rather, I want to talk to you about how you can tell if a behavioral intervention is truly working. Behavioral interventions are a lot of work. They generally require intensive record keeping, expensive incentive programs, and the addition of supplemental services such as counseling or community supports. It is important that educators take time to evaluate the impact of their behavioral interventions so that they can scale the ones that work and revisit the ones that don’t. This post will outline a research process called “Single Case Design” and discuss how you can implement this type of rigorous program analysis in your classroom.

Getting Started

Before you can evaluate the impact of an intervention, you must determine what direction you intend to go. This starts by clearly identifying the behavior that you want to see changed. It may be a disruptive behavior, such as talking out of turn or disrupting peers during silent time. It could be an aggressive behavior, such as physically touching or harming other students. Maybe it’s a social behavior, such as harassment or self-harm. Whatever it is, you need to clearly define it.

Next, you want to hit the books and do some research to determine which types of behavioral interventions are most effective for this particular behavior. There are dozens of reputable sources online that can provide you with this information. I always recommend that educators begin at the What Works Clearinghouse (WWC). This is a federally funded clearinghouse maintained by the US Department of Education. While it is far from complete, I like that it has a clear education mission and very rigorous review criteria.

Whatever your source, you should consider how closely aligned the research is to the unique situation you are experiencing. You should consider the personal background of the student, the level and type of school or classroom which will implement the intervention, the capacity of educators in the building to deploy the intervention, and whether or not your desired outcome aligns with the outcome of the study.

Setting Up the Data Collection

Once you have identified your behavior and chosen your intervention, you need to decide how you will monitor it. What metric will you use to tell if the intervention is having the desired effect?

Let’s set up an example that we can use throughout the rest of this tutorial. Imagine that you are working with a student who consistently talks out of turn throughout the class. This disruption is very frequent and has not changed despite deployment of the established classroom management strategies. This student needs a little extra direction.

In this instance, there are several data points you could use to monitor the impact of your intervention. Some suggestions include:

  • The number of times the student interrupts class during an established period of time.

  • The number of times the classroom management strategies (such as turning a ticket or moving a clip) are deployed during the day.

  • The number of times the student has to be removed from the instructional setting in order to continue with instruction.

Whatever you choose, your data point must be consistent and clearly defined. In research land, we call this the dependent variable. It is the variable that we think our intervention will change. Its outcome depends on the impact of the intervention.

Establish a Baseline

You will not know whether or not you have seen positive changes unless you know where you began. The next step in this process is to establish a baseline. Let’s say that our fictional student seems to have the most difficulty during math class. To establish a baseline, you need to count how many times the student interrupts math class. In this instance, I recommend you count every day for two weeks – so you end up with ten counts. You will want to set up your baseline monitoring so that you get a good picture of what your student is doing currently on a daily basis. Keep this information on a spreadsheet, where column A includes the date and column B contains the incident count.
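If you prefer to work outside a spreadsheet, the same two-column log is easy to keep in a few lines of Python. This is a minimal sketch; the dates and counts below are made up for illustration.

```python
# Baseline log mirroring the spreadsheet layout:
# column A = date of the math class, column B = interruption count.
# These ten observations (two school weeks) are hypothetical.
baseline = [
    ("2024-03-04", 9), ("2024-03-05", 7), ("2024-03-06", 8),
    ("2024-03-07", 10), ("2024-03-08", 6),
    ("2024-03-11", 9), ("2024-03-12", 8), ("2024-03-13", 7),
    ("2024-03-14", 9), ("2024-03-15", 8),
]

# Pull out just the counts so we can summarize them later.
counts = [count for _, count in baseline]
baseline_mean = sum(counts) / len(counts)  # daily average during baseline
```

Recording the date alongside each count matters: it lets you spot patterns (a rough day after a weekend, for example) that a bare list of numbers would hide.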

Deploy the Intervention

Having established a baseline, it is time to deploy the intervention. Take care to deploy the intervention exactly as it was prescribed. You cannot expect meaningful results if you make changes to the intervention or only deploy a portion of the intervention. Carefully review the recommended intervention guidelines and ensure that you have deployed the intervention with fidelity.

As you deploy the intervention, continue to monitor the student’s behavior at the same interval in which you collected your baseline data. To keep with our earlier example, we are going to track the number of times the student disrupts math class every day for two weeks. It is very important that you continue to intentionally monitor student behaviors during the intervention.

Remove the Intervention

After a couple of weeks, you will likely have a feeling as to whether or not your intervention is working. At this point, we enter what researchers call the reversal phase. While it may feel backwards, you should remove the intervention from the student. Ideally, we want to deploy interventions that lead to long term behavioral changes that do not require daily monitoring and reinforcement forever. Remove the intervention and continue to monitor the behavior in the same manner as you did during the baseline and intervention stages.

Do Some Math!

Having gathered your baseline, intervention, and reversal data, it is time to do some math and see what kind of impact your intervention is really having. The first thing you should do is calculate the average number of behavior incidents during each of the three periods of time. If your intervention worked, you should see a lower number of behavior incidents during the intervention period when compared to the baseline period. When you compare the intervention period to the reversal period, you will learn whether or not your intervention has had a long-term impact on the student. This will help you determine how long the intervention needs to be deployed.
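The phase-by-phase comparison above can be sketched in a few lines. All three sets of counts here are hypothetical, but the shape of the analysis is the same with real data: one average per phase, compared side by side.

```python
from statistics import mean

# Hypothetical daily interruption counts for the three phases.
baseline     = [9, 7, 8, 10, 6, 9, 8, 7, 9, 8]
intervention = [6, 5, 4, 5, 3, 4, 3, 2, 3, 2]
reversal     = [3, 4, 3, 2, 4, 3, 3, 2, 3, 3]

phase_means = {
    "baseline": mean(baseline),
    "intervention": mean(intervention),
    "reversal": mean(reversal),
}
# A drop from baseline to intervention suggests the intervention is working;
# a reversal average that stays low suggests the change is holding without it.
```

In this made-up example the average falls from baseline to intervention and stays low in reversal, which is the pattern you are hoping to see.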

After comparing the averages, you should calculate the effect size. Effect size is a measure that helps to show the magnitude of a difference between sets of scores. The most common measure of effect size is called Cohen’s d. To calculate Cohen’s d, simply subtract one average from another and divide the difference by the pooled standard deviation. As a rule of thumb, a Cohen’s d around d=0.20 is considered small, around d=0.50 medium, and around d=0.80 large.
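The Cohen’s d calculation described above can be written out directly. This sketch uses the same hypothetical baseline and intervention counts as before, with the pooled standard deviation computed from the sample standard deviations of the two phases.

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(
        ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    )
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical daily interruption counts for two phases.
baseline     = [9, 7, 8, 10, 6, 9, 8, 7, 9, 8]
intervention = [6, 5, 4, 5, 3, 4, 3, 2, 3, 2]

d = cohens_d(baseline, intervention)  # well above 0.8 for these counts
```

Note that because we subtract the intervention mean from the baseline mean, a positive d here means the behavior decreased, which is the direction we want for interruption counts.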

Finally, you should create line graphs to visually compare the three periods of time. When you place the graphs next to each other you should be able to clearly and effectively see what is happening with your data.

If this analysis sounds burdensome, never fear! I have a tool to help you. My Intervention Analysis Tool will instantly calculate the averages and effect sizes and will create a visually appealing graph that can be saved as an image or copied and pasted into your reports.

Make a Decision

Just like any other continuous improvement effort, data monitoring doesn’t do you any good if you do not apply it to a decision. Take a look at your data and ask yourself