Using Data to Drive Continuous Improvement

As we addressed in our previous blog post, conducting evaluations of your technology initiatives is critical to making sure that you’re on target to meet your goals. Evaluations also form the basis of a cycle of continuous improvement – allowing your team to shift strategies and resources as needed and realign initiatives to fit a changing environment. In this post, we’ll dig deeper into how you can use your evaluation data to drive a successful continuous improvement process.

Map Out Your Goals

The process begins with mapping out what you hope to accomplish and graphically showing the relationships between project goals, activities, outputs, and outcomes – a logic model. This process will help you define appropriate evaluation questions and the data you’ll need to measure processes (implementation) and performance (outcomes).

Some of your goals and activities may be process-oriented:

  • All classrooms will have access to high speed wireless internet
  • All teachers will have access to a library of Accessible Educational Materials

Other goals will focus on outcomes:

  • Use of (new reading software program) will lead to improved reading outcomes for students in Grade 3

As you develop your model, ask yourself:

  1. Is it meaningful – does this represent what we hope to achieve?
  2. Is it comprehensive – does this make sense? Are activities leading to desired outcomes?
  3. Is it feasible – is this doable? Are the activities and outcomes we’ve outlined tangible?

Develop Evaluation Questions

Broadly, there are two types of evaluations – formative (how is it going?) and summative (what did we accomplish?); each type has benefits and is a useful part of driving ongoing improvement in your school or district:

Formative

  • Catch implementation issues early and guide midcourse corrections
  • Evaluate implementation to better understand outcomes and improve program management
  • Collect baseline data for future evaluations and identify questions for further study

Summative

  • Help identify cause-and-effect relationships
  • Assess long-term impacts of your technology initiative
  • Provide data on change over time

As you develop your evaluation questions, think in terms of “sweet tweets”—at the end of the evaluation, what do you hope to say (or tweet) about your technology initiative?

Design Your Evaluation

Review your evaluation questions and ask:

  • What evidence will we need to answer those questions?
  • What evidence will others be looking for?

Information needed may vary, and may be qualitative, quantitative, or both. Collecting both types can give you a clearer picture of how well your initiative is working.

  • Qualitative data: narrative observations of the ways in which students are using technology in the classroom, or a description of an initiative.
  • Quantitative data: number of technology devices available (e.g., iPads), number of professional development hours provided, or how much time students spend using the software.

Develop a Plan for Collecting Data

As we noted in our previous blog post, you should identify the data sources already available and any additional data you’ll need to collect. Use your logic model to guide your decisions about what type of data to collect. Important questions to ask include:

  • What data collection tools will we need (surveys, etc.) and will we need to develop them?
  • When and how will we collect data? Will staff need training?
  • How will we ensure accuracy of data? Who will enter the data?

Draw Conclusions and Communicate Results

After your data is analyzed, it’s time to draw conclusions and communicate them to your stakeholders! Return to your original goals and questions – what were you hoping to learn? List the data you have that corresponds to each measure and addresses each of your goals; this is when you’re rewarded for the hard work of the earlier stages!

As you plan for communicating outcomes, think about which audiences you’ll need to share information with, how often you’ll communicate, and which channels you’ll use to disseminate your results:

  • School board meetings
  • Newsletters
  • Emails to families
  • Press releases
  • Community forums and Q&A sessions

Keep Moving Forward!

The results of your evaluation are your opportunity to drive change and keep improving! The goal of evaluation is never to say, “Everything worked great!” and walk away – evidence and insight help you move ahead and keep improving. Large-scale change takes time, and evaluation can help keep the momentum going.

The end goal of every evaluation of a technology initiative should be to drive continuous improvement – strengthening what’s working and reshaping what isn’t – and to build a technology initiative that improves outcomes for students with disabilities (SWDs).

If you have any questions or would like additional support for your school or district evaluation, feel free to reach out to us at powerup@air.org.

What's New on POWERUP?

AIR Informs Episode #6: Meeting the Needs of Students with Disabilities During COVID-19

Remote learning requires adjustment for all students, but students with disabilities face additional challenges during the COVID-19 quarantine. In the latest episode of AIR Informs, Allison Gandhi, managing researcher and director of AIR’s special education practice area, describes some of these obstacles and shares strategies to help students make the most of this time.