Making “data-driven decisions” with the right kind of data

In the past decade or so, there’s been a sharp rise in the amount of data readily available for teachers and schools to use in taking stock of student learning.  Much of this rise can be attributed to the higher stakes placed on students, teachers, and schools through accountability requirements tied to standardized tests and the proliferation of assessment tools now sold to schools to track progress along the way towards those requirements.  The rationale is that all of this information is then supposed to help schools see where they can improve so that they can make adjustments that will increase student learning—“data-driven decision making” as it’s commonly called.

The problem, though, with basing decisions on data gleaned only (or even mostly) from standardized test scores is that you can miss other data that can be just as critical to the outcomes you hope to see from and for students.  Over the past few years at ANCS, we’ve developed and have been using a “dashboard” to give a quick snapshot of the overall quality of our school.  The dashboard encompasses a range of important domains, from academics to organizational health to finances.  When we think about the specific domains that help us to assess the educational program, here is how we break them out:

  • Student Academic Performance
  • School Climate & Culture
  • Stakeholder Satisfaction
  • High School Readiness

Within each of these domains are metrics designed to capture data points that are aligned with the mission of our school—that alignment with mission is vital because it helps us to determine how well we are doing based on what matters most to our school, which isn’t always the same as what the state may require us to measure.  For example, while it is important to us (and to the state) to know how many of our students pass the state standardized tests, it’s just as important to us to know whether students are academically challenged and engaged in deep, meaningful learning, whether they feel safe to take academic risks, and how well prepared for high school our graduates feel they are—areas not covered by one test.  So we measure each of those things.

One tool we use to gather some of the data for this dashboard is a set of short feedback surveys we administer a few times each year to students (grades 3-8), parents, and teachers/staff.  The quantitative and qualitative information yielded by these surveys is enormously helpful to the school’s leadership in determining how well we are doing and where we may have room for growth.  The data from surveys—just like the other data on our dashboard—is only a start and is best used in the context of a fuller discussion about what it might mean.  If, say, fewer students than we’d hope for respond affirmatively to a particular question, why might that be?  What does any other data indicate about the same issue?  How might we begin to address it?  Is there more we need to know before we take any action?

At our November board meeting, I presented a summary of our first feedback surveys of the year, administered in October.  The data in it suggests many strengths and a few areas for targeted improvement, and you can see the complete summary and highlights here.  It’s all a part of our effort to make “data-driven decisions” informed by the right kinds of data—from tests, from feedback, and from all the other measures that tell us how we are delivering on our mission.