Letter from Matt Underwood – October 10, 2012

Dear ANCS Families & Friends,

How do you measure the effectiveness of a school? That question has been explored for as long as we have had schools, but it has been especially prominent in the national debate over educational policy in the past several years, with an increased focus on “accountability” in various forms. The No Child Left Behind Act, passed a decade ago, made school-level standardized test scores the basis for categorizing schools and for attaching consequences to those that did not make “adequate yearly progress”. More recently, the federal government’s “Race to the Top” program ties increased federal funding to teacher and school evaluations that use standardized test scores and other data to determine students’ “college and career readiness”. As much as these efforts try to boil “success” down to a single number or letter, assessing students or teachers or schools is substantially more complicated than that, a fact that has been on my mind these past few weeks.

I recently had occasion to visit Maynard Jackson High School, where more of our students are heading these days, and while there, several different teachers commented to me that students from our school were more “self-aware”, “intellectually engaged”, and “confident” than most other students. Observations like these are a testament to the work of our students (and their parents!), but they also reflect the impact of our school’s educational program and teaching. Similar remarks this year about our students from teachers and administrators at Grady, Carver, and Decatur High Schools, and from college of education professors who observe in classrooms at our school and many others, give me good reason to believe our work is benefitting students.

So I must admit I was initially a little surprised when I was informed recently that our middle school does not seem to be adding as much “value” to students as other middle schools in the Atlanta Public Schools. An APS-led initiative to measure the effectiveness of all of its schools has the district generating a “value-added” score for each school, one that purports to control for a variety of factors among students–poverty, disabilities, transiency–in order to show how many months of learning students gain at one school as compared to others. On its face, such a score would seem to give a good measure of a school’s teaching and make it easy to put all the scores into a chart and say that one school is better than another. How, then, could we be falling short in adding value according to one measure when other sources seem to be telling us quite the opposite?
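For readers curious about the mechanics, below is a minimal sketch (in Python) of how value-added scores are commonly computed: predict each student’s current test score from a prior score and demographic controls, then average each school’s prediction errors. To be clear, the student data, schools, and model here are all invented for illustration; APS has not published the details of its actual model in this letter, and it is certainly more elaborate than this.

    # A toy, hypothetical "value-added" calculation -- NOT the actual APS
    # model. All data below is randomly invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    prior = rng.normal(800, 25, n)       # each student's prior-year scaled score
    poverty = rng.integers(0, 2, n)      # 1 = economically disadvantaged
    school = rng.integers(0, 4, n)       # which of four hypothetical schools
    current = 0.9 * prior + 90 - 5 * poverty + rng.normal(0, 10, n)

    # Fit a regression predicting current scores from prior scores and controls.
    X = np.column_stack([np.ones(n), prior, poverty])
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)

    # A school's "value added" is its students' average residual: how much
    # better (or worse) they scored than the model predicted. Because test
    # scores are noisy, these averages are noisy too -- hence "error-prone".
    residuals = current - X @ coef
    for s in range(4):
        print(f"School {s}: estimated value added = {residuals[school == s].mean():+.2f}")

Even in this simplified version, notice that everything rests on the test scores themselves: whatever the test does not measure, the model cannot credit.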

A recent major study on the use of value-added assessments in schools offers a good explanation of this disconnect. An easy-to-read article summarizing some of the study’s key findings, written by an expert on educational accountability, can be found here (and I highly encourage you to read it), but one line from the article captures the most important point: “Value-added models provide important information, but that information is error-prone and has a number of other important limitations.” Though a value-added score tells us something about our school, what it tells us is limited and, like all measurements, subject to error.

APS calculates its value-added scores solely from student performance on the subject-area tests of the CRCT. On the spring 2012 CRCT, the percentage of our students meeting or exceeding the standards in grades 6-8 in Reading, English/Language Arts, and Math surpassed that of the district in every area except 6th grade math, with differences ranging from 5 to 15 percentage points depending upon the test. Despite evidence warning against using tests that are not vertically scaled–that is, not designed so that scores can be meaningfully compared across grade levels–in determining a school’s value-added score, the APS model lumps all the tests together, including science and social studies. Tracking student progress in reading comprehension through the Reading CRCT is understandable, given that reading comprehension skills can be scaled over time. But determining how many months of learning a student has gained by comparing his performance on the 7th grade social studies CRCT (which focuses on facts related to world history) to his performance on the 8th grade social studies CRCT (all about Georgia history) makes no sense. Compounding this problem for our school is that, by orienting our teaching toward helping students develop essential skills in scientific problem solving, critical thinking, and historical analysis, we do not take the more memorization-based approach to curriculum and instruction that might lead to higher science or social studies CRCT scores.

Indeed, the main limitation of value-added scores as they are currently calculated is that they measure only one type of “value” to the exclusion of others. There are inherent problems with using the results of one test taken on one day as the main indicator of a student’s learning or of how much value his teachers or school have added (What if a student has a bad day? If students learn “tricks” for guessing correct answers, is the test really measuring learning? What happens if students [or teachers] cheat? And so on…). But beyond those, a reliance on standardized test scores does not capture the enormous benefits of all of the other essential features of the educational program at our middle school. Is there value in asking students to revise their work when it does not meet the standards and then reflect on their growth from this process? Do students gain from preparing and then presenting a portfolio of their learning to a public audience to make the case for their promotion to the next grade level? How about learning Spanish, creating works of art, or even the interactions during recess? Does our advisory program help students better navigate the turbulence of adolescence?

We know each of these elements of our school adds value to our students because our students, teachers, parents, and alumni tell us so in surveys, in informal conversations, and in our observations. But if a “value-added” score does not take these into account, should we reduce or cut these parts of our work so that we can allocate the bulk of our time and resources to the one piece of data that, apparently, is the only one that matters, all in pursuit of a higher score? I think I know what the answer would be.

Since experience gives me little reason to believe that those who make decisions about such matters will do more than pay lip service to finding methods of assessing student learning and growth beyond multiple-choice standardized tests, it is my work–all of our work, really–to capture and communicate the impact of the ANCS educational program in ways that are as quantifiable and tangible as possible. We have some of these pieces in place already–annual surveys, student-led conferences, portfolios and exhibitions–and we are working to better document each of them and to find new ways of tracking the value we “add” to students. All students–whether at ANCS or elsewhere–are complex individuals with differing strengths and weaknesses. They deserve ways of assessing their performance (and, by extension, the performance of their teachers and schools) that acknowledge just how unique they are as human beings and that recognize the value in the many skills and kinds of knowledge that cannot be shown by filling in a bubble.

Sincerely,

Matt Underwood
Executive Director & Middle Campus Principal