“Create a creature that has never before existed on this earth using only the materials you’ve been given. Give it a name, an origin story, and present it to the class.”
This is one of the first assignments I gave in my fine arts survey class. Students got a paper bag filled with a Dixie cup, a cotton ball, a paper clip, and a few pieces of tape. (It doesn’t really matter what’s in the bag — it matters more that the students realize they can use the bag itself.)
This exercise asked kids to build their creative muscles. They had to be imaginative, create something novel, and — by having them present their creatures to the class — become comfortable expressing their thoughts and feelings in front of a group of peers. The students were then evaluated on their creativity and imagination, both with the materials presented to them and in telling their stories.
While some students embraced the silliness immediately, there were always more who felt deeply uncomfortable and confused. “This is stupid! What’s the point?” Even after I explained it, the idea that there was no rubric and no right answer made many students feel unsettled. The teacher I replaced had given tests with fill-in-the-blanks like “Red + Blue = _.”
Students’ discomfort with a focus on process as opposed to product is a result of the school system through which they’ve traveled on their way to my classroom. At the open-enrollment public school in New Orleans where I taught, testing, attendance, and letter grades determined a school’s performance score, which, in turn, affected enrollment and thus funding. Many of my students had attended schools that packed the day with academic classes in order to help kids catch up. Administrators assumed that, based on their socioeconomic status and the quality of the schools they had previously attended, these students needed as much help as they could get. They were treated as buckets to be filled.
In a typical quantitative assessment system, students learn to associate their performance with letter and number grades: A, B, C, D, and F, corresponding to numbers from 0 to 100. While such measurements are best suited to situations in which there is a clear correct answer (such as math or science), they are also used in evaluating performance in the humanities. As long as something can be marked as right or wrong, or assigned a point value, it can be assessed quantitatively.
Conversely, qualitative evaluation is subject to interpretive criteria. Narrative report cards, in which teachers give detailed, prose feedback on learning outcomes, are the qualitative counterpart to traditional GPAs.
Some teachers have adopted instructional methods like project-based learning in an effort to simulate real-world experiences, and these approaches require qualitative evaluation. When a Spanish teacher tests fluency by having a conversation in Spanish with a student, when a law professor assesses students’ thorough reading of a case brief by cold-calling them to interpret it and cite precedent, or when a computer science teacher asks students to code a working website, authentic outputs demonstrate proficiency.
These assessments are difficult for students to fake, but for instructors they take time, require a large knowledge base, and call for much more planning on the front end. For these reasons (among others), quantitative evaluation dominates the vast majority of schools, especially underserved ones.
There has been a longstanding debate over which methods measure student outcomes most effectively. This year’s influential opt-out campaign in New York drew public attention to this argument when parents galvanized support for teachers whose jobs were tied to success on exams that were suddenly much more difficult with the adoption of Common Core State Standards.
According to The New York Times, at least 165,000 children in New York, or one out of every six eligible students, sat out at least one of the two standardized tests this year, more than double the number in 2014.
Many, though, believe that consistent testing data is crucial for holding teachers and administrators in public schools to high standards. U.S. Education Secretary Arne Duncan reinforces this conviction, citing the need for regular assessments to measure the gap in achievement between white and minority students. The widespread cheating scandal in low-income Atlanta schools, however, complicates this claim and calls into question the credibility of such testing methods.
As an under-experienced, overworked 9th-grade English teacher, I gravitated toward quantitative assessment. It was the easiest thing to plan, delegate to a teaching assistant, and take care of on the weekends. Give me an answer key, and I’ll give students checks or X’s. It was reading their papers and journals and providing thoughtful comments that I found most difficult to manage. Still, deep down, I knew that written feedback had a far greater effect on my students’ learning than marking off boxes on an answer key.
When I started teaching art, it was clear that quantitative assessment simply would not work. There are no right answers in fine art. I used self-evaluation, giving students a rubric that asked them to consider how they performed across a variety of metrics: composition, design, growth, progress, creativity, problem-solving, care, effort, and work habits. In a longer narrative section, I asked them to reflect on the process and the final grade they thought they deserved. I gave my rating alongside this and wrote my explanation in response to theirs.
This collaborative evaluation, in combination with a peer critique, was how my students assessed their relative success on a given project. It was a huge struggle for some — they just wanted to know what they were supposed to do to get an A. Some were visibly uncomfortable at the thought that they could grade themselves. “What if I just give myself all 5’s (the highest score)?” they asked. I told them that this was a system based on trust and mutual respect, and if that’s honestly what they felt they deserved, I respected that and would share my thoughts with them.
I believe that this structure, coupled with projects that pushed students to define their voice and allowed them to express themselves openly, led to the transformations I saw over those years. One parent I spoke with, whose sons attended a gradeless school, said, “Your worth is not the grade or the competition.” She also happens to be a professor at Duke University, and added, “The students [here] are stressed and strung out … [They’re] high achievers [who] came in with 4.5 averages and haven’t a clue about themselves except on the competition scale.”
If our goal is to educate kids and create citizens who can adapt, think critically, seek knowledge, and be self-sufficient, a system of qualitative assessment is superior to quantitative. The time and resources involved, though, often preclude its implementation in low-income public schools.
Examples of Non-Traditional Schools
St. Ann’s School is a private K–12 school in Brooklyn Heights, NY. Its website declares its “commitment to education for its own sake,” its willingness to accommodate differences in learning, and its freedom from “the encumbrances of formal grading.”
This year, 41 percent of its 80-student graduating class were accepted to Ivy League schools, and 21 students will attend these elite colleges and universities. St. Ann’s is well-known for its written reports in place of traditional letter grades and GPAs; a few other elite private schools in New York also follow suit. Drawing conclusions from these outcomes is complicated, however, by the socioeconomic privileges of the student body at such schools. We cannot know with certainty whether these learners are more likely to succeed as a result of this evaluation system or the access they have to other educational and financial resources.
Expeditionary Schools, a network of charter institutions with 165 schools across 33 states, offers a compelling counterpoint. This network follows a project-based learning model in which students engage with interdisciplinary, in-depth study of compelling topics relevant to their communities. Assessment comes in the form of cumulative products, public presentations, and portfolios. Many of their schools have 100 percent acceptance rates to four-year colleges and universities. On a visit to one Washington, D.C., Expeditionary School in 2009, President Obama said it “is an example of how all our schools should be.”
What You Can Do
If your child’s school doesn’t use qualitative assessments, you can try these four suggestions:
1. Act as a qualitative-assessment ambassador to the school’s PTA (if there is one) to find other parents committed to this principle. Make the concern clear to the school administration in the hope that they are amenable to it, but be prepared for them not to change their evaluation practices quickly.
2. Work with teachers individually (setting up regular meetings and lines of communication) to gain insight into the criteria behind their grading systems.
3. Have conversations with your child about what her grade means.
4. Make sure students are involved in courses that, by definition, require qualitative evaluation, such as the arts.
My Final Thoughts
I became so frustrated with a school system that valued scores over the creative individuality of each student that I left. I started Young Creative Agency, a youth design studio in which creative teens are paid an hourly wage to work on real client jobs as apprentices to an experienced, professional graphic designer. These project-based, real-world learning and work experiences enable kids both to develop 21st-century skills and to earn a paycheck. Nonetheless, I’m convinced that schools still need to change.
American education has largely looked the same for hundreds of years, despite a digital revolution that has transformed many ways in which our society functions. It is time to confront the demands that the future will place on our children, and to make school a place where they learn how to shape the world around them.