I have just spent some time this week with school council looking at our year 5 students' results for this year's national tests [NAPLAN]. I publish the performance data of year 3 and 5 students on these tests each year in the school newsletter. Compared to previous year 5 cohorts, this group has not performed as well [the percentage of students 12 months or more ahead of benchmarks dropped from 40% to 25% in most areas of English and Mathematics, and the percentage of students below benchmarks increased from 3% to 7% on average], and school council wanted to know how the school should respond and whether this was a pattern or trend. Whilst many schools across the state would have been happy with these results, they were below our trend data.
I spent some time looking at the data and followed this group's achievements from prep to year 5. Some things were obvious: the size and composition of the group has changed [it is smaller, with a 3:1 ratio of boys to girls]; a number of girls have transferred to independent schools, some on scholarships; the group had more students with patterns of absence and lateness to school, particularly in prep and years 3 and 4, than previous year 5 groups in recent years; the drop was most noticeable in student writing and spelling [the more visible areas of learning]; the group has had small to average class sizes over the six years, ranging from 20 to 25; and the small group of students below benchmark received additional interventions in year 1 [reading recovery] and year 4 [additional intervention teacher support], with some subsequent gains in outcomes.
Some questions were raised about whether this group being in the top year of multi-age classes in years 2 and 4 had an impact [a suggested lowering of standards]; however, the results of other year 5 groups who had been in the top year of multi-age classes actually increased during this time, so there was no obvious pattern there.
I made a series of recommendations, including additional support teachers for underperforming students, a trial of software for teachers to mark attendance rolls [giving instant access to patterns and trends in attendance data] and some specific foci in the instruction of writing and spelling, to name a few.
The teachers have indicated that student outcomes have improved throughout the year since the NAPLAN tests in May. At this week's leadership meeting we began to reassess our list of assessment tasks, with a real focus on assessment for learning as opposed to the dominance of assessment of learning at half year and end of year.
The tendency, of course, is to want to take further, more dramatic action for this group. My attention was drawn to a recent article in the Age on December 1st about test results dominating school curriculum. I have copied most of the article, written by Patricia Buoncristiani, a former Victorian school principal who taught for many years in the United States, as it struck a chord with me:
Widespread high-stakes standardised testing is sending the US education system spiralling to the bottom. We need to seek out our own solutions and not follow them down. New York schools chancellor Joel Klein's focus on accountability is improving test scores in New York, but is it improving the education of the children the schools serve?
I retired three years ago after serving four years as a school principal in Virginia. Before that I was a principal here in Victoria. The demographics of my US school were typical of many urban schools – a majority of families living below the poverty line and 95 per cent of my children were African-American. Test scores when I arrived were deplorably low. The year after I left we had dragged scores up to the acceptable, accreditation level. The school was considered to be a success story. In spite of this I left frustrated, dismayed and angry.
How did we raise these scores? In a centrally managed system the curriculum was tightened relentlessly until teachers taught only what was going to be tested. The school district produced pacing guides and every teacher, in every school, was required to conform to the specific content and timing of curriculum specified in those nine-week guides. The side effect of this was that in struggling schools like mine there was no time for creativity or for responsiveness to the emerging interests of children. All that mattered was getting through the nine-week pacing guide before the nine-week assessments tested that content.
Results for individual students and individual teachers were scrutinised by administrators trained in “data disaggregation” and pressure was brought to bear on any teacher who had not covered the required curriculum in the required time span. A significant amount of time was spent teaching children “test-taking skills”, a set of skills that would serve the school district well as it helped increase test scores, but would do nothing for children who would probably never face four-point multiple choice questions in the world outside the school.
They were taught how to read questions accurately – a useful skill – as well as how to maximise their chances if they had to guess. Struggling students, who were increasingly disengaged by this approach of teaching the curriculum rather than teaching the children, were required to attend additional half-day classes during vacation periods. In the nine-week period leading up to the large-scale statewide testing, students would also be required to attend after-school classes beginning as young as grade two.
State-wide testing was carried out under intense security. Teachers and principals were under huge pressure because scores would become public knowledge and schools and school districts would be publicly compared. There was considerable temptation for some to interfere with the process by either assisting students taking the tests or fiddling with returns.
Stress levels among children soared. I recall intervening in a fifth grade class when the teacher foolishly told children they may need to re-take the test because of a possible irregularity. When I arrived in the room I was met by hysteria, one child sitting rocking and banging her head against a wall and another tugging lumps of hair out of her head while the distressed teacher sat on the floor holding a third girl and trying to calm her down.
In my first year I was surprised by the final results and asked my supervisor how they were calculated. She laughed and said even she didn’t understand that, so it was best I didn’t waste my time trying to. What we did understand was that a great deal of statistical manipulation was carried out to ensure that each school got “the best result possible”.
The financial cost of the testing was also an issue. The unfortunate result was that the curriculum began to be determined by what could be tested by multiple choice questions, which can be easily electronically scanned. Content predominates and processes such as creative thinking are overlooked because they are too hard to test. How do you test a student’s ability to think divergently with a four-point question?
My frustration grew out of my inability to respond to the individual, divergent needs and abilities of the students and teachers in my school. My dismay was fed as I watched the curriculum become narrower and narrower as fearful administrators and teachers focused on teaching only what was likely to be tested.
Anger began to rise in me when I saw how my economically disadvantaged students were becoming pawns in a numbers game. While they needed to learn that school was an exciting place that engaged their interests, explored the world outside their limited experiences and nourished their developing passions, they were being taught how to pass tests. It was time I left.