
Post date: Thursday, 29 September 2016

TIMSS and the progress challenge

By Rose Hipkins

This is the sixth in a series of posts about making progress in science. Last week I turned my attention to science at the primary school level. I drew on processes developed for NMSSA, which assesses achievement at year 4 and year 8. My focus was the challenge of equating a specific assessment measure with levels in the curriculum.

TIMSS is an internationally standardised assessment programme that includes primary-level science, so my plan for this week is to put this assessment measure under the spotlight and to draw some brief contrasts with NMSSA.

TIMSS stands for Trends in International Mathematics and Science Study. It is the second of the international measures in which NZ invests, and which hits the news from time to time. The programme assesses students at year 5 and year 9. Like PISA, the intent is to influence governments. TIMSS belongs to the International Association for the Evaluation of Educational Achievement (IEA), which has a different influencing agenda to the higher-profile OECD. If this comparison interests you, here is one paper that debates the relative merits of PISA and TIMSS.

TIMSS aims to measure differences between each participating nation’s intended, taught and achieved curriculum, with a view to influencing how curriculum policy is delivered. We’ve got a challenge right there. Here’s how I represented that challenge at SCICON.

The TIMSS model assumes that a nation’s science curriculum is clearly prescribed. But there isn’t one way to create an ‘intended’ curriculum from the science learning area. The structure is designed for flexibility, so that teachers can build learning experiences that are more than the sum of the parts. As part of this flexibility, there is no requirement to cover every achievement objective – just to build a ‘balanced programme’ over the course of students’ school years.   

The assessment framework for TIMSS emphasises curriculum content. However, near the end, the framework document does specify that assessment tasks will be designed to allow students to demonstrate their cognitive skills in application and reasoning, in addition to recall. Compared to PISA, the TIMSS framework does not match the ‘citizenship’ emphasis given to science in NZC as obviously. We need to bear this in mind when considering the implications of international comparisons of overall science achievement in TIMSS (these tend not to be particularly favourable to New Zealand). Trends over time, combined with the contextual information gathered by TIMSS (also summarised on the page hyperlinked here), are the more valuable feedback from this investment because they provide evidence for making systems-level changes.

Keeping these reservations in mind, let’s consider whether TIMSS can make a useful contribution to the progress question at the level of individual students and classes. The scale used to sort students into achievement bands is similar in structure to the PISA scale discussed in the second and third blogs in this series. The most obvious difference is that it has only four broad achievement bands whereas PISA has six. Other differences become more evident when a specific detail is pulled out, as I did for both causal reasoning and argumentation when I considered PISA. The slide below shows the detail I pulled out from the TIMSS scale, on page 20 of the 2010/2011 Year 5 New Zealand report.

This summary slide from my SCICON talk* shows the last sentence of each level of the TIMSS scale. All the other details at each level are about content knowledge and this sentence identifies ways students can use that knowledge.

* The goal post clip art is from Shutterstock and the increasing size is meant to signal that the goal is getting bigger/harder.

Note how many different things these descriptors cover compared to the more focused sets of statements I was able to pull out from the PISA framework. Still, there are some quite useful signals of expectations here, given that this is for Year 5. Like the argumentation research introduced in the third blog in this series, this scale raises some interesting questions about the trajectory of progress. For example, (how) should we align the ‘advanced’ level of this framework, with its one mention of argumentation, with the lower levels of the detailed argumentation framework that I discussed several weeks ago?

It’s also interesting to compare this scale with the NMSSA scale that I discussed in last week’s blog. NMSSA covers Year 4 and Year 8, so this Year 5 scale should fall somewhere in between, albeit skewed towards the Year 4 expectations. I’ll repeat that detail below for ease of comparison.

Compared to TIMSS, I think this part of the NMSSA scale has a much more participatory feel. It positions students as active meaning-makers, as well as decoders of meaning made by others. But does this difference matter? Arguably it does if we want students to learn science in ways that help them meet the overarching aim of informed and active citizenship, as specified in the learning area summary statement in the front part of NZC.

Assessments that allow students to actively demonstrate their learning are more challenging to organise than pencil-and-paper tests. They are also more difficult to compare fairly across different contexts. But if we mean what NZC says - i.e. we really do intend to use science learning to help students get ready to be active, informed citizens by the time they leave school - then we do need to think carefully about what we want them to be able to show they can do with their learning along the way. The science capabilities were introduced as curriculum ‘weaving’ materials with this specific challenge in mind. And so next week I’ll turn my attention to the preliminary set of capabilities published on TKI. How might we tell if students are making progress in their capability development, given that this is a comparatively recent way of thinking about the purposes for science learning at school?

As I approached the end of this post, it occurred to me that I could have written comments about the suites of NCEA achievement standards very similar to those I have written here about TIMSS. Like TIMSS, most NCEA standards for science subjects have a predominant focus on content, with varying degrees of emphasis on the aspects of critical thinking used to differentiate student performances. Like TIMSS, very few of these NCEA standards are written in such a way that they reflect the bigger ‘citizenship’ intent of NZC that comes from weaving the NOS strand and the contextual strands together. Does this matter? (It will be obvious, if you have followed the whole series of posts, that I think it does.) What should or could we do about it?

 

Comments

I've never posted to a professional blog before. Good on ya Rose for generating enough interest. Thank you Rose for sharing your insights and thoughts into this question of assessment to show progress in science learning. You bring a great deal of experience and clarity. I like that you are using examples that remind us of the important role of the key competencies and capabilities of science (and citizenship) rather than just content knowledge.

For me, what makes it so difficult is that it seems to come down to a few simple questions that aren’t easy to give a single answer to - what’s important to monitor, what is the purpose of this monitoring, and for whom is this information gathered? Those three pieces can come together in many ways, and the outcome of what to focus on will be influenced by the answer to each of the three questions. This leaves many schools feeling quite shaky about making decisions about ‘what’, ‘how’ and ‘when’. Secondary often says that NCEA directs much of their decisions, but what about all of the students who don’t do science in Years 12 and 13? What is happening to help them be ‘critical, informed, responsible citizens’ around science topics? Can we assume that they will have what they need by the end of Year 11? Primary wonder about what should be expected at different developmental steps. I’ve seen some very good cross-school monitoring and gauging, but none of the schools feel 100% confident they have focused on the thing that matters. They worry they will let the students down.

It leaves me with a question. As schools practise and trial in this space for a few more years, will these be easier to answer? Should we just admit to the pain and discomfort of new learning (for all of us) and trust our professionalism to shake out what matters and when/how to assess? Can we be patient while we gather and share enough coordinated, good trials and data to see what matters and why? As I write that it feels quite scary but it also feels necessary. Keep us thinking….


