Post date: Wednesday, 24 August 2016

What do we mean when we ask if students are "making progress" in science?

Thank you for coming back to our science blog. You get a tag-team handover this week – I am Rose Hipkins and I’m picking up from my colleague Ally Bull. My plan is to build on her thoughts and questions while turning the focus to an issue that I know is worrying a lot of teachers right now. I’ll be musing about how we might determine if – and how – students are making progress in their science learning. I’ve adapted the coming sequence of posts from a talk I gave at SCICON – the biennial conference for NZASE members – which took place in Lower Hutt in the July 2016 school holidays.

As I move around New Zealand in the course of my work I talk to a range of secondary science HODs. Over and over in recent months I’ve heard the same concerns. Senior leaders are putting pressure on science leaders to generate data that reports on students’ progress against the science achievement objectives and curriculum levels of the New Zealand Curriculum (NZC). The general intention seems to be that this data should be reported to the school’s Board of Trustees, and be available to show to ERO if requested. My primary colleagues report the same pressures in primary schools.

I question both the validity and the consequences of this practice. I can understand why science leaders give in to this pressure, but I see it as a retrograde move. My aim in this series of posts is to give teachers some ammunition to push back. If we take the future-focused intent of NZC seriously, we need to evolve new assessment and reporting practices. That could begin by asking more critical questions about today’s common practices and expectations. So here are several questions from me to get the ball rolling. I hope some readers will feel inspired to respond by adding to my arguments – or by posing counter-arguments.

1. (How) does the structure of the science learning area of NZC model progress? Do we agree with the assumptions that underpin this organisational structure? Why or why not? 

If we focus on the individual contextual strands, the science learning area of NZC appears to be organised as a collection of concepts at each curriculum level. This structure implies that students make progress as they come to understand more difficult science ideas. On one level this makes intuitive sense (especially given our own learning experiences as students). However, there are many fishhooks if we equate evidence of such understanding with progress per se. Here are just a few:

  • Many concepts can be understood at varying levels of depth and by students of different ages. We know that their allocation to different curriculum levels is grounded in past experience that tells us which ideas are comparatively easier, and which are foundational to other more complex concepts. But there is inevitably a degree of convenience in the parcelling up of ideas to spread them evenly across the years of schooling. The choice of eight curriculum levels was itself an administrative convenience when the 1990s curricula were designed. This structure doesn’t represent something universally true and ‘real’ about students and their progress.
  • If science concepts are counterintuitive, students may well ‘go backwards’ before they can again move forward in developing conceptual understanding. This possibility was richly illustrated in the world-leading Learning in Science Project (LISP), but we seem to have forgotten some of the critical insights of that work in the face of pressures to describe progress in more linear and unproblematic ways.
  • Different concepts specified at the same curriculum level may be easier or harder for different students, depending on their interests and experiences. The act of aggregating test results from different units of work (whether organised by concept or by ‘topic’) to make one overall judgement of progress ignores this inconvenient complication by treating all test results as being of equal worth for every student.
  • NZC is ‘seamlessly’ divided into curriculum levels – does that mean that progress is also seamless? That is, do we need to look for progress in the same sort of ‘thing’ (however we conceive of that) at every level of schooling? Ally Bull puts an interesting counter-argument when she suggests that we should foreground different purposes and types of outcomes at different stages of schooling.

2. (How) should the Nature of Science strand be included in any models and indicators of progress?

The Nature of Science (NOS) strand is described as the ‘overarching, unifying’ strand of the science learning area. The achievement objectives (AOs) included in its four sub-strands are supposed to be woven together with AOs from the contextual strands when designing units of work. Does this mean that these NOS AOs add new and different content, or does it mean that they change the nature and focus of the learning? Whichever way this question is answered, there are implications for the sorts of evidence that might tell us students are making progress.

Readers familiar with the science capabilities initiative will know that this approach models ways in which the NOS strand changes the nature and focus of science learning, so that the overall learning experience supports all students towards achieving the same overarching goal. This goal is explicitly spelled out by NZC as follows:

In science students explore how both the natural and physical world and science itself work so that they can participate as critical, informed, responsible citizens in a society in which science plays a significant role.    

This NZC essence statement adds yet more questions to the challenges to be addressed when considering ways to capture progress. Just for starters, how would we know that students are making progress towards being “critical, informed, responsible citizens”? Does evidence that they have mastered selected concepts tell us anything meaningful about how they might or might not take that new learning forward into their futures? And if we say ‘no’ (given the many obvious uncertainties and acts of faith entailed in seeing this as a possibility), then what different sorts of evidence might we look for? I’ll come back to this question in next week’s post when I introduce PISA as an international measure of progress in science learning.

3. What use could/should be made of any measures of progress generated from classroom-level assessments?

The first two questions are essentially curriculum questions. This one is different. The focus here turns to the consequences of the choices made – who the data matters to, in what ways, and why.

Several years ago a large-scale consultation exercise called the Gordon Commission sought to harness the very best of assessment expertise to understand assessment challenges and to make recommendations about future directions for assessment in the USA. There’s a lot of food for thought in the series of papers published as a result of the Commission’s work. One paper, written by Andrew Ho from Harvard University, makes a valuable distinction between assessing for measuring purposes and deploying the results of those measurements for influencing purposes. The good intentions of the former, he says, can all too easily drift into inappropriate uses when an ‘influencing’ agenda comes into play. This distinction brings to mind a whole new set of sub-questions concerning the agenda being served when progress is ‘measured’ (by an expedient means) and then reported in response to pressure from school senior leaders:

  • What happens to the data? Will it be used to make a difference for teaching and learning, and if so, how? If not, is the measuring exercise worth the valuable time and effort involved, both for students and for teachers?
  • If the data is primarily generated for influencing purposes, who is being influenced to believe what? Is the measurement strategy a valid one in the light of the claims now being made? Do the data actually mean anything worth noting and acting on?
  • How does the measurement exercise position and influence individual students? Are they made aware of the level assigned to their supposed progress? If yes, what impact might such ‘labelling’ have on their views about their achievement in science and their abilities as learners?    

Another recent large-scale commission, this time in the UK, was so concerned about these questions that they unequivocally recommended to the UK government that classroom-based assessment SHOULD NOT be converted to precise levels to be used for school accountability. In fact, they explicitly debunked the myth that OFSTED (their equivalent of ERO) expects this. Instead, they emphasised the importance of using assessment data to improve teaching and learning directly. It’s worth getting and reading this report if you are looking for strong counter-arguments to push back against inappropriate data aggregating and reporting practices in your school.

None of the above should be taken to imply that there is nothing we can do to meaningfully describe and measure progress in science learning. On the contrary, there are some potentially useful models on which we might draw. The issue is that progress is not one linear and straightforward phenomenon, and it certainly cannot be retrospectively construed from a disjointed series of topic tests. Over this series of blog posts I hope to provide some insights that could help us confront the challenging questions and complexities I have raised. For now, the last word goes to Robert Mislevy, another highly regarded assessment expert and contributor to the Gordon Commission on assessment in the USA. In his paper he explains why we simply can’t expect to find easy and quick answers, or one miracle assessment tool, that will do the job of measuring learning progress on our behalf:

“The bad news is that we must stop expecting drop-in-from-the-sky assessment to tell us, in 2 hours and for $10, the truth plus or minus two standard errors.” (Mislevy, 2014)

Comments

Rose, this is just as refreshing second time around. I found your address at SCICON hugely affirming as we work to unfetter ourselves from the assessment machine. We are doing well with our junior learning programme where we can be creative and responsive in how we collect evidence of learning success, keeping it clearly linked to student-specific goals. Once we meet the NCEA monster at Year 11 of course things change... Looking forward to your next contribution.

Thank you for laying out this argument so clearly against the simplistic notion that there is a linear progression through a set of hierarchical science concepts. However, the progress idea is a train that is picking up momentum. The investment approach to education (i.e. invest early to prevent expensive outcomes later – keeping people in prison, paying unemployment benefits, etc.) seems to require that schools report "progress". Ministry people are all speaking about measuring progress as a way of strengthening schools' accountability for the progress of all young people, especially the so-called priority learners (although this particular deficit term seems to have been dropped). I gather that the idea is that measuring the progress of each child will provide more transparency for boards, parents and presumably ERO and MoE. The difficulty (I think) is that this is a political discourse. What you are arguing so well, Rose, is based on educational thinking. How do we work across this divide? Because if we all stick to the discourse we know, we will all talk past each other.

Thanks for the thoughtful responses Terry and Gael. I’m so glad you both raised gnarly issues – we need to keep this conversation real! The new NCEA book will be out in around two weeks Terry – it’s a good time to take stock of what is happening in the senior secondary school and why. Gael I totally agree that we need to confront the drive for reporting progress in ways that keep the focus on the educational arguments. It seems fairly clear that the gathering momentum for ‘accountability’ in all aspects of public life (not just education) won’t be going away any time soon. Onora O’Neill wrote a very prescient series of Reith Lectures about this dilemma in 2002. How right she was – I still go back to them from time to time. Since the pressures are unlikely to go away I think we need to stay inside the ‘progress’ tent (albeit on our own educational terms). Chucking stones from outside could just result in educators being side-lined and having to put up with what we get told to do. That’s why some of our NZCER research team are collaborating with the MOE curriculum group to explore the whole progress question in a much broader curriculum framing. It won’t help in the short term but our aim is to provide more positive, constructive support for teachers’ work (and of course for engaging, purposeful student learning) over the longer term. Do keep the responses coming as over the next few weeks I float some possible areas where we might productively explore progress in science. Rose
