Post date: Wednesday, 31 August 2016

PISA and the progress question

By Rose Hipkins

In the first blog in this series I questioned the educational value of cobbling together reports of students’ progress in science from measures that lack coherence, such as a string of ‘unit’ tests. Instead, I suggested, we should think carefully about the sort of progress the curriculum indicates as important, and then ponder how we might measure that with at least some validity.

This week I’m hoping to start an ongoing conversation about the second half of the challenge – how we might measure what we say we value. My plan is to look across various assessment programmes to see how they model progress. I’ve chosen the OECD’s PISA programme (the Programme for International Student Assessment) as my starting point for two main reasons. First, there has been a huge intellectual and financial investment in developing PISA’s assessment frameworks. We might as well learn what we can from such expertise. Second, PISA claims to assess competencies to participate meaningfully in society – those that 15-year-olds have developed across the years of their education and will take into their post-school years. Their focus, admittedly, is on the world of work (they are, after all, the Organisation for Economic Co-operation and Development), but there is a specific science literacy focus to the science framework developed for the PISA assessments.

This focus on what students can do with their science knowledge when it is applied to life contexts and challenges is not too far from the vision spelled out in NZC’s essence statement for science – specifically that students will be able to use their science learning to “participate as critical, informed, responsible citizens in a society in which science plays a significant role” (NZC, p.17). The assessment framework for PISA brings together the elements shown in the figure here.      

With the structure of the science learning area of NZC in mind, one especially noteworthy aspect is the inclusion of epistemic knowledge. That’s knowledge about how we know what we know and what makes science trusted knowledge about the natural world – i.e. the sorts of ideas that are included in the Nature of Science strand of NZC.

So far so good, but this still doesn’t help us with the burning question of what ‘progress’ might look like. This is the framework for designing assessment tasks, not for judging student work. For the latter, we need to turn to the draft reporting scale for the 2015 assessment round, which foregrounded science (it’s one of the last sections in the framework document).

No doubt the criteria in this six-level scale will be updated in the light of what students actually did in 2015, but we won’t see that detail until the results are published in December this year (2016). Some cautions are in order before we pull out selected detail from the scale:

  • This is a horizontal scale, designed to stratify students of the same age (age 15). If we use indicators from the scale as a progress model, we are making the assumption that the differences that show at age 15 represent a potential sequence of achievement over the years before age 15 – i.e. we effectively turn it into a vertical scale.
  • The scale has many different components, and any one student is likely to be at different places on the scale for some of these components. As used in PISA, an overall statistical judgement is made about where each student sits. However, for our purposes it could be useful to pull out specific components and consider them in isolation from the overall mix. That’s what I do next. I’ve chosen causal thinking as a specific focus.

[Slide: statements describing cause and effect thinking at four levels of the PISA reporting scale]

Above is one of the slides I used in my SCICON talk.* I’ve pulled out statements that describe cause and effect thinking at four different levels of the scale (as indicated in brackets – 6 is the highest level).

* The goal post clip art is from Shutterstock, and the increasing size is meant to signal that the goal is getting bigger/harder.

I left out some levels so that the slide could be easily read from the back of a big room. Even so, there is a clear sense of progress in these statements. Would you see this as a good candidate for reporting? Why should we value cause and effect thinking as an intended outcome of a unit of work?

My answer to the latter question would go as follows. There is research to show that cause and effect thinking is an important foundation for subsequent science learning. This research argues that such thinking should be developed as a priority in primary school science. (Story 9, from the Key Competencies and Effective Pedagogy resource on TKI, illustrates how a new entrant teacher might begin developing this sort of critical thinking in very young children.)

Clearly a sizeable number of students reach age 15 without developing the ability to think beyond ‘very simple’ causal relationships, or we would not see this criterion at the lowest level of the PISA scale. Yet even at this level, being critical about causal claims is very important when socio-scientific issues are being considered. (One of the resources for the science capability Engage with Science models a way to introduce this element into discussions of student work that illustrates Achieve/Merit/Excellence level answers to an NCEA achievement standard.)

Dubious or plain wrong causal claims are one of the hallmarks of pseudoscience, so it is not difficult to make the case that learning to be more discerning about them is a useful capability for informed citizenship. Check out these amusing examples in the context of earth and space science. One of the challenges, of course, is that demonstrating this ability in one assessment context does not necessarily imply a willingness to do so in the messy, uncertain contexts of real life. However, it does seem reasonable to suggest that the more practice students have – in many different contexts and at varying levels of intellectual challenge – the more likely they will be to take critical causal thinking away from school as one possible tool in their ‘citizenship’ toolkit and be disposed to use it.

So there’s my case. It will be clear that I think this is an important outcome to develop over the years of school (and hence worth tracking for evidence of progress and/or ability to apply in a range of contexts). Do you agree? Why or why not? What opportunities and challenges do you see?

Next week I’ll pull a second detailed sub-scale out of the overall PISA scale. My plan is to use PISA as a starting point to take a closer look at what making progress in argumentation might look like as a science learning outcome.
