Determining meaning for key competencies via assessment practices

Rosemary Hipkins
Abstract

Some schools have expended considerable effort to create assessment rubrics as part of building an initial understanding of the key competencies in The New Zealand Curriculum (Ministry of Education, 2007). Key competencies can be interpreted within a relatively traditional skills-based framework, or they can be seen as a vehicle for transforming schooling to better meet students’ learning needs for the 21st century. Premature assessment decisions could prevent some schools from developing the latter understanding, but rethinking assessment within a sociocultural framework could help give powerful effect to the key competencies in the enacted curriculum.

Introduction

As New Zealand schools take up the challenge of determining what it will mean to give effect to The New Zealand Curriculum (NZC) (Ministry of Education, 2007), the change of direction signalled by its overall structure and newer features is becoming more apparent. The key competencies have generated a lot of debate, in part because they are an obviously new element of the curriculum framework. Questions about what key competencies will look like in practice have particular salience for school leaders, who know they will be accountable for the manner in which their school takes up and implements the new curriculum policy.

In this period of exploration, the widely debated question “Should we assess key competencies?” doubtless originates from important issues of accountability and motivation: assessment of key competencies could be seen as visible evidence that a school is implementing NZC and taking its newer features seriously; teachers may be expected to value, and hence pay more attention to, key competencies if they know they have to assess and report on them; and students may well value key competencies if they know they are going to be held accountable for demonstrating them.

My conversations with a wide range of New Zealand’s school leaders over the past two years have raised questions for me about the tendency to expend considerable effort to create assessment rubrics for the purpose of building an initial understanding of the key competencies. This paper discusses why schools may approach initial professional learning about the key competencies in this manner and sets out some reservations about doing so. It suggests that school leaders and teachers need to get to grips with the nature of key competencies—where they came from and what curriculum purpose they are intended to serve—before attempting to determine what evidence of their impact on learning might look like.

The key competencies originated in an Organisation for Economic Co-operation and Development (OECD) project called DeSeCo1 (OECD, 2005). This project set out to define a small number of generic competencies that all individuals need in order to lead a satisfying life in a well-functioning society (Rychen & Salganik, 2003). The five developed for NZC are based on this DeSeCo work, but they were modified as a result of policy debate and community consultation (Rutherford, 2005). Although the DeSeCo project originated in economic concerns and a quest to find parameters that could be compared across different national contexts in the OECD’s educational testing programme (Rychen, 2004), the potential for key competencies to transform teaching and learning soon became apparent to curriculum makers, whose concerns were more to do with fostering citizenship and skills for learning in the complex, heterogeneous societies of the 21st century (Reid, 2006). This transformative potential is clearly signalled in the following definition, produced by one of the leaders of the DeSeCo project:

A competence is defined as the ability to meet a complex demand. Each competence corresponds to a combination of interrelated cognitive and practical skills, knowledge and personal qualities such as motivation, values and ethics, attitudes and emotions. These components are mobilised together for effective action in a particular context. This definition represents a demand-oriented or functional approach, placing at the forefront the manifold demands individuals encounter in the context of work and everyday life. It is holistic, in the sense that it integrates and relates demands, individual attributes and context as integral elements of a complex performance. (Rychen, 2004, section 4)

This emphasis on actually using learning in real-life contexts—not just acquiring school knowledge and skills—stands in contrast to traditional curriculum approaches. It has implications for curriculum and pedagogy, as well as for assessment. However, when key competencies are seen as the replacement for the essential skills of the previous curriculum documents, it is possible to make a more skills-based reading of their very brief descriptions (Ministry of Education, 2007, pp. 12–13) and hence miss their transformative potential.2 This more limited way of interpreting the key competencies is, I believe, more likely if assessment questions are asked too soon in the process of getting to grips with their nature and curriculum intent.

The emphasis on actually using learning in real-life contexts—and on paying attention to the complex manner in which knowledge, skills and personal qualities such as values and emotions are mobilised differently in different contexts—points to a sociocultural framing of learning (Carr, 2004). From a sociocultural perspective, assessment practices can be seen as conscription devices, conveying to students, teachers and parents what really matters in the learning context (Cowie, 2005; Cowie & Carr, 2004). Thus the manner in which assessment questions are answered will help determine what the key competencies come to mean in practice.

From a sociocultural perspective, learning is situated, distributed, mediated and participatory. In this paper I will draw on these four characteristics of learning to determine the likelihood that the use of assessment rubrics will support a full unfolding of the transformative potential of the key competencies as the curriculum is given effect in schools. I begin by sketching two ways of proceeding when using assessment rubrics as an exploratory device for learning about key competencies.

Making sense of key competencies by building rubrics

The following descriptions of assessment practices in two hypothetical schools are a distillation of observations from a number of school sites. They demonstrate how meaning for key competencies could be constructed when assessment rubrics are developed as part of early curriculum exploration.

Early explorations in School A

The staff of School A began their exploration of the key competencies on a teacher-only day. They studied the definitions in NZC and then, in small groups, brainstormed ideas about what each key competency may look like as students went about their usual daily learning in the school. They then drew on their existing ideas to create overarching rubrics, one per key competency, that encapsulated the essence of each as they saw it. The rubrics that resulted echoed the essential skills of the previous curriculum. The criteria for managing self, for example, related to taking personal responsibility for classroom behaviour, and listed aspects such as being on time, having the necessary books and equipment and paying attention in class. Participating and contributing focused on effort and interaction. The four levels created for each rubric were differentiated by the use of adjectives such as seldom, sometimes, mostly or always. Creating these rubrics was hard work and took some hours, moving between small-group and whole-staff discussions at various stages.

Most teachers felt very pleased with the result and believed that the key competencies would be a good thing for their students’ learning. Most took the time to explain the collective thinking of the staff to the students and displayed the rubrics in their classrooms. They worked on developing a shared language for talking about key competencies and reminded students of their expectations when necessary. When it came to reporting time, the teachers used the new rubrics to assess each student on each key competency. They selected the level they thought best matched the student’s overall behaviour and attitude and put that down on the report. (Parents could read the rubrics printed on the back of the report so they knew what these numbers meant.) A few teachers were worried that students’ behaviour close to report time might not fairly represent their overall competence. They decided to create a schedule of sampling points at which they would note down the rubric-level behaviour they saw at that time, and then average these “scores” at report time.

Exploring one competency at a time in School B

The curriculum leaders in School B decided that they needed to take things slowly and explore one key competency per term. The staff began in much the same way as the teachers in School A, by brainstorming what they thought the focus competency may entail. These initial thoughts were supplemented by readings carefully chosen by the principal to introduce new dimensions for their consideration. When they felt they had developed their own understanding of the focus competency, each teacher shared and extended their thinking in conversation with the students in their class. Together they developed ideas about what demonstration of the competency may look like. The teachers then pooled all this thinking and came up with a set of three or four rubrics for the competency. These were designed to build shared understanding of different dimensions of competency and to use a common language of learning that students would recognise regardless of the class they were in. These rubrics were then displayed prominently on classroom walls.

Together, teachers and students looked for instances where the focus competency was being demonstrated in different learning episodes and in wider school contexts. They discussed these, and once a student had justified their claim to the teacher they could place a sticky flag with their name and a brief description at the relevant position on a rubric. The teacher collected and dated these flags at the end of the week, creating a running record of each student’s awareness and demonstration of the competency. The students were also encouraged to record their own reflections in their learning journals and to set goals for the next steps they may take in strengthening this competency. All these records formed the basis for reporting to parents. At the end of the term the teachers revisited their experiences of the key competency and updated their rubrics for ongoing reference.

The introduction noted that the manner in which assessments are carried out will help develop the meaning the key competencies come to hold. However, as Joe Kincheloe recently observed, “we don’t see bumper stickers proclaiming meaning happens” (Kincheloe, 2004, p. 12). It is highly likely that the meaning-making dimensions of processes adopted for assessing and reporting on key competencies will be invisible to most teachers, students and parents unless they are supported to inquire critically into the intent, nature and impacts of the assessments they enact. NZC provides guidance on monitoring the “development of key competencies”, which many will doubtless read as implying assessment of them. This guidance is that schools need to “clarify the conditions that will help or hinder the development of competencies, the extent to which they are being demonstrated, and how the school will evaluate the effectiveness of approaches intended to strengthen them” (Ministry of Education, 2007, p. 38). This is the clarification exercise I now attempt for my hypothetical schools, with sociocultural learning theory as my exploratory framework.

Exploring these practices within a sociocultural framework

In this section the four selected sociocultural characteristics of learning are briefly explained and then two or three implications for assessment are weighed against the case study descriptions for the two hypothetical schools to suggest ways each could move their thinking forward. The considerable overlap between the four factors made this a challenging exercise: where should a specific point be placed when it could plausibly sit in any of several locations? This same dilemma applies to the key competencies: they, too, are holistic. There is no one “right way” to make the cut in complex contexts, but what follows is an attempt to make sense of my personal reservations about what is currently happening in many schools.

Learning is situated

Learning happens in contexts that include other people, artefacts of various sorts and a history that will entail particular ways of being and working (how we do things here), as well as some sort of agenda for the activity taking place (what this learning is really about). Thus, activity, culture and context all have an impact on what is and can be learnt (Brown, Collins, & Duguid, 1989), and unless these aspects are taken into account, the validity of assessments will be compromised (Moss, Girard, & Haniford, 2006). As well as being situated in place, learning is situated in time. This requires assessment to take account of “continued competency growth” (Reid, 2006, p. 54), not just knowledge acquisition at a point in time. Taking account of context, culture, activity and changes over time is likely to require substantive change to the types and forms of assessment that are typically selected in schools.

For example, in the practice described at School A, there seems to be an assumption that competency resides in individuals separate from the contexts in which they demonstrate it (Delandshere & Petrosky, 1998; Hipkins, 2007). It is, perhaps, seen as more akin to a personality trait—a bit like being “bright”. However, school itself is a context, with its own ways of being and doing (Brown et al., 1989). Students whose social experiences outside school are more closely aligned with those inside school are more likely to prosper there (Carr, 2008a) because they are more likely to already know what sort of learning behaviours and ways of being their teachers expect them to demonstrate. This validity issue is exacerbated when competency is unilaterally determined and judged by individual teachers, not necessarily on the grounds of any specific and transparent evidence (Moss et al., 2006). If the assessment practice described is carried out in a secondary school, students are likely to have conflicting judgements about their competency made by different teachers. School B does rather better in dealing with these challenges because the teachers are making concerted efforts to demonstrate the aspects of competency they are looking for, to acknowledge when they see these in action and to involve students in discussions about their developing competencies.

The development processes followed in both schools could plausibly rest on a more traditional skills-based reading of key competencies and hence leave the teaching and learning of “content” untouched. In that case, the key competencies become a carefully described add-on, essentially enlisted to co-opt students into willing compliance with the status quo. This could lead to improved learning for some students, but the restricted role misses the transformative potential of the key competencies to create richer links between school learning and the lives and concerns of all the students and their communities. Also, all the key competencies will be in use in any specific context, and this holistic demonstration may not be easily disentangled into parts. When one key competency is directly in focus, all the others will be in the shadows. Separating them for assessment purposes will miss the dynamics of their interactions (Reid, 2006). School B is better placed to confront and explore these challenges than School A because the key competencies are already being developed in a learning context. The challenge for School B may be to broaden their explorations to contexts beyond the classroom, and to consider how separating the competencies and breaking each into several constituent parts may lead them to miss something important in the whole demonstration of learning. The sum could very well be greater than the parts, but it will take considerable exploration for the full scope of subject-specific competencies in action to become apparent.

Competency develops over time as learners experience a range of different learning tasks in the same or new contexts, requiring them to draw on what they know and can do in somewhat different ways. This is the strengthening process referred to in the NZC definition cited above (see Carr, 2006 for a more detailed discussion of this point). Competency has dispositional elements because learners need to be ready, willing and able to use what they know and can do, but what these might look like is not always clear because the transfer of learning is a “fuzzy” concept, about which there is still much to learn (Carr, 2008a). School A, with its snapshot approach to reporting, appears untroubled by such uncertainties. Nevertheless the “sampling” discussion among some of the staff suggests a way to open up debates about the twin impacts of context and time on learning progress. School B has developed an informal way of tracking competency development over time, and the involvement of the students in the assessment of their growing competency and subsequent goal setting is likely to strengthen the validity of judgements made. This is a useful platform on which to build as they consider the further implications of locating learning and assessment in a sociocultural framework.

Learning is distributed

Learning can be seen as stretched over the various resources of the situation and hence entails multiple interactions between the learner and these other dimensions (Carr, 2006). Learners need to recognise the purposes to which resources can be put, when it is appropriate to deploy these and how to do so increasingly skilfully (Carr, 2004, 2006). Here making progress could be seen as relating to the number and complexity of the resources the learner co-opts and the possibilities they can see for using these, singly and in combination, to extend their repertoires of practice (Carr, 2006). Learning that is both situated and distributed is necessarily episodic, and progress happens “in pieces” rather than on a smooth and unified trajectory (Carr, 2008b). The assessment implications here are far reaching. For example, the teacher and students will each bring a personal set of experiences, assumptions, knowledge and motivations to any one assessment event, so that what each student demonstrates does not necessarily constitute unproblematic evidence of learning or the lack thereof (Cowie, 2009).

An obvious issue here is that students learn as they interact with each other and with the teacher. Learning experiences can be designed so that it becomes impossible to tell who contributed what to the outcome reached. (Sumara & Davis, 2006, for example, provide a rich account of this dynamic of emergence.) Assessment, especially when it is used for reporting purposes, traditionally focuses on individuals. When group dynamics are taken into account, this is typically done by according the same mark or grade to all members of a team, regardless of differences in the nature and qualities of their individual contributions. I want to ask a different question from the fairness objection typically raised in this situation: How may a universal group assessment apply to competencies that are simultaneously owned and enacted differently by each individual, yet developed in ways that are supported (or not) by interactions with others? Think, for example, of the group that achieves a stunning piece of work, but at the expense of sidelining the least competent from making a contribution. School B, in particular, may be ready to embrace this type of dilemma, but there is unlikely to be any one simple answer.

A related issue for teachers to consider is that the dynamics of the classroom environment itself can enable or constrain students’ ability to avail themselves of the resources on offer. For example, Zembylas (2007) discusses what he calls the “emotional ecologies” of any learning context. He lists a range of types of emotional knowledge that will be in play: individual (e.g., attitudes and beliefs about learning, emotional self-awareness); relational (e.g., caring, empathy, knowledge of others’ emotions); and sociopolitical (e.g., knowledge of power relations and of appropriate pedagogy in a specific context). These components of emotional knowledge occur concurrently, overlap and interact such that they are impossible to identify and separate in the moment. They are, Zembylas says, meaningless unless understood in a specific context, which includes a history of how they came into being. The case studies in his paper illustrate how transforming one’s own emotional knowledge is hard personal work, with a strong metacognitive component. Assessment processes that use generic rubrics to report on a narrow range of behaviours observed at one point in time are unlikely to even notice what he calls these “emotional ecologies” in action, yet emotions are likely to be key to the “learning to learn” outcomes signalled by the vision and principles of NZC (Ministry of Education, 2007, pp. 8–9).

Learning is mediated

In any learning situation the various resources and cultural tools available, and the routines and social practices present, may or may not be deployed in ways that “assist the learner to get on with the job” (Carr, 2008a, p. 42). This draws attention to the provision of adequate and appropriately supported “opportunities to learn” (Hipkins, 2006), which also have implications for the validity of assessments (Moss et al., 2006). Where a school wishes to carry out assessment as a means of demonstrating its early progress towards the implementation of NZC, attention may be more appropriately focused on the quality of teaching, including the establishment of supportive learning environments and the use of contexts with meaningful links to real-life concerns, rather than hastening to judge the competencies of individual students. The teachers in School B could compare their teaching experiences to build collectively on their growing awareness of the approaches and strategies that best support students to demonstrate their personal competency. By contrast, the teachers in School A are unlikely to see the benefit of this type of collective professional learning as long as the focus remains on the description of generic sets of student behaviours.

Traditionally, the manner in which activity, culture and context mediate learning in school contexts has been largely ignored. Learning is tacitly assumed to happen in more or less the same way for all students if the teacher is doing a good job. The above discussion suggests that a focus on developing competencies will require teachers to develop a more nuanced view of learning dynamics. For example, when learners are required to act on their learning, those who feel competent are more likely to be able to show that they are competent. Since everyone is more comfortable in some contexts than others, and what feels risky is a very personal assessment, any one context may support some students to demonstrate their competency while inhibiting others. The generic rubrics developed in School A are unlikely to lead staff to confront this dilemma. By involving students in conversations and self-reflection about their growing competency, the teachers in School B could well develop a more acute understanding of what is personally challenging for each learner, and indeed for themselves as learners about learning.

From a situated perspective, the tools and materials available to hand can enable or constrain both thinking and overt action. Brown et al. (1989) use the familiar example of trying to figure out how to use a new gadget—instructions in one hand, gadget in the other. Drawing on relevant experience could also help, as could someone showing you what to do. It is the combination of resources to hand that will mediate for or against ultimate success. As Brown et al. put it, the environment takes some of the cognitive load when complex problem solving is required in real-life contexts. So long as rubrics remain focused on the teacher assessing the learner in isolation from potential mediating influences, the best we could hope for is that these dynamics may at least come into the peripheral vision of the assessor as the judgement is being made.

Learning is participatory

A learner who is willingly getting on with the job is doing things that have meaning for them. If they are to become stronger, more resilient learners they need to put in the effort to master demanding new skills and ideas—in other words, to stretch their “learning muscles” (Claxton, 2008). Learning can also be seen as identity construction, where assessment processes may help or hinder the ways in which learners experience themselves as competent and are recognised as competent by others (Cowie, this edition; Cowie & Carr, 2004). Authenticity is often emphasised here, with learning seen as emerging from contexts that have compelling meaning for students and their communities, and that have the potential to open up into bigger picture frames and concepts (Bolstad, Roberts, Boyd, & Hipkins, 2009).

One challenge posed by this quality of emergence is that rubrics tend to be anticipatory rather than retrospective. When they are constructed before the learning to be assessed takes place, they cannot take account of the situated, contextualised, participatory and emergent nature of the learning. The relationships and connections that emerge and evolve over the course of the learning experience are unlikely to be an assessment focus, and without this attention to context and action, instances when powerful new connections open up for individuals or groups could simply fade away unnoticed.

Another challenge is that participation in a real-life context may require different actions of different individuals contributing to a collective whole. To illustrate, Roth and Desautels (2004) argue that those who come to support the speakers at a citizens’ meeting, to consider the views raised and indicate where their support lies, are participating just as authentically as those who speak. With no audience there is no meeting. There is a dilemma here for schools. I suspect that the more active role is likely to be judged as evidence of superior engagement and/or competency. In that case, any attempt to construct a rubric to assess this type of collective participation would likely result in a virtuous hierarchy, with leaders judged more competent than other participants. To redress this injustice, the level of personal stretch for each and every individual would need to be taken on its own merits, and would likely become apparent during a retrospective conversation at the immediate conclusion of the learning episode in question (or even during it). Anticipatory rubrics are not necessarily completely out of place here, but they could only ever be used to set goals and focus learning conversations.

Where to from here?

For some time now I have felt concern when shown—usually with considerable pride—examples of rubrics schools have constructed to explore and bestow meaning on key competencies. How best to respond has perplexed me, because the exercise has clearly been educative for the teachers involved and has sometimes led to immediate classroom benefits for some students. The rubrics have a plausible appeal within a familiar skills-based pathway for taking up key competencies. However, framed within this well trodden pathway, the key competencies are likely to emerge as being limited in scope and difficult to reconcile with either the transformative messages of the other front parts of NZC (vision, values, principles) or with the revised and updated learning areas at the back of the national curriculum document. Findings from the Curriculum Implementation Studies (Hipkins, Cowie, Boyd, & McGee, 2009) point to a danger that these schools and teachers could reach a stuck point, beyond which the potential of the key competencies will not open up to them, no matter how conscientiously they attempt to assess and report on them. Some teachers may well conclude that the whole exercise has been much ado about not very much, and the current opportunity to move curriculum thinking forward will have been lost.

An obvious starting point to address this dilemma is to ensure that teachers are supported to explore the full potential of the key competencies, including drawing their attention to the alternative reading of their curriculum intent, as briefly outlined in the introduction. If this course of action is to appear plausible and practical to the majority of teachers, a range of rich examples that illustrate the difference that key competencies could make to the learning of traditional content and skills will be needed. Innovative teachers who are already conducting such explorations could be brought together to share their learning and shape materials to help their colleagues move forward (Begg, 2008). It is hard to see how accounts of competency described by generic rubrics could support such rich conversations. Nevertheless, I have attempted to show how experimentation with rubrics, particularly when they are used in classrooms and in conversation with learners, could point reflective teachers towards a deeper understanding of the dynamics of learning for strengthening competencies.

Involving students in assessing their own learning progress gives them practice at developing their sense of what counts as good work, thus strengthening their learning to learn abilities while allowing for their personal learning goals to evolve over time (Cowie, this edition; Cowie & Carr, 2004). Cowie and Carr see the ability to realistically self-assess as a marker of growing competency, and say that even very young children have demonstrated their ability to do this if what they are attempting has meaning and purpose for them. Some teachers are already beginning to construct assessment backwards. At the end of a unit they and the students co-construct a documented record of the manner in which the highlighted key competency has played out during the course of the learning, and they review the types of learning outcomes that have been achieved. The records of learning they create are situated, emergent, distributed and mediated. While no two contexts are exactly alike, the shape and flavour of these records (whether demonstrations, videos, narratives, portfolios, blogs, learning walls, real-world actions or some other type of reporting) could hold valuable signposts that, if shared more widely, would move assessment thinking forward and help us achieve the transformative changes intended by the developers of the key competencies.

References

Begg, A. (2008). Curriculum options: Being, knowing and exploring. Curriculum Matters, 4, 1–6.

Bolstad, R., Roberts, J., Boyd, S., & Hipkins, R. (2009). Kick starts: Key competencies: Exploring the potential of participating and contributing. Wellington: NZCER Press.

Brown, J., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.

Carr, M. (2004). Key competencies/skills and attitudes: A theoretical framework. Unpublished background paper prepared for the Ministry of Education.

Carr, M. (2006). Dimensions of strength for key competencies. University of Waikato. Retrieved February 2008, from http://nzcurriculum.tki.org.nz/curriculum_project_archives/references

Carr, M. (2008a). Can assessment unlock and open the doors to resourcefulness and agency? In S. Swaffield (Ed.), Unlocking assessment: Understanding for reflection and application (pp. 36–54). London and New York: Routledge.

Carr, M. (2008b). Zooming in and zooming out: Challenges and choices in discussions about making progress. In J. Morton (Ed.), Making progress—measuring progress. Conference proceedings (pp. 3–18). Wellington: NZCER Press.

Claxton, G. (2008). What’s the point of school? Rediscovering the heart of education. Oxford: One World Publications.

Cowie, B. (2005). Student commentary on classroom assessment in science: A sociocultural interpretation. International Journal of Science Education, 27(4), 199–214.

Cowie, B. (2009). Reflections on the consequences of classroom assessment: Insights from a sociocultural perspective. Assessment Matters, 1, 49–65.

Cowie, B., & Carr, M. (2004). The consequences of socio-cultural assessment. In A. Anning, J. Cullen, & M. Fleer (Eds.), Early childhood education: Society and culture (pp. 95–106). London: Sage.

Delandshere, G., & Petrosky, A. (1998). Assessment of complex performances: Limitations of key measurement assumptions. Educational Researcher, 27(2), 14–24.

Hipkins, R. (2006). The nature of the key competencies: A background paper. Retrieved from http://nzcurriculum.tki.org.nz/references

Hipkins, R. (2007). Assessing key competencies: Why would we? How could we? Retrieved from http://nzcurriculum.tki.org.nz/implementation_packs_for_schools/assessing_key_competencies_why_would_we_how_could_we

Hipkins, R., Cowie, B., Boyd, S., & McGee, C. (2009). Themes from the curriculum implementation case studies: Working paper. Retrieved from http://www.tki.org.nz

Kincheloe, J. (2004). Introduction: The power of the bricolage. In J. Kincheloe & K. Berry (Eds.), Rigour and complexity in educational research: Conceptualizing the bricolage (pp. 1–22). Buckinghamshire: Open University Press.

Ministry of Education. (2007). The New Zealand curriculum. Wellington: Learning Media.

Moss, P., Girard, B., & Haniford, L. (2006). Validity in educational assessment. In J. Green & A. Luke (Eds.), Review of research in education 30: Rethinking learning: What counts as learning and what learning counts (pp. 109–162). Washington: American Educational Research Association.

OECD. (2005). The definition and selection of key competencies: Executive summary. Paris: Author. Retrieved from http://www.pisa.oecd.org/dataoecd/47/61/35070367.pdf

Reid, A. (2006). Key competencies: A new way forward or more of the same? Curriculum Matters, 2, 43–62.

Roth, W. M., & Desautels, J. (2004). Educating for citizenship: Reappraising the role of science education. Canadian Journal for Science, Mathematics and Technology Education, 4, 149–168.

Rutherford, J. (2005). Key competencies in the New Zealand curriculum development through consultation. Curriculum Matters, 1, 209–227.

Rychen, D. (2004). An overarching conceptual framework for assessing key competences in an international context: Lessons from an interdisciplinary and policy-oriented approach. Retrieved from http://www.cedefop.europa.eu/etv/Upload/Projects_Networks/ResearchLab/ResearchReport/BgR1_Rychen.pdf

Rychen, D., & Salganik, L. (Eds.). (2003). Key competencies for a successful life and a well-functioning society. Cambridge, MA: Hogrefe and Huber.

Sumara, D., & Davis, B. (2006). Correspondence, coherence and complexity: Theories of learning and their influences on processes of literary composition. English Teaching: Practice and Critique, 5(2), 34–55.

Zembylas, M. (2007). Emotional ecology: The intersection of emotional knowledge and pedagogical content knowledge in teaching. Teaching and Teacher Education, 23, 355–367.

Notes

1 Defining and Selecting Competencies.

2 This is a discussion I intend to develop elsewhere; space does not permit it here.

Acknowledgements

I am indebted to my NZCER colleagues in the key competencies team, especially Sally Boyd and Ally Bull. Their critique was helpful as I struggled to bring this article to coherent fruition.

The author

Rosemary Hipkins is a chief researcher at the New Zealand Council for Educational Research, with specific responsibilities in the area of research-practice links. She is interested in how the OECD key competencies (as translated into The New Zealand Curriculum) might help transform teaching and learning, and thereby change the focus of what is assessed. Rosemary has led and participated in a number of research projects related to the interpretation and translation of key competencies into practice.

Email: Rosemary.Hipkins@nzcer.org.nz