The challenges school administrators face in building assessment literacy

Louis Volante and Lorenzo Cherubini

Abstract

This study explored elementary and secondary school administrators’ perspectives on their attempts to build assessment literacy—an understanding of the principles and practices of sound assessment. Using a semistructured interview format, administrators were asked to share successes and challenges with various types of assessment. Transcripts revealed an imbalance between formative and summative assessment practices and a variety of attitudinal, structural and resource factors that constrain administrators’ ability to foster changes that align with recent assessment research. The implications of the findings are discussed in relation to instructional leadership, capacity building and educational reform.

Introduction

Given the current emphasis on student achievement, it seems imperative that both formative and summative forms of student assessment are equally promoted so that authentic improvements in student learning can be realised (Stiggins, 2008). To accomplish this goal, teachers must be knowledgeable about assessment and understand how to use a range of different methods effectively to promote student learning. Assessment methods include traditional paper-and-pencil measures (test, quiz, essay and so on) and more authentic performance-based activities (giving a speech, building a model, observation and questioning and so on). Unfortunately, teacher education both within universities and school districts has been inadequate and little attention has been directed towards addressing this apparent lack of knowledge in the North American context (Popham, 2009a; Volante, 2010). For example, the vast majority of faculties of education within Ontario—Canada’s largest province—do not require teacher candidates to complete a specific course in assessment and evaluation. The predominant mindset is that student teachers will develop the requisite assessment skills in curriculum courses (e.g., mathematics, science, English)—an approach that often leads to significant gaps in their understanding of formative and summative assessment (Klinger, 2009; Volante & Fazio, 2007).

Moreover, detailed research studies that examine assessment as a multifaceted construct and isolate various types of assessment expertise are relatively sparse—particularly as they relate to school administrators (i.e., principals and vice-principals). Since school administrators in North America are considered instructional leaders, their perspectives on assessment expertise, along with their own self-efficacy in this area, are critical for promoting school improvement (Noonan & Renihan, 2006). The main focus of this research study was to examine key challenges school administrators face in building assessment literacy—particularly in teachers’ understanding and use of various purposes of assessment—within their respective elementary and secondary schools.

Models of assessment and instruction

Recent reforms across the Western world have led to new conceptions and models of assessment and evaluation. For example, Black, Harrison, Lee, Marshall and Wiliam (2004) report on assessment for learning in the classroom. Through a series of projects, they have shown that secondary teachers can embed formative assessment in their teaching and still achieve good test scores. They emphasise questioning techniques, feedback without grades, peer assessment, self-assessment and the formative use of summative tests as instructional strategies. In essence, teachers create learning environments in which students and teachers are active assessors during classroom instruction.

In the United States, Stiggins (2004) calls for new ways to think about assessment since high-stakes tests without supportive environments harm struggling students. For him, the answer is a balance between standardised and classroom assessment in a synergistic system. At the same time, Popham (2009b) argues that all teachers need to be assessment literate and what teachers must know about large-scale testing is readily understandable. In Canada, Earl (2003) extended the work of Black et al. (2004) and Stiggins (2004) to advocate for synergy among assessment of learning (summative), assessment for learning (formative) and assessment as learning (the assessment is not graded but acts as a metacognitive learning tool). The latter is a subset of assessment for learning and occurs when students personally monitor what they are learning and use the feedback from this monitoring to make adjustments, adaptations and even major changes in what they understand.

Greater emphasis on assessment for learning often requires teachers to resolve tensions between what they value and what they are able to practise in a sustained manner (see James & Pedder, 2006). For school administrators, the task of moving teachers from policy change to implementation is especially complex; while teachers are striving to motivate students and are seeking the most effective means to help them learn, the use of assessment for learning requires a shift in the balance of power from teacher to student so that students can take control of their own assessment and learning. School administrators are critical to such change because they are commissioned to author provincially mandated School Effectiveness Frameworks that must account for both large-scale and classroom assessment data. Unfortunately, research on both sides of the Atlantic has suggested that such a shift is difficult given the existing cultures in classrooms (see Hayward & Spencer, 2010; Volante, 2010; Webb & Jones, 2009). Despite these challenges, there appears to be a growing recognition that the reform of schools and classroom assessment strategies are intimately connected, and that the ability to promote diverse formative assessment strategies is particularly important for school success (see Harlen, 2005; Wilson, 2008).

Context

Educational assessment in Ontario falls broadly into two categories: classroom assessment and large-scale assessment. Classroom assessment is primarily the responsibility of teachers. However, as the literature suggests, school principals and vice-principals in Ontario serve as instructional leaders and are the primary stakeholders responsible for student achievement and school success. In this context, school administrators often receive support from other resource personnel such as English as a Second Language and special education teachers.

The provincial large-scale assessment office is known as the Education Quality and Accountability Office. This office is responsible for the creation, administration and scoring of standardised criterion-referenced tests in Grades 3, 6, 9 and 10. The Grades 3 and 6 tests focus on reading, writing and mathematics. The Grades 9 and 10 tests focus on mathematics and literacy, respectively. It is worth noting that there is no formal requirement to use classroom assessment data (also referred to as curriculum-embedded assessment) for accountability purposes in Ontario—unlike some jurisdictions in select parts of the United States, United Kingdom and Australia (see Wilson, 2004).

Ontario mandates school board improvement plans that contain a strong emphasis on large-scale assessments as a gauge of educational quality in both elementary and secondary schools (Volante & Ben Jaafar, 2008). In their analysis of 62 Ontario school board improvement plans developed in 2003–4, van Barneveld, Stienstra and Stewart (2006) found that only 31 percent actually made reference to classroom data. Research suggests that this trend of favouring large-scale assessment data for driving school improvement planning continues (Volante, Cherubini, & Drake, 2008). This is despite the fact that the province recently introduced a School Effectiveness Framework that explicitly notes the importance of assessment for, as and of learning. Thus, the present study was conducted in a context that emphasises large-scale assessment over teachers’ classroom assessment for accountability purposes.

During the time of this study, the Ontario Ministry of Education developed a preliminary policy document in 2008 related to assessment, evaluation and reporting for elementary and secondary schools entitled Growing Success: Assessment, Evaluation, and Reporting: Improving Student Learning (Ontario Ministry of Education, 2008). This document led to a more comprehensive framework entitled Growing Success: Assessment, Evaluation, and Reporting in Ontario’s Schools: First Edition, Covering Grades 1–12 (Ontario Ministry of Education, 2010), which became official policy in September 2010. One of the key objectives of this broad framework is to help sharpen teachers’ professional judgement in the areas of classroom assessment and evaluation. As a result of this document, common goals for schools include improved student learning, the maintenance of high standards and the development of better means of communication among students, teachers, administrators and parents. Together, large-scale assessment, classroom assessment and the Growing Success policy document strive to build consistency of assessment and evaluation practices across schools within Ontario.

Conceptual framework

In this study, we used the conceptual framework described by Earl (2003). This framework guided the development of research instrumentation and data analysis within the study. An important aspect of this work is that it permits an examination of different assessment purposes—formative and summative. Administrators’ perspectives of formative assessment (i.e., assessment for and as learning), summative assessment (i.e., assessment of learning) and the challenges faced in building assessment literacy are all explored. Earl’s framework offers a recognised conceptualisation of assessment and guided discussions and interviews with the school administrators. As well, the framework provides one way of understanding opportunities and constraints that are relevant to policy makers and district staff interested in capacity-building initiatives.

Methodology

Participants

A purposeful sampling method was used to select the nine participants in the present study. This small sample formed the first phase of an ongoing, larger longitudinal study funded by the Canadian federal government. The rationale behind using a purposeful sample was primarily to examine the phenomena in both elementary and secondary schools, which have different organisational structures. For example, teachers in elementary schools in Ontario are typically organised in divisions (i.e., the primary division includes Grades K–3, the junior division includes Grades 4–6 and the intermediate division includes Grades 7–8) while secondary schools are organised in departments, each with its own subject focus (e.g., mathematics, science, social science, English). Two other criteria used to inform the selection process were aimed at securing a balance of new and experienced administrators, and of male and female participants. Overall, these criteria were used to enhance the robustness of the sample so that no one group was overrepresented.

In total, nine administrators were interviewed: three elementary principals; one elementary vice-principal; two secondary principals; and three secondary vice-principals. Teaching experience ranged between 8 and 26 years, with a mean of 15.8 years. Subject experience at the secondary level included English, mathematics, science, social science and guidance counselling. Subject experience at the elementary level included special education, English as a Second Language, French as a Second Language, French immersion and all of the primary and junior grades with the exception of junior kindergarten (4-year-olds), senior kindergarten (5-year-olds) and Grade 1 (6-year-olds). Administrative experience ranged between 1.5 and 8 years, with a mean of 5.6 years. The administrators had between 16 and 32 years of educational experience. Five of the administrators were male and four were female. Although a sample of nine participants is not large, it is in line with many other qualitative studies in the Canadian context. Nevertheless, caution should always be exercised when extrapolating findings from a small sample to a larger group.

Data collection

Each semistructured interview lasted approximately 60 minutes and was organised around a set of lead questions, with enough flexibility for participants to go beyond the questions as they chose. The interviews were conducted in a location of mutual convenience, audiotaped and transcribed. In each case, the interviewer strove to develop an environment of trust and support. Participants were assured of confidentiality and were asked to review their completed transcripts to correct any errors or omissions and to identify any responses they would like stricken from the data set. The interview protocol included six sections that asked a range of questions related to the following categories: education/teaching background; general assessment knowledge; assessment for learning; assessment as learning; assessment of learning; and assessment literacy development. Sample questions are included in the appendix. Each of the questions was accompanied by a set of probes designed to elicit detailed responses. For example, when answering the first question on their teaching and administrative experience, participants were probed on issues related to subject- and grade-level experience, current student population and professional development experience in assessment and evaluation.

Data analysis

Analysis of the interview transcripts followed a constant comparison method (Creswell, 2008). Codes were assigned to each line directly in the margins of the transcripts. Entries with codes having similar meanings were merged into a new category. This process was repeated for each of the remaining transcripts. Codes from the first transcript were carried over to the second transcript, and so on. This allowed the researchers to note trends across participants. Validity of the research findings was determined through triangulation of the data by both researchers, participant review of the transcripts and the inclusion of discrepant information through multiple reviews of the coded data (Creswell, 2008).
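To make the coding procedure concrete, the following minimal sketch illustrates the bookkeeping behind constant comparison coding. It is an illustration only, not the authors' actual analysis, which was performed by hand in the transcript margins; the transcript labels, line codes and category names below are hypothetical.

```python
# Illustrative sketch of constant comparison bookkeeping (hypothetical data).
from collections import defaultdict

# Line-level codes assigned to each transcript, carried from one
# transcript to the next as the analysis proceeds.
transcripts = {
    "secondary_principal_1": ["lack of PD", "relies on department heads", "no time to meet"],
    "elementary_principal_1": ["minimal training", "defers to teachers", "no time to meet"],
}

# Codes with similar meanings are merged into a shared category,
# refined as each new transcript is compared against earlier ones.
category_of = {
    "lack of PD": "limited assessment training",
    "minimal training": "limited assessment training",
    "relies on department heads": "teacher-led expertise",
    "defers to teachers": "teacher-led expertise",
    "no time to meet": "structural constraints",
}

# Accumulate categories across transcripts to note trends across participants.
categories = defaultdict(set)
for participant, codes in transcripts.items():
    for code in codes:
        categories[category_of[code]].add(participant)

for category, participants in sorted(categories.items()):
    print(f"{category}: noted by {len(participants)} of {len(transcripts)} participants")
```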

Results and discussion

The results and discussion section is organised around two interrelated thematic strands: (1) successes and challenges with assessment; and (2) addressing assessment challenges within schools.

Successes and challenges with assessment

As previously noted in the methodology section, administrators were asked to comment on the challenges they face in building assessment literacy in themselves as well as in supporting the development of teachers. The former issue is particularly important since it is difficult to expect administrators to assume the role of instructional leaders when they report significant gaps in their own understanding of assessment and evaluation. Indeed, administrators reported low self-efficacy in assessment. Generally speaking, their self-reported assessment literacy ratings on a 10-point scale ranged from poor (i.e., 3) to slightly above average (i.e., 8). Nevertheless, these numerical ratings mask a shared lack of confidence in providing instructional leadership in assessment and evaluation: virtually all of the administrators interviewed acknowledged they had, at best, a basic understanding of sound assessment practices.

Perhaps administrators provided inflated numerical ratings to “save face” and present themselves as competent instructional leaders. Consider the following responses:

I mentioned it at the beginning [my] misunderstanding of assessment pieces and evaluation, what pieces do you turn into evaluative pieces ... I get a chance to talk to all department heads and they enlighten me quite a bit on what they do in their departments, and that for me is learning, that for me is professional development on assessment and evaluation. (Secondary principal)

I would rate myself at a 7 because information on assessment is constantly evolving and methods of assessment are being introduced—Running Records, Primary Benchmarks, quick assessments, Developmental Reading Assessment (DRA), Comprehension Attitude Strategies Interests Assessment (CASI) ... I try to keep up as best I can and count on the expertise of the classroom teachers who receive in-depth instruction and in-servicing on each one. (Elementary principal)

The responses above also suggest that it is classroom teachers, not the administrators, who are the instructional leaders in assessment within schools. This suggests that the dissemination of assessment expertise is likely bi-directional: top-down from administrator to teacher and bottom-up from teacher to administrator. Such a model is aligned with more egalitarian paradigms such as distributed leadership, which recognise the central importance of classroom teachers (Lashway, 2003; Timperley, 2005b).

Further probing revealed that a lack of training and professional development in assessment and evaluation was the primary reason for administrators’ relatively low self-efficacy. Consider the following responses:

Professional development, experience and assessment. I want to say ‘Not much’ … I’ve had more discussions about assessment, been around discussions on assessment, but not had PD development for assessment. I know that I’m not anything close to an expert. I guess I have an understanding of the issues, but I don’t think I have any answers or any clear view of where it should go. (Secondary vice-principal)

There’s not necessarily a program as part of the principalship or anything like that that’s directed specifically at assessment. We’re picking up pieces as we go along with a variety of in-services here and there in addition to all the other responsibilities. (Elementary principal)

These responses, along with the justification for specific ratings, suggest that administrators require more formalised training and ongoing professional development opportunities in assessment and evaluation.

In the absence of assessment and evaluation training for school administrators, instructional leadership may be absent or naturally default to the teacher in charge (teachers in charge assume the role of principal when an administrator is absent), which presumably becomes problematic in schools with high staff turnover or relatively new or novice teachers. Consider one response from a secondary principal who candidly acknowledged the limited experience within his school:

If I post a job at this school people from the other schools do not apply … our student body and the background of those students in terms of their social and economic wellbeing and so on is not attractive. As a result, the perception of the school is not attractive … every single position has been filled with someone fresh out of teachers’ college. What experience do they have with curriculum? Minimal. Are they all coming from the same teachers’ colleges? No. Are they all being taught the same assessment and evaluation methodologies? No. So, factoring all those things, it mitigates against good design principles. (Secondary principal)

The respondent above also suggested that “needy” schools with at-risk student populations may be more susceptible to poor assessment practices. The latter is congruent with broader literature that links socioeconomic status with instructional quality (Darling-Hammond, 2001; Hargreaves & Fink, 2006; Lee, 2004).

In terms of supporting teachers, only a small minority of administrators could point to successes in this area, and most of the supports they described focused on structural changes such as a reduced exam schedule, common planning time for teachers or consistent evaluation weighting across courses. The following is a sample of the responses that speak to the issue of successes and challenges with assessment:

From an administrative standpoint we’re interested in having consistency from course to course … the way we’re approaching assessments is the same in every classroom, so you don’t have discrepancies, you don’t run into those problems when parents and kids say: ‘Hey, here’s what we’re doing in one class, here’s what we’re doing in another, I’m still doing a 35 percent or 40 percent essay’ … from the administrative side that’s probably kind of what we’ve been looking at. (Secondary vice-principal)

Interestingly, one of the explicit aims of the recently released Growing Success policy document (Ontario Ministry of Education, 2010) is to build consistency in assessment practices across the province. Thus, the present responses lend support to the potential utility of this type of resource.

Only a couple of administrators could offer a success that suggested a substantive change in assessment practices—which is not surprising given their own reported difficulties in this area. For example, one administrator indicated:

In our school, we emphasize it in our divisional meetings [primary division: Grades 1–3; junior division: Grades 4–6; intermediate division: Grades 7–8], we go straight to the learning skills. I think it’s actually starting to sink in to the parents, because we’re seeing more and more parents who are coming in to talk to us and they’ll say ‘You know what, I don’t really care about the grade anymore, I just want them to gain the learning skills and that’s what’s important to me.’ So it’s starting to trickle out there that those are the things we should be focusing on and looking at the whole child. (Elementary vice-principal)

This response suggests the beginning stages of a paradigm shift towards placing greater emphasis on assessment for learning over the traditional focus on assessment of learning.

There was no shortage of assessment challenges reported by administrators within their respective schools. Significant staff turnover in needy schools coupled with relatively inexperienced teachers, a lack of understanding of how assessment informs instruction, resistance from veteran teachers to self-assessment, initiative overload from the Ontario Ministry of Education and the relentless pace of curriculum revisions were just some of the many challenges frequently cited by administrators. Many administrators also reported that teachers were relying too heavily on particular forms of summative assessment—a finding that confirms teachers’ lack of attention to assessment for learning. Consider the following responses:

Some of the things that I hope that they understand … the different methods that they use for assessment and evaluation be as broad-based as possible … we can’t just test, test, test. Now some subjects are more inclined to that than others, but you cannot depend just on tests and exams. (Secondary principal)

We are trying to get them thinking away from the assessment of learning and more that constant assessing throughout the curriculum … it’s getting them to realize that there’s a variety of different assessment strategies, and opening their minds to that. (Elementary vice-principal)

These challenges were echoed by virtually every administrator, particularly as they responded to more specific questions on formative and summative assessment.

Formative assessment

Administrators reported their teachers were not consistently using various formative assessment practices that align with improvements in student learning and achievement. Consider responses to probes related to five key formative practices: questioning techniques; feedback without grades; peer assessment; self-assessment; and the formative use of summative tests (see Black et al., 2004; Black & Wiliam, 1998):

Questioning techniques, feedback without grades is something that we’re working on. I know teachers have tried to implement peer assessment, but there’s still reluctance there and the same with the self-assessment. I think there are still people that have a hard time letting go. Kind of giving that ownership to the kids ... and again, that’s something that to a certain extent, it’s in the approach. Right? If you say to the kids they have to do a self-assessment and you don’t put much value in it then they’re not going to either. (Secondary vice-principal)

I don’t think self-assessment is anywhere … I don’t think it’s happening. I was shocked at the percentage of students who didn’t have an understanding of self-assessment … they think it’s going to go towards their evaluation, final mark. (Secondary vice-principal)

Getting into the habit of giving them [students] the feedback, and making the corrections so they’re not hung up on what did you get … not grade-driven … having teachers not give levels for individual assignments … they don’t have to mark everything. Once the teachers move away from that, perhaps the kids can too. (Elementary principal)

These responses align with previous research that suggests greater use of assessment for learning often creates tensions around power, values and the sustainability of change (Hayward & Spencer, 2010; James & Pedder, 2006; Volante, 2010; Webb & Jones, 2009).

The previous responses also suggested that administrators had difficulty addressing implementation issues with particular formative assessment practices—a finding that is also consistent with their perceived low self-efficacy. In particular, student self-assessment that is aimed at promoting metacognition was virtually nonexistent within their schools. The latter finding suggests that assessment as learning may require particular attention and support within Ontario schools. Further study with a broader sample could help address the degree of attention required.

Summative assessment

Administrators’ comments suggested that they experienced much more success with summative assessment, particularly the use of provincial assessment scores for school improvement purposes. This finding is not surprising since the use of large-scale assessment data for school improvement planning is mandated within the province of Ontario by the provincial assessment body—the Education Quality and Accountability Office. Nevertheless, there seemed to be a distinct difference between elementary and secondary administrators’ use of large-scale assessment: elementary administrators demonstrated a growing sophistication in using external test scores, sometimes in connection with other forms of summative assessment, to spur changes to pedagogy and curriculum within their schools. Consider the following responses:

We use our assessment data to see which kids are at risk … for example, in the genuine student profile they look at the accommodations checklist or language student reading survey or running records, recording checklist, rubrics, comprehension questions ... and then they store the data on the student profile, almost like the data wall, but it’s on the individual basis … it’s [the data wall] used for communication for divisional meetings on how to move the kids forward, the ones that aren’t on level 3 [provincial standard]. (Elementary principal)

When we got the EQAO [Education Quality and Accountability Office] results, we did the item analysis. We identified the areas we were weak in. Had the staff meeting … we found out that writing was one of our areas of concern. Then we went back to the writing traits, sat down as a grade and discussed the strategies that we were going to put in place to improve that. (Elementary vice-principal)

A number of elementary administrators mentioned the term data walls to characterise how they were making use of a variety of summative assessment measures. In Ontario, data walls are summaries of individual students’ assessment results that are used by administrators and school improvement teams to facilitate data-driven decision making. Research overwhelmingly supports this relationship between prudent data use and school improvement (Sutherland, 2004; Timperley, 2005a; Wohlstetter, Datnow, & Park, 2008).
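As a rough sketch of the kind of record a data wall summarises (the student entries and subject names below are invented; “level 3” is the provincial standard mentioned in the quotation above), one might flag students for discussion at divisional meetings as follows:

```python
# Hypothetical sketch of a data wall as a per-student summary of
# assessment results; all entries are invented for illustration.

PROVINCIAL_STANDARD = 3  # "level 3" on Ontario's four-level achievement scale

students = [
    {"name": "Student A", "reading": 2, "writing": 3},
    {"name": "Student B", "reading": 3, "writing": 4},
    {"name": "Student C", "reading": 1, "writing": 2},
]

# Identify students below the provincial standard in any area: the group
# that divisional meetings would discuss moving forward.
flagged = [
    s["name"]
    for s in students
    if any(level < PROVINCIAL_STANDARD for key, level in s.items() if key != "name")
]
print("Below provincial standard:", flagged)
```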

At the secondary level, a minority of administrators suggested they had been somewhat successful in getting teachers to make increased use of culminating assignments rather than traditional tests and/or exams in select subject areas (visual arts; design and technology; and applied-level classes, which are geared towards non-university-stream students). Consider the following responses:

I think I’ve been successful in promoting the idea that evaluation doesn’t necessarily need an exam especially for students. I mean if it is about application let’s do culminating activities that work. Let’s do practical-type examinations rather than the paper and pencil stuff. (Secondary principal)

We’re very successful when it comes to using the culminating activities near the end of the semester. Each department has a different approach and we’ve had a lot of success with those particular activities … The nice thing about that culminating assessment is that it allows the students to use the resources that they’ve accumulated over the semester. (Secondary principal)

These comments suggest that classroom-based summative assessment techniques are beginning to reflect a greater emphasis on authentic forms of student assessment, which is widely supported in the research literature (Brookhart & Durkin, 2003; Gronlund, 2003; Van Duinen, 2006).

Despite these previously noted improvements, many secondary administrators also openly acknowledged that the subject/departmental focus of teachers makes it difficult to promote a shared vision of effective assessment practices within their schools. Consider the following responses:

The reality on the ground is that each department is sort of responsible for its own policies in that regard [assessment and evaluation] and heavily dependent on the subject area type … there is pressure from the board through the curriculum consultants to broaden assessment [strategies] as much as possible away from a reliance on quizzes and tests and paper and pencil things to the use of portfolios, presentations and group work, self-evaluation, peer evaluation … they [teachers] tend to be more driven by subject than an overarching vision of what assessment and evaluation should look like generally. (Secondary principal)

The general challenge in a high school environment … you’re dealing with teachers that really like to take ownership of their own subject area and tend to kind of hang on to it, they’re a little bit resistant to change. Especially when it’s something they think is really important. (Secondary vice-principal)

Overall, administrators in the present study seemed at a loss as to how to develop consistent assessment practices while still respecting the departmental affiliations of their staff. This challenge is also consistent with previous research in the Canadian context (see Duncan & Noonan, 2007) and the broader literature, which has likened secondary school departments to silos (Siskin, 1990). Thus, building assessment literacy in Ontario’s secondary schools may be a more challenging task given their organisational structure.

Addressing assessment challenges

The most prominent suggestion for building assessment literacy and addressing the challenges currently facing their schools related to the creation and maintenance of learning teams or communities where teachers have time and space available to discuss assessment—both classroom-based and large-scale. Virtually every elementary and secondary administrator noted the importance of this approach and the logistical difficulties of bringing it to fruition. Consider the following responses:

Well we’ve just started doing within our family of schools, professional learning communities around assessment; looking at different practices in all of the schools … of course this has been helpful. (Elementary principal)

We have reflective practice and dialogue with staff around alternate forms of assessment—moving away from the teach/test model. From the dialogue comes shared best practice. (Elementary principal)

These responses align with the general literature that provides a strong justification for learning teams or communities as a way for teachers to regularly share ongoing assessment-related successes and challenges (Birenbaum, Kimron, Shilton, & Shahaf-Barzilay, 2009; Griffin, Murray, Care, Thomas, & Perri, 2010; Stiggins, 2008). Given the current findings, it would also seem imperative that great care be taken when forming these groups so that there is a healthy mixture of new and experienced teachers, as well as staff from different elementary divisions, secondary departments and ethnically diverse schools.

Despite the clear direction for promoting enhanced assessment practices within schools, administrators noted a litany of constraints affecting their ability to accomplish this goal. The most prominent concern centred on insufficient time—both for administrators to spur changes in assessment and for teachers to work on assessment-related development. Consider the following responses:

You hear about or you read about these administrators that turn schools around and I’m thinking ‘How do they find the time, what do they do, what was it about them that they were able to find the time? What structures have they put in place to be able to make change and what did they do?’ … You can do this [interview] again in two more years and find out that I’m not farther ahead because nobody has any time. (Secondary vice-principal)

It seems to me that the consistent theme that comes back from teachers is ‘We don’t have time to sit and to chat.’ So for us as administrators, we try to provide that opportunity through meetings and such. (Elementary vice-principal)

The previous comments suggest that a number of administrative challenges related to structural barriers are impeding the refinement of assessment expertise within schools. Although school administrators in Ontario are generally considered instructional leaders, they may not possess the necessary time or knowledge—as previously suggested—to assume this role.

Administrators also noted a lack of resources, support and direction from board personnel and the Ontario Ministry of Education as significant constraints in their attempts to promote enhanced assessment practices. Consider the following responses:

The ministry officials have to put their money where their mouths are. They have to be willing to free people up, and to say that if this is so important, we are willing to release you twice a week to actually dialogue with your colleagues on what you’re doing with assessment and why your assessment piece is really good … these conversations have to occur. (Secondary vice-principal)

Teachers aren’t dialoguing enough with one another … We should be having our instructors of a Grade 11 business course or a Grade 11 law course, where there might only be one section in the school, discussing with colleagues at other schools, so we can compare notes on assessment … I don’t think the board is indicating interest or belief or support for these endeavors. Senior staff is saying it is a valuable exercise. But it’s up to the individual schools to facilitate this themselves … it’s very hard to facilitate. (Secondary vice-principal)

What would really be helpful is if the ministry came out with an assessment document. Just like we have a curriculum policy document, we should have an assessment policy document, which we don’t. It’s come out with bits and pieces. (Elementary principal)

Clearly, administrators are seeking coherent policies from the Ontario Ministry of Education to spur changes in assessment practices. As previously mentioned, a comprehensive policy framework was released after the completion of this study. Thus, the present findings underscore the importance of this resource.

Implications and next steps

These findings highlight a number of administrative challenges in promoting enhanced assessment practices within a small sample of Ontario’s schools. Of course, further study across a range of school districts would help bolster the validity of the results. Nevertheless, the findings shed light on some of the key challenges that administrators are likely to face in their efforts to build assessment literacy. These challenges can be characterised as attitudinal, structural and resource-based in nature. That is, assessment reform is generally impeded by belief systems of teachers that tend to favour traditional summative assessment methods or those practices that are entrenched within various secondary school departments. Similarly, structural supports as well as resources—both cognitive and financial—are limited within schools. This relationship is evidenced by some of the key findings, such as the lack of opportunities for secondary teachers to meet, limited teacher competence in using multiple assessment methods, scarce resources for engaging in learning teams or communities, and the low self-efficacy around assessment and evaluation reported by administrators. Taken together, the findings suggest targeted interventions are required at the policy, district and administrative training levels to foster improvements at the classroom level.

The present findings are particularly timely, providing support for the recent dissemination of a provincial assessment policy framework. Although the Ontario Ministry of Education previously provided some suggestions for assessment and evaluation, the process was often cumbersome for both teachers and administrators since a variety of curriculum documents needed to be consulted. The investment in this one stand-alone document, which is currently being supported through various inservice professional development workshops across the province, may provide a significant return in efforts to build vertical (from ministry to district and district to school) and lateral (from district to district and school to school) capacity around consistent practices. As suggested, in the absence of such direction, teachers fall back on assessment practices that are comfortable—but not necessarily aligned with improved student learning and performance. Nevertheless, this new policy framework alone will be insufficient, particularly at the secondary level, if structural barriers are not adequately addressed.

The ministry, districts and individual schools should also consider how resources could be targeted towards the development of well-organised and well-guided learning teams. In many cases, much of the infrastructure is already in place to facilitate assessment literacy development. For example, the New Teacher Induction Program (NTIP), the School Effectiveness Framework, literacy coaches within schools and other provincial initiatives could be modified to highlight the salience of various formative and summative assessment practices, with an emphasis on the former. Such modifications may improve the assessment knowledge of new and experienced teachers as well as administrators. Perhaps a consolidation of particular ministry initiatives would reduce the feeling of being overwhelmed and free some important time to discuss the effectiveness of assessment practices within and across schools.

Ontario must also do a better job of preparing school administrators if they are to assume the role of instructional leaders within schools. The tensions reported in relation to subject/departmental focus underscore the need for dynamic leaders who are able to enact positive changes. In a similar vein, school administrators in Ontario must all work as teachers for a period of 5 years before they can assume the role of principal. Thus, Ontario faculties of education need to ensure that their teacher candidates and future administrators are well versed in assessment of, for and as learning. Unfortunately, only two out of 18 teacher education programmes in Ontario offer a separate course in classroom assessment. As previously noted, most programmes embed assessment into “teachable” subject areas such as mathematics, science or English. The limitation of this design is that not all faculty members have expertise in assessment and evaluation and, therefore, the assessment content is not always infused effectively. Collectively, the present findings remind us that a concerted effort by the ministry, universities, districts and school administrators is required to build and sustain assessment literacy within schools.

Acknowledgement

The research reported in this article is funded by the Social Sciences and Humanities Research Council of Canada (SSHRC).

References

Birenbaum, M., Kimron, H., Shilton, H., & Shahaf-Barzilay, R. (2009). Cycles of inquiry: Formative assessment in service of learning in classrooms and in school-based professional communities. Studies in Educational Evaluation, 35(4), 130–149. doi:10.1016/j.stueduc.2010.01.001

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2004). Working inside the black box: Assessment for learning in the classroom. Phi Delta Kappan, 86(1), 9–21.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.

Brookhart, S. M., & Durkin, D. T. (2003). Classroom assessment, student motivation, and achievement in high school social studies classes. Applied Measurement in Education, 16(1), 27–54.

Creswell, J. W. (2008). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (3rd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.

Darling-Hammond, L. (2001). Apartheid in American education: How opportunity is rationed to children of color in the United States. In T. Johnson, J. E. Boyden, & W. J. Pittz (Eds.), Racial profiling and punishment in US public schools: How zero tolerance policies and high stakes testing subvert academic excellence and racial equity (pp. 39–44). Oakland, CA: Applied Research Center.

Duncan, C. R., & Noonan, B. (2007). Factors affecting teachers’ grading and assessment practices. Alberta Journal of Educational Research, 53(1), 1–21.

Earl, L. (2003). Assessment as learning. Thousand Oaks, CA: Corwin.

Griffin, P., Murray, L., Care, E., Thomas, A., & Perri, P. (2010). Developmental assessment: Lifting literacy through professional learning teams. Assessment in Education: Principles, Policy & Practice, 17(4), 383–397.

Gronlund, N. E. (2003). Assessment of student achievement (7th ed.). Boston: Allyn & Bacon.

Hargreaves, A., & Fink, D. (2006). The ripple effect. Educational Leadership, 63(8), 16–21.

Harlen, W. (2005). Teachers’ summative practices and assessment for learning—Tensions and synergies. The Curriculum Journal, 16(2), 207–223. doi:10.1080/09585170500136093

Hayward, L., & Spencer, E. (2010). The complexities of change: Formative assessment in Scotland. Curriculum Journal, 21(2), 161–177.

James, M., & Pedder, D. (2006). Beyond method: Assessment and learning practices and values. Curriculum Journal, 17(2), 109–138.

Klinger, D. (2009, April). Developing a curriculum for assessment education. Paper presented at the American Educational Research Association conference, San Diego, CA.

Lashway, L. (2003). Distributed leadership. Research Roundup, 19(4), 1–7.

Lee, J. (2004). Multiple facets of inequity in racial and ethnic achievement gaps. Peabody Journal of Education, 79(2), 51–73.

Noonan, B., & Renihan, P. (2006). Demystifying assessment leadership. Canadian Journal of Educational Administration and Policy, 56. Retrieved from http://www.umanitoba.ca/publications/cjeap/articles/noonan.html

Ontario Ministry of Education. (2008). Growing success: Assessment, evaluation, and reporting: Improving student learning. Toronto: Queen’s Printer for Ontario.

Ontario Ministry of Education. (2010). Growing success: Assessment, evaluation, and reporting in Ontario’s schools: First edition, covering Grades 1–12. Toronto: Queen’s Printer for Ontario. Retrieved from http://www.edu.gov.on.ca/eng/policyfunding/growSuccess.pdf

Popham, W. J. (2009a). Assessment literacy for teachers: Faddish or fundamental? Theory Into Practice, 48(1), 4–11.

Popham, W. J. (2009b). Instruction that measures up: Successful teaching in an age of accountability. Alexandria, VA: ASCD.

Siskin, L. S. (1990). Different worlds: The department as context for high school teachers. Washington, DC: Office of Educational Research and Improvement. (ERIC Document Reproduction Service No. ED338592)

Stiggins, R. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22–27.

Stiggins, R. J. (2008). Student-involved assessment for learning (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Sutherland, S. (2004). Creating a culture of data use for continuous improvement: A case study of an Edison Project School. American Journal of Evaluation, 25(2), 277–293.

Timperley, H. S. (2005a). Instructional leadership challenges: The case of using student achievement information for instructional improvement. Leadership and Policy in Schools, 4, 3–22.

Timperley, H. S. (2005b). Distributed leadership: Developing theory from practice. Journal of Curriculum Studies, 37(4), 395–420.

van Barneveld, C., Stienstra, W., & Stewart, S. (2006). School improvement plans in relation to the AIP model of educational accountability: A content analysis. Canadian Journal of Education, 29(3), 839–854.

Van Duinen, D. V. (2006). Authentic assessment: Praxis and power. International Journal of Learning, 12(6), 141–148.

Volante, L. (2010). Assessment of, for, and as learning within schools: Implications for transforming classroom practice. Action in Teacher Education, 31(4), 66–75.

Volante, L., & Ben Jaafar, S. (2008). Profiles of education assessment systems worldwide: Educational assessment in Canada. Assessment in Education: Principles, Policy & Practice, 15(2), 201–210.

Volante, L., Cherubini, L., & Drake, S. (2008). Examining factors that influence school administrators’ responses to large-scale assessment. Canadian Journal of Educational Administration and Policy, 84. Retrieved from http://www.umanitoba.ca/publications/cjeap/

Volante, L., & Fazio, X. (2007). Exploring teacher candidates’ assessment literacy: Implications for teacher education reform and professional development. Canadian Journal of Education, 30(3), 749–770.

Webb, M., & Jones, J. (2009). Exploring tensions in developing assessment for learning. Assessment in Education: Principles, Policy & Practice, 16(2), 165–184.

Wilson, M. (Ed.). (2004). Towards coherence between classroom assessment and accountability: 103rd yearbook of the National Society for the Study of Education, Part II. Chicago: University of Chicago Press.

Wilson, N. S. (2008). Teachers expanding pedagogical content knowledge: Learning about formative assessment together. Journal of In-Service Education, 34(3), 283–298. doi:10.1080/13674580802003540

Wohlstetter, P., Datnow, A., & Park, V. (2008). Creating a system for data-driven decision-making: Applying the principal-agent framework. School Effectiveness & School Improvement, 19(3), 239–259.

The authors

Louis Volante is Associate Professor at the Faculty of Education, Brock University, Ontario, Canada.

Email: louis.volante@brocku.ca

Lorenzo Cherubini is Associate Professor at the Faculty of Education, Brock University, Ontario, Canada.

Email: lorenzo.cherubini@brocku.ca

Appendix: Sample interview questions

Please provide a brief history of your teaching and administrative experience.

How are formative assessment methods/data utilised within your school?

Can you describe formative assessment practices and/or situations that have been particularly successful in your school? Problematic?

How do teachers in your school encourage students to use assessment information to monitor their learning?

How are summative assessment methods/data utilised within your school?

Can you describe summative assessment practices and/or situations that have been particularly successful in your school? Problematic?

What has been the biggest impediment to improving your staff’s assessment literacy?

Overall, how would you rate your level of assessment literacy on a scale from 1 to 10? Why?