Inductive assessment approach: An open-ended and exploratory method for assessing students’ thinking competence

Scott Lee

Abstract

Whether and how to assess key competencies is a subject of debate and constitutes a main challenge in the implementation of key competencies within the New Zealand curriculum. The use of rubrics in criteria-based assessment may not take into consideration the situated and emergent nature of learning because the assessment criteria are constructed before task performance occurs. An inductive assessment approach can be employed to discover what the learners know, understand, or can do, without focusing on predetermined aspects of students’ performance. This article aims to discuss, with examples, how the technique of thinking conversation can be used to elicit information on students’ competence and how these data can be analysed to discover emergent and unanticipated aspects of students’ performance for further structured assessment or teaching intervention. A complementary inductive and deductive approach, where interpretation of data is facilitated by comparison to theoretical models of task performance, is also discussed.

Introduction

The increasing complexity of, and international competition in, knowledge-intensive economies since the 1980s have led governments in OECD countries to undertake educational reforms that included a focus on developing key competencies such as thinking and problem-solving skills. In New Zealand, the key competencies framework was introduced into the national curriculum by the Ministry of Education in 2007 to develop capabilities that are considered desirable and necessary for living and lifelong learning (Ministry of Education, 2007). The key competencies in the New Zealand curriculum are thinking; using language, symbols, and texts; managing self; relating to others; and participating and contributing (Ministry of Education, 2007). Whether and how to assess key competencies is a subject of debate and constitutes a main challenge in the implementation of key competencies (Boyd & Watson, 2006a; Hipkins, 2006). The purpose of this article is to describe how an inductive assessment approach—defined here as the use of an open-ended and exploratory method to collect data through the technique of thinking conversation, together with inductive analysis of the data to generate insights into students’ performance—can be useful in discovering and assessing emergent and unanticipated aspects of students’ competence in their task performance. This article discusses the technique of thinking conversations used to elicit data on students’ thinking and the subsequent inductive analysis carried out to make sense of those data. The aim of the discussion is to demonstrate how the inductive assessment approach can provide information on students’ thinking competence for teaching and assessment purposes.

Key competencies are complex in nature, and how they are demonstrated could vary significantly according to context (Boyd & Watson, 2006b). Hipkins (2005) contends that the key competencies framework implies that the development of key competencies occurs in contexts which are challenging and meaningful to students, and which require them to actively engage in problem solving. She argues that assessment of key competencies should be carried out by performance assessment in “a real context” because of their holistic nature (p. 6).

Performance-based assessment tasks—especially those that promote complex problem solving and thinking in naturalistic contexts—can be messy, involve uncertainty and complexity, and present difficulties for interpretation and assessment (Schmidt & Plue, 2000). Teachers’ sense-making of assessment data has been identified as a key challenge in the implementation of assessment in the classroom (Even, 2005). In particular, the diversity and unpredictability of the processes and products of learning in learner-centred classrooms pose a challenge to classroom assessment (Evertson & Neal, 2006). The need for an interpretative tool has led to the use of rubrics, usually developed with the help of a particular taxonomy or framework, to evaluate students’ task performance in criteria-based performance assessments. Rubrics can be used as a guide or criteria to note a demonstration of a particular skill and may involve the assignment of a score, or rating the performance, according to a constructed range of proficiency in certain skills or sub-skills. In New Zealand, the use of rubrics is a common feature in the assessment of key competencies in the classroom (Hipkins, 2009). A parallel may be drawn with deductive analysis in qualitative research, where data are analysed according to an existing framework (Patton, 2002). From this perspective, criteria-based performance assessment may be considered as a deductive assessment approach. The goal of a deductive assessment approach is often to determine whether students know, understand, or can do a predetermined thing (Pryor & Crossouard, 2005).

Hipkins (2009) contends that the use of rubrics in criteria-based assessment may not take into consideration the situated and emergent nature of learning because the assessment criteria are constructed before task performance or class activity occurs. Rubrics can guide assessors to look for features of task performance that are similar to or different from the description contained in the assessment criteria and ignore the “patterns and connections that give it [the task performance] meaning and integrity” (Delandshere & Petrosky, 1998, p. 21). In recent years, researchers and educators have begun to question both the role of the teacher as the sole authority on effective practices of task performance and the common assumption that students’ nascent knowledge and thinking processes are unproductive and ineffective (Callahan, 2012; Empson & Jacobs, 2008; Maskiewicz & Winters, 2012). These researchers and educators argue for the need to:

1) attend to and understand students’ reasoning

2) recognise that the kinds of strategies and reasoning employed by children often differ from those used by adults or experts

3) appreciate that the content knowledge, process skills, and strategies that students bring to the classroom can be valid and valuable in terms of their contribution to learning and task performance

4) allow students to engage in tasks in the many different ways that make sense to them, instead of learning practices directed by the teacher

5) assess and respond to students in an adaptive and supporting manner grounded in a deeper understanding of children’s emergent knowledge and thinking competence.

An inductive assessment approach may be helpful to discover what the learners know, understand or can do—including aspects of their competence that may not or cannot be anticipated in advance. Such an approach parallels inductive analysis in qualitative research:

Inductive analysis involves discovering patterns, themes, and categories in one’s data. Findings emerge out of the data, through the analyst’s interactions with data, in contrast to deductive analysis where the data are analysed according to an existing framework. (Patton, 2002, p. 453)

The aim of this article is to discuss, with examples, how an inductive assessment approach can be used to evaluate students’ thinking competence in the classroom. The discussion draws on the findings of a study into how students’ thinking competence can be assessed through observations that incorporate informal conversations with students in the natural classroom setting.

Methods

The site of the research study reported in this article was a state-run decile 9¹ primary classroom in New Zealand comprising 28 children, aged 7 to 8 years old, and their teacher. The participants comprised 18 girls and 10 boys, including 24 of European origin, one Chinese, one Malay, two Indians, and one Māori. The school was exploring the key competencies set out in the New Zealand school curriculum. The classroom was chosen as a research site because the teacher had a keen interest in promoting students’ thinking skills and had previously carried out research in this area as part of her master’s degree. Her approach to promoting thinking skills in the classroom involved daily inquiry-based activities which typically lasted between 2 and 3 hours in the morning. Students worked either individually or co-operatively in pairs on topics assigned by the teacher. Problem solving was a key feature of these projects. For example, a project on healthy lunch menus entailed students identifying required resources, analysing information, creating different types of healthy lunch menus, and documenting their work. Students were expected to apply their reading, writing, mathematical, and visual and oral communication skills in these projects. The support provided by the teacher involved mainly the provision of materials and resources, access to the Internet, responses to students’ questions, and suggestions about areas to explore.

Consent was obtained from all the children, their primary caregivers, the teacher, and the school administration. The researcher conducted a total of 16 visits to the school over a period of 3 months to collect data. The visits were carried out either once or twice a week, with each visit lasting up to half a day. Data in this research study were gathered primarily through observations of students that incorporated informal conversations while the children were engaged in their activities, or conversations about the work that they had just completed, in the natural classroom setting. The observations and conversations, lasting between 10 and 30 minutes each, were videotaped for later transcription. Data gathering generated approximately 25 hours of video recording. The use of video recording was found to be essential to facilitating data analysis because it allowed for subsequent reviews of the data to generate insights that did not emerge at the time of the observations and conversations with the children.

Analysis of data was undertaken using an interpretive approach (Hatch, 2002). Initial descriptive coding (Miles & Huberman, 1994) of the transcripts was undertaken to discover the types of thinking skills exercised by the children, such as inferring, analysing, and interpreting. Close reading of the data was carried out to identify initial insights into the children’s thinking, which were then recorded in memos. The memos were studied for salient interpretations, and the data were reread and coded in places where the interpretations were supported or challenged. The objective of this second level of coding was to identify patterns or themes in the data (Miles & Huberman, 1994). Patterns identified included types of cognitive processes, the impact of process structure on task performance, and the complementary roles of domain knowledge and thinking skills in the children’s task performance. A draft summary of the interpretations was reviewed with the teacher so that the interpretations could be checked against the memos, reflective journals, and transcribed data.
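For readers who keep transcripts and codes electronically, the two levels of coding described above can be pictured as tagging transcript segments with descriptive codes and then grouping coded segments under broader patterns. The Python sketch below is purely illustrative: the study’s analysis was interpretive and manual, and the code labels and theme names shown are hypothetical examples rather than the study’s actual coding scheme.

```python
# Illustrative sketch only: organising first-level descriptive codes and
# second-level patterns for transcript segments. Labels are hypothetical.
from collections import defaultdict

# First-level descriptive coding: each segment is tagged with the thinking
# skills it appears to show.
segments = [
    {"speaker": "Dan",
     "text": "Bulls have these palettes behind their eyes which are red ...",
     "codes": ["theorising", "justifying a claim"]},
    {"speaker": "Robert",
     "text": "But if it is too expensive, they won't buy it.",
     "codes": ["reasoning in decision making"]},
]

# Second-level coding: broader patterns noted in memos, each linked to the
# descriptive codes that support it.
themes = {
    "evidence versus belief": ["theorising", "justifying a claim"],
    "reasoning in decision making": ["reasoning in decision making"],
}

# Gather the segments that provide evidence for each pattern.
evidence_by_theme = defaultdict(list)
for segment in segments:
    for theme, related_codes in themes.items():
        if any(code in related_codes for code in segment["codes"]):
            evidence_by_theme[theme].append(segment["text"])

for theme, quotes in evidence_by_theme.items():
    print(f"{theme}: {len(quotes)} supporting segment(s)")
```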

Inductive assessment approach

The inductive assessment approach presented in this article involves three steps:

1. eliciting information on students’ competence, including process skills and domain knowledge, using the technique of thinking conversations

2. facilitating analysis by representing information in some form, such as narrative or graphical representation

3. analysing information to discover aspects of students’ performance that may be useful for further evaluation or teaching intervention.

Thinking conversations, which constituted the primary method of data collection in the study reported in this article, are defined as informal conversations that focus on eliciting students’ thinking in which:

•	the adult questioner positions herself or himself as an inquirer and listens to understand, rather than to test or correct

•	questions are asked to gain information and not to assess against a predetermined set of criteria

•	the objective is to explore students’ thinking and identify aspects for subsequent analysis, evaluation (using more targeted assessment tools, for example), or intervention

•	the focus is on learning about students’ thinking processes through the conversation, rather than specific outcomes—including identifying emergent and unanticipated aspects of students’ thinking competence.

The goal of thinking conversations is to achieve an understanding of others’ thinking and draw out the reasoning, solutions, ideas, or goals that are inherent within the task performance. The key difference between a thinking conversation and other forms of interaction between the adult and the children is that questions are posed to elicit information about children’s thinking, rather than to find out what they know, to determine what they have learned, or to communicate information to them.

Lee (2012b) demonstrates that insights into emergent and unanticipated aspects of students’ thinking competence can be obtained through thinking conversations that involve listening to what the students have to share about their own reasoning. Thinking conversations engage learners in thinking as they talk about their work and the strategies they employed in task performance. Useful data can be gathered in the natural classroom setting through thinking conversations either after the event or while the students are engaged in their activities. The former may be considered as a kind of informal reflective dialogue and the latter a form of observation that incorporates informal interviews. Thinking conversations can be conducted with a small group of students or individually.

Two examples are discussed to illustrate how a classroom discussion and a class project provided the opportunity for the researcher to elicit insights into the students’ thinking competence. The first example involves a thinking conversation with three children that occurred after an observed event. During a whole-class discussion on a story about a group of children and a bull in a paddock, John commented that bulls were attracted to red and would charge at someone wearing that colour. After the class discussion, the researcher approached John, who was seated next to Dan and James, to ask how he knew that bulls were attracted to red. The following excerpt records the ensuing conversation with the three children:

Researcher: How do you know bulls are attracted to red?

Dan: They don’t. They are not attracted to it. They hate it.

James: They destroy the red.

John: Because they don’t like that sort of thing in their eyes and like ...

Researcher: How do you know?

John: Facts (smiles).

Researcher: How do you know these are facts?

John: And then bulls hate red blankets and they do like that (using both arms to do a forward and backward action as if he was waving a piece of cloth) and the bulls just run after them.

Dan: I know something for a fact. Bulls have these palettes behind their eyes which are red and they have like we have tissue that’s connected to our eyes. Their tissue is all red and sometimes they charge red because they hate seeing red. Dogs see black and white in their eyes, bulls see red sometimes in their eyes—their palettes have like explode—burst. When their palettes burst, they get wild.

People’s evaluation of claims or arguments can be either evidence based or theory based, where reference is made to prior beliefs or theories (Zimmerman, 2007). The discussion with the children shows that they offered their theories and beliefs as justification for their claim that bulls are agitated by the colour red. Dan and John claimed that their theories were facts. The thinking conversation not only brings to the surface misconceptions and false beliefs that the teacher might wish to address through further assessment or teaching intervention, but also underscores the need to develop critical thinking among the children. Important aspects of critical thinking to develop are the abilities both to make belief subservient to evidence by using evidence to guide opinions or beliefs (van Gelder, 2005), and to recognise the need for evidence in what is asserted to be true (Watson & Glaser, 1980). These aspects of the children’s competence emerged through the conversation with the children and could not have been anticipated before the class discussion.

Thinking conversations can also be conducted with an individual during task performance. The second example involves Robert, who was working on his individual class project on the teacher-assigned topic of a healthy lunch menu. Robert had drawn two columns on an A4-sized sheet of paper. In the left-hand column, he had drawn four lunchboxes, each with different types of food in compartments. Prices of “$9”, “$11.50”, “$8.60”, and “$10.30” were written next to each lunchbox respectively. In the right-hand column, he had written names for each lunchbox, namely “BLT box”, “carrots and cake”, “choc chip” and “fruity tutti”. The researcher asked how he came up with the price of nine dollars for one of his lunchboxes:

Researcher: And how do you decide it’s nine dollars?

Robert: We didn’t really. We just picked it.

Researcher: So, you just plucked a number from out of nowhere?

Robert: But if it is too expensive, they won’t buy it.

Researcher: Is nine dollars expensive?

Robert: Er, no. Don’t think so.

Researcher: And how do you know it’s not expensive?

Robert: Cos, usually in cafes or something, they are five dollars or something or something like that.

The thinking conversation shows that Robert had a criterion to guide his decisions on pricing—his lunchbox should not be “too expensive”. He used his knowledge of food prices in cafes to guide him in his decision making. It is unclear how Robert came to the conclusion that nine dollars compared to five was not too expensive—whether it was an error in reasoning or a lack of reasoning on his part. Whatever the case, the conversation has surfaced an important aspect of thinking ability: reasoning in decision making. This emergent aspect of Robert’s performance may require further evaluation to better appreciate his ability to apply reasoning in decision making and determine whether teaching intervention is required.

The examples demonstrate that engaging students in thinking conversations gives them the opportunity to justify ideas, make judgments, and offer supporting reasons for conclusions reached. Thinking conversations draw the students’ attention to—and encourage them to talk about—aspects of their thinking. The information elicited through thinking conversations can be analysed and interpreted to generate useful insights into students’ thinking competence. The inductive assessment approach discussed here allows for open-ended data gathering and analysis of students’ competence and learning needs because it takes into consideration the emergent nature and unanticipated aspects of students’ competence. It is not the intent of this article to suggest that the inductive assessment approach is superior to conventional methods that adopt a deductive assessment approach. More often than not, it is useful to analyse data using both deductive and inductive approaches (Hatch, 2002; Thomas, 2006). Both inductive and deductive assessment approaches can play useful complementary roles in the evaluation of students’ competence. For example, a test or structured performance assessment can be devised to further investigate and evaluate Robert’s ability to apply reasoning in decision making, or the issue of critical thinking identified in the thinking conversation with Dan, James, and John.

Another assessment strategy is to select a suitable theoretical framework or model to serve as a guide for analysis and assessment after data have been gathered using the inductive assessment approach. Such a strategy employs inductive assessment to elicit information on students’ competence without necessarily focusing on predetermined aspects of students’ performance. Inductive assessment is followed by deductive assessment, where a suitable theoretical framework or model is used to guide interpretation. This approach is discussed in the next section of the article.

Interpretation using theoretical models

The ability to structure one’s thinking processes is a critical part of thinking competence. Being a good or better thinker involves being able to apply metacognitive skills to regulate and monitor one’s thinking process, including the ability to plan or structure one’s approach to cognitive tasks (Lee, 2011). It is generally recognised in the thinking-skills literature that the application of thinking skills frequently occurs in the context of cognitive processes such as problem solving and decision making, and that competent thinkers are those who are able to apply their thinking skills to perform such tasks effectively (Higgins, Hall, Baumfield, & Moseley, 2005; Moseley et al., 2005; Perkins, Jay, & Tishman, 1994). Problem solving and decision making are generally recognised as complex tasks that people commonly face in their lives or professional work, and both types of task require good thinking skills (Beyer, 1984; Marzano et al., 1988; Swartz, 2003; Yinger, 1980). It is not uncommon to find frameworks of thinking skills describing the thinking steps involved in these cognitive processes (Marzano et al., 1988; Moseley et al., 2005; Rankin, 1987). Programmes that teach students to be better thinkers often make use of thinking tools, thinking maps, or graphic organisers to guide students through various cognitive processes (see, for example, Burke & Williams, 2008; Ong, 2006; Swartz, 2008). These techniques typically help students approach a task in a systematic manner by structuring specific aspects of the cognitive process and guiding them through specific procedures to complete the task.

Students’ thinking competence in cognitive processes such as problem solving and decision making can be meaningfully assessed against theoretical models of task performance (Lee, 2011, 2012a). Comparing learners’ performance to that of experts, or to theoretical models, in order to assess the development of expertise is common practice in professional training (Chipman, Schraagen, & Shalin, 2000; Lajoie, 2003; Mumford, Friedrich, Caughron, & Antes, 2009). Two case examples are discussed to illustrate the process of comparing students’ task performance against theoretical models of decision making and problem solving. The first example involves a thinking conversation the researcher had with Cathy and Liz in relation to their project on food 50 years from now:

Cathy: Well, we’re doing it on food and I was just wondering if some of the food we have now is not going to be here 50 years later. And 50 years ago, I bet we didn’t have food that we have now like chocolate or sweets.

Researcher: So how are you going to find out?

Cathy: Internet, books.

Researcher: How does one decide in 50 years’ time what’s not going to be around?

Cathy: I think I read in a book that bananas won’t be around.

Researcher: Why won’t bananas be around?

Liz: Because trees aren’t growing that well.

Cathy: I don’t know. I think it’s just bananas, no—I think there’s another vegetable—was there another vegetable, was it tomatoes or ...?

Liz: I think bananas aren’t going to be around because they like—you can’t buy banana trees in like certain places, so they got to grow in certain places so they don’t grow that well in certain places.

The selection of a theoretical model to facilitate the analysis of this conversation draws on existing literature on cognitive processes. Wales, Nardi, and Stager (1986) suggest a four-stage model of decision making:

1) state goal

2) generate ideas

3) prepare a plan

4) take action.

Ehrenberg, Ehrenberg, and Durfee (1979) propose a three-step decision-making model:

1) clarify requirements and anticipate the ideal characteristics that would meet all of the requirements

2) identify, clarify, and verify characteristics of each alternative against the ideal, and select the closest alternative

3) verify choice against the requirements.

The adaptation and modification of these models resulted in a theoretical model for decision making that was used to facilitate analysis in the study (see Figure 1).

Figure 1: Theoretical model of decision making

The process that Cathy and Liz engaged in included several of the components described in the theoretical model of decision making in Figure 1. Cathy wondered what sort of food would not exist in 50 years’ time. Together, Cathy and Liz were able to generate two possibilities—bananas and tomatoes. They made some attempt at evaluating the alternatives, and concluded that it had to be bananas because they grow in certain places and not in others. However, Cathy and Liz did not appear to have a clear set of criteria against which to evaluate the possibilities to help them determine which of these alternatives could best answer their question. Liz’s reasoning that bananas could only grow in some places and not in others did not appear to have direct and clear relevance. It was, therefore, not a strong and clear justification for her view that bananas would not be around in 50 years’ time. In terms of process, the children could have incorporated the additional steps of developing a set of evaluation criteria and evaluating the outcome of their decision against those criteria. This information could be useful for the teacher in determining whether the children would benefit from teaching support in their ability to critically appraise the various alternatives in order to make a choice based on sound reasoning. The lack of a structured process (such as the one in Figure 1) led to a discussion that quickly went off-track and off-task:

Researcher: They say that the world is getting warmer, and banana trees grow well in warm weather. Why do you say that banana trees shouldn’t be around?

Cathy: Well, we need proof.

Researcher: That’s right.

Cathy: [We cannot] just say that banana trees won’t be around, because I don’t think so.

Researcher: If the world is getting warmer, what kind of trees would grow well in warm weather?

Cathy: Feijoas, figs, oranges, apples, lots of other trees.

Liz: Yeah, mostly, most of the trees grow in the warm.

Researcher: If the world gets too warm, apple trees may not be around because they grow well in the cold. Or do they?

Cathy: I don’t know. I haven’t done much study on trees or vegetables.

Liz: Cabbage, carrots ...

Beth, who had come and sat down at the table to observe, joined in the conversation.

Beth: In Fiji, normally trees and things grow in warm weather, like coconuts ... pineapples.

Liz: Coconuts are really dear at the moment because—the coconuts are going to come over from Fiji to Dunedin. And they’re four dollars for one.

Cathy: I think it’s going to be one dollar fifty—I hope it will be in the future.

Beth: I think—you know things were like less expensive in the past. I think things are going to be more expensive in the future because people are going to be more rich, there’s going to be more money.

The analysis of the data generated through the thinking conversations with Cathy and Liz suggests that students’ thinking competence depends in part on their ability to adopt a structured or organised approach when they engage in open-ended and ill-structured tasks. In addition, the conversation suggests that knowledge can play a role as important as thinking skills in the effectiveness of children’s reasoning. For example, Cathy made the point that she was unable to comment on whether apple trees grow well in the cold because she had not “done much study on trees or vegetables”. The example shows that the comparison of students’ performance to a theoretical model can surface issues relating to students’ knowledge, their thinking skills, and how they structure their process of thinking, if at all.
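In practical terms, this kind of comparison can be thought of as checking the conversation evidence against a list of model components. The Python sketch below illustrates the idea; the component labels are an assumption pieced together from the two cited models and the components named in the analysis above, and are not necessarily the exact wording of Figure 1.

```python
# Illustrative sketch only: mapping conversation evidence onto assumed
# components of a decision-making model (not necessarily Figure 1's wording).
model_components = [
    "state the goal or question",
    "generate alternatives",
    "develop evaluation criteria",
    "evaluate alternatives against the criteria",
    "make and justify a choice",
    "evaluate the outcome of the decision",
]

# Evidence noted from the conversation with Cathy and Liz.
observed_evidence = {
    "state the goal or question": "Which foods will not be around in 50 years' time?",
    "generate alternatives": "bananas, tomatoes",
    "make and justify a choice": "bananas, because they grow only in certain places",
}

# Components with no evidence are flagged as possible foci for teaching support.
for component in model_components:
    note = observed_evidence.get(component, "no evidence - possible focus for teaching support")
    print(f"{component}: {note}")
```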

The second example involves the use of a problem-solving model to facilitate the analysis of Dan’s performance in his project on packaging and wrapping. Dan’s teacher had asked all the students to find ways to make their work more interesting in response to feedback received from some parents on their project presentations the day before. Dan was seated at his table with pieces of paper in front of him and what appeared to be a self-made booklet with drawings on it. There were also some plastic food boxes, a plastic grocery bag, and a roll of cling-on plastic wrapping on his table. The following excerpt records the thinking conversation the researcher had with Dan as he worked on his project:

Dan: Our project is on wrapping and packaging. We’re trying to make it more interesting so other people will look at it more.

Researcher: What are you thinking about?

Dan: Um, well, we’ve got a display of wrapping (points to a plastic food storage box and a roll of cling-on plastic wrapping) and a plastic bag (picks up a plastic grocery bag) for wrapping.

Researcher: And what have you got here?

Dan: I’ve got heaps of ideas (points to a paper with what appears to be a mind-map—a bubble with a word in the middle and spokes coming out of it and joined to other bubbles with words in them). We watched a story of the Coca-Cola can video and we wrote down (picks up his self-made booklet and flips through the pages of drawings with words written at the bottom of each drawing) ...

Researcher: What are you trying to do next?

Dan: Mmm ... we’re doing this—we’re onto doing types of wrappings ... and here’s wonderings (points to bottom half of a piece of paper where he has written the heading “Wonderings” and drawn empty boxes—two labelled ‘Q’ (question), one big box with the words “How has wrapping changed?” and an empty box labelled “A” below). We have to do three wonderings . . . and this is the greatest wondering (points to the box with the words “How has wrapping changed?”).

Researcher: How has wrapping changed? Is that the question you trying to answer?

Dan: We’re trying to answer it.

The selection of a theoretical model to facilitate the analysis of this conversation draws on existing literature on models of problem solving. Bransford and Stein (1984) propose the IDEAL model of problem solving:

1) identify the problem

2) define the problem

3) explore strategies

4) act on ideas

5) look for the effects.

It is not always clear, however, when a problem should be considered resolved. It is possible that some problems cannot be completely resolved. In such instances, it might be expedient to establish a set of criteria for what could reasonably be considered a successful resolution of the problem. Furthermore, the strategies or solutions that can be explored are likely to be dependent on the skills, information, and resources available. For the purposes of this study, the researcher adopted the IDEAL model and expanded it to include these considerations in order to develop a theoretical model for problem solving (see Figure 2):

Figure 2: Theoretical model for problem solving
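As a rough illustration of the kind of expansion just described, the extended model can be written as an ordered checklist. The wording of the added steps in the sketch below paraphrases the considerations discussed above and may differ from the exact wording used in Figure 2.

```python
# A sketch only: the IDEAL steps (Bransford & Stein, 1984) extended with the two
# considerations discussed in the text. The added steps are paraphrases and may
# differ from the exact wording of Figure 2.
expanded_problem_solving_model = [
    "identify the problem",
    "define the problem",
    "establish criteria for what counts as a successful resolution",  # added consideration
    "take stock of available skills, information, and resources",     # added consideration
    "explore strategies",
    "act on ideas",
    "look for the effects and evaluate them against the criteria",
]

for number, step in enumerate(expanded_problem_solving_model, start=1):
    print(number, step)
```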

Using the problem-solving model in Figure 2 as a guide, it can be seen that Dan was able to describe what his project was about and define the problem, at the beginning of the conversation, as one of finding a way to make his project presentation more interesting to parents. He reviewed the three pieces of work that he had already done:

•	a display of wrappings and packaging made up of some plastic food storage boxes, a plastic grocery bag, and a roll of cling-on wrapping on his table

•	pieces of paper in front of him which he later explained were records of his brainstorming

•	a self-made booklet on the manufacture of Coca-Cola drink cans with drawings in it.

However, Dan did not complete the process of defining and evaluating the problem. For example, he did not elaborate on what the possible causes of the problem were or what needed to be addressed. When the researcher asked him what he was going to do next, he talked about doing three more “wonderings” to fill up a space he had on one of the sheets of paper that showed his brainstorming. He thought that “how has wrapping changed?” was the focus of his task at that point, but it was unclear how this thought was related to the original problem posed by the teacher. While Dan generated ideas quite readily, it is unclear whether he had a well-defined mental picture of what aspects of the problem he was attempting to address and how his ideas were intended to solve the problem. He did not seem to have any criteria for success that would help him determine whether his ideas would actually address the problem and evaluate the outcome after implementation. It is possible that the focus on the assigned project topic of “how has wrapping changed?” and on producing wonderings—which the teacher was observed to have encouraged the students to do at the beginning of the project—could have prevented Dan from seeing and addressing the immediate problem that he was asked to resolve. Nevertheless, the example provides evidence that students’ thinking competence in task performance can be limited by a lack of ability to adopt a structured or organised process (Lee, 2011). Complex, open-ended, and ill-structured tasks can be challenging for learners because they have to orchestrate processes, thinking skills, domain knowledge, and metacognitive skills to plan, monitor, and evaluate task performance (Lee, 2011; Reiser, 2002). There is a strong argument that adult scaffolding is needed to provide structure for complex tasks, to help learners focus on relevant aspects of the tasks, and to provide pathways that students can learn and apply in a metacognitive manner in subsequent open-ended tasks (Hmelo-Silver, Duncan, & Chinn, 2007; Orion & Kali, 2005; Sharma & Hannafin, 2007). The approach discussed here allows for the assessment of the structure and process of a learner’s thinking approach, in addition to their domain knowledge and thinking skills.

The two examples demonstrate that comparisons of students’ performance to theoretical models can offer a means to analyse students’ thinking competence and identify learning needs. While one may argue that the use of a theoretical model is similar to employing rubrics in criteria-based performance assessment, it is worthwhile to note the differences between the two approaches:

1. Information is gathered on the students in an open-ended exploratory mode. The gathering of information is not influenced by what adults hope to see, or what the adults expect or think the students should do.

2. The appropriate theoretical models are chosen after data have been gathered to serve as guides for analysis and assessment. Choice of which models to use is influenced or determined by what the students did, whereas rubrics are typically constructed beforehand.

3. The theoretical models suggest specific ideas on how students’ processes could be supported and improved. Rubrics show whether the students are meeting a set of predetermined criteria in targeted aspects of their conceptual understanding and skills and may not address process and structural issues that children may unexpectedly encounter.

A note of caution is necessary in adopting the approach of comparing students’ task performance to theoretical models. Inherent in such an approach is the notion that a single theoretical model can be applied across the various situations in which students’ complex thinking skills are manifested. In practice, however, students’ approaches to task performance and the processes involved vary widely. A United States-based study that examined more than 150 cases found that people’s natural approaches to problem solving took a wide variety of forms, and concluded that no one model can describe all the variations of problem solving for all people (Douglas, 2002; Isaken & Treffinger, 2004). It may be unreasonable, therefore, to expect that students’ processes will follow a given theoretical model exactly, regardless of the situation in which the task performance is carried out.

Any description of students’ thinking processes should portray their many approaches to task performance and also allow for the possibility that there may be more than one perspective in interpreting students’ thinking behaviour.

Conclusions

The findings of this study contribute to the ongoing conversations about whether and how key competencies, in particular the thinking competency, should be assessed within the New Zealand curriculum. The study reported in this article makes a significant contribution to the field of assessment by presenting an inductive assessment approach for investigating and understanding students’ emergent thinking competence. It is argued that the inductive assessment approach allows for an open-ended and exploratory approach to assessing students’ competence without the use of predetermined criteria or a focus on preconceived aspects of students’ performance. An inductive assessment approach can provide meaningful information, and it takes into consideration the emergent and situated nature of students’ learning—it is based on the premise that students construct and co-construct their understanding. It focuses on what the student is capable of, and where the student may need scaffolding—not on the student’s expected response to the teacher. Lee (2012b) argues that questions about what students can do and what aspects of their emergent thinking competence require adult support cannot be fully addressed by conventional measures that focus on predetermined aspects of task performance. Even and Wallach (2004) state that teachers need to be tuned in to their students, and to believe that there is something to be learned from listening to them, in order to form an accurate assessment of their abilities and ways of thinking. Listening to students is critical to understanding their thinking and valuing the intellectual and epistemological resources they bring to the classroom (Callahan, 2012; Empson & Jacobs, 2008; Maskiewicz & Winters, 2012).

There is some degree of parallel between the inductive assessment approach discussed in this article and the divergent assessment approach of Torrance and Pryor (2001). The aim in both approaches is to ascertain what the student is capable of doing through interaction between the adult and the child. There are, however, clear distinctions, in that the inductive assessment approach:

1) focuses on the assessment of students’ thinking competence

2) argues for the need to respond flexibly to the emergent and unanticipated aspects of students’ thinking competence

3) proposes the technique of thinking conversation to elicit information not just on students’ knowledge, but also on their thinking skills and processes

4) suggests inductive analysis of the information, guided by a range of knowledge from the field of thinking skills, including how people reason and the issues related to reasoning, the application of thinking skills in the context of cognitive processes such as decision making and problem solving, and the different models and frameworks of thinking available in the literature.

Every assessment strategy has its strengths and weaknesses, and each method can only provide certain types of information and cover only particular aspects of students’ learning and abilities (Wortham, 2008). Using a variety of carefully chosen assessment methods can provide different perspectives and therefore a more comprehensive picture of students’ learning and abilities (Feld & Bergan, 2002). Inductive and deductive assessment approaches can play complementary roles in assessing students’ competence. The inductive assessment approach can be used to discover specific aspects of students’ capabilities and learning needs, including those that are emergent and unanticipated, and a deductive assessment approach can be employed to evaluate these aspects of students’ performance in greater depth.

Limitations

The limitations of the inductive assessment approach discussed in this article require further investigation. It should be noted that the researcher was able to generate the insights into the children’s thinking because he was able to position himself as an inquirer, and not a teacher or assessor, in relation to the children in the study. The children were eager to share their thinking and tell their story—sometimes on their own initiative—even though the questions posed by the researcher were not always of the open-ended type. The role of the teacher, and the teacher’s relationship with the students, could, however, limit the usefulness of thinking conversations for eliciting students’ thinking. Students may not always be open to sharing their thinking, especially if they feel that they are being evaluated or tested on what they know. Given the importance of asking open-ended questions, and to ensure that the questions posed do not come across to students as evaluative or testing, it may be useful to develop a set of protocols to guide the types of questions that could be posed during a thinking conversation. Furthermore, the unpredictability and variety of data that could be generated on students’ thinking through the inductive assessment approach can pose challenges for meaningful interpretation by teachers. There is also the conundrum of choosing between the efficiency—and perhaps simplicity and safety for some—of a highly structured and narrowly focused deductive assessment approach, and the complexity of a flexible and responsive inductive assessment approach. The flexibility and complexity of the inductive assessment approach may well require teachers to be proficient practitioners of good thinking themselves and to have extensive knowledge of the field of thinking skills.

References

Bassham, G., Irwin, W., Nardone, H., & Wallace, J. (2008). Critical thinking: A student’s introduction (3rd ed.). Boston: McGraw-Hill Higher Education.

Beyer, B. K. (1984). Improving thinking skills: Practical approaches. The Phi Delta Kappan, 65(8), 556–560.

Boyd, S., & Watson, V. (2006a). Shifting the frame: Exploring integration of the key competencies at six normal schools. Wellington: New Zealand Council for Educational Research.

Boyd, S., & Watson, V. (2006b, December). Unpacking the key competencies: What does it mean for primary schools? Paper presented at the NZARE Conference, Rotorua.

Bransford, J. D., & Stein, B. S. (1984). The IDEAL problem solver. New York: Freeman.

Burke, L. A., & Williams, J. M. (2008). Developing young thinkers: An intervention aimed to enhance children’s thinking skills. Thinking Skills and Creativity, 3(2), 104–124.

Callahan, K. M. (2012). Listening responsively. Teaching Children Mathematics, 18(5), 296–305.

Chipman, S. F., Schraagen, J. M., & Shalin, V. L. (2000). Introduction to cognitive task analysis. In J. M. Schraagen, S. F. Chipman & V. L. Shalin (Eds), Cognitive task analysis (pp. 3–23). Mahwah, NJ: Lawrence Erlbaum Associates.

Delandshere, G., & Petrosky, A. R. (1998). Assessment of complex performances: Limitations of key measurement assumptions. Educational Researcher, 27(2), 14–24.

Douglas, N. (2002). A picture is worth a thousand words: An executive summary of Pershyn’s 1992 master’s project. Buffalo, NY: International Center for Studies in Creativity, Buffalo State College, State University of New York.

Ehrenberg, S. D., Ehrenberg, L. M., & Durfee, D. (1979). BASICS: Teaching / learning strategies. Miami Beach, FL: Institute for Curriculum and Instruction.

Empson, S. B., & Jacobs, V. R. (2008). Learning to listen to children’s mathematics. In D. Tirosh & T. Woods (Eds), Tools and processes in mathematics teacher education (pp. 257–281). Rotterdam, The Netherlands: Sense Publishers.

Even, R. (2005). Using assessment to inform instructional decisions: How hard can it be? Mathematics Educational Research Journal, 17(3), 51–67.

Even, R., & Wallach, T. (2004). Between student observation and student assessment: A critical reflection. Canadian Journal of Science, Mathematics and Technology Education, 4(4), 483–495.

Evertson, C. M., & Neal, K. W. (2006). Looking into learning-centered classrooms: Implication for classroom management. Washington, DC: National Education Association.

Feld, J. K., & Bergan, K. S. (2002). Assessment tools in the 21st century. Child Care Information Exchange, 146, 62–66.

Garnham, A., & Oakhill, J. (1994). Thinking and reasoning. Malden, MA: Blackwell Publishing.

Hatch, J. A. (2002). Doing qualitative research in education settings. Albany, NY: State University of New York Press.

Higgins, S., Hall, E., Baumfield, V., & Moseley, D. (2005). A meta-analysis of the impact of the implementation of thinking skills approaches on pupils (Project report). London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

Hipkins, R. (2005). Thinking about the key competencies in the light of the intention to foster lifelong learning. set: Research Information for Teachers, 3, 36–38.

Hipkins, R. (2006). The nature of the key competencies: A background paper. Wellington: New Zealand Council for Educational Research.

Hipkins, R. (2009). Determining meaning for key competencies via assessment practices. Assessment Matters, 1, 4–19.

Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107.

Hoffman, R. R., & Militello, L. G. (2009). Perspectives on cognitive analysis: Historical origins and modern communities of practice. New York: Psychology Press.

Isaken, S. G., & Treffinger, D. J. (2004). Celebrating 50 years of reflective practice: Versions of creative problem-solving. Journal of Creative Behavior, 38, 75–101.

Lajoie, S. P. (2003). Transitions and trajectories for studies of expertise. Educational Researcher, 32(8), 21–25.

Lee, S. (2011). Cognitive process mapping: Adapting cognitive task analysis to research and educational assessment of young children’s thinking skills in the classroom. Doctoral thesis, University of Otago, Dunedin.

Lee, S. (2012a). Co-constructional task analysis: Moving beyond adult-based models to assess young children’s task performance. Early Child Development and Care. Advance online publication. doi: 10.1080/03004430.2012.721358

Lee, S. (2012b). Thinking conversations: An open-ended and exploratory approach to assessing students’ thinking competence. New Zealand Journal of Educational Studies, 47(1), 5–17.

Marzano, R. J., Brandt, R. S., Hughes, C. S., Jones, B. F., Presseisen, B. Z., Rankin, S. C., et al. (1988). Dimensions of thinking: A framework for curriculum and instruction. Alexandria, VA: Association for Supervision and Curriculum Development.

Maskiewicz, A. C., & Winters, V. A. (2012). Understanding the co-construction of inquiry practices: A case study of a responsive teaching environment. Journal of Research in Science Teaching, 49(4), 429–464.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.

Ministry of Education. (2007). The New Zealand curriculum. Wellington: Learning Media.

Moseley, D., Baumfield, V., Elliot, J., Gregson, M., Higgins, S., Miller, J., et al. (2005). Frameworks for thinking: A handbook for teaching and learning. Cambridge, UK: Cambridge University Press.

Mumford, M. D., Friedrich, T. L., Caughron, J. J., & Antes, A. L. (2009). Leadership development and assessment: Describing and rethinking the state of the art. In K. A. Ericsson (Ed.), Development of professional expertise: Toward measurement of expert performance and design of optimal learning environments (pp. 84–107). New York: Cambridge University Press.

Ong, A.-C. (2006). The infusion approach to teaching thinking. In A.-C. Ong & D. G. Borich (Eds), Teaching strategies that promote thinking: Models and curriculum approaches (pp. 241–261). Singapore: McGraw-Hill Education.

Orion, N., & Kali, Y. (2005). The effect of an earth-science learning program on students’ scientific thinking skills. Journal of Geoscience Education, 53(4), 387–393.

Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage Publications.

Perkins, D., Jay, E., & Tishman, S. (1994). Assessing thinking: A framework for measuring critical thinking and problem-solving skills at the college level. In A. Greenwood (Ed.), The national assessment of college students learning: Identification of the skills to be taught, learned and assessed (pp. 65–112). Washington, DC: National Center for Education Statistics.

Pryor, J., & Crossouard, B. (2005). A sociocultural theorization of formative assessment. Paper presented at the Sociocultural Theory in Educational Research and Practice Conference, University of Manchester.

Rankin, S. (1987, October). Assessing higher order thinking skills: Issues and practices keynote address. Paper presented at the Assessing Higher Order Thinking Skills: Issues and Practices Conference, Clackamas, OR.

Reiser, B. J. (2002). Why scaffolding should sometimes make tasks more difficult for learners. Paper presented at the Conference on Computer Support for Collaborative Learning: Foundations for a CSCL Community.

Schmidt, M., & Plue, L. (2000). The new world of performance-based assessment. Orbit, 30(4), 14–17.

Sharma, P., & Hannafin, M. J. (2007). Scaffolding in technology-enhanced learning environments. Interactive Learning Environments, 15(1), 27–46.

Swartz, R. (2003). Infusing critical and creative thinking into instruction in high school classrooms. In D. J. Fasko (Ed.), Critical thinking and reasoning: Current research, theory, and practice (pp. 207–251). Cresskill, NJ: Hampton Press.

Swartz, R. J. (2008). Energizing learning. Educational Leadership, 65(5), 26–31.

Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237–246.

Torrance, H., & Pryor, J. (2001). Developing formative assessment in the classroom: Using action research to explore and modify theory. British Educational Research Journal, 27(5), 615–631.

van Gelder, T. (2005). Teaching critical thinking: Some lessons from cognitive science. College Teaching, 45(1), 1–6.

Wales, C. E., Nardi, A. H., & Stager, R. A. (1986). Decision-making: New paradigm for education. Educational Leadership, 43(8), 37–41.

Watson, G., & Glaser, E. (1980). Critical thinking appraisal manual. New York: Harcourt Brace Jovanovich.

Wortham, S. C. (2008). Assessment in early childhood education (5th ed.). Upper Saddle River, NJ: Pearson Education.

Yinger, R. J. (1980). Can we really teach them to think? In R. E. Young (Ed.), New directions for teaching and learning: Fostering critical thinking (pp. 11–31). San Francisco: Jossey-Bass.

Zimmerman, C. (2007). The development of scientific thinking skills in elementary and middle school. Developmental Review, 27(2), 172–223.

Note

1. New Zealand has a decile system that indicates the socioeconomic status of the community the student body is drawn from. It extends from 1 (low) to 10 (high) and is used primarily to determine state funding for the school.

The author

Scott Lee, Australian Catholic University
Level 3, 174 Victoria Parade, East Melbourne, 3002, Victoria, Australia
scott.lee@acu.edu.au

Dr Scott Lee is a research fellow at the Australian Catholic University, Faculty of Education. His research interests include children’s thinking and problem-solving skills, self-talk, construction play, assessment of thinking skills, and early childhood education.