The role of evidence in teachers’ professional decision making

The movement toward “evidence-based practice” in the classroom has been growing, to the point where this phrase is prominent in the pronouncements of politicians, media commentators, policymakers, researchers, and teachers.

My concern is that the consequence of these assumptions is an education system in which limited instructional approaches are prescribed. The prescription of these approaches would restrict opportunities for teachers to make judgments that are sensitive to and relevant for their own students and settings, undermining their capacity to fulfil the demands of the profession. I fear that regardless of the goals of the movement, the outcome will be undemocratic, in that open, intelligent, and above all, democratic debate about the purposes, contexts, and practices of education will be silenced. (Before you cry “slippery slope!” please know that I will be ever so pleased if the movement achieves an elevation of the position of research and evidence in education practice and policy, without prescribing practice and restricting the professional activity of teachers.)

I support teachers using robust evidence to make rational judgments about educational practices. I am not arguing against the use of evidence to inform practice. The question is not whether or not to use evidence, but what role evidence should play in teacher decision-making.

Claims that classroom practice must be “evidence-based” are often underpinned by the problematic assumptions that:

  • Education, evidence, and research are (or can be) value-free.
  • Evidence can tell us causal rules for action in education.
  • What is evidenced as effective is desirable, and desirable outcomes are measurable using standardised, quantitative measures.

These assumptions touch on issues of the quality of teachers’ decisions about their professional practice, and the quality of education research. In this series of posts, I will explore, and attempt to refute, some of the assumptions that underpin evidence-based practice claims. I will argue that these propositions are at best superficial, and at worst fallacious. Claims that classroom practice must be “evidence-based” ignore or deny that:

  • A teacher’s role is to make informed and intelligent decisions about practice to achieve various outcomes with and for students in their classes.
  • Academic achievement, though central, is not the only intended purpose of education.
  • Research has both a cultural and an instrumental role to play in informing educational practice.
  • Education, unlike science, involves interactions between related factors that are not tangible or concrete.
  • What is “effective” may not be desirable or appropriate. Educational practice is highly contextualised.
  • Evidence from RCTs describes patterns in populations, but may not be relevant to particular individuals.
  • Evidence is one of several pieces of information that can inform a teacher’s decisions.

I would also like to see more actual evidence and reasoning offered in support of claims about practice than the bare assertion that practices are “evidence-based.”

A few others have considered this issue in their own blogs, notably José Picardo in his post The problem with evidence based practice, and my friend Corinne in her post Evidence-Based Practice: Supporting decisions or a stick to beat us with? Both José and Corinne are practising classroom teachers; José in the UK, and Corinne here in Sydney, Australia. José worries about the inflexibility of prescribing practice on the basis of data that is usually quantitative and highly contextualised, and which may or may not be valid for any other context. Corinne worries that this evidence will be used to prescribe standardised instruction, which would be a disservice to both teachers and students. Both José’s and Corinne’s posts have informed my thoughts for this one, and I strongly encourage you to go and read their posts, too.

The role of a teacher and the purpose of education

A teacher’s role is to make informed and intelligent decisions about practice to achieve various outcomes with and for students in their classes.

A teacher’s role is to make judgments about how best to help their students learn in the environments in which they teach. They generally do so competently, thoughtfully, and with appropriate caution, in consideration of their own values and those of their students and other stakeholders.

Teachers are aware that their decisions might not achieve the intended outcomes. They monitor the impacts of their decisions over time, evaluate the results, and respond flexibly as necessary.

Teachers reflect on their practice to grow and improve. As they do this, their experience informs their intuition for decision-making, and they become better at it (Berliner, 2004). In general, teachers are thoughtful, caring, knowledgeable, and skilful individuals.

Teachers are specialists in education, in the subjects that they teach, in their teaching contexts, and in their students, and can use their expertise and experience as well as evidence to make informed decisions about their teaching practice. The question that is relevant to teachers is not so much about the effectiveness of their actions, as about the potential educational value of what they do.

Academic achievement, though central, is not the only intended purpose of education

The curriculum and mandated assessment programs reflect the purposes of education, which are constantly discussed and debated, with a broad range of diverse views and values informing the debate. This debate is a sign of a healthy democracy.

Currently, the academic knowledge, understanding, skills, and capabilities that are valued by a society are described in mandated curriculums that teachers in all schools must follow. In Australia we have a national curriculum that articulates what is valued at each level of schooling from Foundation to Year 10. This necessarily describes the scope of what should be taught in the classroom (or conversely, may demand too much, as some teachers have complained of our curriculum).

Curriculums evolve, as they should do, and are in dispute, as they should be, as teachers, researchers, policymakers, parents, and students discuss what it is that they believe is necessary and desirable for students to know and be able to do when they graduate.

Programs of assessment such as NAPLAN (Australia’s National Assessment Program – Literacy and Numeracy, a standardised test completed every year by all Year 3, 5, 7, and 9 students) further narrow teachers’ choices about what, but also how and when, to teach particular knowledge, understandings, skills, and capabilities.

Academic learning is not the only intended outcome of education. Teachers might set additional learning goals for individual students. These may be behavioural, social, cognitive, affective, physical, or perhaps something else entirely. These goals are negotiated by teachers with students, their parents, other teachers, guidance officers, support staff, administration staff, etc, and are mediated by mandated policies and curriculums.

Education also provides students with opportunities to cooperate, collaborate, and socialise with peers of different backgrounds, identities, and experiences, in preparation for work and life as an adult. These opportunities may develop social, behavioural, affective, and cognitive styles and habits.

The questions of what the purpose of education is, and what additional learning goals are desirable and appropriate for different students, require value judgments to be made in consideration of students, individually and collectively, and their learning environments.

Teachers are best placed to make decisions about learning goals for their students, and how best to achieve them, drawing on their professional and expert knowledge of individual students, classroom dynamics, and learning environments, as well as a range of evidence about learning and practice. Restricting the options for practice available to teachers to particular practices is unhelpful, as a range of practices might be needed to achieve different goals for different students. Prescriptive practice undermines the specialised knowledge and skills central to the professional role of teachers.

Personally, I would argue that the purpose of education is to provide students with the knowledge, understandings, skills, capabilities, and cognitive styles for making appropriate judgments and decisions for themselves, their families, and their communities. To me, a part of this is providing students with a safe space to explore the values of others and make decisions about what they personally value, and showing them that it is okay to change one’s mind, or one’s values.

How I would manage to achieve this goal, while still operating within the narrow framework of the curriculum and mandatory assessment programs, would depend on the circumstances of the individual students I had in my class, my relationships with them, the dynamics and relationships between them, and the environment in which we learned. Further, I would need to be cognisant of the inconsistencies between the values demonstrated by my teaching practices and those required for achieving “success” in the assessment program.

You might, of course, disagree! As I’ve said, ongoing discussion about what we value, and why, what this looks like, and how we value it, is a sign of a healthy democracy. We are a diverse society, with many values, ideas, and skills to contribute.

A teacher’s role is to make decisions about practice to help their students to achieve particular goals. Discussions about evidence can (and should) inform teachers’ decisions. The consequence of prescriptive practice is the reduction in scope for teachers to make decisions regarding classroom practice and learning experiences in consideration of the specific needs, goals, and contexts of their students in particular teaching and learning settings.

The role of education research

Research has both a cultural and an instrumental role to play in informing education practice

While research in almost all fields aims to approach “truth,” there is no single cookbook approach that can guarantee this outcome. Broadly, research is a process of testing our ideas. Education research is a form of social science research that aims to test ideas about education policy and practice. Social science research shares many of the characteristics of research in the pure sciences.

Research in all fields is based upon the same principles: the search for universalism (general principles); organisation (to conceptualise related ideas); scepticism (questioning assumptions and looking for alternative explanations); and communalism (a community that shares norms and principles for doing research; Merton, 1973).

However, social science research, and education research specifically, is different to science research in its context and scope. By the nature of the contexts that social scientists investigate, the questions that they pose, the methodologies that are used to collect evidence, and the forms of evidence that are collected, the evidence must be analysed and interpreted differently to that in the physical or natural sciences.

Like researchers in the physical sciences, education researchers pose significant questions that can be investigated empirically, link research with relevant theory, and use methods that permit direct investigation of the questions.

Like other social sciences, education research plays both an instrumental role in the generation of strategies, techniques, practices, and other means for achieving ends, and a cultural role in the provision of different frameworks for understanding and imagining social realities. Such frameworks can help teachers to develop different understandings of their practice, and to see and imagine their practice differently. The examination of practice through different lenses allows us to understand problems in new ways, or to see new problems we hadn’t anticipated (Biesta suggests feminist theory as an example of how cultural research can unveil problems not previously recognised, and help us toward resolution). These two roles, the cultural and the technical or instrumental, are distinguishable, but not easily separable, as they mutually inform and reinforce each other. If either role dominates at the expense of the other, education research risks losing relevance.

Education research is also different to other forms of social science research, such as psychology research. Education researchers pose questions about education policy and practice. They inquire about policy and practice at all levels of the education systems in which we operate, from individual students and teachers, to classrooms, schools, and school systems. They make predictions about what the impacts of policies and practices are or might be and why, about what teachers are teaching and how, and about what the outcomes of various practices might be in particular contexts and for particular students. Education researchers build on earlier work, challenging, re-examining, and extending ideas about what education is and can do, and about which educational strategies and activities “work,” and how.

Research from many other fields informs (and may be informed by) the results of education research: for example, psychology, particularly the fields of developmental, cognitive, and behavioural psychology.

Education research methods suit the purposes of education research. Some education research aims to test an explanatory hypothesis; some studies may be exploratory, identifying ideas for further examination; some may examine a particular case, place, event, interaction, system, policy, learner, teacher, practice, or technology. Education research with these aims may collect evidence, and the form and amount of evidence collected varies with the purpose and question of the research. This is the case in other fields of research, too. This instrumental research is balanced by cultural research that questions normative assumptions about education, constructs new frameworks, integrates ideas into new theories, and critically considers the normative roles, functions, practices, contexts, and values of education. Just as education, and education practices, cannot be value-free, nor can evidence and research, in education or in any other field. Such an assumption is fallacious.

Education, unlike science, involves interactions between related factors that are not tangible or concrete

There are important and irreconcilable differences between science and education. Education and the natural world are not as homologous as some would have us believe. Science describes physical interactions that are tangible, predictable, concrete, and often isolatable, leading us to develop reliable understandings of causal mechanisms. In contrast, education describes a complex process of mediated transactions between humans and their environment. “If teaching is to have any effect on learning, it is because of the fact that students interpret and try to make sense of what they are being taught” (Biesta, 2007, p. 8).

It is extremely difficult to measure, analyse, and interpret relationships between variables in education research in the same way that we can for, say, the laws of gravity or biochemical interactions in the cell. It is also much harder in education to manipulate a single variable at a time, or examine a single relationship at a time, and virtually impossible to identify with perfect certainty practices as causes and learning as effects.

This is because education is an open, recursive, semiotic system: it has a high degree of interaction with external factors in the environment; it is characterised by behaviours which are caused by both internal and external feedback; and it operates through an exchange of meaning rather than physical force. In other words, while careful observations of the scientific world can reveal causal mechanisms, evidence collected about education practices is limited to suggesting approximate, probabilistic correlations.

“What works” often assumes that education is a closed, deterministic system, in which relationships between factors can be controlled and observed directly, and in which causes necessarily trigger effects and effects must have direct causes.

While predictions about the outcomes of practice may be informed by evidence, they are by no means guaranteed. Claims about “evidence-based practice” are predicated on the belief that education is a causal process, and scepticism is warranted. Evidence cannot tell us causal rules for action in education.

Education research can fulfil a technical role in investigating practice (and other aspects of education) by collecting and interpreting evidence. The role of cultural and critical research is to ask important questions about the normative practices and values of education that allow us to identify, define, and respond to problems in education. This research is complementary to technical research, and the two forms of research each inform the other. Education is not like science, in that the interactions occurring are complex and largely unseen. There is a multitude of factors, known and unknown, that can affect the outcome of any intervention or practice, and thus we cannot assume causal relationships in the same ways we can in the physical sciences.

The problems with evidence

I have deliberately avoided defining evidence in my posts so far, but now it becomes necessary to do so. I appreciate the simplicity of José Picardo’s definition: “a sign or indication that something has been shown to work” (emphasis mine). Immediately, however, questions arise: What counts as a sign? Who decides that it counts? How is it perceived? Who perceives it? Works for what?

To the homeopath, homeopathy clearly works. To the teacher who assigns the labels of visual, auditory, or kinaesthetic learner to her students, learning styles pedagogies clearly work. To the proponent of uniforms, or direct instruction, or inquiry, or Montessori, their policies and practices clearly work.

In response to my first post, Gary Jones suggested four sources of evidence: research, school data, practitioner expertise, and stakeholder values. This narrows the definition somewhat, though introduces new terms that need defining and querying (what is “practitioner expertise”? Is it developed through experience? Through training? Training in what? Through education? Education about what? All of these things?). Gary appears to have a nuanced view of evidence worth pondering, and I suggest you read some of his posts.

Usually, when people talk about evidence, they mean data. Broadly, there are two forms of data (with some messy in-between forms): quantitative, which is numerical, usually continuous data; and qualitative, which is categorical, descriptive, or narrative data.

Researchers engaged in research that requires qualitative data often collect a great deal of “rich data that reveals the perspectives and understandings of the research participants” (Gay, Mills & Airasian, 2009). Analysis of qualitative data is often difficult and time-consuming, and necessitates a sensibility shaped by recognition of bias, framing, and positionality.

Teachers undertake qualitative data analysis when they sit down to evaluate students’ textual work (writing, posters, artworks, graphs, etc) against any criteria and standards that have been defined, using their professional expertise and experience to interpret, evaluate, and justify their judgment of each student’s evidence of learning.

Analysis of quantitative data usually aims to determine how well the data “fit” an assumed normal population (with most people being “average” and a few outliers at either end); we describe this analysis as statistical.

In experimental research that collects quantitative data, if an intervention caused a change, and if the data collected were valid (they measured the intended outcome) and reliable (the measurement is not affected by time or other factors, a pretty big if in education), analysis of the data can also inform us of the size of that change for the sample of the population tested (the effect size), and how likely it is that the change was the result of sampling effects rather than the intervention (the p-value, or significance). Analysis can also tell us how much variance there is across the sample of the population (how spread out the data were).
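
To make these terms concrete, here is a minimal sketch in Python of how an effect size (Cohen’s d) and a p-value might be computed for a simple two-group comparison. The scores and group names are invented for illustration, not drawn from any real study:

```python
# A minimal sketch (with invented scores) of the statistics described
# above: an effect size (Cohen's d) and a p-value for a hypothetical
# intervention group compared against a control group.
from math import sqrt
from statistics import mean, stdev

from scipy import stats  # SciPy's independent-samples t-test

intervention = [72, 68, 80, 75, 71, 77, 69, 74]  # hypothetical test scores
control      = [70, 65, 72, 68, 66, 71, 64, 69]

# Effect size (Cohen's d): the difference in means, expressed in units of
# the pooled standard deviation -- "how big was the change?"
n1, n2 = len(intervention), len(control)
pooled_sd = sqrt(((n1 - 1) * stdev(intervention) ** 2
                  + (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
cohens_d = (mean(intervention) - mean(control)) / pooled_sd

# p-value: how likely a difference at least this large would be if the
# intervention did nothing and only sampling effects were at work.
t_stat, p_value = stats.ttest_ind(intervention, control)

print(f"Cohen's d = {cohens_d:.2f}, p = {p_value:.3f}")
```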

Teachers collect quantitative data when they sit down to check students’ responses to questions on tests, where responses are either correct or incorrect. Scores can indicate how much is known by a student about the topic or focus of the test, or how consistently students apply particular heuristics or habits to achieve particular outcomes. The scores on tests can be compared with those of other students, other classes, or even other schools and jurisdictions, as is the case in standardised testing.

Teachers regularly collect and evaluate both quantitative and qualitative evidence of student performance with in-class and school-based assessment; this is part of a teacher’s professional role. This evidence is a form of feedback about student performance. A teacher uses this evidence to make decisions about practice, and to make judgments of student performance. Teachers also sit in cohorts and compare their judgments of student work in a process known as moderation, which serves to ensure consistency between judgments, and to give teachers feedback that improves their future judgments.

Teacher judgments might be considered more valid and reliable than standardised measures, as teachers generally evaluate student learning on multiple assessment pieces over time, and in different ways, and with a deep and purposeful awareness of context, allowing them to triangulate the evidence they have collected to justify their judgments. Standards provided by curriculum bodies help teachers to do this according to values held by society – or at least, those formally responsible for making decisions about education policy – about what is expected of our students.

My criticism is of those who insist on using quantitative evidence derived by experimental research, including RCTs, to determine “what works,” to the exclusion of other forms of evidence, and insist on prescribing this practice to teachers. Given that the evidence-based practice movement appears not to trust teacher judgment, teacher grading might be viewed as problematic by evidence-based practice advocates, as it necessitates teacher judgment, and judgment cannot be value free.

What is “effective” may not be desirable or appropriate

Education practice is framed by purpose. Without purpose, education practice is without direction. Questions about effectiveness must be secondary to questions of purpose, which is a value judgment.

Evidence of effectiveness valued by “evidence-based practice” advocates is often from experimental research that collects and statistically analyses quantitative data using standardised, quantitative measures. This criterion narrows the focus of education not to what is desirable or appropriate or even necessarily valued, but to what can be measured, represented, and analysed quantitatively, even if doing so might be considered an invalid representation: the number of correct answers on a test as a measure of “numeracy,” or the number of points of improvement from before and after an intervention as an indication of change in ability.

Unfortunately, many learning goals are not easily quantified, measured, or compared. The emphasis on evidence shifts the focus from practices that might help teachers achieve those aims they judge to be desirable or appropriate (as discussed in the second post), to achieving those aims that can be measured and compared, and these may not be desirable or appropriate. Indeed, they may have secondary outcomes, side effects, which make them quite undesirable or inappropriate.

The ongoing controversies around NAPLAN exemplify this tension between what can be measured (a limited proxy of “literacy” and “numeracy”) and the secondary outcomes, as well as questions of the purpose of the program to begin with. These include reports of students feeling anxious, and becoming ill. Obviously, student anxiety is not a desirable or appropriate outcome of any educational practice. There are also many stories of teachers “teaching to the test” (sometimes at the direction of administrators), and in taking the time and space to do so, reducing opportunities for students to learn in areas not measured by NAPLAN, such as science, social studies, technologies, and the arts. (One proposed solution to this undesirable and inappropriate narrowing of the curriculum is a NAPLAN test of science.)

Schools have also been accused of gaming the system by asking low-performing students to sit out, and of educational triaging, where some students are given intensive training and attention to bring them up to a pass at the expense of attention to other students. These activities aim to improve schools’ positions in league tables developed by the media using simplified representations of data sourced from the MySchool website. Finally, the linking of funding to schools’ NAPLAN results is also arguably an undesirable and inappropriate outcome. NAPLAN oversamples the population while undersampling the curriculum, and is a questionable measure of literacy and numeracy (ignoring, for the moment, the social, behavioural, affective, and other cognitive outcomes we aim to achieve in education). All of these activities and outcomes potentially invalidate what is not a particularly valid or reliable measure of learning outcomes in the first place.

On the positive side, NAPLAN does generate a lot of data. What do we do with it?

The question of what is desirable and appropriate is as important, if not more so, than the question of what is evidenced as effective (which defaults to what is quantitatively measurable).

Educational practice is highly contextualised

Claims that classroom practice must be “evidence-based” sometimes cite evidence that has been collected by researchers in other fields, such as psychology or linguistics. Each study, in every field, should be evaluated on its own merits, not on the basis of the field to which it belongs. Such research evidence is useful, but must be considered in light of the context in which it was collected: research from these fields is often conducted in contexts that are distinctly dissimilar to, or isolated from, those common in education.

Education research identifies possibilities. What evidence from research can tell us is, approximately, the probability that a practice will effect change, and possibly the direction of that change (towards a specified goal, or away from it, usually). Unfortunately, generalising from research in a specific educational context to a different educational context is risky: when applying that evidence to decision-making in a different context, the probability changes. This doesn’t mean that education research is not worthwhile; it means that the evidence needs to be carefully and thoughtfully interpreted and applied.

Evidence from RCTs describes patterns in populations, but may not be relevant to particular individuals

There are suggestions that education research should test hypotheses using best-practice experimental methodologies such as randomised controlled trials (RCTs). RCTs are commonly held to be the “gold standard” for collecting evidence in science, though there are criticisms of this position. According to some commentators, the lack of RCTs makes education research substandard. Dr Ben Goldacre, a physician based in the United Kingdom, is one of many who have been pushing for this form of research in education.

An RCT involves randomly allocating participants to different treatments (including a control group). Where RCTs have been conducted in education contexts (Project Follow Through and the Sheffield phonics study, for example), the validity of the results is called into question by contextual factors that cannot be addressed, or by the confounding actions of the teachers themselves, who reflexively act and adapt their practice to assist students to achieve learning goals. RCTs collect useful data that can be used to compare two treatments, but with questions about whether or not causal mechanisms can be identified with any certainty in education, the value of this research is questionable (see Part 3 for a discussion of causality in education).
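
For readers unfamiliar with the mechanics, the allocation step at the core of an RCT might look like the following minimal sketch (the participants are hypothetical, and real trials involve much more, such as blinding and pre-registration):

```python
# A minimal sketch of the defining step of an RCT -- random allocation --
# using hypothetical participants. Real trials involve far more (blinding,
# pre-registration, handling attrition); this shows only the randomisation.
import random

participants = [f"student_{i}" for i in range(1, 41)]  # 40 hypothetical students
random.shuffle(participants)  # random allocation is what makes it an RCT

half = len(participants) // 2
treatment_group = participants[:half]  # receives the new practice
control_group = participants[half:]    # receives business-as-usual teaching

# Randomisation balances unknown confounders *on average* across groups,
# but it cannot stop teachers in either group reflexively adapting their
# practice to help their students -- the confound noted above.
print(len(treatment_group), len(control_group))  # 20 20
```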

Even assuming that causality can be determined, there are two issues with reasoning inductively from the evidence generated by an RCT (or any other methodology). Firstly, it is problematic to assume that what has been shown to work by statistical analysis of data collected by experimental research will apply to other students to the same degree. Teacher judgment, made with deep knowledge of individual students, class dynamics, and the learning environment, is needed. Secondly, due to the nature of social science research, we cannot reason inductively with any degree of certainty that an interaction that has been shown to be effective in the past will be effective in the future.
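
A toy simulation can illustrate the first issue. Under the entirely invented assumption that a practice improves scores by 5 points on average, but with effects that vary widely between students, a practice a trial would call “effective” can still leave many individual students worse off:

```python
# A toy simulation (all numbers invented) of the first issue: a practice
# with a positive *average* effect can still leave many individual
# students worse off.
import random

random.seed(1)

# Hypothetical individual treatment effects: +5 points on average,
# but widely spread across students (standard deviation of 8).
effects = [random.gauss(5, 8) for _ in range(1000)]

average = sum(effects) / len(effects)
harmed = sum(1 for e in effects if e < 0) / len(effects)

print(f"average effect: {average:+.1f} points")
print(f"share of students with a negative effect: {harmed:.0%}")
# With these numbers, roughly a quarter of students go backwards, even
# though a trial would report the practice as clearly "effective".
```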

I am concerned that prescribing practice based on what research has demonstrated to be “effective” assumes that it is best for all students, to the same degree, because those practices have been “validated.” However, those “validations” are based on a one-size-fits-all approach, and must be carefully applied to specific students and contexts.

Evidence can come from various sources, including academic and school-based research. The question of what is desirable and appropriate is a value judgment that must be made in consideration of the purpose(s) of practices, the participating students, and the learning environment. Teachers already collect evidence and make judgments about student learning, and use that evidence to make decisions about practice. Basing practice on what is effective, or focusing on attempts to measure quantitatively what is better judged qualitatively, can have undesirable or inappropriate consequences. NAPLAN is an example of this. We need to be cautious about applying evidence collected from one context to another, and from what has worked for a large group, to what will work for a specific student or small set of students.

The role of evidence

Evidence is one of several pieces of information that can inform a teacher’s decisions

Whether it comes from formal academic research, assessment programs, school programs, or class assessment, evidence can be interesting, informative, and useful. It can reveal possible relationships between practice and student learning.

Evidence of “effectiveness” is valuable, and in some cases it is necessary, but it is very rarely sufficient to justify (or contest) decisions about educational practice. It is one of several sources of information from which a professional educator can reason to justify (or contest) decisions about their practice. Other sources of information, as Gary Jones touches on, include:

  • Professional knowledge (theories of practice, pedagogy, technology, education, and child and adolescent development, etc);
  • Content knowledge (theories and frameworks of knowledge, understanding, skills, and capabilities related to the content that is to be learned by students);
  • Contextual knowledge (knowledge of students, their parents, and other stakeholders; knowledge of context, including physical, social, and cultural environment, policies, laws, and local practices);
  • Experience (learned habits and practices developed through rehearsal and reflection; see Berliner, 2004).

Evidence can inform decisions about classroom practice, but the evidence must be thoughtfully and tentatively applied. My friend and mentor David Geelan suggests:

“Good evidence is always valuable. The problem arises when we surge on ahead of the available evidence and get dogmatic on the basis of poor evidence filled in with guesses and assumptions.”

Simply picking up Hattie’s Visible Learning (for example) and taking the effect sizes that were calculated at face value is not recommended. An understanding of the purpose, context, specifications, and limitations of the meta-analysis is necessary for using that evidence appropriately. What meta-analyses like Hattie’s Visible Learning can suggest to us is which practices we can have high confidence in, and which we may need to be more tentative about. Though the presentation of effect sizes suggests approximately how likely a practice is to raise students’ academic achievement (as a group, not for any individual student), Hattie’s analyses have come under fire for using inappropriate calculations. It would be inappropriate to read any more than that into the results of his analysis, yet politicians, policymakers, principals, and teachers themselves, with varying degrees of skill at evaluating evidence, commonly hold up Hattie’s results as justification for implementing, banning, or changing practices and policies. The phrase “evidence-based” is meaningless if it is not engaged with critically.
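
To see what a single pooled number can conceal, here is a minimal sketch of one common pooling method, inverse-variance weighting as used in fixed-effect meta-analysis (the study results are invented, and this is a simplification, not Hattie’s own procedure, which averaged effect sizes across existing meta-analyses):

```python
# A minimal sketch of fixed-effect meta-analysis: studies are pooled with
# inverse-variance weights, producing the kind of single headline effect
# size readers take at face value. The four studies here are invented.
studies = [
    # (effect size d, variance of d)
    (0.9, 0.04),   # small study, large effect
    (0.1, 0.01),   # large study, near-zero effect
    (0.5, 0.02),
    (-0.2, 0.03),  # a study in which the practice appeared harmful
]

weights = [1.0 / var for _, var in studies]  # more precise studies weigh more
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

print(f"pooled effect size: {pooled_d:.2f}")  # ~0.24
# The pooled d is positive, but it hides studies ranging from -0.2 to 0.9,
# conducted in different contexts with different populations -- exactly the
# detail that is lost when only the headline number is reported.
```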

In order to critically evaluate evidence, to know whether it is sound, comprehensive, and substantive, one must know and understand why and how the evidence was collected and analysed, and how it is represented, interpreted, and communicated. An understanding of the theories, frameworks, and backing that underpin the practices being researched is also needed. Otherwise, the claim that a practice is “evidence-based” can be made to support the decision to use almost any practice in the classroom (including learning styles!).

Teachers should be encouraged to use research to guide them in making decisions about their practice

On the surface, it might seem that the requirement to use programs and practices that are promoted as “evidence-based” would save teachers time: fewer decisions to make, less time to spend with research (either formal or informal). A little of this might be acceptable; most of us are happy to accept that a curriculum is necessary to guide content selection, and that some sort of representation of achievement is necessary at the end of the compulsory years of schooling, particularly for those students who wish to enter tertiary education. A lot of prescription, whether or not such approaches are “evidence-based,” is not. A tightly prescriptive curriculum, mandated programs, and compulsory assessment bind teachers and their students to an inflexible structure, reducing their capacity to respond to the various needs of their students along the way to attaining goals.

Evidence is a useful tool, but to use it appropriately for making decisions requires an understanding of what constitutes useful evidence, appropriate methodologies for data collection, contextual knowledge that will help a teacher to know whether or not the research they’re examining is relevant, and most importantly, whether it’s actually going to be useful for a particular student on any one day. When the term “evidence-based” is thrown around as justification for prescribing a particular program, practice, or pedagogy, the nuances of the population that was sampled, the contexts for and of the research, the method and methodology for data collection, and the analysis of the results are lost.

The research literacy required for evaluating claims and supporting evidence and theory is one of the goals of pre-service teacher education. Research literacy involves an understanding of educational theories and philosophies and how and why they arose, how they are consistent and inconsistent and how they might be critically dissected, how evidence is derived and how it might be properly analysed, and how all these ideas are communicated. This is necessary preparation for graduating teachers to be able to justify professional decisions they will make in the classroom.

An ideal situation is one in which teachers can make and use appropriate interpretations of evidence from accessible research, where their interpretations are consistent with our best theories of learning and teaching (mechanisms and theoretical frameworks), to make appropriate decisions about practices that will achieve desirable outcomes for their diverse groups of students in the contexts in which they are teaching. Professional development workshops and conferences, teacher networks, associations, and magazines might all be avenues for teachers to access and evaluate research and evidence.

Let’s also give teachers the time, space, and access they need to evaluate research critically, and purposefully, to inform their decisions. Let’s have some research-informed decision-making, rather than “evidence-based” practice.

What do you think?

Bibliography

Berliner, D.C. (1994). Expertise: The wonders of exemplary performance. In J.N. Mangieri & C.C. Block (Eds.), Creating powerful thinking in teachers and students (pp. 141-186). New York: Holt, Rinehart & Winston.

Berliner, D. C. (2004). Describing the behavior and documenting the accomplishments of expert teachers. Bulletin of Science, Technology & Society, 24, 200-212.

Biesta, G. (2007). Why “what works” won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1-22.

Biesta, G. J. (2010). Why ‘what works’ still won’t work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29(5), 491-503.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. University of Chicago Press.

Sanderson, I. (2003). Is it ‘what works’ that matters? Evaluation and evidence-based policy-making. Research Papers in Education, 18(4), 331-345.

Shavelson, R. J., & Towne, L. (2002). Scientific research in education. Washington, DC: National Academy Press.