Overview of Week 6 - How might I evaluate my teaching (Part 2)?

Table of Contents

  1. Overview of Week 6
  2. Experiencing - Evaluating a teaching episode
  3. Examining - Examining an evaluation experience
  4. Explaining - Designing evaluation
  5. Applying - How can I evaluate my teaching? Designing your evaluation

 

 


1. Where are we?

This week you will continue engaging with the driving question

How might I evaluate my teaching practice?

The focus is to develop a personalised evaluation framework/plan that you can use in Assignment 1, Part B. As with last week, you will encounter different approaches to evaluating learning and teaching, and it is important to keep your context in mind when assessing these approaches for suitability and flexibility.


2. This week's learning path

This week's learning path consists of the following four sections.

Experiencing

Where the notion of a teaching episode is clarified and you're asked to use a particular evaluation instrument to perform a small evaluation on this course.


Examining

Asks you to use the components of an evaluation from last week's learning path to reflect on the evaluation instrument from the Experiencing phase above.


Explaining

Offers a description of various concepts associated with the design of an evaluation plan.


Applying

Starts drawing this and last week's work together to begin working on Assignment 1, Part B.



3. This week’s references

The following references can also be found in the Week 6 section of the library of the course's Zotero group.

Reference list

Alvarez, M. E., & Anderson-Ketchmark, C. (2011). Danielson’s Framework for Teaching. Children & Schools, 33(1), 61–63.
Chickering, A. W., & Gamson, Z. F. (1987). Seven Principles for Good Practice in Undergraduate Education. AAHE Bulletin. Retrieved from https://eric.ed.gov/?id=ED282491
Darwin, S. (2012). Moving beyond face value: re-envisioning higher education evaluation as a generator of professional knowledge. Assessment & Evaluation in Higher Education, 37(6), 733–745. https://doi.org/10.1080/02602938.2011.565114
Harvey, J. (Ed.). (1998). Evaluation Cookbook. Heriot-Watt University. Retrieved from http://www.icbl.hw.ac.uk/ltdi/cookbook/
To read (2 pages) Ross, M. E. (2010). Designing and Using Program Evaluation as a Tool for Reform. Journal of Research on Leadership Education, 5(12), 481–500. https://doi.org/10.1177/194277511000501207
Tenam-Zemach, M., & Flynn, J. E. (2015). Rubric Nation: Critical Inquiries on the Impact of Rubrics in Education. IAP.
To read Tran, N. D. (2015). Reconceptualisation of approaches to teaching evaluation in higher education. Issues in Educational Research, 25(1), 50–61.
Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22–42. https://doi.org/10.1016/j.stueduc.2016.08.007
Wiliam, D. (2006). Assessment: Learning communities can use it to engineer a bridge connecting teaching and learning. JSD, 27(1).

Experiencing - Evaluating a teaching episode


1. Introduction

Last week, we investigated the components of evaluation and how they align to create a strong evaluation plan. This week, we continue learning about evaluation, but with a more practical focus on developing a plan to evaluate a teaching episode, which is the crux of Assignment 1, Part B. Our driving question is the same as last week's:

How might I evaluate my teaching practice?

To start with, let’s use a concrete example of an evaluation of a teaching episode. In the following discussion, the teaching episode is the online content of a course.


2. Some notes on "teaching episode"

Assignment 1, Part B asks that you evaluate a teaching episode. In this book, the teaching episode you'll be evaluating is part of an online course. What is a teaching episode?

We've used this term quite explicitly in an attempt to be inclusive. Given the potential diversity of prior teaching experience for students in this course, we wanted to avoid using a term that excluded anyone's experience.

For this course, a teaching episode is a situation where you have prepared some sort of intervention or environment in which students are expected to learn something.

Size?

The teaching episode you will be asked to evaluate next is likely too large for your assignment. We use it here because it is accessible, because your evaluation will provide some useful information for us, but, more importantly, because it provides one clear view of what an evaluation might look like.

In the end, how "large" a teaching episode you use is up to you. Whatever size teaching episode you choose, your submission will need to fit the word length limit for Assignment 1, Part B.


3. Evaluating online teaching at USQ

At USQ, there is a document called StudyDesk Design & Implementation Expectations that is used to evaluate a course’s StudyDesk. This document outlines the “expectations” of a StudyDesk and provides a rubric to evaluate a course’s online presence. It is commonly used by some support teams and can be used by the course examiner to undertake a self-assessment of their StudyDesk.

Examining the expectations document

Take a look at the StudyDesk Design & Implementation Expectations document and answer the following questions:

  1. Who do you think set these expectations?

  2. What is clear/unclear in the evaluation rubric?

  3. What are some features of the evaluation rubric?

  4. What is missing from/can be improved in the evaluation rubric?

  5. How do you think the evaluation rubric was created/constructed?

  6. When you look at the evaluation rubric, do you think that you would like to use it to evaluate your online teaching space?


4. How does EDU8702 stack up?

We recommend completing the following activity, at least to the point of using the expectations document to evaluate an aspect of this course's StudyDesk. However, if you're worried about sharing your evaluation via the following means, please feel free not to share.

Evaluating EDU8702

Now, using one criterion (e.g. A course orientation is provided) from the StudyDesk Design & Implementation Expectations document, evaluate this course (yes, EDU8702!).

  • What do you find?

  • Does this course meet the expectations for that criterion and its indicators? Why or why not? What could be improved?

  • How easy is it to evaluate this course using this evaluation rubric?

If you are comfortable with it, please share your evaluation of the EDU8702 StudyDesk in the forum. This will provide valuable feedback to us in the design and implementation of the course. It will be very much appreciated!

Examining - Examining an evaluation experience


1. Reflecting on the evaluation experience

In Experiencing, you used an evaluation rubric (StudyDesk Design & Implementation Expectations) to evaluate a teaching episode (an aspect of the EDU8702 StudyDesk). In this book, we focus on this experience of evaluating EDU8702 using the StudyDesk Expectations rubric.

Examining an evaluation using the expectations document

Drawing on the components of evaluation that you learned about in Week 5, reflect on and answer the following questions with respect to the StudyDesk Design & Implementation Expectations document:

  • What was the purpose of the evaluation?

  • Why would a teacher (or some other stakeholder) want to evaluate this, i.e. what type of goal do they want to address?

  • What type of evaluation was this?

  • What type of evidence did you use in the evaluation? Are there other types of evidence you could use to support the evaluation?

  • What types of data would support this evaluation?

  • What type of data collection method was used?  

  • How effective do you think that this evaluation is for this type of teaching episode?

Now, in a blog post, write a reflective summary of your experience of using the StudyDesk Expectations evaluation rubric and of reviewing the components of evaluation with regard to that rubric.

Explaining - Designing evaluation


1. Introduction

The aim now is to provide some abstract conceptualisations that will help you design an evaluation plan for improving learning and teaching.

Build on your existing work

This week and last, you have been asked to develop a range of artefacts and to think about teaching episodes you might evaluate, using the range of conceptualisations that have been introduced.

Before you start working through this book, remind yourself of that work and use it to ask yourself questions and make choices about the conceptualisations and resources introduced here.


2. Designing evaluations

Designing an evaluation can be as simple or as complex as required, based on the purpose of the evaluation. For example, if you simply want to know “How did my students like the interview an expert section in learning analytics?”, then you could design a feedback tool or form that asks them that simple question, using a Likert scale for responses with a free-text comment section. However, evaluations can also be significantly more complex, as you saw in the StudyDesk Expectations document in the Experiencing and Examining books.

The critical aspect that underpins the simplicity or complexity lies in the purpose of the evaluation: Why are you evaluating this teaching episode - for what benefit?

To begin, it is worthwhile to review some approaches to evaluation from a program perspective. The article by Ross (2010) presents an overview of designing a program evaluation as a tool for reform, which is a larger goal than evaluating a teaching episode. However, pages 482–484 provide a clear overview of five approaches that are good to bear in mind as you design your evaluation.

Five approaches to designing an evaluation

As you read pages 482–484, consider the following questions:

  • Which approach do you think works best for your teaching episode?

  • Do you see value in combining two or more of the approaches?

If the article is interesting, then please do continue to read it as it provides a clear discussion of how a program evaluation was constructed. 


3. Foundational components of an evaluation document

There are three foundational components to structuring a formal evaluation document: the evaluation criteria, indicators of these criteria, and evidence for these indicators.

  1. Evaluation criteria.

    The criteria that are used to evaluate whether, or the degree to which, something has been effective, implemented, etc. These are the “big” parts of the evaluation - the combined criteria form the foundation of the evaluation.

  2. Indicators.

    The indicators are the different aspects of each criterion that work together to provide an overall picture of whether, or to what degree, the criterion has been met.

  3. Evidence.

    The evidence supports each of the indicators to demonstrate competence, degree of competence, or lack thereof, for each indicator. Together, these provide an overall view of the criterion.

As an example, in the StudyDesk Expectations evaluation rubric, there are ten criteria, and each criterion has between three and nine indicators that help the evaluator determine what standard is being demonstrated in a StudyDesk. In the following image, you can see that the criterion is “Legal and institutional requirements are addressed”. There are four indicators (4.1, 4.2, 4.3, and 4.4) that are rated on their degree of presence (yes, no, partly) and that will help the evaluator determine to what degree this criterion is being addressed (a rating of needs work, developing, good practice, or exemplary). The evidence can be placed into the “comments” box at the bottom. Finally, the rating can be determined based on how well the indicators are being met. Note that the ratings are described at the beginning of this evaluation rubric and are presented below the annotated table.

Components of an evaluation document

Depending on your evaluation goal and style, you might need to include a rating scale that will help you determine the degree to which something is or is not occurring in your teaching episode. Not every evaluation has a rating scale, but they can support clearer understanding of the evaluation, simpler decision making, and enable easier dissemination of results, where applicable. The following rating table has been taken from the StudyDesk Expectations evaluation rubric.

Review scale descriptions


4. Another example evaluation

Time to look at another, more broadly used, example of evaluation: Charlotte Danielson’s (2007) Framework for Teaching Evaluation Instrument. The intent behind examining the Danielson framework is to see how its criteria, indicators and evidence have been constructed and described. The Danielson Framework has been included so that you can see a non-tabular version. Evaluation rubrics/frameworks/supporting documents can be designed in many different ways, and the design can depend on the types of data and evidence that you are using.

Alvarez and Anderson-Ketchmark (2011) describe the original intent behind this work: “that it be used for self-assessment, teacher preparation, recruitment and hiring, mentoring, peer coaching, supervision, and evaluation” (p. 61). It’s a tool to help make judgements about the complex task of teaching within schools (not higher education, though it might arguably be used in higher education). It does this by identifying “four domains of teaching responsibility: planning and preparation (Domain 1), classroom environment (Domain 2), instruction (Domain 3), and professional responsibilities (Domain 4)” (Alvarez & Anderson-Ketchmark, 2011, p. 61). Each domain is broken down further into criteria, indicators and evidence.

Examining the framework

Using the Framework for Teaching Evaluation Instrument:
  1. Choose one domain to focus on - if your teaching episode aligns with one of these domains, then we recommend that you focus on that domain.

  2. Look at the criteria and make a judgement: Do these criteria address the key aspects of that domain? Is anything missing? What can be aligned/transferred to tertiary teaching contexts?

  3. Choose one criterion and read its description and indicators. Do these align with and support the criterion? Does the evidence align with and support the criterion and its indicators?

  4. What else do you need to know in order to use this criterion with its indicators to evaluate a teaching episode?


5. Choosing criteria, indicators and evidence

Designing an evaluation document/rubric/instrument is complex. How do you know what is important? What is not?

Alvarez and Anderson-Ketchmark (2011) explain that the Danielson Framework arose out of work done at a large educational testing service in the USA. It was developed “using practice, wisdom and research”, then field tested and further researched. This research is outlined in more detail in Danielson (2007).

You (and most practising higher education teachers) don’t have the time (and perhaps knowledge) to undertake such rigorous and important work in developing your evaluation instrument. What can you do?

There are at least three possible answers to that question:

  1. Use your own knowledge and assumptions;
    As established earlier in the course, your existing assumptions are not the best foundation upon which to base practice. It is important that you question those assumptions.

  2. Use existing theory to guide your design; or,
    As established earlier, theory can be a useful way to question assumptions and focus on what’s important for learning and teaching.

  3. Use existing evaluation instruments.
    You are not the first person to wish to evaluate their learning and teaching. There are established evaluation instruments and plans.


6. Using theory

Back in the Week 2 learning path you were introduced to the idea of theory as a tool for offering an informed perspective on a complex domain and principles for action. With evaluation, theory can be used in at least two broad ways:

  1. The design of broad evaluation approaches (or paradigms).
    The Ross (2010) reading from earlier in this book describes five classifications of evaluation approaches, which are arguably based on different broad theoretical foundations.

  2. The design of specific evaluation plans and instruments.
    Once you’ve chosen a particular evaluation approach/paradigm, the design of your particular plan or instrument can be guided by theory. For example, theory can guide your choice of criteria, indicators and evidence.

Given the focus here on helping you answer the question

How might I evaluate my teaching?

The focus here will be on the second way: using theory to design specific evaluation plans. The next couple of pages illustrate the theory and literature underpinning the StudyDesk Expectations document and look at a paper that uses theory to identify three different ways for students to evaluate teaching.


6.1. Theory underpinning the expectations document

The StudyDesk Expectations document you used earlier in this learning path includes a references section that points to the research and theory used to design that document.

In a perfect world, any interventions you plan for your teacher inquiry into student learning (TISL) will be informed by appropriate theories, frameworks and models. Those should provide pointers on what is considered important enough to measure and why it is important.


6.2. Using theory to ponder Student Evaluation of Teaching (SET)

Most universities employ a form of student evaluation of teaching (SET) in the form of end-of-semester surveys for each course. Tran (2015) identifies that much of this practice is based on implicit and explicit beliefs about quality learning, and argues that the use of frameworks or theories of teaching and student learning offers a useful way to “unpack the ‘hidden’ assumption about teaching in any teaching evaluation” (p. 51). (Remember that assumptions are a major challenge to TISL.) To demonstrate this, Tran (2015) uses Biggs’ 3P model of teaching and learning to identify three different approaches to SET and then argues for a particular approach.

Read and ponder

Read Tran (2015), with a particular focus on the sections examining the three different approaches to SET: the student presage-focused, teaching-focused, and learning-focused approaches.

As you develop the evaluation plan for your teaching episode, ask yourself about the focus of your evaluation: what students are; what the teacher does; or, what students do?

(If you have a sense of déjà vu, that may be due to an earlier mention in this course of Biggs’ three levels of teaching.)

7. Reuse other evaluation instruments

Work on the evaluation of learning and teaching has long been ongoing, and interest has grown in recent years with the expectation that higher education should be able to demonstrate its effectiveness and value for money. Consequently, there are existing evaluation instruments and plans that can be drawn upon.



7.1. Challenges to re-use

Wiliam (2006) argues

That is why “what works” is not the right question in education. Everything works somewhere, but nothing works everywhere. (p. 17)

Taking something (e.g. an evaluation instrument) from one educational context and using it in another is not always straightforward or appropriate. Differences in context may make re-use less than useful.

Use of the Danielson framework

Earlier in this book you looked at the Danielson framework and its origins. It is an approach to evaluation that has been widely used in the United States, especially as part of recent moves in that country based around the idea of teacher accountability. This movement received significant government funding and hence focused the attention of many stakeholders, a number of whom - including the Bill and Melinda Gates Foundation - looked to the Danielson framework as a potential tool.

In a blog post, Danielson herself reflects on some of the issues with teacher accountability and its implementation in the USA. Her comments touch on some of the difficulties associated with evaluation and measurement, such as those raised by Campbell’s law and Goodhart’s law.

These issues are somewhat reinforced by the comments on Danielson’s post, especially those that give an insight into the apparent experience of teachers who have experienced use of the Danielson framework.


8. A critical view of evaluation

A critical inquiry into evaluation would seek to reveal some of the assumptions that underpin it in order to better understand it and its impact. Again, due to space, we’ll only briefly touch on this, and we’ll start with the following scene from the movie Dead Poets Society.

The foreword (Westheimer, 2015) to the book “Rubric Nation: Critical Inquiries on the Impact of Rubrics in Education” uses this scene to make a point about evaluation - in particular, about the use of rubrics and the type of evaluation to which the Danielson framework has been put in the USA. It argues that these sorts of calls for standardised measures and evaluation are part of a broader global movement. It goes on to identify some of the issues that may arise from evaluation. In particular,

...the idea that we should clearly articulate educational goals and then devise methods for determining whether those goals are met is irresistibly tidy...Uncritical acceptance of even such a commonsense-seeming idea, however, is misguided for the following reason: education is first and foremost about human relationship and interaction, and as anyone who tried to create a rubric for family fealty or for love or for trust would discover, any effort to quantify complex human interactions quickly devolves into a fool’s errand. (Westheimer, 2015, p. viii)

These comments are specifically aimed at the idea of standardised evaluation, not to individual teachers evaluating their own performance in contextually appropriate and informed ways. As such they appear to offer a potential explanation for some of the comments on Danielson’s blog post. They also point to some of the challenges in widespread re-use of evaluation instruments or approaches from other contexts, and the inherent difficulty involved in evaluating learning and teaching.


9. Strategies for designing evaluation

The following pages provide an overview of four strategies that can help you design a clear and solid evaluation. The four strategies are:

  1. Embed your evaluation in the learning and teaching process for learners and teachers.
  2. Plan the evaluation with deliberate intent.
  3. Think carefully about your data collection and sampling procedures.
  4. Close the loop: feedback and feedforward what you learn from your evaluation.


9.1. Embed your evaluation in the L&T process

When designing your learning and teaching activities, think about ways to make your students’ progress in learning explicitly visible to them and to yourself.

For example, in EDU8702, you can see your activity tracking on the right side of the course activity page - you can track your progress through the content in this way. You are also asked to regularly reflect on your learning through your blog and your assignments are designed to review those reflections and build on them.

These combined learning experiences can be a valuable source of evaluative information for you as the learner and for the teacher: both can see your progress and development in terms of the course content and learning design.


9.2. Plan the evaluation with intent

This strategy draws on the evaluation component prompts that you discovered in Week 5 and that were reiterated in the Examining book. However, there are two additional prompts regarding timelines and resources, because these also need to be known in order to design an evaluation.

    1. What is the purpose of the evaluation?

    2. What kinds of evidence will help you understand more about these issues or aims?

    3. What sources of information will you use?

    4. What methods or approach will you take in collecting the information?

    5. What timeline would be best for undertaking each element of the evaluation?

    6. What resources and support do you have?

It is important to ensure that each of these aspects of the evaluation align. In this way, they will form a strong basis for the evaluation. The following evaluation planning grid is an example of how you can investigate the alignment of these components.

Sample format for an evaluation planning grid

 


9.3. Think about data collection and sampling procedures

Remember to check the university requirements regarding evaluation of learning and teaching, and adhere to any policies that might be in place, e.g. the Evaluation of Teaching, Courses and Programs Policy and Procedure at USQ. Also, discuss evaluation with your colleagues to learn from their experiences.

    1. Use multiple viewing lenses or triangulation. If you collect data from multiple sources and/or a variety of perspectives, it creates richer and deeper understanding of what is happening in the area being evaluated. For example, collecting student and teacher feedback can provide a more in-depth understanding of a teaching episode.

    2. Decide on the amount, type, and format of data based on what you need and what you can use. Ensure that the amount, type, and format of data aligns with your evaluation purpose.

    3. Consider your sampling procedures and response rates, where applicable.

    4. Complete any evaluation questionnaires or tasks yourself. This will help you check for errors, comprehensibility, and reasonability.

    5. Ensure that you can actually make a decision based on the evaluation data that you have collected.


9.4. Close the loop: feedback and feedforward

After you have completed the evaluation and determined actions based on the results, consider who else might benefit from hearing about your evaluation, and how you might disseminate this information to them. As you will have seen throughout these two weeks on evaluation, it is difficult to find information on evaluating teaching practice in higher education, so your colleagues could benefit significantly from you sharing your experiences, learnings, and outcomes.

Applying it to your episode

Considering the teaching episode that you will evaluate for Assignment 1, Part B, develop responses and/or approaches to addressing the four strategies listed above. Perhaps write a reflective blog post on how you will address these strategies and how they will impact your evaluation design, e.g. what additional/alternative considerations you might have, how you might re-imagine your evaluation, etc.

Applying - How can I evaluate my teaching? Designing your evaluation


1. Introduction

During this week, you have been investigating how to design an evaluation based on the components of evaluation discussed in Week 5 and a developing understanding of the parts and processes of designing an evaluation. Throughout this week, it has been assumed that you have identified the teaching episode that you will focus on, but it is now important to make this explicit.

Work through the following activities to more fully develop the evaluation of your teaching episode. But first, read through all of the activities before you start working on each part.

Understand Assignment 1, Part B

Now might be a good time to revisit Assignment 1, Part B.

2. Teaching episode context

What's your context?

Describe your teaching episode in its full context. This will be a refinement of the description that you prepared in the Applying book in Week 5.

3. Plan with intent

Laying the ground work

Answer the following questions for the evaluation of your specific teaching episode:

  1. What is the purpose of the evaluation?

  2. What kinds of evidence will help you understand more about these issues or aims?

  3. What sources of information will you use?

  4. What methods or approach will you take in collecting the information?

  5. What timeline would be best for undertaking each element of the evaluation?

  6. What resources and support do you have?


4. Criteria, indicators and evidence

Identifying your criteria, indicators and evidence

Considering the purpose of your evaluation, construct responses to the following questions to design your evaluation plan.

  1. Criteria

    1. What criteria can be used to evaluate your teaching episode that will align with your evaluation purpose?

    2. Looking at these criteria, do they adequately cover your evaluation purpose?

    3. Do you need more criteria?

    4. Do you need fewer criteria?

    5. Are there some criteria that could be combined?

    6. Are there some criteria that are too complex and need to be separated?

    7. Does each criterion cover one area/specific item?

  2. Indicators.
    For each of the criteria above, answer the following questions:

    1. What will indicate that this criterion is being addressed/incorporated/undertaken?

    2. How will these indicators be assessed, i.e. what kind of rating scale, if any, would be appropriate?

    3. Are there enough indicators for each criterion?

    4. Are there too many indicators for each criterion?

    5. Could some indicators be combined?

    6. Does each indicator assess only one aspect?

    7. Should any indicators be separated into two?

  3. Evidence.
    For each of the indicators above, answer the following questions:

    1. What types of evidence could be used to support the indicators?

    2. Is it reasonable evidence?

    3. When looking at the evidence for all indicators for one criterion, does it all align?

    4. Is there any other evidence listed in Part 2 that has not been used? Should it be used? Should it be removed from the list in Part 2?


5. Developing your evaluation plan

Combine the information from Parts 1 to 3 into an evaluation plan. You will need to consider how to present the various pieces of information so that a colleague could also use the evaluation plan for the teaching episode. This will help you consider the clarity of the information that you are presenting.

You can create an artefact that reflects the way you understand your evaluation and its content - there is no predetermined format to use. Recollect the evaluation frameworks that you have seen in Week 5 and Week 6, and consider whether any of these aligned with the way you think.

Once you have created your artefact, share it on your blog with a reflection on the process of creating your evaluation plan. Your reflection should highlight your key learnings, any difficulties you faced, and any other considerations that you want to make regarding your evaluation plan.