Looking Beyond the Evidence: What’s Your Story? — Donna Ellis, Director of the Centre for Teaching Excellence

Have you ever felt overwhelmed?  I’m sitting at my computer on a late November afternoon contemplating what I have taken away from two recent events: a provincial symposium on assessing learning outcomes and an international conference for educational developers on transformative relationships in relation to fostering cultures of deep learning.

I attended numerous sessions, and overall I came away with a sense of what I call “data overwhelmosis”. We have more data and more evidence available to us than ever before in higher education. We have software to help us identify specific learning outcomes and each student’s level of achievement for each outcome. We have online templates for course syllabi that generate maps of the learning outcomes for an entire program’s curriculum. We can use learning analytics and data analytics to monitor students’ progress (or failure). We can do social network analyses to show how we connect to one another and how information flows within a unit, across an entire institution, or beyond. We know which educational development practices have empirical backing. The list goes on. My point is that we can capture almost anything. We can collate massive amounts of data and generate evidence for (or against) almost anything you can imagine. But to what end? What’s the purpose? And what’s the overarching plan?

We’ve talked a lot about these questions while devising and implementing our Centre’s assessment plan and preparing for our upcoming external review. Just because we can get data doesn’t mean that collecting them is a good idea. How much is enough? What will we do with what we collect? Why will it matter? Data collection takes time and effort; we know this from any research project we have undertaken. In our line of work, any time that staff spend inputting data about their work is time not spent working with a client. There has to be a good reason to ask staff members to spend time in this way. This is where the role of questions becomes critical.

For research projects, we determine research questions.  We did the same when devising our assessment plan.  These questions guide our every move:  our methodological decisions, the types of data we need, the appropriate analysis methods, and the way we write up our results.  The questions enable us to select the data that will help us determine answers, and these limited data become the evidence for our conclusions.  We’ve realized that we don’t need every piece of data that we could collect – just the data that are relevant to the questions.  This is a freeing revelation.

But it doesn’t end there.  The evidence isn’t enough.  We need to find the story.  What does the evidence mean?  How will it affect what we do tomorrow or in the next five years?  I worry that higher education in general – and educational development specifically – is getting bogged down in the weeds and not stepping back to identify what those weeds are telling us.  The examples that I noted in the second paragraph help to illuminate the issue.  But what are we overlooking?  Which way is the wind blowing now and in the future?  Our questions create important frames to make data manageable and even meaningful, but thinking about how to tell the story of the evidence seems the most crucial of all to me.

In the next few months, we will be aiming to tell the story of CTE in our self-study, which will extend far beyond what we convey in our annual reports.  We will be analyzing existing relevant data and collecting new data as needed to fill perceived gaps.  We will be striving to ensure that we have sufficient information to assist our external reviewers in addressing the questions set in the Terms of Reference for the review.  But from all of this, what we most need is to tell our story and listen to what it is telling us.  I’m not entirely sure what we’ll hear, but I am very intrigued by what will emerge.  The evidence is critical, but we need to move beyond it to better understand where we are and where we’re going.

Walking the Talk: Preparing for an External Review — Donna Ellis, CTE Director

As many of you know, one of CTE’s key services involves facilitating curriculum retreats for departments as they prepare a self-study for academic program review. We help faculty to: identify program-level outcomes, map courses according to those outcomes, and explore how course assessments fit into the curriculum map. See our curriculum renewal website for more details. We feel strongly that we should also undertake similar processes for our own work.  Even though we have only a few programs that might look like a coherent curriculum, and fewer still that lead to some kind of certification, everything we do is meant to be an opportunity for learning about teaching.

As part of the Academic Programming strategic plan, CTE is up for external review in 2017. Like an academic department, we recognize that some preparation is in order, including thinking carefully about where we’ve been and where we’re going. One key preparatory step has been our work over the past few terms on designing a comprehensive plan to assess the work of the Centre. Despite a growing body of literature on teaching centre assessment, no set standards have emerged for assessing a centre’s work, so we’ve been developing some parameters and frameworks to help guide our efforts. In many ways we have been anticipating this activity since we began preparing to support new academic review processes in 2007, and it is exciting to be acting more fully on it now.

Key principles underlying our plan’s development include being: a) collaborative by involving multiple stakeholders, b) defensible by drawing from the evaluation literature, and c) comprehensive while not assessing everything all the time.  Most importantly, we are striving to create a plan that is sustainable so that no one feels overburdened by contributing to its implementation: we can focus on providing our services and you can focus on your ongoing development.

Our plan is centred on four key questions:

  • Who comes to us?
  • To what extent are we meeting our participants’ needs?
  • What intended outcomes are our participants meeting?
  • How effective are our processes?

These questions are meant to help us gather evidence about our overall impact and indicate how well we are achieving our aims of building capacity, building community, and promoting a culture that values teaching and learning. We are interested in both short- and long-term results, including the ripple effects of our work, and we are poised to look more systematically at connections and transformations.

Building on the evaluation tool of a logic model, we have identified both output and outcome data to collect:

  • Output Data: event registrations and staff reports (e.g., consultation numbers); resource hits (e.g., website traffic); internal planning (e.g., workload analyses)
  • Outcome Data: surveys (post-event and long-term); participant reports/narratives; interviews and focus groups; other data (e.g., network analyses)

We are now building a matrix that maps our various programs and services to the questions that need to be addressed in each case and the most appropriate type(s) of data to collect. A CTE working group is also developing survey questions based on our programming outcomes. Our goal is to pilot our new instruments and processes this Fall, so expect to experience some new ways of providing feedback on our services. We thank you in advance for assisting us with this important work.
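To make the matrix idea more concrete, here is a minimal sketch of how such a mapping might be represented. The service names, questions, and data types are hypothetical stand-ins chosen for illustration; this is not CTE’s actual matrix.

```python
# Illustrative sketch only: the service names, questions, and data types
# below are hypothetical stand-ins, not CTE's actual assessment matrix.

assessment_matrix = {
    "workshops": {
        "questions": [
            "Who comes to us?",
            "To what extent are we meeting our participants' needs?",
        ],
        "output_data": ["event registrations"],
        "outcome_data": ["post-event surveys"],
    },
    "consultations": {
        "questions": ["How effective are our processes?"],
        "output_data": ["staff reports (consultation numbers)"],
        "outcome_data": ["participant narratives", "interviews"],
    },
}

# Example query: which services rely on post-event surveys?
for service, plan in assessment_matrix.items():
    if "post-event surveys" in plan["outcome_data"]:
        print(service)  # prints: workshops
```

Even a toy structure like this makes coverage easy to check: a service with an empty list signals a gap between our questions and the data we plan to collect.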

Another part of our plan is to release an annual report that highlights key assessment data.  We plan to replace our Fall newsletter with this annual report, with the first one scheduled to come out this Fall.  Our new assessment plan won’t be in place by then, but we already have various data to report on to help increase transparency around CTE and the impact of our services.

I am grateful to all of the talented individuals at CTE who are contributing to our assessment plan and annual report, and to our faculty colleagues who have been providing feedback along the way. Like academic program reviews, our assessment work is part of a larger, ongoing process of continuous improvement. There is more work ahead of us, but it is exciting to contemplate all that we can learn from it.

If you have feedback about our assessment approach or would like to learn more, please give me a call (ext. 35713) or send me an email (donnae@uwaterloo.ca).

References:

Grabove, V., Kustra, E., Lopes, V., Potter, M.K., Wiggers, R., & Woodhouse, R. (2012). Teaching and learning centres: Their evolving role within Ontario colleges and universities. Toronto: Higher Education Quality Council of Ontario.

Hines, S.R. (2011). How mature teaching and learning centers evaluate their services. In J.E. Miller & J.E. Groccia (Eds.), To improve the academy: Resources for faculty, instructional, and organizational development (pp. 277-279). San Francisco, CA: Jossey-Bass.

Wright, M.C. (2011). Measuring a teaching center’s effectiveness. In C.E. Cook & M. Kaplan (Eds.), Advancing the culture of teaching on campus: How a teaching center can make a difference (pp. 38-49). Sterling, VA: Stylus.

Program development and evaluation logic model templates from University of Wisconsin-Extension:  http://www.uwex.edu/ces/pdande/evaluation/evallogicmodelworksheets.html

Rethinking Assessments of Student Learning — Donna Ellis

As I write this article, a number of you will have just finished marking your final exams.  Did your students learn what you wanted them to learn?  Did your exam and your other course assessments enable them to demonstrate and perhaps even further extend their learning?

Assessments of student learning are a critical part of courses. Overall, they are the major driver of what students choose to do and focus on in a course. But do our assessments require students to learn? In his recent talk at uWaterloo, “Assessment: The Silent Killer of Learning,” Eric Mazur of Harvard University suggested that the answer is often no. He outlined various problems with our current approaches to assessment and offered suggestions about how to make improvements.

He began by asking the audience to discuss the purposes of assessment.  We were to turn to a partner; mine was an undergraduate student.  Her initial response to his question was:  to pass our courses, get a degree, and get a job.  Upon further reflection, she also added: “to parrot back what the teacher says.”  Are any of these responses clearly about learning?  No, and that is one of the biggest problems from my perspective.  Conceiving of assessments as “obstacles along the road” to get to a desired end goal makes it hard to recognize that they can and should be part of the journey of learning.  Traditional, regurgitation-based tests do not tend to contribute to this journey.  However, many other types of assessments do contribute, such as assignments that enable students to practice skills learned in class with new applications, or group exams that require students to explain and defend their answers to their peers, or final projects that focus on analysis, synthesis, and evaluation.  How can we reinforce the role of assessments in the learning process?

One way that Mazur outlined is to use authentic assessments. He indicated that a lack of authenticity is a major problem in physics education. He explained that when a physicist has a problem, they typically know the desired outcome but not the process needed to reach a solution. In textbooks, however, the problem and the process are made apparent, with the outcome being the unknown. As a result, students are given information that they would not automatically have in a real-world setting, and they miss many critical learning opportunities. The call for authentic assessments also came from our 2014 Opportunities and New Directions (OND) conference speaker, John Bean, who connected this approach to writing assignments (see my May 2014 newsletter article for more details). When we make our assessments more authentic, we make it more difficult for students to believe they can just parrot back what we said in class. We also push them to continue learning.

Authenticity, though, can come with a price for students. Such tasks are often less predictable and can sometimes lead to failure. But whether or not something is a “failure” depends on what is being assessed, which ties back to the intended learning outcomes connected to the assessment. For example, if your goal is to have students learn about team processes, then the assessment scheme would give credit for the development of those process skills at least as much as for the actual end product.

If rethinking assessments of student learning is on your 2015 “to do” list, I have three concrete suggestions:

  1. Watch Mazur’s talk for more ideas (see URL below).
  2. Submit a proposal and attend our annual OND teaching and learning conference on April 30. This year’s theme is “Making Teaching and Learning Visible”, and assessments of learning play a large role in providing such clarity.
  3. Participate in this year’s Teaching Excellence Academy (TEA). This intensive course redesign event occurs April 22, 23, 24, and 27, and supports you in rethinking all elements of a course design, including the assessments of student learning. Contact your department chair or director for more information or let me know if you have questions; the call for nominations will go out in early February.

And, as always, let us know how we can help!

References:

Mazur, E. (2014, December 11). Assessment: The silent killer of learning. Presentation delivered at the Department of Physics & Astronomy Teaching Retreat, University of Waterloo. Downloadable here.

Peeling Back the Layers: Uncovering Organizational Culture and the Place of Teaching — Donna Ellis, CTE Director

At CTE, we work collaboratively with a wide variety of our campus colleagues – it’s an integral part of what we do.  But we also work collaboratively with our colleagues at other institutions.  I have been very fortunate to be part of a research group with my teaching centre colleagues from seven other Ontario universities.  And our project has been an absolutely fascinating one:  how can we uncover the value that our institutions place on teaching?

Our group’s underlying belief is that one fundamental way to ensure quality teaching at our institutions is to foster an organizational culture that values teaching. Full stop. This organizational culture comprises the deep structure of an organization, rooted in its members’ values, beliefs, and assumptions (Denison, 1996). These elements lead to norms and patterns of behaviour. Austin (1990) identified various factors that contribute to university members’ perceptions of their institutional culture, including institutional mission and goals, governance structure, administrators’ leadership style, curricular structure, academic standards, student and faculty characteristics, and the physical environment. Hénard and Roseveare (2012) provided seven levers for promoting an institutional culture that values quality teaching, and their work significantly influenced our research study.

To dig deeper into our research question and underlying belief, we secured a provincial Productivity and Innovation Fund (PIF) grant to review existing literature, develop a survey instrument, and run a pilot study at three of our institutions in the Winter 2014 term.  Nearly 4,000 faculty members and students at Western University, McMaster University, and the University of Windsor completed the pilot version of our Teaching Culture Perception Survey.  Follow-up focus groups were also run to collect further feedback and insights.

We included two main scales on our survey:  perceived existence (agreement rating) and perceived importance of a variety of indicators related to an institutional culture that values teaching.  A sampling of the items includes:

  • there is a strategic plan that positions teaching as a priority
  • teaching effectiveness is considered in hiring
  • evidence of effective teaching is considered in the evaluation of faculty members’ job performance (e.g., tenure, promotion, annual evaluations)
  • there are rewards for effective teaching
  • learning spaces such as classrooms, labs, and/or studios are designed to facilitate learning
  • educators are encouraged to use the teaching feedback they receive to improve their teaching
  • there is an adequately resourced teaching support centre
  • educators can get financial support to develop their teaching (e.g., grants programs, teaching conferences)
  • opportunities exist for educators to develop leadership in teaching (e.g., Teaching Fellows program)
  • programs are evaluated based on student learning outcomes

The factor analyses completed on the data from the faculty and student versions of the surveys revealed some differences between what is perceived as being in place and what is perceived as important at an institution. Consistently, the importance ratings were higher than the existence (agreement) ratings, suggesting that respondents valued the various elements of a potential institutional teaching culture more than they perceived them to actually exist. The results also revealed differences between the faculty members’ perceptions and those of the students. The focus groups helped to uncover some of the complexity of these perceptions. For example, when discussing awards to recognize excellent teaching, some participants indicated that such awards are not valued, particularly in relation to research. Others spent time discussing the barriers to effective teaching that stem from aging and inappropriately designed teaching spaces. Another common theme involved poor existing methods for evaluating teaching.
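To illustrate the kind of gap analysis described above, here is a minimal sketch, assuming hypothetical items and invented mean ratings on 5-point scales. It is illustrative only and is not the project’s actual data or analysis code.

```python
# Illustrative sketch only: item labels and mean ratings are invented,
# not data from the Teaching Culture Perception Survey.
import pandas as pd

items = pd.DataFrame(
    {
        "item": [
            "strategic plan positions teaching as a priority",
            "teaching effectiveness is considered in hiring",
            "there are rewards for effective teaching",
        ],
        # Mean ratings on two 5-point scales: perceived existence
        # (agreement) and perceived importance.
        "existence_mean": [3.1, 2.8, 2.4],
        "importance_mean": [4.2, 4.5, 4.4],
    }
)

# A positive gap means an item is valued more highly than it is
# perceived to actually exist at the institution.
items["gap"] = items["importance_mean"] - items["existence_mean"]

# Rank items so the largest perception gaps surface first.
print(items.sort_values("gap", ascending=False))
```

Ranking items this way is one simple means of flagging where a campus community’s aspirations and its perceived reality diverge most.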

While our analyses have indicated that we need to further refine our survey instruments, we are encouraged by the interest in our work from our colleagues across Canada and beyond.  We have also launched a website where we can share information about our ongoing project, including the results as we are able to release them.

So what’s the value placed on teaching at the University of Waterloo?  I hope that in the near future we can run the revised survey at our institution so that we can better understand our university community members’ perceptions about the value being placed on this critical part of our fabric:  teaching.  I think it’s time to peel back the layers and take a closer look.

References:

Austin, A.E. (1990). Faculty cultures, faculty values. New Directions for Institutional Research, 68, 61-74.

Denison, D.R. (1996). What is the difference between organizational culture and organizational climate? A native’s point of view on a decade of paradigm wars. Academy of Management Review, 21, 619-654.

Hénard, F., & Roseveare, D. (2012). Fostering quality teaching in higher education: Policies and practices. France: Organisation for Economic Co-operation and Development.

Teaching Courses = Delivery + Design — Donna Ellis

Late last term, I designed and delivered a workshop with my CTE colleague Julie Timmermans regarding peer review of teaching (PRT). Julie and I have been guiding a learning community (LC) on PRT with departmental administrators for the past year and a half. One key question has continued to plague our group: how do you define “effective teaching” in your context? This would seem to be a straightforward question, and yet it’s not. It’s also a critical question as departments consider what criteria they will use to provide feedback on and/or assess teaching. One way to approach the question, which the LC group asked us to explore, involved identifying key principles of learning and how they might intersect with and inform PRT practices.

It was a challenge to organize the results from multiple decades of research on human learning, and yet we knew this task was important to help inform the work of the group. In the end, we categorized the main principles into three dimensions:

  • Cognitive
  • Motivational
  • Social

The cognitive dimension includes theories about students’ prior knowledge – the need to link new learning to existing knowledge and find ways to identify and address misconceptions. It also includes theory regarding the differences between novice and expert learners, particularly how they organize information. Cognitive theories also focus on the necessity for students to acquire, practice, and apply learning (knowledge and skills) and the value of metacognition.

Motivation, in the context of learning, “influences the direction, intensity, persistence, and quality of the learning behaviors in which students engage” (Ambrose et al., 2010, p. 69). Expectancy-value theory identifies learning as goal-oriented behaviour that is influenced by the value of the goal for students and their expectancy of success. Finally, theories within the social dimension indicate that learning involves building knowledge by interacting with others – both teachers and peers – and benefits from positive, encouraging environments.

The workshop participants worked together to identify specific instructional strategies that could be used to implement these theoretical principles of learning, as well as evidence that could be collected for PRT purposes. PRT practices often include a classroom observation component in which behaviours such as organization of material or ways of engaging students in class are assessed. But one “a-ha moment” from this activity was that observing classroom instructional behaviours won’t provide a holistic picture of the effectiveness of an instructor’s teaching: teaching also involves course design decisions. For example, social learning may be assessed in class if small groups are used, but it may instead be implemented via group assignments outside of class. Reviewing the course materials related to such an assignment would be the only feasible source of evidence about this form of learning. Similarly, reviewing the learning assessments used in a course would provide insights into whether students are likely to perceive that they can succeed. And reviewing the results of the students’ learning would provide information about the outcomes achieved as a result of the course delivery and design, in addition to students’ attributes and behaviours as learners.

Recognizing the role of course design fits clearly with the advice we provide in CTE about the amount of time a student should spend on a course: 3 hours in class and 5 to 7 hours outside of class. This out-of-class time typically involves student work that is directed by an instructor’s course design (e.g., assignments, readings).

This session left me with one key takeaway: to truly review our peers’ teaching, we need to focus on more than what happens in classrooms. Course design materials are critical sources of evidence of effective teaching as well.

If you or one of your colleagues wants to explore more about course design, the Teaching Excellence Academy may be a great next step. Contact your department Chair or School Director in mid-February to discuss being nominated. Let me know if you have any questions about this multi-day workshop or about the peer review of teaching.

Reference:

Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.

Contemplating Quality + Teaching at Waterloo — Donna Ellis

Over the last few months, I have been working on a multi-institutional project to identify indicators of an institutional culture that fosters “quality teaching”. One report that our group has been reviewing comes from the Organisation for Economic Co-operation and Development’s Institutional Management in Higher Education group. Published in 2012, the report, entitled Fostering Quality Teaching in Higher Education: Policies and Practices, outlines seven policy levers that institutional leaders can use to foster teaching quality. The levers provide reasonable actions to take: raising awareness of quality teaching, developing excellent teachers, engaging students, building organization for change and teaching leadership, aligning institutional policies to foster quality teaching, highlighting innovation as a driver for change, and assessing impacts. But what constitutes “quality teaching”?

At its most basic level, the authors indicate that “quality teaching is the use of pedagogical techniques to produce learning outcomes for students” (p. 7). More specifically, they explain that quality teaching includes “effective design of curriculum and course content, a variety of learning contexts (including guided independent study, project-based learning, collaborative learning, experimentation, etc.), soliciting and using feedback, and effective assessment of learning outcomes. It also involves well-adapted learning environments and student support services” (p. 7). These definitions focus on student learning, the honing of instructional and critical reflection skills by teachers, and the need for institutional infrastructure to support learning. What they do not focus on is the adoption of any particular pedagogical method or the specifics of an instructor’s performance in a classroom (think about what course evaluations tend to highlight…).

The authors also identify the need to ground any efforts to shift the quality of teaching – or the culture in which teaching happens – within a collaboratively developed institutional teaching and learning framework. This framework should reflect the identity and differentiating features of an institution and define the “objectives of teaching and expected learning outcomes for students” (p. 14). At uWaterloo, we have endorsed the degree-level expectations (undergraduate and graduate) as the benchmarks for program-level outcomes. But we do not yet have a succinct statement about our goals regarding quality teaching.

Our newly released institutional strategic plan asserts that one way we will offer leading-edge, dynamic academic programs is by “increasing the value of teaching quality and adopting a teaching-learning charter that captures Waterloo’s commitment to teaching and learning” (p. 11, emphases mine). I wrote about another institution’s teaching and learning charter in the September 2012 issue of CTE’s Teaching Matters newsletter. What will our charter entail? What do we value about teaching and learning? What kind of institutional culture do we want to promote with regard to teaching quality at Waterloo? These aren’t small questions, but they’re very exciting ones to contemplate.