Gender Bias in Student Evaluations of Instructors – Stephanie Chesser


As instructors, our gender doesn’t matter to students when it comes to their evaluations of us…does it? Students care about our knowledge of course content, about the passion that we bring to the process of teaching, about that ‘x-factor’ that teaching greats are able to bring to their classrooms…don’t they? While I have no doubt that students do care about the proven qualities that contribute to great instruction, it would seem they also, to a certain extent, evaluate an instructor’s ‘x-factor’ in the context of xx or xy chromosomes.  In studies dating back decades, we can see evidence of a gender bias with regard to student evaluations of instructors.

Published research has reported numerous ways that this gender bias might make its way into course evaluations.  They include everything from the gender students expect an instructor to be in a given discipline, to the degree to which an instructor’s personality and teaching style fits traditional gender stereotypes (e.g., students may expect female professors to be both ‘friendly’ and ‘competent’ in their teaching role, but may be fine with male instructors simply being ‘competent’) (Kierstead, D’Agostino & Dill, 1988).  Even the gender of the student completing an evaluation has been reported to influence the rating an instructor receives (e.g., some older studies have suggested that female students rate female professors more highly than male professors) (Bachen, McLoughlin & Garcia, 1999; Basow, 1995).

Recently, the popular millennial-targeted news site BuzzFeed jumped on this gendered bandwagon through its promotion of a 2015 study examining instructor evaluations on the infamous website ratemyprof.com.  BuzzFeed’s article is based on research conducted by Dr. Benjamin Schmidt (Northeastern University), which examined the number of times various adjectives were used to describe male and female professors across 25 academic disciplines among the 14 million or so entries available on ratemyprof.com.  Interestingly, Schmidt appears to have unearthed a gender bias with regard to the use of certain adjectives and has created a fascinating interactive chart that allows readers to investigate this suspected bias for themselves.  My own interactions with the chart revealed that the term ‘unfair’ occurs more commonly in evaluations of female instructors in 23 of 25 disciplines, while the term ‘brilliant’ occurs more commonly in evaluations of male instructors in 24 of 25 disciplines.  I also found that ‘funny’ was used far more frequently in male instructor evaluations, while the term ‘boring’ produced decidedly mixed results.
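
For readers curious how such an adjective tally can work under the hood, here is a minimal sketch in Python. The record layout, the sample reviews, and the small adjective list are all illustrative assumptions of mine; Schmidt’s actual analysis of the 14 million or so entries is, of course, far more sophisticated.

```python
from collections import Counter

# Hypothetical records of the form (discipline, instructor_gender, review_text).
reviews = [
    ("physics", "male", "Brilliant lecturer, but the grading felt unfair."),
    ("physics", "female", "Unfair exams, though her examples were funny."),
    ("english", "female", "Never boring; a brilliant reader of poetry."),
]

# A small target vocabulary, mirroring the adjectives discussed above.
ADJECTIVES = {"brilliant", "unfair", "funny", "boring"}

def adjective_counts(records):
    """Tally how often each target adjective appears in review text,
    broken down by (discipline, gender, adjective). Each adjective is
    counted at most once per review."""
    counts = Counter()
    for discipline, gender, text in records:
        # Normalize each word: strip trailing punctuation, lowercase.
        words = {w.strip(".,!?;:").lower() for w in text.split()}
        for adj in ADJECTIVES & words:
            counts[(discipline, gender, adj)] += 1
    return counts

for key, n in sorted(adjective_counts(reviews).items()):
    print(key, n)
```

Comparing counts between genders within each discipline, rather than overall, is what lets an analysis like Schmidt’s control for the fact that some disciplines simply have more instructors of one gender.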

So why should we care about gender bias in instructor evaluations?  One important reason might be the negative impact such bias has been shown to have on instructors (in particular, female instructors).  Sadly, poor teaching evaluations have been found to increase burnout rates among female faculty (Lackritz, 2004) and may interact with other demographic factors such as age or race to further complicate student perceptions of female instructors (Arbuckle & Williams, 2003; Basow, 1998).  Given the weight that student evaluations can carry in the assessment of faculty early in their careers, it seems prudent that universities examine the process of evaluating their instructors.

While the complete removal of gender bias from evaluations might be a challenging goal, several strategies have been suggested to mitigate some of its impact (Laube, Masson, Sprague & Ferber, 2007).  For starters, universities can exclude any student evaluations that contain sexually or physically explicit comments.  Additionally, increased emphasis can be placed on teaching portfolios and peer evaluations to allow for a more comprehensive evaluation of an instructor’s activities.  Finally, changes could be made to the questions contained in the teaching evaluations themselves to reduce the room available for gender bias.  For instance, global measurements (e.g., “_____ was an effective teacher”) could be replaced with questions that directly address specific traits related to teaching quality and effectiveness (e.g., “_____ created a classroom environment that made me feel motivated to learn”).

These types of changes, though small, are a step in the right direction towards reducing gender bias in the ways students assess their instructors.  Perhaps once we have that underway, we can work on the whole red-hot ‘chili’ rating on ratemyprof.com (sigh).


References:

Arbuckle, J., & Williams, B. D. (2003). Students’ perceptions of expressiveness: Age and gender effects on teacher evaluations. Sex Roles, 49(9-10), 507-516.

Bachen, C. M., McLoughlin, M. M., & Garcia, S. S. (1999). Assessing the role of gender in college students’ evaluations of faculty. Communication Education, 48(3), 193-210.

Basow, S. A. (1995). Student evaluations of college professors: When gender matters. Journal of Educational Psychology, 87(4), 656.

Basow, S. A. (1998). Student evaluations: The role of gender bias and teaching styles.

Kierstead, D., D’Agostino, P., & Dill, H. (1988). Sex role stereotyping of college professors: Bias in students’ ratings of instructors. Journal of Educational Psychology, 80(3), 342.

Lackritz, J. R. (2004). Exploring burnout among university faculty: Incidence, performance, and demographic issues. Teaching and Teacher Education, 20(7), 713-729.

Laube, H., Masson, K., Sprague, J., & Ferber, A. L. (2007). The impact of gender on the evaluation of teaching: What we know and what we can do. National Women’s Studies Association Journal, 19(3), 87-104. http://muse.jhu.edu/journals/nwsa_journal/v019/19.3laube.html

Rethinking Assessments of Student Learning — Donna Ellis

As I write this article, a number of you will have just finished marking your final exams.  Did your students learn what you wanted them to learn?  Did your exam and your other course assessments enable them to demonstrate and perhaps even further extend their learning?

Assessments of student learning are a critical part of courses.  Overall, they are the major driver of what students choose to do and focus on in a course.  But do our assessments require students to learn?  In his recent talk at uWaterloo, Eric Mazur of Harvard University suggested that the answer is often no.  In his talk, “Assessment: The Silent Killer of Learning,” he outlined various problems with our current approaches to assessment and offered some suggestions about how to make improvements.

He began by asking the audience to discuss the purposes of assessment.  We were to turn to a partner; mine was an undergraduate student.  Her initial response to his question was:  to pass our courses, get a degree, and get a job.  Upon further reflection, she also added: “to parrot back what the teacher says.”  Are any of these responses clearly about learning?  No, and that is one of the biggest problems from my perspective.  Conceiving of assessments as “obstacles along the road” to get to a desired end goal makes it hard to recognize that they can and should be part of the journey of learning.  Traditional, regurgitation-based tests do not tend to contribute to this journey.  However, many other types of assessments do contribute, such as assignments that enable students to practice skills learned in class with new applications, or group exams that require students to explain and defend their answers to their peers, or final projects that focus on analysis, synthesis, and evaluation.  How can we reinforce the role of assessments in the learning process?

One way that Mazur outlined is to use authentic assessments.  He indicated that a lack of authenticity is a major problem in physics education.  He explained that when a physicist has a problem, they typically know the desired outcome but not the process needed to reach a solution.  In textbooks, however, the problem and the process are made apparent, with the outcome being the unknown.  This situation gives students information that they would not automatically have in a real-world setting, and it causes them to miss many critical learning opportunities.  The call for authentic assessments also came from our 2014 Opportunities and New Directions (OND) conference speaker, John Bean, who connected this approach to writing assignments (see my May 2014 newsletter article for more details).  When we make our assessments more authentic, we make it more difficult for students to believe they can just parrot back what we said in class.  We also push them to continue learning.

Authenticity, though, can come with a price for students.  Such tasks are often less predictable and can sometimes lead to failure.  But whether or not something is a “failure” depends on what is being assessed, which ties back to the intended learning outcomes connected to the assessment.  For example, if your goal is to have students learn about team processes, then the assessment scheme would give credit for the development of those process skills at least as much as for the actual end product.

If rethinking assessments of student learning is on your 2015 “to do” list, I have three concrete suggestions:

  1. Watch Mazur’s talk for more ideas (see URL below).
  2. Submit a proposal and attend our annual OND teaching and learning conference on April 30. This year’s theme is “Making Teaching and Learning Visible”, and assessments of learning play a large role in providing such clarity.
  3. Participate in this year’s Teaching Excellence Academy (TEA). This intensive course redesign event occurs April 22, 23, 24, and 27, and supports you in rethinking all elements of a course design, including the assessments of student learning. Contact your department chair or director for more information or let me know if you have questions; the call for nominations will go out in early February.

And, as always, let us know how we can help!

References:

Mazur, E. (2014, December 11). Assessment: The silent killer of learning. Presentation delivered at the Department of Physics & Astronomy Teaching Retreat, University of Waterloo. Downloadable here.

Donna Ellis

New Tool for Making Screencasts: MyBrainShark

Screencasts are an educational technology that has accelerated from zero to sixty in a relatively short time — in fact, over just the past few years. Screencasts have the potential, too, to radically change education. For one thing, they are the technology behind the pedagogical notion of “flipping” the classroom — that is, of providing content to students outside of class via screencasts, and reserving class time for more engaging activities that leverage application of knowledge, peer instruction, and collaboration. The word “flipping” almost sounds glib, but the pedagogical change it embodies is revolutionary: it threatens to upend what higher education has been for the past, oh, thousand years.

In my workshops on screencasts, I usually refer to Camtasia, Adobe Presenter, and Screencast-O-Matic as good tools for creating screencasts. Camtasia is a good choice at the University of Waterloo because we have a site license for it, so you can buy an inexpensive copy at The Chip. Screencast-O-Matic is a viable option for those who want to test the waters: it’s fully online (nothing to download) and free; it has limited editing capabilities, but it will give you a sense of what you might do with screencasts in your courses.

Just a few weeks ago, I also discovered another screencasting tool that I would recommend: MyBrainShark. This tool is perfect if you already have a PowerPoint presentation and want to record narration for it. It’s free, fully online, and dead simple to use. I also like the fact that links that are embedded into the PPT presentation remain “live” after the presentation has been converted into a MyBrainShark screencast.

You can see an example of a MyBrainShark screencast here (it’s a screencast about “glimpse concepts” and their relevance to smart phones).

Mid-term Feedback – Monica Vesely

Do you ever find yourself wishfully thinking “it would have been nice if I had known…” while reading over your course evaluations? Or have you ever implemented changes based on previous end-of-course evaluations only to discover that the new group of students would have preferred the original iteration of your class? If you find yourself wishing you could implement changes based on feedback received from the same group of students, mid-term feedback is for you!

Mid-term evaluations are formative feedback tools that can provide valuable information about how students are experiencing a course. When properly constructed and implemented, they can enhance the learning experience for both you and your students. You will gain a sense of satisfaction that the learning experience you have developed is being received by your students as you intended, and your students will be grateful for your efforts as they help to shape their own learning environment to better suit their needs.

Ideally, the tools used to obtain feedback should pose some simple questions that can be answered within the class period. Brevity and anonymity are best.

Some midterm feedback strategies include:

  • Traditional Evaluation Form: These questionnaires can be prepared with a number of Likert-style statements along with a few open-ended questions.
  • Start, Stop, Continue: Students are asked to take note of the things that they would like to see “start” in the class, “stop” in the class, or “continue” taking place in the class.
  • The One Minute Paper: Two or three guiding questions help students identify what they have learned and what they would most like changed in the course. For example: “What are the two or three significant concepts that you have learned thus far?”, “What questions do you still have about the topics we have covered?” and “What could I have done differently to help you understand the lecture material?”

Depending on the experience of your students, you may have to provide more or less instruction and more or fewer examples in order to obtain useful feedback. While upper-year students will tend to be more skilled at providing constructive feedback, first- and second-year students may not be used to being asked for their opinion on teaching and learning. Make it clear that you are looking for constructive feedback that you can respond to immediately, this term, for their benefit.

When constructing the feedback questions, make certain that you are only collecting data that you can and will use or respond to. Regardless of the class and level, let students know why you are asking for their input, how you will share it, and what you will do with it. Do not mislead students, through your choice of questions or a lack of explanation, into believing that everything is open for discussion.

Once you have collected the feedback, summarize and interpret it as soon as you can. Then, share it back at the next class if at all possible. Sharing the feedback with all students lets them know that what they say matters, and it also shows them what their peers value or have difficulty with in the class. Next, identify how you intend to respond and why. If you can’t change something, that’s fine, but make certain you let the students know why. Often students are not aware of certain limitations associated with the course, and they appreciate knowing about them. Clarify what role you as an instructor will play in implementing the changes, as well as what role the students will need to play to make the change a success.

A summary that highlights what can be changed now, what can only be changed the next time the course is taught, and what cannot be changed at all provides a good overview, particularly in large classes where you will need to group and categorize the responses you receive. This type of transparent and honest exchange goes a long way towards building trust and respect with your class, even if you are unable to immediately address a recommendation that has been made.

Consider using mid-term evaluations as another component in your teaching professional development. Creating, using, and responding to mid-term feedback is a proactive way to keep problems from persisting unresolved throughout the course. Not only can this mid-stride feedback help to improve the learning environment for the students, but it can also help improve your teaching evaluations at the end of the term.

Whether you are looking to bounce around ideas or seeking specific resources on collecting and using mid-term feedback, do not hesitate to contact me, Monica Vesely, or your faculty liaison for a meeting.

References:

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers. San Francisco: Jossey-Bass.

Davis, B. G. (2009). Tools for teaching (2nd ed.). San Francisco, CA: Jossey-Bass.

Svinicki, M., & McKeachie, W. J. (2011). McKeachie’s teaching tips: Strategies, research, and theory for college and university teachers. Belmont, CA: Wadsworth.

Yao, Y., & Grady, M. L. (2005). How do faculty make formative use of student evaluation feedback? A multiple case study. Journal of Personnel Evaluation in Education, 18(2), 107-126.


September Welcoming Events for New Faculty – Monica Vesely

Last week, over 40 new faculty attended a series of Welcoming Events prepared to help them acclimatize to their new roles as faculty members at the University of Waterloo. On Wednesday, September 4th, new faculty gathered in E5 for a day filled with information sessions and opportunities to meet one another and members of the larger University of Waterloo community. After a brief welcome from Ian Orchard (Vice-President, Academic and Provost), the day got underway with a presentation entitled Navigating your uWaterloo Roles, in which campus administrators addressed the faculty triumvirate of teaching, research, and service, along with some words of guidance about how co-operative education interfaces with them all. At the subsequent refreshment break, participants had the opportunity to explore the Academic Support Units Resource Fair, which showcased services and resources available across campus.

Next came the Adjusting to Waterloo panel discussion, where peers spoke openly about their own experiences as new faculty members and shared thoughts and insights with the audience. This year we were joined by Shannon Dea (Philosophy), Carey Bissonnette (Chemistry) and Christopher Small (Statistics & Actuarial Science/Faculty Association). The post-session Q & A period allowed new faculty to seek answers to a variety of questions ranging from the academic (What types of tenure and promotion considerations do I need to be aware of?) to broader community interests (Where do I find the best pub?).

The morning was capped off by a luncheon with the Chairs, Directors and Deans in the Festival Room at South Campus Hall, accompanied by more conversation and an informal exchange of information. Later that day, new faculty and their families attended a BBQ at the Victoria Park Pavilion in Kitchener. After words of welcome from University President Feridun Hamdullahpur and FAUW President David Porreca, attendees were able to enjoy a casual meal and socialize with their new colleagues and their families.

These welcoming activities were intended to provide a brief introduction to faculty life at the University of Waterloo and a forum for our incoming 2013-2014 class of new faculty to share experiences and start making connections with their colleagues and the broader University of Waterloo community. The day’s events were planned and hosted by the New Faculty Committee, which is composed of representatives from the Centre for Teaching Excellence, the Faculty Association, and WatPort.

Trees of Knowledge — Mark Morton

We all have things we don’t want to know and/or don’t want other people to know. Last week, a video of an ISIS militant beheading an American journalist was released on the web. I’m not going to watch that video, because (among other reasons) I don’t want to know what a beheading looks like. I also don’t want my kids to watch it. I warn them that once you know something, it’s pretty hard to unknow it. I tell them that just as swallowing poison will damage their bodies, consuming disturbing images can harm their minds.

I didn’t always think this. I used to espouse a view that John Milton, author of Paradise Lost, articulated in his prose work Areopagitica: “I cannot praise a fugitive and cloistered virtue, unexercised and unbreathed, that never sallies out and sees her adversary…. the knowledge and survey of vice is in this world so necessary to the constituting of human virtue.” William Blake, the nineteenth-century author and visual artist who wrote an epic poem about Milton, believed something similar. For Blake, humans must progress from a state of childlike innocence (a guileless naivete), to adult experience (with all its horrors), and finally back to a state of renewed innocence (an innocence that encompasses and transcends human horrors). Both Milton and Blake might have been thinking of an adage attributed to the Roman playwright Terence, who said “Homo sum, humani nihil a me alienum puto” — that is, “I am human, I consider nothing human to be alien to me.” I take this to mean that all things — whether they are amazing, joyful, depressing, or horrific — are worthy of human study. And I guess they are. It’s just that I don’t want to be the one studying the horrific things. So, for better or worse, I more and more find myself changing the channel when the news comes on. There are so many things I just don’t want to know.

On the other hand, there are also things that I want to know, but other people want to keep them from me. A case in point: in February, after attending a conference in Anchorage, I took a five-hour boat cruise that got us up close to 21 different glaciers. The tour guide — who was actually a National Parks Forest Ranger — was excellent: informative, articulate, and passionate about the environment. As we approached each glacier in turn, she pointed out where the glacier was a decade ago and where it was now. In each case, the glacier had receded, sometimes by thousands of meters. Never, though, did she allude to global warming as causing the retreat of the glaciers. At the end of the cruise, I approached her and asked her about this omission. She paused, gave me a knowing look, and then said, “We’re not allowed to talk about global warming or climate change.”

Maybe the US government thinks it’s protecting me from dangerous knowledge about climate change, in the same way that I try to protect myself and my kids from disturbing images. And maybe Stephen Harper’s government is also trying to keep me safe by preventing those know-it-all scientists from sharing their troubling research with me. In reality, though, I think that neither the US government nor the current Canadian government has my best interests in mind when it comes to “dangerous” knowledge. Information about fisheries, rivers, forests, tar sands, and so on is not the same as pictures and videos of people being beheaded. We can grieve for the executed American journalist, and work toward ending such conflicts, without having to see his head fall onto the sand. But if we’re going to save the planet — or at least the ecosystems in it that support us — we need access to all the knowledge that’s out there.

 

Integrated Testlets – What Are They? – Samar Mohamed

[Image: a sample IF-AT card]

Last month I attended the annual Society for Teaching and Learning in Higher Education (STLHE) 2014 conference, held at Queen’s University in Kingston. It was an excellent opportunity for me to learn from colleagues across Canada and exchange ideas with them. One of the workshops that attracted my attention was facilitated by Aaron Slepkov and Ralph Shiell from the Department of Physics at Trent University, and it focused on their newly developed testing technique, the “Integrated Testlet” (IT).

The presenters started by talking about the benefits of Constructed Response (CR) questions, a common term for questions where students must compose the answer themselves, and how these questions enable instructors to gauge their students’ learning. CR questions also allow instructors to give part marks for partially correct answers. The presenters then commented on the trend in physics to switch from CR questions to Multiple Choice (MC) questions due to increasing class sizes and the resulting constraints on time and personnel. Traditional MC questions, however, allow neither part marks nor, more importantly from a pedagogical standpoint, any way for instructors (and students) to know where the students went wrong.

The integrated testlet method addresses both shortcomings. It is based on a traditional testlet, a group of MC questions that share a common stem (or scenario). In an IT, the items are presented in a particular sequence in which the answer to part (a) is needed to solve part (b), and so on, so that students’ knowledge of how various concepts are connected can be assessed. To make this work, ITs rely on an “answer-until-correct” response format: students keep making selections on an MC question until the correct response is identified, which permits the granting of partial marks and ensures that no student leaves a question without knowing the correct answer needed for the subsequent, related items. The presenters used Immediate Feedback Assessment Technique (IF-AT) cards to implement these multiple attempts and part marks. For more information about IF-AT cards, see the Epstein Educational Enterprises website; for a sample application of the cards at the University of Waterloo, see the CTE blog post by my colleague Mary Power, the Faculty of Science liaison. In their published paper, the presenters explain the method they used to transform CR questions into ITs and analyze students’ responses to both question types; it is a very useful and interesting read that I recommend to any instructor considering this approach.
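
To make the marking scheme concrete, here is a minimal sketch in Python of one possible answer-until-correct scoring function, assuming four-option MC items where credit shrinks with each incorrect selection. The particular mark fractions are my own illustrative assumptions, not the weights used by Slepkov and Shiell or by the IF-AT cards themselves.

```python
def item_score(attempts_used, weights=(1.0, 0.5, 0.25, 0.0)):
    """Partial credit for one four-option, answer-until-correct MC item.

    attempts_used counts every selection made, including the final
    correct one (1 = right on the first try). The weights tuple is an
    assumed scheme for illustration, not IF-AT's or the authors' own.
    """
    if not 1 <= attempts_used <= len(weights):
        raise ValueError("attempts_used must be between 1 and 4")
    return weights[attempts_used - 1]

def testlet_score(attempts_per_item):
    """Total for an integrated testlet: the sum of its item scores.
    Because every student eventually uncovers each correct answer,
    later items that build on earlier ones remain answerable."""
    return sum(item_score(a) for a in attempts_per_item)

# Example: first try on (a), second try on (b), first try on (c)
# -> 1.0 + 0.5 + 1.0 = 2.5 marks out of a possible 3.
print(testlet_score([1, 2, 1]))
```

The key design point is visible in `testlet_score`: because the answer-until-correct format reveals each correct answer before the student moves on, a mistake on part (a) costs marks on that item only, rather than cascading through parts (b) and (c).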