Why It Seems Like Your Students Can’t Write — Stephanie White

Whenever I talk with instructors here about how my job is to support them in their writing and communication instruction, I hear some version of the same response: “My students are brilliant, but they can’t write a sentence to save their lives!” No matter whom I’m talking to, regardless of discipline, job title, teaching experience, linguistic background, educational background, or teaching load, nearly everyone has the same anxieties around the role of communication in their courses. But I’m always glad to have the chance to talk about these concerns. If you’re one of those instructors I’ve talked with about teaching writing and communication in your discipline, you’ve probably seen my eyes light up as I eagerly launch into my spiel about the research on teaching writing and communication across the curriculum.


Stephanie White


Stephanie White is an Instructional Developer at the UWaterloo Centre for Teaching Excellence, where she focuses on TA Training and Writing Support. In addition to helping run CTE’s certificate programs for graduate students and supervising graduate-student TA Workshop Facilitators, she teaches workshops for faculty and staff on designing effective written assignments, consults one-to-one with instructors in any discipline about their written assignments, serves on committees and working groups about communications outcomes at UWaterloo, develops resources about Writing and Communication Across the Curriculum at UWaterloo, and consults with instructors on training TAs in their departments.


The ICE model: An Alternative Learning Framework – Monica Vesely

Most often we approach the design of our main course elements – intended learning outcomes (ILOs), formative and summative assessments, and teaching and learning activities – by turning to Bloom’s Taxonomy (most frequently the cognitive domain) to help us determine the appropriate level of thinking required and to express that accurately in our descriptions.

Sometimes we can find ourselves overwhelmed with the distinctions that …

Monica Vesely


Monica Vesely is an Instructional Developer with the Centre for Teaching Excellence where she conducts teaching observations, facilitates the Instructional Skills Workshop (ISW), coordinates the Teaching Squares Program, and assists new faculty with their teaching professional development. In her focus on new faculty, she chairs the New Faculty Welcoming Committee, supports new faculty initiatives across campus, consults with new faculty to assist them with the preparation of individualized Learning About Teaching Plans (LATPs), facilitates workshops and builds community through various communications and social events. Prior to joining the Centre for Teaching Excellence, Monica worked with the NSERC Chair in Water Treatment in Civil and Environmental Engineering, taught in the Department of Chemistry, and designed learning experiences with Waterloo's Professional Development Program (WatPD).


Online Math Numbers at Waterloo, and Comparative Judgments as a Teaching Strategy — Tonya Elliott, CEL

Online Math Numbers

If you weren’t already aware, here are a few numbers about online math at the University of Waterloo:

  • The Math Faculty has been offering fully online courses since Fall 2003 and, since that time, has offered 55 unique online courses to more than 21,000 students
  • The Centre for Education in Mathematics and Computing (CEMC), with support from the Centre for Extended Learning (CEL) and local software company Maplesoft, was the first group on campus to release a large set of open educational resources (OERs). Called CEMC courseware, the OERs include lessons, interactive worksheets, and unlimited opportunities for students to practice skills and receive feedback. At the time of this post, the resources have received over 1.8 million hits from 130,000 unique users in 181 different countries.
  • In 2015, the Canadian Network for Innovation and Excellence (CNIE) recognized CEMC, CEL, and Maplesoft for their OERs through an Award of Excellence and Innovation.
  • The Master of Mathematics for Teachers (MMT) program has the highest enrolment of all the fully online Masters programs offered at the University of Waterloo. MMT and CEL staff who work on the program formed one of three Waterloo teams to win a 2016 Canadian Association for University Continuing Education program award.
  • Maplesoft is working with a focus group from Math, CEL, and CTE to develop a new authoring environment that will specifically target the needs of online STEM course authors. The tool is anticipated for release in early 2017 and, over time, should reduce development costs by 50%.
  • The Math Faculty, together with the Provost’s office, has dedicated $1.2 Million over the next three years for additional work on online projects; over 90 course development slots allocated by CEL have already been filled.

These numbers are some of the reasons Waterloo is considered a leader in the area of online math education.

Comparative Judgments

From June 19 to 22, a small group from Waterloo and I joined an international team of mathematics educators to discuss digital open mathematics education (DOME) at the Fields Institute in Toronto. Many great discussions took place, including on the opportunities and limitations of automated STEM assessment tools, integrity-related concerns, and practical challenges such as lowering the bar so that implementing fully online initiatives no longer demands the “heroic efforts” of faculty it is often seen to require today. Of all the discussion topics, however, the one that got me most excited – and that my brain has returned to a few times in the month since the conference – is using Comparative Judgement (CJ) in online math courses.

The notion behind CJ is that we are better at making comparisons than we are at making holistic judgments, including judgments made with a pre-determined marking scheme. It doesn’t apply to all types of assessments, but take this test on colour shades to see an example of how using comparisons instead of holistic rankings makes a lot of sense. Proof writing and problem solving may also lend themselves well to CJ, and three journal articles are listed at the end of this post for those who would like to read more.
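To make the idea concrete, here is a minimal sketch of how a ranking can be inferred from pairwise judgements. It uses a simple Bradley-Terry-style iteration; real CJ tools such as No More Marking use more sophisticated adaptive methods, and the student labels below are invented for illustration.

```python
# Comparative Judgement sketch: judges never assign absolute marks; they
# repeatedly pick the better of two submissions, and a ranking is inferred
# from the pile of (winner, loser) decisions.
from collections import defaultdict

def rank_from_comparisons(comparisons, n_iterations=100):
    """Estimate a quality score for each item from (winner, loser) pairs."""
    items = {item for pair in comparisons for item in pair}
    scores = {item: 1.0 for item in items}  # start everyone equal
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(n_iterations):
        new_scores = {}
        for item in items:
            # Standard Bradley-Terry update: wins divided by the sum of
            # 1/(score_i + score_j) over every comparison involving the item.
            denom = 0.0
            for a, b in comparisons:
                if item in (a, b):
                    other = b if item == a else a
                    denom += 1.0 / (scores[item] + scores[other])
            new_scores[item] = wins[item] / denom if denom else scores[item]
        total = sum(new_scores.values())
        scores = {k: v * len(items) / total for k, v in new_scores.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Judges compared four student proofs pairwise; each tuple is (winner, loser).
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"),
              ("B", "D"), ("C", "D"), ("A", "C")]
print(rank_from_comparisons(judgements))  # A won every comparison, D none
```

Notice that no judge ever needed a marking scheme: the ranking emerges entirely from which proof looked better in each head-to-head pairing.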

Here are some of the questions I’ve been pondering:

  • Are there questions we aren’t asking students because we can’t easily “measure” the quality of their responses using traditional grading techniques? How much/when could CJ improve the design of our assessments?
    • Example: Could CJ, combined with an online CJ tool similar to No More Marking, be used by students in algebra courses as a low-stakes peer assessment activity, so students could see how different proofs compare to one another? Bonus credit could perhaps be awarded to students whose proofs were rated in the top X%.
  • Which Waterloo courses would see increases in reliability and validity if graders used CJ instead of traditional marking practices?
  • How much time and effort could Waterloo departments save if high-enrolment courses used CJ techniques instead of marking schemes to grade exam questions or entire exams? Could CEMC save resources by using CJ for its yearly contest marking?

I don’t have answers to any of these questions yet, but my brain is definitely “on” and thinking about them. I encourage you to read the articles referenced below and to send me an email (tonya.elliott@uwaterloo.ca) if you like the idea of CJ, too, or have questions about anything I’ve written. If you have questions about Waterloo’s online math initiatives, you’re welcome to email me or Steve Furino.


Jones, I., & Inglis, M. (2015). The problem of assessing problem solving: Can comparative judgement help? Educational Studies in Mathematics, 89(3), 337-355.

Jones, I., Swan, M., & Pollitt, A. (2014). Assessing mathematical problem solving using comparative judgement. International Journal of Science and Mathematics Education, 13, 151-177.

Pollitt, A. (2012). The method of Adaptive Comparative Judgement. Assessment in Education: Principles, Policy & Practice, 19(3), 281-300.


Blackboard image courtesy of AJC1.




Tonya Elliott


In her role as an Online Learning Consultant (OLC) with the Centre for Extended Learning (CEL), Tonya Elliott provides instructional design and project management support to faculty and staff who wish to design, develop, and/or deliver fully online courses, programs, and resources. The majority of her projects are with members of the Faculty of Mathematics; however, she really enjoys working on a variety of online projects from faculty and staff from all areas of campus.


“Learning from Challenge and Failure”: Resources — Julie Timmermans

Michael Starbird, keynote speaker at the 2016 Teaching and Learning Conference.

Presenters at CTE’s recent Teaching and Learning Conference explored the theme of Learning from Challenge and Failure. As a follow-up to the conference, we’d like to share the following list of compiled resources:


Articles and Blog Postings

Podcasts and Talks

Growth Mindset Resources


Artificial Teaching Assistants

The “draughtsman” automaton created by Henri Maillardet around 1800.

The dream of creating a device that can replicate human behaviour is longstanding: 2500 years ago, the ancient Greeks devised the story of Talos, a bronze automaton that protected the island of Crete from pirates; in the early thirteenth century, Al-Jazari designed and described human automata in his Book of Knowledge and Ingenious Mechanical Devices; in the eighteenth century, the clockmaker Henri Maillardet invented a “mechanical lady” that wrote letters and sketched pictures; and in 2016, Ashok Goel, a computer science instructor at Georgia Tech, created a teaching assistant called Jill Watson who isn’t a human – she’s an algorithm.

Goel named his artificial teaching assistant after Watson, the computer program developed by IBM with the ability to answer questions posed in ordinary language. IBM’s Watson is best known for its 2011 victory over two former champions on the game show Jeopardy! In Goel’s computer science class, Watson’s job was to respond to questions that students asked in Piazza, an online discussion forum. Admittedly, the questions to which Watson responded were fairly routine:

Student: Should we be aiming for 1000 words or 2000 words? I know, it’s variable, but that is a big difference.

Jill Watson: There isn’t a word limit, but we will grade on both depth and succinctness. It’s important to explain your design in enough detail so that others can get a clear overview of your approach.

Goel’s students weren’t told until the end of the term that one of their online teaching assistants wasn’t human – nor did many of them suspect. Jill Watson’s responses were sufficiently helpful and “natural” that to most students she seemed as human as the other teaching assistants.

Over time – and quickly, no doubt – the ability of Jill Watson and other artificial interlocutors to answer more complex and nuanced questions will improve. But even if those abilities were to remain as they are, the potential impact of such computer programs on teaching and learning is significant. After all, in a typical course how much time is spent by teaching assistants or the instructor responding to the same routine questions (or slight variations of them) that are asked over and over? In Goel’s course, for example, he reports that his students typically post 10,000 questions per term – and he adds that Jill Watson, with just a few more tweaks, should be able to answer approximately 40% of them. That’s 4000 questions that the teaching assistants and instructor don’t have to answer. That frees up a lot of their time to provide more in-depth responses to the truly substantive questions about course content.
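Jill Watson itself is built on IBM’s Watson platform and is far more sophisticated than anything shown here, but the core idea of routing routine questions can be sketched in a toy form: match a new question against previously answered ones by word overlap, reuse the stored answer when the match is strong enough, and escalate to a human otherwise. The questions, answers, and threshold below are invented for illustration.

```python
# Toy routine-question router: answer from an FAQ bank when a new question
# closely resembles one already answered; otherwise defer to a human TA.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity between two token sets: overlap divided by union."""
    return len(a & b) / len(a | b)

# A hypothetical bank of previously answered (question, answer) pairs.
faq = [
    ("is there a word limit for the assignment",
     "There isn't a word limit, but we grade on depth and succinctness."),
    ("when is the project due",
     "The project is due Friday at midnight."),
]

def answer(question, threshold=0.4):
    """Return a stored answer if a known question is similar enough."""
    q = tokens(question)
    best = max(faq, key=lambda pair: jaccard(q, tokens(pair[0])))
    if jaccard(q, tokens(best[0])) >= threshold:
        return best[1]
    return None  # no confident match: escalate to a human TA

print(answer("is there a word limit"))        # reuses the stored answer
print(answer("can I work with a partner"))    # no good match -> None
```

Even this crude word-overlap matcher illustrates why the economics are attractive: every question it can field is one a human never has to retype.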

More time to give better answers: that sounds like a good thing. But there are also potential concerns.

It’s conceivable, for example, that using Watson might not result in better answers but in fewer jobs for teaching assistants. Universities are increasingly keen to save money, and if one Watson costs less than two or three teaching assistants, then choosing Watson would seem to be a sound financial decision. This reasoning has far broader implications than its impact on teaching assistants. According to a recent survey, 60% of the members of the British Science Association believe that within a decade, artificial intelligence will result in fewer jobs in a large number of workplace sectors, and 27% of them believe that the job losses will be significant.

Additionally, what impact might it have on students to know that they are being taught, in part, by a sophisticated chatbot – that is, by a computer program that has been designed to seem human? Maybe they won’t care: perhaps it’s not the source of an answer that matters to them, but its quality. And speaking for myself, I do love the convenience of using my iPhone to ask Siri what the population of Uzbekistan is – I don’t feel that doing so affects my sense of personal identity. On the other hand, I do find it a bit creepy when I phone a help desk and a ridiculously cheery, computerized voice insists on asking me a series of questions before connecting me to a human. If you don’t share this sense of unease, then see how you feel after watching 15 seconds of this video, featuring an even creepier encounter with artificial intelligence.

Mark Morton


As Senior Instructional Developer, Mark Morton helps instructors implement new educational technologies such as clickers, wikis, concept mapping tools, question facilitation tools, screencasting, and more. Prior to joining the Centre for Teaching Excellence, Mark taught for twelve years in the English Department at the University of Winnipeg. He received his PhD in 1992 from the University of Toronto, and is the author of four books: Cupboard Love; The End; The Lover's Tongue; and Cooking with Shakespeare.


Meaningful Conversations in Minutes – Mylynh Nguyen

With constant media stimulation, increased competitiveness, and stress overload, “Is it possible to slow down?” (1). Our culture can be self-driven and individualistic, so it is no surprise that, for many, time is a finite resource draining away. As a result, we try to do as much as we can in very little time. Our minds are filled with constant distraction, limiting opportunities for self-reflection, for asking oneself “Am I well or am I happy?” (1).

We’d like to believe that we have been a good friend, partner, or child at various points in our lives. But think of a significant person in your life: do you know, or have you ever asked, when they were happiest? When they cried tears of joy, or when they felt most accomplished? Surprisingly, many of us are unaware of these stories, which ultimately define who that person has become. We mindlessly pass each day without pondering the conversations we had or the connections we made. It starts with being mindful of the questions we pose, and especially the “questions that people have been waiting their whole lives to be asked … because everybody in their lives is waiting for people to ask them questions, so they can be truthful about who they are and how they became what they are,” as beautifully said by Marc Pachter (2).

So what is the action plan?

1. Invite people to tell stories rather than giving answers. Instead of “How are you?”, try:

  • What’s the most interesting thing that happened today?
  • What was the best part of your weekend?
  • What are you looking forward to this week? (3).

2. Enter a conversation with the willingness to learn something new

  • Celeste Headlee, in her TED Talk 10 Ways to Have a Better Conversation, describes how she frequently talks with people she doesn’t like and people with whom she deeply disagrees, yet still has engaging, great conversations. She can do this because she is always prepared to be amazed, and because she seeks to understand rather than merely to listen and state her own opinions.

3. Lastly, “being cognizant of [your] impact is already the first step toward change. It really does start at the individual level,” as a friend of mine once said (5).

  • Brené Brown, in her Power of Vulnerability talk, said that many of us “pretend like what we’re doing doesn’t have a huge impact on other people.” But we would be surprised at what we are capable of when we allow ourselves to be vulnerable, as vulnerability “can be the birthplace of joy, of creativity, of belonging, of love … the willingness to say ‘I love you,’ the willingness to do something where there are no guarantees” (6).

That being said, you don’t have to be the most intellectual or outspoken person in the room; what is key is the willingness to be open and the questions that are posed. These simple practices can be easily integrated into our daily lives: by being more mindful of the questions we ask, we can have more memorable and enriching conversations. In the end, the goal is better connections, new understanding, and the awareness to savour the moment.

At CTE, Microteaching Sessions are offered where you can choose from various topics to conduct an interactive teaching lesson. For my first topic, I will be talking about the importance of communication. All participants both give feedback and receive constructive feedback, along with suggestions for improvement, from knowledgeable facilitators. It’s a safe environment where you have the chance to present to fellow graduate students from various departments. Many have found these sessions beneficial because you work on skills relevant to your work, your field of study, or your own personal growth. I am excited and nervous for this opportunity to talk about something I am passionate about, and I hope I can engage others and deliver the content well. To help participants formulate an effective teaching plan, the Centre for Teaching Excellence website provides many resources, such as well-written guidelines and lesson-plan outlines, and facilitators review your lesson before you present.

As a follow-up: I recently had the chance to facilitate an hour-long session at an AIESEC conference for participants from various universities, including Toronto, Waterloo, Laurier, and York, who had just returned from their international exchanges. There was lots of discussion, so thank you to the Graduate Instructional Developers, Charis Enns and Dave Guyadeen, and Instructional Developer Stephanie White for their great feedback and for helping me make the session a success!


Assessing Group Work Contribution – Monika Soczewinski

During my post-secondary education, I always had mixed feelings when I found out that a course I was taking included group work. On the one hand, I was excited at the prospect of learning with and from my peers. On the other hand, like anyone who has had a poor group experience, I worried that some members of my group might not be as committed and would not put effort into the project.

Group work in the classroom has many learning benefits. Students get an opportunity to develop generic skills such as teamwork, collaboration, leadership, organization, and time management, among others. These are the kinds of skills that employers value, and as entry into many professions becomes more competitive, it is increasingly important to teach them at university.

Despite these positive points, many students (and some instructors) have mixed feelings about group work, just as I did. One major concern is that some students will not contribute equally to the work within their group – a behaviour called free-riding. Studies have identified free-riding as one of students’ greatest concerns about group work, across faculties and disciplines (Gottschall & Garcia-Bayonas, 2008; Hall & Buzwell, 2013). Since most of the work happens where the instructor cannot observe the group dynamics, instructors may share these concerns. The fairness of the assessment process can be compromised if students contribute unequally but receive the same group mark.

One solution for determining how much individual students contributed to the group project is to ask group members to assess each other, in a process of peer assessment. Here, peers provide feedback on their group members’ contribution levels, not on the project itself. This is a popular technique because group members are in a position to see clearly how their peers have contributed. Students are also able to decide what kinds of contributions were valuable in their unique group setting, including forms of contribution that are harder to quantify, such as attitude, receptivity, insightfulness, and organization. Each student’s final grade is then a combination of the grade for the whole group project, as assigned by the instructor, and the peer assessment of their contribution, so each student comes out with a unique grade.

Final Grade = group project grade (marked by instructor) ± individual contribution level (rated by peers)
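One common way to operationalize this formula, loosely following the individual weighting factor idea in Goldfinch & Raeside (1990), is to scale the group mark by each student’s average peer rating relative to the group’s overall average. The sketch below assumes that scheme; the student names, rating scale, and numbers are hypothetical.

```python
# Peer-adjusted group grades: each student's final mark is the shared group
# mark scaled by (their average peer rating / the group's average rating),
# capped at the maximum possible grade.

def adjusted_grades(group_mark, peer_ratings, cap=100.0):
    """peer_ratings: dict mapping each student to the ratings they received."""
    averages = {s: sum(r) / len(r) for s, r in peer_ratings.items()}
    overall = sum(averages.values()) / len(averages)
    return {
        s: min(cap, round(group_mark * avg / overall, 1))
        for s, avg in averages.items()
    }

# Group project marked at 80%; each student rated out of 5 by three peers.
ratings = {
    "Ana":  [5, 4, 5],   # strong contributor
    "Ben":  [4, 4, 4],
    "Cleo": [2, 3, 2],   # peers saw less contribution
}
print(adjusted_grades(80.0, ratings))
```

With these numbers, Ana’s mark rises above the group mark (capped at 100), Ben’s rises slightly, and Cleo’s falls well below it, which is exactly the differentiation the peer-assessment step is meant to provide.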

Some considerations for peer assessment of group work contribution include:

  1. Set the expectations for group work: Start off the group projects with a class discussion about the expectations for each student, and why the peer assessment of contribution is important. Students will have a better understanding of their responsibilities in the group, and will know that contribution is an important factor in their grade.
  2. Criteria of the peer assessment: The best practice is to provide students with at least some guidance or criteria to help rate their peers (Goldfinch & Raeside, 1990; Wagar & Carroll, 2012). Depending on the type of project, the instructor can ask students to rate members on their contribution to each project task, on generic skills such as level of enthusiasm and organization, or on a combination of the two. Whatever criteria the instructor selects, it is beneficial to involve the class in the decision.
  3. Open peer assessment versus private peer assessment: Should students have an open discussion about group member contributions, or should they rate each other anonymously? According to Wagar and Carroll (2012), students show a preference for confidential peer assessment. Having an open peer assessment can detract from the sense of collaboration, and students might be afraid of openly criticizing and offending their peers.
  4. Timing of the peer assessment: Ideally, students should be given the criteria of the peer assessment at the start of the project and fill it in once the project is completed. This lets students understand from the start how they will be assessed, especially if they divide the work in unconventional ways that might not fit the criteria. Students can also pay closer attention to contributions throughout the project, and make more accurate assessments (Goldfinch & Raeside, 1990).

Visit the CTE Teaching Tips to read more about methods for assessing group work, and other group work resources.



Goldfinch, J., & Raeside, R. (1990). Development of a peer assessment technique for obtaining individual marks on a group project. Assessment & Evaluation in Higher Education, 15(3), 210-231.

Gottschall, H., & Garcia-Bayonas, M. (2008). Student attitudes towards group work among undergraduates in business administration, education and mathematics. Educational Research Quarterly, 32(1), 3-29.

Hall, D., & Buzwell, S. (2013). The problem of free-riding in group projects: Looking beyond social loafing as reason for non-contribution. Active Learning in Higher Education, 14(1), 37-49.

Wagar, T. H., & Carroll, W. R. (2012). Examining student preferences of group work evaluation approaches: Evidence from business management undergraduate students. Journal of Education for Business, 87(6), 358-362.