Notes from the Music Studio — Christine Zaza

When I reflect on teaching and learning in higher education, I realize that much of what I learned, I learned when I was a music student. Here are some of the highlights from the music studio that are just as applicable to university teaching and learning:

Practice, practice, practice. Actually, this would more aptly be phrased Practice-Feedback, Practice-Feedback, Practice-Feedback, but the rhythm just isn’t as good. I wouldn’t expect anyone to become a professional violinist without regular lessons with a qualified teacher. Regular feedback is critical to guiding students as they develop new skills. Without regular feedback, bad habits can become ingrained and difficult to correct. In university, students learn a number of new skills and new ways of thinking, and they need multiple opportunities to practice these skills with regular feedback. To ensure that students focus on the feedback and not just the grade, instructors can give a follow-up assignment in which students make revisions and highlight how they have incorporated the feedback they received on their first submission.

Practice the performance. When preparing for a recital or audition (a summative test), music students are advised to practice performing in front of friends, family, or teddy bears if need be, several times before the actual performance. Preparing for a performance is different from preparing for weekly lessons. Good performance preparation is crucial because in a performance you get one shot at the piece. There are no do-overs on stage. Similarly, when writing music theory or history exams, practicing the exam is an expected part of exam preparation. To facilitate this preparation, the Royal Conservatory of Music sells booklets of past exams. The Conservatory also returns graded exams so that students can see exactly where they earned and lost marks: considering that the Royal Conservatory of Music administers thousands of exams, three times a year, across the globe, this is a huge undertaking. At university, we know that self-testing is an effective study strategy, and some instructors do provide several practice exam questions in their courses. However, due to academic integrity concerns, the common practice is to deny students access to past exams as well as to their own completed exams. I wonder if academic misconduct would be less of an issue if students were allowed to use past exams as practice tools. Amassing a large enough pool of past exam questions should address the concern that students will simply memorize answers to questions they’ve seen in advance.

Explicit instruction is key. It’s not very helpful to just tell a novice piano student to go home and practice. In the name of practicing, a novice student will, more than likely, play his or her piece over a few times, from bar 1 straight to the end, no matter what happens in between, and think that he or she has “practiced.” I know. I’ve heard it hundreds of times, and if you have a child in music lessons, I’ll bet you’ve heard it too. Explicit instruction means addressing many basic questions that an expert takes for granted: What does practicing look like? How many times a week should you practice? For how long should you practice? How do you know if you have practiced enough? How do you know if you have practiced well? Similarly, not all first-year students arrive at university knowing how to study. Many students would benefit from explicit instruction about learning and studying (e.g., What does studying look like? How do you know when you’ve studied enough? I’ve gone over my notes a few times – is that studying?).

Know that students can’t learn it all at once. A good violin teacher knows that you can’t correct a student’s bow arm while you’re adjusting the left hand position, improving intonation, working on rhythm, teaching new notes, and refining dynamics. In any given lesson, the violin teacher chooses to let some things go while focusing on one particular aspect of playing; otherwise, the student will become too overwhelmed to take in any information at all. Suzuki teachers know that you always start by pointing out something positive about the student’s playing and that you can’t focus only on the errors. Students need encouragement. I think this is true at university as well. Becoming a good writer takes years, and novice writers will likely continue to make several mistakes while improving one or two specific aspects of their writing. When giving feedback on written assignments, it’s important to acknowledge the positive aspects – that’s more encouraging than facing a sea of red that highlights only the errors.

Even if you didn’t take piano lessons as a child, and even if you have registered your 6-year-old for hockey rather than violin lessons, I hope you’ll find these lessons from the music studio applicable to the university classroom.

Photo provided by Samuel Cuenca under a Creative Commons license.

The first year is critical – Jane Holbrook

Students leaving campus: who will stay?

Coming into campus on Monday morning was a shock, but a nice one. We don’t get a lot of downtime on our campus, but the last two weeks of August and the days leading up to Labour Day are usually pretty sleepy; many folks are on vacation and it’s hard to even find a coffee shop open. The throng that I biked into at the main gates Monday morning at 8:15 was a bit disorderly, but the excitement in the air was electric. And it’s the first-year students, all fresh-faced and enthusiastic, frantically looking for their classrooms and carrying high expectations, who generate the most excitement.

The first couple of weeks of term are exciting, but then, of course, the realities of a five-course load, weekly assignments (lab reports, readings …), and then midterms set in, and those first-year students are often challenged to just make it through first term. Our IAP statistics show that our first-year retention rate (the percentage of students who return here for second year after first year) is close to 92% (UWaterloo IAP), well above the reported retention rate of 80% for four-year public US institutions (see the National Student Clearinghouse report) and higher than at most other Ontario universities, where retention rates hover around 87% (CUDO – Common University Data Ontario). This isn’t the old case of “look to your right, look to your left, one of you won’t be here next year” that we were admonished with as students in years gone by, but if 1 in 10 students does not return after first year, that is a definite loss to the university community and a setback for that young person.

Universities have recognized that students face a number of challenges in their first year and provide orientation programs, peer mentoring, study skills sessions, and other supports to help new students handle the emotional and educational transitions that they will be experiencing. However, even with these programs in place, our instructors who teach first-year courses have a critically important job ahead of them. Studies show that although a student’s personal situation (family background, economic stresses, etc.) and prior academic performance in high school affect first-year retention, student engagement in this critical first year is also a major contributor to student retention (Kuh et al., 2008). Creating rich and engaging classroom experiences for first-year students in large classes, when students are coming in with a wide range of skills, is a challenge, but by integrating active learning into large classes (CTE tip sheet – Activities for Large Classes), considering student motivation (CTE tip sheet – Motivating Our Students), and providing frequent, formative feedback to students, instructors across campus are helping to keep students engaged and successful.

Welcome first year students, and kudos to those great first year instructors who work hard to keep them here!

Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the Effects of Student Engagement on First-Year College Grades and Persistence. The Journal of Higher Education, 79(5), 540-563.

Jane Holbrook

As Senior Instructional Developer (Blended Learning), Jane Holbrook helps to develop faculty programming that promotes the effective use of the online environment in on-campus courses. Working closely with Faculty Liaisons, CEL (Centre for Extended Learning) and ITMS (Instruction Technologies and Multimedia Services), she helps manage initiatives related to “blended learning” courses. She received her BSc and MSc from Dalhousie University but has also studied Graphic Design at the Nova Scotia College of Art and Design.


High Failure Rates in Introductory Computer Science Courses: Assessing the Learning Edge Momentum Hypothesis – John Doucette, CUT student  

Introductory computer science is hard. It’s not a course most students would take as a light elective, and failure rates are high (two large studies put the average at around 35% of students failing). Yet, at the same time, introductory computer science is apparently quite easy. At many institutions, the most common passing grade is an A. For instructors, this is a troubling state of affairs, which manifests as a bimodal grade distribution — a plot of students’ grades forms a valley rather than the usual peak of a normal distribution.

For most of the last forty years, the dominant hypothesis has been the existence of some hidden factor separating those who can learn to program computers from those who cannot. Recently this large body of work has become known as the “Programmer Gene” hypothesis, although most of the studies do not focus on actual genetic or natural advantages, so much as on demographics, prior education levels, standardized test scores, or past programming experience. Surprisingly, despite dozens of studies taking place over more than forty years, some involving simultaneous consideration of thirty or forty factors, no conclusive predictor of programming aptitude has been found, and the most prominent recent paper advancing such a test was ultimately retracted.

The failure of the “Programmer Gene” hypothesis to produce a working description of why students fail has led to the development of other explanations. One recently proposed approach is the Learning Edge Momentum (LEM) hypothesis, by Robins (2010). Robins proposes that the reason no programmer gene can be found is because the populations are identical, or nearly so. Instead of attributing the problem to the students, Robins argues that it is the content of the course that causes bimodal grade distributions to emerge, and that the content of introductory computer science classes is especially prone to such problems.

At the core of the LEM hypothesis is the idea that courses are composed of units of content, which are presented to students one after another in sequence. In some disciplines, content is only loosely related, and students who fail to learn one module can still easily understand subsequent topics. For example, a student taking an introductory history class will not have much more difficulty learning about Napoleon after failing to learn about Charlemagne. The topics are similar, but are not dependent. All topics lie close to the edge of students’ prior knowledge. In other disciplines, however, early topics within a course are practically prerequisites for later topics, and the course rapidly moves away from the edges of students’ knowledge, into areas that are wholly foreign to them. The more early topics students master, the easier the later ones become. Conversely, the more early topics that students fail to acquire, the harder it is to learn later topics at all. This effect is dubbed “momentum.”

Robins argues that introductory computer science is an especially momentum-heavy area. A student who fails to learn conditionals will probably be unable to learn recursion or loops. A student who fails to grasp core concepts like functions or the idea of a program state will likely struggle for the entire course. Robins argues that success on early topics within the needed time period (before the course moves on) is largely random, and shows via simulation that, even if students all start with identical aptitude for a subject, bimodal grade distributions will follow if the momentum effect is increased enough. However, no empirical validation of the hypothesis was provided, and no subsequent attempts at validation have been able to confirm this model. The main difficulty in evaluating the LEM hypothesis is that the predictions it makes are actually very similar to those of the “Programmer Gene” hypothesis. Both theories predict that students who do well early in a course will do well later on. The difference is that the LEM hypothesis attributes early success mostly to chance, while the “Programmer Gene” hypothesis attributes it to the students’ skill.
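
To make the momentum effect concrete, here is a minimal sketch, in Python, of the kind of simulation Robins describes. It is not his code, and the function name, parameters, and values are illustrative assumptions: every simulated student starts with identical aptitude, but the probability of mastering each new topic rises or falls with the fraction of earlier topics already mastered.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_grades(n_students=1000, n_topics=20, base_p=0.5, momentum=0.0):
    """Simulate final grades as the fraction of topics each student masters."""
    grades = []
    for _ in range(n_students):
        mastered = 0
        for t in range(n_topics):
            # Momentum: the fraction of earlier topics mastered pushes the
            # probability of learning the next topic up or down.
            history = mastered / t if t else 0.5
            p = float(np.clip(base_p + momentum * (history - 0.5), 0.05, 0.95))
            mastered += rng.random() < p
        grades.append(mastered / n_topics)
    return np.array(grades)

# Identical students, no momentum vs. strong momentum.
no_momentum = simulate_grades(momentum=0.0)
strong_momentum = simulate_grades(momentum=0.9)
print(np.histogram(no_momentum, bins=10)[0])
print(np.histogram(strong_momentum, bins=10)[0])
```

With the momentum parameter at zero, the simulated grades form a single peak; with a strong momentum effect, the same identical population splits into a high cluster and a low cluster, producing the valley described above.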

In my research project for the Certificate in University Teaching (CUT), I proposed a new method of evaluating the LEM hypothesis by examining the performance of remedial students — students who retake introductory computer science classes after failing them. The LEM hypothesis predicts that remedial classes should also have bimodal grade distributions, because student success on initial topics is largely random. Students taking the course for the second time should be just as likely to learn them as students taking the course the first time round. In contrast, the “Programmer Gene” hypothesis predicts that remedial courses should have normally distributed grades, with a low mean. This is because remedial students lack the supposed “gene”, and so will not be able to learn topics much more effectively the second time than they were the first time.

To evaluate this hypothesis, I acquired anonymized data from four offerings of an introductory computer science course: two with a high proportion of remedial students, and two with a very low proportion. I found weak evidence in support of the LEM hypothesis, as all grade distributions were bimodal when withdrawing students were counted as failing. However, when withdrawing students were removed entirely, only one non-remedial offering was bimodal, a result predicted by neither theory.
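
For readers who want to run a similar check on their own course data, one common approach is to fit one- and two-component Gaussian mixtures and compare their BIC scores. This is a sketch of my own, not the statistical test used in the study, and it assumes grades are recorded on a continuous scale.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def looks_bimodal(grades):
    """Crude bimodality check: does a 2-component mixture fit better (lower BIC)?"""
    X = np.asarray(grades, dtype=float).reshape(-1, 1)
    bic = [GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
           for k in (1, 2)]
    return bic[1] < bic[0]
```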

Although my empirical results were ultimately inconclusive, my research provides a clear way forward in evaluating different hypotheses for high failure rates in introductory computer science. A follow-up study, conducted with data from a university that offers only remedial sections in the spring term (removing the confounding effect of out-of-stream students in the same class), may be able to put the question to rest for good and facilitate the design of future curricula.

References:

Robins, A. (2010). Learning edge momentum: A new account of outcomes in CS1. Computer Science Education, 20(1), 37-71.

The author of this blog post, John Doucette, recently completed CTE’s Certificate in University Teaching (CUT) program. He is currently a Doctoral Candidate in the Cheriton School of Computer Science.

Mark Morton

As Senior Instructional Developer, Mark Morton helps instructors implement new educational technologies such as clickers, wikis, concept mapping tools, question facilitation tools, screencasting, and more. Prior to joining the Centre for Teaching Excellence, Mark taught for twelve years in the English Department at the University of Winnipeg. He received his PhD in 1992 from the University of Toronto, and is the author of four books: Cupboard Love; The End; The Lover's Tongue; and Cooking with Shakespeare.


Peer Grading in Higher Education – Hadi Hosseini


Most educators regard peer assessment and peer grading as a powerful pedagogical tool: it engages students in the process of evaluating and grading their peers while saving instructors’ time. The process helps improve students’ understanding of the subject matter and provides an opportunity for deeper reflection on it, reaching the higher levels of Bloom’s taxonomy of thinking.

Designing and distributing tasks and assignments for peer assessment should be as easy as assigning a few papers to each student and waiting for the magic to happen, right? … Not really!

As instructors, we care about the fairness of our evaluation methods and about providing effective feedback. Yet throwing this crucial responsibility onto the shoulders of novice students who (hopefully) have just learned the new topic seems awfully risky. There are two major concerns when it comes to peer grading: inflated (or deflated) grades and poor-quality feedback. Both issues seem to originate from the same sources: insincere graders and the limited effort each student invests in grading their peers [2]. To address these issues, researchers have recently asked whether we can design a peer-grading mechanism that incentivizes sincere grading and discourages any kind of secret student collusion.

The simplest possible design is to evaluate the quality of the peer reviews themselves, or, simply put, to “review the reviews” [1, 3, 4, 5]. This procedure is bulletproof, since no student can get away with poor-quality feedback or with deliberately assigning insincere grades. However, even though this technique may help us achieve the goal of involving students in the higher levels of learning, in most situations the mechanism is either costly in terms of TA and instructor time or simply impossible in large classes. In fact, this fully supervised approach defeats one of the main purposes of using peer grading by doubling or tripling the required grading effort: each marked assignment has to be reviewed by one or two TAs for quality.

As a partially automated solution, the system may randomly send a subset of graded papers to the Teaching Assistants (TAs) for a sanity check, instead of doing this for every single paper. In contrast, fully automated systems provide a meta-review procedure in which students evaluate the reviews by rating the feedback they have received [1, 3, 5], or the system computes a consensus grade for assignments that are initially graded by at least two or three peer graders [5, 6].
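
As a concrete illustration of the consensus idea, here is a hypothetical sketch in Python. The function name, disagreement threshold, and audit rate are my own illustrative assumptions rather than details of any of the cited systems: each submission receives the median of its peer grades, and submissions whose graders disagree too much (plus a small random sample) are flagged for TA review.

```python
import random
from statistics import median

def consensus_grade(peer_grades, disagreement_threshold=10, audit_rate=0.1):
    """Return (consensus mark, whether a TA should double-check it)."""
    grade = median(peer_grades)
    spread = max(peer_grades) - min(peer_grades)
    # Flag large disagreements, plus a random sample for spot checks.
    needs_ta_review = spread > disagreement_threshold or random.random() < audit_rate
    return grade, needs_ta_review

# Example: three peers mark the same assignment out of 100.
print(consensus_grade([72, 75, 70]))   # close agreement; usually no review
print(consensus_grade([40, 85, 78]))   # large spread; flagged for a TA
```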

In a different approach, students are treated as potential graders throughout the term, and only those who pass certain criteria take on the role of independent graders [6]. The premise is that once individuals reach a certain level of understanding, they can act as pseudo-experts and participate in the assessment procedure. Of course, to ensure fair grading, the system randomly chooses a subset of graded papers to be reviewed by the instructor.

Peer assessment is still in its infancy; nevertheless, a number of researchers in various disciplines are developing new techniques to address the critical issues of efficiency, fairness, and incentives. Each of the above methods (and many others in the peer-grading literature) could potentially be adopted depending on course characteristics and intended outcomes. I do believe that such characteristics, at the very least, must include the following:

  • Skill/knowledge transferability: Do marking skills and the knowledge of a previous topic automatically transfer to the next topic? If so, are they sufficient?
    For example, an essay-based course may use similar marking guidelines for all of its assignments, so training students once could be enough to turn them into effective peer graders.
  • Course material and structure: How dependent are the topics covered in the course on one another? Does the course introduce various semi-independent topics, or do the topics all contribute to building a single overarching subject?

What do you think? Have you ever used peer-assessment in your classes?

 

References

  1. Cho, K., & Schunn, C. D. (2007). Scaffolded writing and rewriting in the discipline: A web-based reciprocal peer review system. Computers & Education, 48(3), 409-426.
  2. Carbonara, A., Datta, A., Sinha, A., & Zick, Y. (2015). Incentivizing peer grading in MOOCs: An audit game approach. IJCAI.
  3. Gehringer, E. F. (2001). Electronic peer review and peer grading in computer-science courses. ACM SIGCSE Bulletin, 33(1), 139-143.
  4. Paré, D. E., & Joordens, S. (2008). Peering into large lectures: Examining peer and expert mark agreement using peerScholar, an online peer assessment tool. Journal of Computer Assisted Learning, 24(6), 526-540.
  5. Robinson, R. (2001). Calibrated Peer Review™: An application to increase student reading & writing skills. The American Biology Teacher, 63(7), 474-480.
  6. Wright, J. R., Thornton, C., & Leyton-Brown, K. (2015). Mechanical TA: Partially automated high-stakes peer grading. In Proceedings of the 46th ACM Technical Symposium on Computer Science Education (pp. 96-101). ACM.
Hadi Hosseini

Hadi Hosseini is a Graduate Instructional Developer at the CTE and a PhD Candidate in the Cheriton School of Computer Science. Hadi's research interest lies at the intersection of artificial intelligence, decision theory, and microeconomics.


Ipsative Assessment, an Engineering Experience

How will students demonstrate learning? What types of assessments will you use? (Image: https://www.flickr.com/photos/gforsythe/)

Last month I attended and presented at the Canadian Engineering Education Association Conference, held at McMaster University. It was a wonderful learning experience that allowed all participants to connect with engineering educators not only from Canada … Continue reading Ipsative Assessment, an Engineering Experience

Samar Mohamed

Samar Mohamed is the CTE Liaison for the Faculty of Engineering. Prior to joining the Centre for Teaching Excellence, Samar worked as a Postdoctoral Fellow in the Electrical and Computer Engineering Department. She received both her MSc and PhD from the University of Waterloo.


The Debate Over Accommodations: Making Space for Mental Health in the Classroom — Sarah Forbes

Equality doesn’t mean equity.

Most professors are aware of their responsibility to accommodate students with disabilities in their classroom. Many of them may not be as aware that this responsibility extends to students with a documented mental health condition as well. While mental health issues are often invisible, they create many difficulties for students in academia. By allowing reasonable accommodations, instructors can encourage these students to reach their full potential.

What do these accommodations look like?

Accommodations can take many forms. For students who have difficulty focusing in crowded environments due to issues like ADHD, alternative exam locations allow them to write their exams in smaller rooms. Other resources, such as peer note-takers, are often used alongside alternative exams: a fellow student takes lecture notes on behalf of someone who may not be able to multi-task or focus as well. For students with depression or anxiety who may have difficulties with motivation, short negotiated extensions on assignments may help them to manage their time. Other changes in assignment structure can be negotiated with specific students as well, such as changing a public speaking presentation to a prerecorded lecture for a student with social anxiety. In all of these cases, accommodations require the student to document their condition with AccessAbility Services. For extensions and other personalized changes in exam or assignment structure, the student and instructor can collaborate to find a solution that fits both the assessment needs of the instructor and the issues faced by the student.

There is some controversy over the idea of accommodations that change assignment structure or allow extra time. However, as illustrated by the cartoon accompanying this article, expecting all students to achieve the same results despite their different abilities and starting points in life is unrealistic. Accommodations given to students who need them simply give them the chance to truly show the work they have put into the class and the knowledge they have gained.

The debate over content warnings

The most controversial accommodation by far appears to be the “trigger warning” or “content warning.” The idea is exactly the same under either name: for controversial or difficult topics that must be discussed in class, the instructor presents a short warning prior to the introduction of the topic. This allows students for whom the topic may be upsetting, or may trigger flashbacks or anxiety attacks, to choose how they interact with the subject matter. This is especially important in the arts, where controversial discussions are the backbone of many classes. While discussions about rape culture and sexual assault on campus are important and help to eliminate stigma as well as introduce students to new viewpoints, they can send a student who has survived sexual assault into a debilitating panic attack, forcing them out of the conversation. Many professors view these warnings as an escape route from difficult conversations and assignments. Anyone can claim to be “triggered,” they argue, and then skip out on important lecture material and assignments with no penalty. However, a content warning does not mean that the material is not mandatory – it just allows students to be prepared for the discussion. If a student knows that they will not be able to handle the material, they can approach the professor privately and negotiate any other accommodations necessary.

These warnings are easy to add to a syllabus. They can be placed in the class schedule, next to lectures in which topics such as sexual assault, eating disorders, violence, or any other potentially graphic or disturbing subject will be discussed. The discussion culture of university is incredibly important for allowing students to experience many different ideas and viewpoints, but including upsetting subjects without any warning can alienate many students with mental illnesses, leaving them out of a discussion that often focuses on them. The voices we most need to hear when talking about some of these issues are those of students who have personally experienced them. To encourage them to speak up, we need to keep our classrooms welcoming.

Sarah Forbes

Sarah Forbes is an undergraduate in the Psychology department at the University of Waterloo and a co-op student at the Centre for Teaching Excellence.


‘Seeing beyond the self’: Using reflective writing as an assessment tool – Dan McRoberts

For many years, post-secondary educators have been encouraged to move outside the classroom and create transformative learning experiences for university students. Field courses, service learning, and cooperative education are all examples of the kinds of programming that have become increasingly common and popular amongst undergraduates looking to incorporate some unique and useful experiences into their university careers.

Despite the popularity and growth of transformational learning, questions persist about the most effective ways of assessing the student learning that results from these experiences. Experiential learning is hard to measure, so traditional assessment measures often fall short of the mark. Reflective writing is often at the heart of the assessment measures used to qualitatively gauge transformative learning, with self-evaluation and journaling being common assignment formats. There are significant challenges with using reflection to assess students, related to the highly personal nature of the transformations being recorded. Pagano and Roselle (2009) find that there is usually little clarity or systematization involved in using reflective practice, and what is involved can vary substantially between courses or instructors. Also, reflection tends to rely on students’ own accounts of events and responses, and as such it is very hard to discern whether learning has indeed taken place. Woolf (2008) also identifies concerns with the confessional ‘dear diary’ approach to reflective writing, as he aligns this with highly personal change or transformation. Given that much of the possible value in transformative learning comes from the opportunity to ‘see beyond the self,’ the question becomes how to design assignments and assessments that will help students develop this awareness and critical reflexivity.

Sometimes it helps to divide the task into two parts, one which focuses on personal development and the other that relates to key academic objectives or themes. Peterson (2008) profiles a service-learning course assessment that combined personal narrative with more academic analysis. Students were asked to prepare two journals with these respective foci, rather than being asked to write whatever came to mind. Doubling the student (and instructor) workload may not be the ideal solution, but fortunately there are models for designing reflective writing that can assess several components in the same assignment.

One is the DEAL model developed by Patti Clayton, which involves students Describing their experience, Examining the experience in light of specific learning objectives, and Articulating their Learning. The assignment is guided by specific prompting questions that encourage students to complete these various tasks in their reflective writing, from the who, what, when, and where of an experience (describing learning) to more detailed prompts about what was learned and how (examining and articulating learning).

Another, perhaps less well-known, approach is the ‘refraction model’ proposed by Pagano and Roselle (2009). Refraction tries to incorporate critical thinking into the process of reflection to encourage students to move beyond their own perceptions and consider how to address problems or scenarios they may have experienced in their course. This process begins with reflection and activities that are common to the assessment of transformational learning outcomes. From here, however, the authors propose using critical analytic and thinking skills to refract this knowledge and generate learning outcomes. The first stage – reflection – involves asking students to log events and journal reactions. The critical thinking phase asks students specific questions about these experiences, and the refraction stage invites them to suggest solutions and interact with others and their ideas about the same events or issues.

Whether or not the DEAL approach or the refraction model is applied, it is useful to remember what Nancy Johnston from Simon Fraser University says about reflection as a means of assessment: “We are looking for evidence of reflection, which means that students are challenging their assumptions, appreciating different points of view, acknowledging the role of power and discourse, the limitations of their conclusions and in short moving from black and white understandings towards recognizing varied shades of gray.”

(image credit: Paul Worthington)

Dan McRoberts

Dan is a TA workshop facilitator at the Centre for Teaching Excellence and a PhD candidate in the Department of Geography.
