Universal Design, Accessible Lectures, and Other Fun Buzz-Words — Michelle Ashburner, AccessAbility Services

I love the chalk-and-talk lecture in math. I have had the pleasure of teaching thousands of first-years, and with lots of questions, discussions, pauses, and well-formatted notes, the chalkboard lecture can go a long way. It forces students to attend lectures if they want notes directly from the instructor, allows for the presentation of dynamic visual and symbolic material, and most importantly allows for quick correction of mistakes.

Ever since I began working with the AccessAbility Services office, I have met many students who have disabilities that interfere with their learning in the classroom environment. These students, most of whom have an above-average to superior IQ, have found wonderful ways of compensating. They have inspired me to work on making my lectures more user-friendly to persons with disabilities (Accessible Lectures), as well as to make my course more readily absorbed by students in general (Universal Design).

The most challenging aspect was testing. The main idea of creating an accessible assessment is to provide choice. In the humanities, for example, students might choose between a 40% exam, a 40% essay, or a 20%/20% split between the two. Perhaps in a history class one student could take an exam orally while another writes a paper exam. Everyone has a preferred learning style and strength of expression, and for students with learning disabilities, being able to use this strength is even more important.

Well, what choice can one give with math exams? Traditionally the math midterm is a collection of questions on paper, and the possibility of an oral exam, or an essay in lieu of a problem-style written exam is out of the question. There aren’t enough resources to issue oral exams to 400 students, nor can we ensure that students understand mathematical reasoning and calculations if they are to write an essay composed entirely of text.

The exam that I gave this term was made with large font in LaTeX (which looks like 14-16pt when printed), lots of white space, and clear instructions for each question. After two common questions, the exam splits into a Part 1 and a Part 2, and students are instructed to complete one part or the other. Part 1 is mostly composed of word problems, while Part 2 is mostly composed of algorithmic problems. Part 1 does contain algorithmic, calculation-based material, and Part 2 does require students to create problem spaces and to translate wording into math; they are just presented differently.
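For anyone curious about the mechanics, a layout like this can be approximated with the extarticle class from the extsizes bundle, which supports a 14pt base font. This is an illustrative sketch only; the specific class, spacing values, and sample question are my assumptions, not the exact preamble used for the exam.

```latex
% Illustrative sketch only: extarticle (extsizes bundle) supports a
% 14pt base font; the spacing choices below are assumptions.
\documentclass[14pt]{extarticle}
\usepackage{setspace}

\onehalfspacing                   % extra white space between lines
\setlength{\parskip}{1em}         % breathing room between paragraphs

\begin{document}

\section*{Question 1 (5 marks)}
\textbf{Instructions:} Show all of your steps and simplify your
final answer.

Solve for $x$: \quad $3x + 7 = 22$

\vspace{4cm}  % generous answer space on the printed page

\end{document}
```

Compiling with a larger base font and explicit vertical space keeps each question visually isolated, which is much of what "lots of white space" buys for readability.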

Of the 400 exams, about 230 students chose the mostly-word-problem option, while the rest chose the algorithmic option. Keep in mind that deconstructing a word problem and working through the steps of solving it takes time, so Part 2 contained more questions (though the points per part were the same).

Students with a case of math anxiety (there are SO many in my classes!) can consider the algorithmic part as opposed to freezing when coming in contact with only word problems under a time constraint. They will continue to hone their word problem solving skills within the tutorial environment, where they may choose to work on a group assignment or on their own. Come the final exam, they will be more prepared for the word problems that await them.

In my experience, those who are verbally strong and are more comfortable learning the “soft” sciences tend to be more linear and algorithmic mathematics students, while those that are more comfortable going through an unpredictable journey with a math puzzle and have a more developed mathematical intuition tend to be less restricted to linear thinking. They could be characterized as “global,” or “intuitive” learners. Honestly, learning styles change and studies continue to bring light to the learning styles and strengths that tend to go together. All I have to go on is what I’ve learned from my students thus far.

It has helped me immensely to see the perspectives of my students at AccessAbility Services. When I present a word problem, I always read the text after having them read it on their own; I give breaks to process information; I try to have the learning as active as possible by prompting discussion, asking questions, and holding votes (we have very poor voter turnout in my classes. I am worried about the future of democracy).  I have digitized note outlines posted on LEARN in 14 point font, which are optional to use, but require attendance to have a complete set. My tutorial assignment instruction sheets encourage any student with difficulty producing written solutions to contact me by email, phone, or in person to discuss alternatives. I allow technology in the classroom (a whole other discussion on its own!), and I try not to assume ability to see in colour.

I have enjoyed the challenge of making an accessible math course so far, and I am looking forward to updating you all when term is over. Your thoughts will make this venture more of a success. Contact me any time.

Effective Feedback – Gowsi (CTE Co-op Student)


Entering my third year at the University of Waterloo, one thing that never crossed my mind was the idea of higher learning and teaching. I never thought to myself about how any of my classes could be improved. Was it because I did not care or was it because I was not asked?

Since working at CTE, I constantly find myself analyzing my past professors and their teaching methods to see if they were actually effective. As a Sociology major, I find that most of my classes should have been more engaging. There are many ways professors could have engaged students and made them want to learn, such as running effective discussions or switching up the delivery of the content. But the problem was that many of my teachers never asked me how I felt about the class. As a result, I rarely found myself focused during lectures. This is why feedback is necessary in courses. Feedback in a classroom setting benefits both parties involved. The students giving the feedback get to critically examine the teaching method. The receiver of the feedback, the teacher, gains a better understanding of how effective that method is. The feedback allows them to cater to students' needs and create a better learning atmosphere.

Feedback must combine good timing with effective questions to be valuable to students and teachers. I have written many feedback forms for my courses at the end of the term. But what is the point of that? The students filling out the forms never get to see any of the results. Don't get me wrong, feedback at the end of the semester is beneficial, just not for the writers. As a student, I want to see the changes made from my feedback firsthand, so that they benefit me. Implementing a system of feedback throughout the term would increase student engagement and participation.

For the first time in my university existence, I was asked to give feedback within the first few weeks of class this term. My professor asked us questions based on a simple feedback mechanism called SKS: he simply asked students what he should Stop, Keep, and Start doing. The next class, he displayed the outcome of the feedback in a graph, a simple and easy way to read the results. This method allowed students to raise their issues with the class early, and allowed the professor to address them early enough that the class could move forward smoothly. He then changed his method of delivery to benefit the majority of the class. This schedule of feedback throughout the semester should be implemented in all courses the university provides. Students feel more welcome in the classroom because it shows that the teacher is invested in their success. I guess the takeaway of this blog is that everyone is aware of the benefits of feedback, but what is key is to begin the process sooner rather than later.


Designing assessments that curb academic dishonesty (and increase learning too!) – Jane Holbrook

I recently listened to a segment on The Current on CBC about academic integrity and the effect of technology on cheating. The main guest was Dr. Julia Christensen Hughes, Dean of the College of Management and Economics at the University of Guelph, who talked about the findings of some of the research that she has conducted on Canadian university students. A whopping 80% of Canadian university students admit to having cheated. They admit to at least one of over 30 behaviours that are considered cheating at university, ranging from outright cheating on exams, to plagiarism, to working in groups when specifically asked to work individually on an assignment. Interestingly, this isn't a new problem. American studies in the '60s found that 75% of students admitted to cheating in college. And it's not a new behaviour for students when they get to the post-secondary environment. In one recent Canadian study, 60% of high school students admitted to cheating on tests, and 75% to cheating on written work that is handed in. Although technology provides more ways for students to cheat (buying "internet" papers, using online paper mills, and just good old cut-and-paste from internet sites), it hasn't impacted the overall rate of cheating. Technology has, however, increased instructors' ability to detect plagiarism, thanks to online services such as Turnitin that use huge databases of accumulated student work, web pages, and online journals to compare submitted work to common sources.

What interested me most from the conversation with Dr. Christensen Hughes was her finding that students were less likely to cheat if they respected the instructor, if they felt that the quality of the education they were receiving was high, and if the instructor was using assessments that truly assessed the skills and knowledge that students were learning in the course. This last point dovetails nicely with a book that I have just been reading, "Cheating Lessons: Learning from Academic Dishonesty" by James M. Lang. Lang discusses how the ways we teach and assess can impact students' academic integrity, and how instructors can design assessments that reduce academic dishonesty and also create better learning.

Lang proposes that students are more likely to cheat if:

  • there is low intrinsic motivation to learn what is being assessed;
  • there is an emphasis on one-time performance rather than continuous improvement towards mastery;
  • the stakes are high on a single assessment;
  • they have a low expectation of success.

So what can an instructor do to decrease cheating and increase learning?

When students are intrinsically motivated, find the subject matter meaningful and can connect it to their own lives, they will learn more and retain their learning. Students driven by extrinsic rewards, such as grades, use strategic or shallow approaches to learning and will have more motivation to cheat. Posing authentic, open-ended questions to students or challenging them with problems or areas of investigation of their own choice can give students the opportunity to demonstrate their knowledge and reflect on what they have learned.  Learning portfolios that include journal entries, short essays, and reflections can assess the student learning experience and understanding of concepts (and are darn hard to cheat on).

Learning for mastery (a deep approach to learning) rather than one-time performance can be encouraged and assessed. Giving students multiple attempts on assessments, or offering students choices in how they will be assessed, can promote a mastery approach. These tests can also provide students with feedback so that they can learn from the assessment and then apply their learning again to show mastery. Scaffolded assignments or essays, where drafts and reworked versions are submitted for feedback, provide evidence of learning and are not likely to be purchased on the internet.

There is evidence that repeated low-stakes assessments have the largest impact on learning and retention, particularly if the testing is in the format of short-answer questions. Known as the "testing effect," this can be achieved through the use of short online quizzes or one-minute papers. Creating opportunities for students to retrieve knowledge and rehearse answering questions not only measures learning, but also produces learning (Miller, 2011). Lang discusses how taking the emphasis off one big, high-stakes assessment and introducing multiple low-stakes assessments helps students rehearse for more substantial assessments and actually reduces cheating.

When students feel that they have no chance of success they are more likely to give up rather than attempting to master concepts, and they may look for alternative, dishonest ways to pass tests. Lang argues that helping students be aware of their level of understanding throughout a course will help them gauge how much work they need to do to be successful on major assessments. Activities like think-pair-share, clicker questions and other in-class activities or formative assessments help instil self-efficacy, and help students identify what they need to do to become capable rather than relying on cheating.

It all sounds like more work for the instructor, yes, but with two great results: better learning and less cheating, and presumably less time spent following up on academic integrity cases as well.

Lang, J.M. 2013. Cheating Lessons: Learning from Academic Dishonesty. Harvard University Press, Cambridge, Mass, USA.

Miller, M. 2011. What College Teachers Should Know About Memory: A Perspective from Cognitive Psychology. College Teaching, 59:117-122.

Let’s Talk about Assessment — Katherine Lithgow

How often have you marked assignments and provided comments only to find that the students don't even bother to pick them up? Or they get the feedback and then make the same "mistakes" on the next assignment? How often do you sit down with your colleagues and discuss how they would mark particular assignments?

Assessment Literacy: The Foundation for Improving Student Learning (2012) discusses why students often fail to act upon feedback: they often don't understand what the feedback means, or, if they do, they don't know what to do to address it. It also offers suggestions on how we can improve the process.

What captured my attention was the notion of improving the assessment process through the development of an assessment community of practice (CoP) whose membership consists of ALL parties involved in the assessment process: students, instructors, and anyone else who provides feedback to students.

The authors remind us that providing feedback is a complex, social process and not an end product. In the community of practice approach to assessment, the process becomes an invitation for students to participate in the discipline's community and to work with more accomplished members in order to learn its conventions, culture, and language through observation, discussion, and engagement. By actively participating in the community, ALL PARTIES come to have a shared understanding of the criteria that will be applied when making marking decisions, and this helps ensure that marking is objective and reliable.

And how does this shared understanding come about? Well, it comes about through formal and informal social interactions: talking with each other through dialogues, peer-to-peer discussions, and student-to-faculty discussions. And by taking steps to create an environment that encourages students to ask questions of their instructors, their classmates, and themselves, and that fosters their capacity to peer-evaluate and self-assess. Students are more inclined to engage in the assessment process and use the feedback when they feel comfortable in the community and do not feel belittled. Forming positive relationships is important; students are more apt, as we all are, to apply and act upon feedback when it comes from a trusted and caring source, and when it is viewed as part of an ongoing learning process they can act upon rather than as a 'final product'.

The authors argue for taking a program-wide approach to assessment rather than the course-based approach that is more commonly in place. In a course-based approach, students often feel that the feedback is unique to the course or the instructor, and they do not see how the feedback will help them in future courses.

They suggest that in an environment that cultivates an assessment community of practice, students and instructors are more inclined to think of courses as part of a program, and of assessment as part of a process that allows for more focus on 'deep' learning and the development of skills and concepts that are learnt slowly over time, not within the span of a single semester.

The authors do not claim to have all the answers on how to facilitate this type of environment, and acknowledge that it takes time and effort to cultivate a community of practice around assessment.   But wouldn’t it be worth it if we could help our students engage with, and effectively apply, the feedback that we’ve been spending so much time providing?

Contemplating Quality + Teaching at Waterloo – Donna Ellis

Over the last few months, I have been working on a multi-institutional project to identify indicators of an institutional culture that fosters "quality teaching". One report that our group has been reviewing comes from the Organisation for Economic Co-operation and Development's Institutional Management in Higher Education group. Published in 2012, the report, entitled Fostering Quality Teaching in Higher Education: Policies and Practices, outlines seven policy levers that institutional leaders can use to foster teaching quality. The levers provide reasonable actions to take: raising awareness of quality teaching, developing excellent teachers, engaging students, building organization for change and teaching leadership, aligning institutional policies to foster quality teaching, highlighting innovation as a driver for change, and assessing impacts. But what constitutes "quality teaching"?

At its most basic level, the authors indicate that “quality teaching is the use of pedagogical techniques to produce learning outcomes for students” (p.7). More specifically, they explain that quality teaching includes “effective design of curriculum and course content, a variety of learning contexts (including guided independent study, project-based learning, collaborative learning, experimentation, etc.), soliciting and using feedback, and effective assessment of learning outcomes. It also involves well-adapted learning environments and student support services” (p.7). These definitions focus on student learning, the honing of instructional and critical reflection skills by teachers, and the need for institutional infrastructure to support learning. What they do not focus on is the adoption of any particular pedagogical method nor the specifics of an instructor’s performance in a classroom (think about what course evaluations tend to highlight…).

The authors also identify the need to ground any efforts to shift the quality of teaching – or the culture in which teaching happens – within a collaboratively developed institutional teaching and learning framework. This framework should reflect the identity and differentiating features of an institution and define the “objectives of teaching and expected learning outcomes for students” (p.14). At uWaterloo, we have endorsed the degree level expectations (undergraduate and graduate) as the benchmarks for program level outcomes. But we do not yet have a succinct statement about our goals regarding quality teaching.

Our newly released institutional strategic plan asserts that one way we will offer leading-edge, dynamic academic programs is by “increasing the value of teaching quality and adopting a teaching-learning charter that captures Waterloo’s commitment to teaching and learning” (p.11, emphases mine). I wrote about another institution’s teaching and learning charter in the September 2012 issue of CTE’s Teaching Matters newsletter. What will our charter entail? What do we value about teaching and learning? What kind of institutional culture do we want to promote with regard to teaching quality at Waterloo? These aren’t small questions, but they’re very exciting ones to contemplate.

The disposition to think critically – Veronica Brown

As I write this post, several Waterloo colleagues are attending the Society for Teaching and Learning in Higher Education's (STLHE) annual conference. It seemed like a good opportunity to reflect on my experience at last year's conference. STLHE was the first conference I attended when I joined CTE three years ago. It was held in Toronto that year, wrapping up on the same weekend as the G-20 summit. Last year, it was in Montreal, where I watched people march a block or two from our hotel as part of their day of protest.

Interestingly, the session that continues to haunt me was related to critical thinking. In her session, Beyond skills to dispositions: Transforming the critical thinking classroom, Shelagh Crooks, a professor at Saint Mary's University, explored elements of teaching critical thinking; her goal was to "raise questions in the participants' minds about the purpose of critical thinking education, rather than propose clear solutions" (Abstract, para. 3). She certainly fulfilled that goal in my case.

This idea of the disposition to think critically is what has really stuck in my head. Not just for critical thinking, but for other areas of the curriculum in which we must move beyond the knowledge and skills of a topic and encourage thought in the affective domain. Consider themes such as health and safety, societal or environmental impact, ethical behaviour, integrity, teamwork, and management. As educators, what is our role in the development of our students? Take health and safety, for instance. Is it enough for our students to know about hazardous materials, for example, and to have the skill to work with them appropriately? Or is there a third element: to actually value health and safety? To look critically at a situation, to question a current practice when appropriate, to have the disposition to continuously look at the lab through a health and safety filter?

And so here I am, a year later. I find myself with more questions about this disposition idea than answers. It is something I am exploring as part of the curriculum work I support. Many of us are wondering not only about teaching and learning in the affective domain but, as a next step, about how to assess it. If developing this disposition is our instructional goal, how will we know our students have achieved it? If this is a question you are pondering too, let me know; I'd love to chat with you about it.

[Photo: Waterloo in winter] I realize this image isn't related to critical thinking, but I thought I would share it for anyone missing the snow…


Has the feedback sandwich passed its “eat-by date”? – Karly Neath

Who would dispute the idea that feedback is a good thing? Evidence from research, our personal experiences, and common sense make it clear: formative assessment, consisting of lots of feedback and opportunities to use that feedback, enhances student performance and achievement.

A commonly used approach when delivering feedback is the “sandwich method”. You sandwich the negative feedback between the pieces of positive feedback. It has always been done this way, so it must work, right?

This traditional approach might work once, maybe twice. After that, people recognize that a feedback sandwich is coming their way the moment they hear the positive praise.

In fact, we start to form a conditioned response (an anchor) to positive feedback from an instructor that is followed by negative feedback. The negative feedback blasts the first positive comment out of the receiver's brain. The receiver then dwells on the negative feedback, which drives it into memory. The receiver is now on guard for more negative feedback and cannot hear the positive comments that end the cycle. The result is that what the student is doing well is never reinforced. This is reason enough to search for alternative ways of delivering feedback.

An alternative method has been proposed by Marion Grobb Finkelstein, a member of the Canadian Association of Professional Speakers.

According to Marion, the key to success when structuring feedback is to only offer positive feedback. Here is her formula:

How to Give Feedback:

  1. "When you…" (describe his/her behaviour)
  2. "…consider doing this." (describe your suggested behaviour)
  3. "This will help you get…" (describe the benefit, the gain, what they will move towards)
  4. "And it will help you avoid…" (describe what they will move away from)
  5. End with an authentic compliment and encouraging praise.

Here is the model put into action:

“Brenda, when writing up your experimental report, consider the idea of including a graphical display to represent your data. This will make your data clear to the reader and avoid the frustration of the time-pressed TA marking your report that would arise if they do not understand your results. Good job on the written component of the report! I look forward to seeing your final version.”

Instead of giving Brenda feedback using the typical sandwich – “Brenda, your report is good but it didn’t have graphs. Your written component was good.” – the new approach communicates the same information with a positive tone.

With this model comes the hope of keeping students engaged and motivated, with the end goal of improving student learning. It may have a different flavour than the sandwich you typically order. But that one is full of bologna anyway!

I encourage you to give this method a try and to research other ideas. Do not be afraid to challenge the traditional sandwich method!