Wrapping to Uncover Learning – Monica Vesely

Many of us have likely heard the term wrapper, or cognitive wrapper, used when discussing ways to help our students become more independent and self-aware learners. In particular, this term comes up when discussing assessment as a learning opportunity. So what exactly is a cognitive wrapper, and how can it be used to aid learning?

In brief, a cognitive wrapper is a tool that guides students before, during, or after a teaching and learning event, helping them identify their own approaches to it and recognize which aspects of their behaviour are productive and which are not. It encourages students to purposefully examine what they can and should change to improve the learning experience. Wrappers are a structured way to guide students through a reflective process that increases their self-awareness and leads to a modification of behaviour through self-regulation.


What We Can Only Learn from Others — Donna Ellis, CTE Director

You know when you have an “a-ha” moment and two ideas from completely different contexts suddenly merge in your mind? I had this happen to me when I attended a recent faculty panel discussion in Math about the use of clickers. The panelists shared a variety of experiences and gave excellent advice to their colleagues. My “a-ha” moment arose when the panel facilitator declared how much she had learned about her students when she started to use clickers: “I thought I knew what they were thinking. Boy, was I wrong!” Her statement cemented for me the extreme value of asking others about their thinking rather than making assumptions and then devising plans based on those assumptions.

You may have heard that CTE is going to have an external review in 2017.  It’s time and it’s part of our institutional strategic plan for outstanding academic programming.  Our Centre was launched in 2007, a merger of three existing units that supported teaching excellence.  Many things have changed since then, including the structure of our leadership, our staffing, the breadth of services that we provide, and our location.  Organic, evolutionary change is positive, but there’s value in stepping back to see where we’ve been, what’s on the horizon, and how to get there.  And this is where the “a-ha” moment comes in:  my small CTE team working on this review cannot know what others think about where we are and where we could go.  I’ve always known this, but it’s one thing to know it and another to do something about it.

And so we’ll be asking, both as we prepare for our self-study and during the external reviewers’ visit. We have already started to ask some different questions on our feedback instruments about our services, focusing on ways that working with us has helped to enhance your capacity and your community as teachers. These changes are part of launching a comprehensive assessment plan that connects to our Centre’s overall aims. But we have also begun to work on sets of questions for our external review about areas that we might be too close to see clearly, or that we cannot know ourselves because they depend on others’ perceptions. These questions involve topics ranging from our mission statement and organizational structure to our relationships with others and the quality of our work. We also need input on the possibilities for “CTE 2.0”: where could we be in another 10 years?

We’ll be starting this data collection with our own staff members, doing a SWOT analysis (strengths, weaknesses, opportunities, and threats) this spring term.  But we will be seeking input far beyond our own walls, including beyond UWaterloo.  When we come knocking (literally or by email or by online survey), I trust you’ll answer and provide your honest feedback and insights.  We believe we are a responsive organization that helps those who work with us to achieve their goals, and we have some data to support these claims, but we want more.  We want your input.  We want to be able to say: “We didn’t know that. We’re so glad we asked!”

If you have thoughts or insights into our external review plans, please let me know.  You can reach me at donnae@uwaterloo.ca or at extension 35713.  We want to make this external review activity as generative and useful as possible.  I am optimistic that with your help we can achieve just that.

Photo courtesy of David McKelvey

Program Outcomes – Join our new learning community – Veronica Brown

Goals. Aims. Objectives. Outcomes. Metrics. Performance Indicators. Ideal Graduate Attributes.

Last week, I spent some time with colleagues debating the meaning of these various terms. They are often used interchangeably but, depending on who you ask, they don’t mean the same thing. I tend to lump goals, aims, and objectives together because they represent our intentions – what we will work towards during a given learning experience. I see outcomes and attributes as what students are actually able to do by the end of that experience (specific behaviours, knowledge, skills, and attitudes they have developed). Finally, I place metrics and performance indicators into a category of measurements of those outcomes. Our discussion last week confirmed that while we use these terms in similar ways, it’s worth taking the time to clarify our shared understanding of them.

Now, it’s time to expand that conversation across campus. I’m excited to announce a new learning community at CTE – program outcomes assessment. Many departments across campus are engaged in program assessment through academic program review, accreditation, and curriculum design and renewal. Bob Sproule (a member of the School of Accounting and Finance’s Learning Outcomes Committee) and I will be leading this group as we explore various aspects of program outcomes assessment.

The first session, on May 12, 2016, from 12:00 to 1:15 p.m. in EV 241, is a brainstorming session to explore topic ideas for the coming year. Our goal is to meet twice per term, starting in Fall 2016, and we want to ensure the sessions reflect areas of interest for you, our community members. If you are unable to attend the session but are interested in joining the community, please email me, Veronica Brown (veronica.brown@uwaterloo.ca), Sr. Instructional Developer, Curriculum and Quality Enhancement, Centre for Teaching Excellence.

A slice of a curriculum map – a great tool in assessing program outcomes

Ipsative Assessment, an Engineering Experience

How will students demonstrate learning? What types of assessments will you use? (Image source: https://www.flickr.com/photos/gforsythe/)

Last month I attended and presented at the Canadian Engineering Education Association Conference, held at McMaster University. It was a wonderful learning experience that allowed all participants to connect with engineering educators from Canada and beyond.

Integrated Testlets – What are they? – Samar Mohamed

Sample IF-AT card

Last month I attended the annual Society for Teaching and Learning in Higher Education (STLHE) 2014 Conference, held at Queen’s University in Kingston. It was an excellent opportunity for me to learn from colleagues across Canada and exchange ideas with them. One of the workshops that attracted my attention was facilitated by Aaron Slepkov and Ralph Shiell from the Department of Physics at Trent University. In their workshop, Ralph and Aaron focused on their newly developed testing technique, the “Integrated Testlet (IT)”.

The presenters started by talking about the benefits of Constructed Response (CR) questions, a common term for questions where students must compose the answer themselves, and how these types of questions enable instructors to gauge their students’ learning. CR questions also allow instructors to give part marks for partially correct answers. The presenters also commented on the trend in Physics to switch from CR questions to Multiple Choice (MC) questions due to increasing class sizes and the resulting constraints on time and personnel resources. However, traditional MC questions allow neither part marks nor, more importantly from a pedagogical standpoint, any way for instructors (and students) to know where the students went wrong.

The integrated testlet method is different. Its “answer-until-correct” question format lets students keep trying each MC question until they find the correct answer, which enables the granting of partial marks, and it ensures that students do not leave a question without knowing the correct answer, so they can move on to the next, connected question. The method presented transforms complex CR physics questions into IT questions. The IT is based on a traditional testlet, which is a group of MC questions based on a common stem (or scenario). In an IT, the answer to one question (task) leads procedurally to the next task, and in this way the students’ knowledge of how various concepts are connected can be assessed. Items in an integrated testlet are therefore presented in a particular sequence, in which the answer to part (a) is used to solve part (b), and so on. ITs rely on the “answer-until-correct” response format, where students keep making selections on an MC question until the correct response is identified and can be used in subsequent related items. The presenters used the Immediate Feedback Assessment Technique (IF-AT) cards to allow students several attempts and to grant part marks for their responses.

For more information about IF-AT cards, see the Epstein Educational Enterprises website. For a sample application of IF-AT cards at the University of Waterloo, see the CTE blog post by my colleague Mary Power, the Faculty of Science liaison. In their published paper, the presenters explain the method they used to transform CR questions into IT questions and analyze the students’ responses to both question types; it is a very useful and interesting read that I recommend for instructors considering this method.
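To make the scoring mechanics concrete, here is a minimal sketch of how marks might be computed under an answer-until-correct format. The weight schedule (full marks on the first scratch, half on the second, and so on) and the function names are my own illustrative assumptions, not the scheme from Slepkov and Shiell’s paper; with IF-AT cards, the actual partial-credit schedule is up to the instructor.

```python
# Hypothetical partial-credit scoring for answer-until-correct MC items,
# as used with IF-AT scratch cards. The weight schedule below is an
# illustrative assumption, not Slepkov and Shiell's published scheme.

ATTEMPT_WEIGHTS = [1.0, 0.5, 0.25, 0.0]  # weights for a four-option MC item

def item_score(attempts_used: int, max_marks: float = 2.0) -> float:
    """Marks earned when the correct option is uncovered on the given attempt (1-based)."""
    weight = ATTEMPT_WEIGHTS[min(attempts_used, len(ATTEMPT_WEIGHTS)) - 1]
    return max_marks * weight

def testlet_score(attempts_per_item: list[int], max_marks: float = 2.0) -> float:
    """Total marks for an integrated testlet: a sequence of linked items.

    Because every student eventually uncovers each correct answer, item (b)
    can safely build on the answer to item (a) regardless of marks earned.
    """
    return sum(item_score(a, max_marks) for a in attempts_per_item)

# Example: correct on the 1st, 3rd, and 2nd scratch across three linked items.
print(testlet_score([1, 3, 2]))  # 2.0 + 0.5 + 1.0 = 3.5 out of 6.0
```

What the sketch makes visible is the pedagogical point from the workshop: marks depend on how quickly each item is answered, but the sequence itself never breaks, because every student leaves each item knowing the correct answer.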

Using “Transit Questions” in place-based pedagogy – Trevor Holmes

I love being in the classroom, whether it’s large or small, whether I’m officially the teacher or the learner. But I also love getting out of the classroom. Some of the most powerful experiences in my own learning and my own teaching have come from observing, interacting, and reflecting in spaces other than lecture halls and seminar rooms. Some time ago, I wrote about place-based pedagogy (with some suggested reading) and gave the example of a workshop for the Educational Developers Caucus (EDC) conference at Thompson Rivers University. Since then, I have continued to use in my own cultural studies course what I previously had no name for: the field observations and intellectual response papers, the spontaneous “field trips” out into parts of campus to apply concepts, and the incorporation of people’s experiences into the framework of the course.

Today’s post is about a small piece of the place-based learning experience I had at the EDC conference, a piece that I’m considering using with my own learners when they do their field observations. To date, I’ve supplied them with reflection questions and note-taking guides for the site visits. I’ve used the online quiz tool in the learning management system to ask “prime the pump” journal questions. But I’ve never yet tried the “transit question” approach. Transit questions were thought-triggering questions handed out just before we traveled to the field sites in Kamloops. There were, to my recollection, four different cue cards, and each pair of people received one or two of them. The question on the front (and maybe there was one on the back) was meant to ready us for what we were about to see by asking about related prior experience with X, what we expected to find when we got to X, or how X is usually structured. The idea was to talk to our partners about the questions and answer them informally as we made our way to the sites, a trip of 10 to 20 minutes.

Photo of two people in Iceland. Source: Karlbark’s Fotothing stream (shared under CC license)

I can imagine transit questions for pairs that would be suitable for my course too. However, we don’t always have pairs (sometimes small groups, sometimes solitary learners going to a space in their hometown, and so on). I can easily adapt the idea for solo use, though clearly I wouldn’t want someone to be taking notes in response to the prompt while, say, driving!

If we do the field trip to Laurel Creek Conservation Area again to test ideas found in Jody Baker’s article about Algonquin Park and the Canadian imaginary, I’ll be using transit questions on the bus ride for sure. For other observations, I will have to think about how to adapt the idea. Choosing the right question or questions seems important, as does offering space to jot notes for those who don’t want to start talking immediately. I’d strongly encourage this approach when you know people will be traveling somewhere for the course by bus, or by foot or assistive device. I can imagine that there are lots of opportunities to do this (and it’s likely already done) in disciplines as varied as geography, planning, fine art, architecture, biology, geosciences, accounting, anthropology, and many others. I’m thinking it would be great if learners could pull questions from a question bank to their phones or other devices en route as well… the possibilities!

Transit questions on the way to field sites helped to ready me and my partner for what we’d be looking at, to reflect on the implications of our mini-field trip, and to connect our histories to the present task. I recommend them wholeheartedly.

Assessment Philosophy – Veronica Brown, CTE

Bishop’s University (photo by Ryan Millar, flickr)

A few weeks ago, Julie Timmermans (also from CTE) and I visited Bishop’s University to facilitate two workshops. The morning session was on course design, a condensed version of CTE’s Course Design Fundamentals. The afternoon session was titled “Designing Assessment for Learning”. We had an absolutely wonderful time and met many faculty members from both Bishop’s University and Champlain College, which is also located in Lennoxville.

We struggled for quite some time with the content of the assessment workshop. Whole university courses are dedicated to this topic. We had just three hours. What should we cover? What were the most critical messages? Should we focus on specific tools? What are some of the “best practices” that are happening at Waterloo that we should share? Should we spend an equal amount of time on both formative and summative assessment? Wow! There is so much to cover.

OK. Perhaps we could build the assessment plan (our intention being that participants would leave the workshop with an assessment plan) around a few specific assessment tools. But what should we include? Exams? Quizzes? Assignments? Written assignments? Problem sets? Labs? Projects? Research? Essays? Community Service? Design Competitions? Case Studies? Reports? Studio projects? Individual work? Teamwork? Participation? Reflective Writing? Again, what a lot of content to cover in just three hours!

Our initial thoughts and design focused heavily on content and all the knowledge we wanted to impart. Ironic, given that we had just planned a course design workshop. We eventually took our own advice and stepped away from the content. As we continued to wrestle with these ideas, we kept asking ourselves: if there were just one thing we would like participants to know or have when they walked out the door, what would it be? It took a really, really long time to figure this out.

Eventually, we realized that we wanted participants to explore a different element of assessment. While we could impart lots of ideas related to specific tools, we decided to focus instead on how we view and value assessment. We began the workshop with an exploration of the concerns we, as instructors, have about assessment, and then compared them with our perceptions of students’ concerns about assessment. We then explored elements of a framework for assessment, which includes observation (obtaining evidence of learning); interpretation (reasoning from the evidence); learning outcomes; and, at the centre of the framework, the purpose (Why am I assessing?). The framework we presented was adapted from the National Research Council (2001), Knowing What Students Know, Washington, DC: National Academy Press, p. 44.

Both these pieces led to the final activity of the day, in which we asked participants to articulate their Assessment Philosophy, which might eventually become part of their Teaching Philosophy. The idea was to explore their goals and philosophy for the assessment of their students. We asked them to reflect on the following questions.

  • Who is involved with the assessment?
  • What roles does assessment play in learning?
  • What boundaries surround your assessment framework?
  • How can you provide flexibility to support the variety of learners in your class?
  • What pieces are rigid and which ones are flexible?

Once they had written this philosophy statement, we asked them to reflect on how the assessments in their course reflect it. In reality, we cannot always control the contextual factors that shape our assessment choices (e.g., if part of our philosophy relates to developing a reflective practice, how do we provide formative feedback to a class of 1500?). But reflecting on and articulating our own philosophy can help guide us when we need to make some of the more difficult decisions tied to our course’s assessment strategy.