Have you ever felt overwhelmed? I’m sitting at my computer on a late November afternoon contemplating what I have taken away from two recent events: a provincial symposium on assessing learning outcomes and an international conference for educational developers on transformative relationships and their role in fostering cultures of deep learning.
I attended numerous sessions, and overall I came away with a sense of what I call “data overwhelmosis”. We have more data and more evidence available to us than ever before in higher education. We have software to help us identify specific learning outcomes and each student’s level of achievement for each outcome. We have online templates for course syllabi that generate maps of the learning outcomes for an entire program’s curriculum. We can use learning analytics and data analytics to monitor students’ progress (or failure). We can do social network analyses to show how we connect to one another and how information flows within a unit or across an entire institution (or beyond). We know which educational development practices have empirical backing. The list goes on. My point is that we can now capture almost anything. We can collate massive amounts of data and generate evidence for (or against) almost anything you can imagine. But to what end? What’s the purpose? And what’s the overarching plan?
We’ve talked a lot about these questions while devising and implementing our Centre’s assessment plan and preparing for our upcoming external review. Just because we can collect data doesn’t mean we should. How much is enough? What will we do with what we collect? Why will it matter? Data collection takes time and effort. We know this from any research project we have undertaken. In our line of work, any time we ask staff to input data about their work is time not spent working with a client. There has to be a good reason to ask staff members to spend time in this way. This is where the role of questions becomes critical.
For research projects, we determine research questions. We did the same when devising our assessment plan. These questions guide our every move: our methodological decisions, the types of data we need, the appropriate analysis methods, and the way we write up our results. The questions enable us to select the data that will help us determine answers, and these limited data become the evidence for our conclusions. We’ve realized that we don’t need every piece of data that we could collect – just the data that are relevant to the questions. This is a freeing revelation.
But it doesn’t end there. The evidence isn’t enough. We need to find the story. What does the evidence mean? How will it affect what we do tomorrow or in the next five years? I worry that higher education in general – and educational development specifically – is getting bogged down in the weeds and not stepping back to identify what those weeds are telling us. The examples that I noted in the second paragraph help to illuminate the issue. But what are we overlooking? Which way is the wind blowing now and in the future? Our questions create important frames that make data manageable and even meaningful, but thinking about how to tell the story of the evidence seems to me the most crucial step of all.
In the next few months, we will be aiming to tell the story of CTE in our self-study, which will extend far beyond what we convey in our annual reports. We will be analyzing existing relevant data and collecting new data as needed to fill perceived gaps. We will be striving to ensure that we have sufficient information to assist our external reviewers in addressing the questions set in the Terms of Reference for the review. But from all of this, what we most need is to tell our story and listen to what it is telling us. I’m not entirely sure what we’ll hear, but I am very intrigued by what will emerge. The evidence is critical, but we need to move beyond it to better understand where we are and where we’re going.