Artificial Teaching Assistants

The “draughtsman” automaton created by Henri Maillardet around 1800.

The dream of creating a device that can replicate human behaviour is longstanding: 2500 years ago, the ancient Greeks devised the story of Talos, a bronze automaton that protected the island of Crete from pirates; in the early thirteenth century, Al-Jazari designed and described human automata in his Book of Knowledge of Ingenious Mechanical Devices; around the turn of the nineteenth century, the clockmaker Henri Maillardet invented a “mechanical lady” that wrote letters and sketched pictures; and in 2016, Ashok Goel, a computer science instructor at Georgia Tech, created a teaching assistant called Jill Watson who isn’t a human – she’s an algorithm.

Goel named his artificial teaching assistant after Watson, the computer program developed by IBM with an ability to answer questions posed in ordinary language. IBM’s Watson is best known for its 2011 victory over two former champions on the gameshow Jeopardy! In Goel’s computer science class, Jill Watson’s job was to respond to questions that students asked in Piazza, an online discussion forum. Admittedly, the questions to which Jill Watson responded were fairly routine:

Student: Should we be aiming for 1000 words or 2000 words? I know, it’s variable, but that is a big difference.

Jill Watson: There isn’t a word limit, but we will grade on both depth and succinctness. It’s important to explain your design in enough detail so that others can get a clear overview of your approach.

Goel’s students weren’t told until the end of the term that one of their online teaching assistants wasn’t human – nor did many of them suspect. Jill Watson’s responses were sufficiently helpful and “natural” that to most students she seemed as human as the other teaching assistants.

Over time – and quickly, no doubt – the ability of Jill Watson and other artificial interlocutors to answer more complex and nuanced questions will improve. But even if those abilities were to remain as they are, the potential impact of such computer programs on teaching and learning is significant. After all, in a typical course how much time is spent by teaching assistants or the instructor responding to the same routine questions (or slight variations of them) that are asked over and over? In Goel’s course, for example, he reports that his students typically post 10,000 questions per term – and he adds that Jill Watson, with just a few more tweaks, should be able to answer approximately 40% of them. That’s 4000 questions that the teaching assistants and instructor don’t have to answer. That frees up a lot of their time to provide more in-depth responses to the truly substantive questions about course content.

More time to give better answers: that sounds like a good thing. But there are also potential concerns.

It’s conceivable, for example, that using Watson might not result in better answers but in fewer jobs for teaching assistants. Universities are increasingly keen to save money, and if one Watson costs less than two or three teaching assistants, then choosing Watson would seem to be a sound financial decision. This reasoning has far broader implications than its impact on teaching assistants. According to a recent survey, 60% of the members of the British Science Association believe that within a decade, artificial intelligence will result in fewer jobs in a large number of workplace sectors, and 27% of them believe that the job losses will be significant.

Additionally, what impact might it have on students to know that they are being taught, in part, by a sophisticated chatbot – that is, by a computer program that has been designed to seem human? Maybe they won’t care: perhaps it’s not the source of an answer that matters to them, but its quality. And speaking for myself, I do love the convenience of using my iPhone to ask Siri what the population of Uzbekistan is – I don’t feel that doing so affects my sense of personal identity. On the other hand, I do find it a bit creepy when I phone a help desk and a ridiculously cheery, computerized voice insists on asking me a series of questions before connecting me to a human. If you don’t share this sense of unease, then see how you feel after watching 15 seconds of this video, featuring an even creepier encounter with artificial intelligence.

Mark Morton

As Senior Instructional Developer, Mark Morton helps instructors implement new educational technologies such as clickers, wikis, concept mapping tools, question facilitation tools, screencasting, and more. Prior to joining the Centre for Teaching Excellence, Mark taught for twelve years in the English Department at the University of Winnipeg. He received his PhD in 1992 from the University of Toronto, and is the author of four books: Cupboard Love; The End; The Lover's Tongue; and Cooking with Shakespeare.
