
Using the Right Language to Assess Beyond Evaluation

Originally posted on Ryerson Student Affairs on December 9, 2015.

Assessment & You is an ongoing monthly series brought to you by Lesley D’Souza on RyersonStudentAffairs.com. Whether you’re an experienced assessor or a complete newbie, she’ll share in-depth knowledge of best practices and methods while demonstrating the act of assessment using this very series.


When I became a Residence Assistant in my second year of university, I learned about the importance of language during our equity and diversity training session. I was horrified and a bit defensive once I started to understand the derogatory roots of many terms that I had used in ignorance. After that, my propensity to interrupt conversations with the phrase, “Actually, did you know where that word comes from?” pretty quickly became known to my friends and family. Since then, life experience has repeatedly emphasized why it is important to choose my words carefully. When it comes to assessment, people often use the words “assess” and “evaluate” interchangeably; however, these are distinct concepts, and it is important that we understand not only the difference between them but also other subtle nuances of assessment language. You may question why these elusive differences matter, but the power of language is considerable, and it can lie between us and producing mindful, proactive assessment.

Assessment Versus Evaluation

It’s understandable that there is confusion about the relationship between assessment and evaluation, since the difference is less clear in Student Affairs than in academic environments. In academics, evaluation is a summative, product-oriented, and judgmental activity that happens at the end of an experience (for example, a final exam at the end of a course). On the other end of the spectrum you have assessment—a formative, process-oriented, diagnostic activity that can occur throughout the experience (including at the end). Assessment is the blanket term here because, while not all assessment is evaluative, all evaluation is assessment.

Because many of the differences between assessment and evaluation deal with timing, the distinction becomes muddier in Student Affairs, where many of our co-curricular programs have a less distinct beginning and end. So we find ourselves distinguishing between them based on the spirit of our efforts—are we trying to judge performance, or are we trying to explore the learning that is happening?

Indirect Versus Direct Assessment

Most of the assessment that I’ve known to take place in Student Affairs is summative in nature. That is, we conduct it at the end of our programs to evaluate our overall success. The problem is that we fall back pretty heavily on indirect assessment tools to find out if we are achieving our outcomes.

An example of indirect assessment would be a tool that collects information about what students think they have learned, versus a direct assessment that actually evaluates what students know or are able to do. Indirect assessment (i.e., student feedback) can be a useful tool to support our formative assessment efforts, but it can fail us when we try to evaluate our success. If you’re asking students to tell you how well they understand the content from the program, rather than testing their knowledge of the content, you’re relying on their ability to be objective and remarkably self-aware. By leaving students to self-report their learning, we can’t really evaluate the learning itself. We need to conduct some kind of direct assessment—which could include quizzes, reflections, rubrics, or observation—to demonstrate to what degree they’ve achieved the set outcomes.

And here is the main challenge with direct assessment: few students will elect to do extra quizzes or assignments that can demonstrate their learning from co-curricular programming. We have to balance pragmatism with the need for good data. Reflection and storytelling are tools that are already widely used to support the impact of our programs. One possibility for us is to create rubrics that can be applied to these experiences so we can quantify narrative data. By doing so, we can start to dig into what our students can actually demonstrate as a result of their learning.

What Is A Rubric?

Simply put, a rubric is a document that sets expectations and criteria for success and organizes these into a series of levels that describe the quality of the learning. Rubrics make it possible for us to ensure our evaluation is consistent, even across multiple assessors. When creating one, you should define the learning you want students to achieve, set criteria for success, and then organize those criteria into levels, each with a score or degree of mastery assigned to it.
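To make the structure concrete, here is a minimal sketch of a rubric represented in code. The criteria, level descriptors, and scores are entirely hypothetical—they're placeholders for whatever learning outcomes your program defines—but the shape (criteria organized into scored levels) follows the definition above:

```python
# A rubric as a simple data structure: each criterion maps to an ordered
# list of levels, and each level pairs a score with a short descriptor of
# the quality of learning it represents. Criteria and wording here are
# illustrative, not a recommended rubric.
RUBRIC = {
    "reflection_depth": [
        (1, "Describes the experience only"),
        (2, "Connects the experience to personal growth"),
        (3, "Analyzes the experience and plans future action"),
    ],
    "communication": [
        (1, "Ideas are hard to follow"),
        (2, "Ideas are clear and organized"),
        (3, "Ideas are clear, organized, and persuasive"),
    ],
}

def score(ratings):
    """Total the score for one student's work.

    `ratings` maps each criterion to the index of the level an assessor
    chose. Because every assessor draws from the same fixed levels, two
    assessors rating the same work produce directly comparable totals.
    """
    total = 0
    for criterion, level_index in ratings.items():
        level_score, _descriptor = RUBRIC[criterion][level_index]
        total += level_score
    return total

# One assessor's ratings: top level for reflection, middle for communication.
print(score({"reflection_depth": 2, "communication": 1}))  # prints 5
```

The point of the fixed structure is the consistency the paragraph above describes: the judgment happens once, when the levels are written, rather than separately inside each assessor's head.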

Think about it—what if we started to actually include students in our efforts to assess? The beauty of using rubrics is that good ones provide a concrete framework that makes it possible for students to self- or peer-assess their progress. If best practice means setting our outcomes before we start our program, then those outcomes should also be something we communicate to students at the outset of their learning experience. Doing so will help set the stage for their learning. Let’s not forget, we’re in the business of activation and agency. Being student-centred might mean building student input into the process we use to set our goals and outcomes in the first place.

Helpfully, our friends at Campus Labs have just unveiled a brand new “Rubrics” tab in Baseline that can provide you with sample rubrics and support your first forays into rubric assessment. They even have rubrics to assess your rubrics. They call them meta-rubrics. Log in using your Ryerson ID to read more about rubric best practices, advantages and disadvantages, and check out some examples from other institutions.

Everything In Moderation

Now, I don’t want to get you thinking that direct assessment is the only way to go; indirect assessment does have value. Information about shifts in student perspectives and satisfaction is useful when considering delivery methods and contextualizing learning. The problem is that we’re using it wrong—it can’t accurately tell us how we, or our students, are actually performing. Likewise, evaluation is important, but it can’t be our only purpose. A holistic approach to assessment includes evaluation, as well as direct and indirect methods—they tell us whether or not students are attaining our outcomes—and like all good assessment, together they can also show us why or why not.

Looking back again to my first language training in second year—we talked a lot about the intent versus the impact of our words. Essentially, the intention behind words becomes irrelevant in the face of hurtful impact. It’s the same with assessment. We might have the best of intentions with our programs, but what really matters are the outcomes. Evaluation, indirect and direct, formative and summative—these are all important words related to assessment, but alone they cannot make us successful. Ultimately, if we can start to use words like curiosity, innovation, accuracy, and courage to describe our assessment efforts, we’ll be rewarded with data that can truly transform our work for the better.


© Copyright 2016 - Lesley D'Souza * Photos by Katherine Holland Photography