Friday, December 20, 2013

The Process of Direct Assessment

This is a slightly unusual post, a reference post for academics.

What is it?
If you are in the education business these days, you have to do assessment or you're going to run into accreditation problems. Of course the reason to do assessment is so that you know whether you are actually teaching what you say you are teaching. A good teacher does assessment continuously. You read the papers or grade the tests or read the discussions and say to yourself, "Yeah, they're getting it" or "Wow, they completely missed it" or "Hmmm, some of them got it but some of them didn't."

We might call this sort of assessment "informal assessment." Like I said, good teachers and good departments do this sort of self-assessment continuously. In a perfect world, you wouldn't need anything else. Professors would make adjustments--both individually and corporately--without anyone having to tell them. However, not all do, and as a matter of public accountability, institutions are increasingly required to have data to back up their claims to be accomplishing their missions.

So we develop means of formal, direct assessment of student learning. By the way, student surveys are not direct assessment. They are indirect because they are the students telling you whether they liked the course or whether it met their expectations. This is valuable information, but it has dangers too. For example, students can rate a professor with problems more highly than they should because they liked her or because he gave them easy grades. Meanwhile, a professor who teaches a hard course or grades strictly can receive worse evaluations than she or he deserves.

Outcomes
The first step toward formal, direct assessment is for each course to have clear "outcomes." What is a student supposed to gain from this class? These can be divided into three basic categories--outcomes in knowledge, skills, and dispositions. Of course an entire degree should have outcomes too. What is a student supposed to gain from this degree as a whole?

Goals tend to be vaguer and may or may not be "assessable." Language of outcomes implies that there is some way to measure whether a student has actually achieved the goal. Language of objectives can refer to more or less the same thing, but it sounds more like it's worded in terms of the intentions of the professor or institution rather than the student, whose learning after all is the target.

Artifacts
The key connection in assessment is to tie the outcomes of a course or a degree to specific, assessable artifacts in the course(s). So if you say you want students with a Master of Divinity degree to be able to interpret the Bible, you need to have some specific assignment or assignments in some required course that you can look at in order to see if it is happening.

So the process of assessing a course or degree requires you to map its intended outcomes to specific assignments that you can use to determine whether or not students are in fact achieving those outcomes.
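For those who like to see the idea concretely, that curriculum map is really just a table from outcomes to artifacts. Here is a minimal sketch in Python; the outcomes, course codes, and assignments are hypothetical examples, not anyone's actual curriculum:

```python
# Hypothetical curriculum map: each degree outcome points to the
# assignments (artifacts) used to assess it.
outcome_map = {
    "Interpret the Bible": ["BIB-510: exegesis paper"],
    "Preach effectively": ["HOM-520: recorded sermon"],
    "Articulate a theology of ministry": [],  # no artifact mapped yet
}

# Any outcome with no mapped artifact is a gap in the curriculum map
# and signals an assignment that needs to be designed or redesigned.
unmapped = [outcome for outcome, artifacts in outcome_map.items() if not artifacts]
print(unmapped)
```

The point of the sketch is simply that the map makes gaps visible: an outcome with nothing mapped to it cannot be assessed until some assignment is created for it.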

These artifacts will need to be collected in some sort of a cycle. You don't have to assess every outcome every year. You would ideally have a random sample--so you really don't want to pick just the artifacts from the best students. Ideally the professor of the course would not evaluate papers from his or her own students. You also want some sense that those evaluating the artifacts are applying more or less the same standards--that different raters would give an artifact similar scores.
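Drawing the sample without bias is exactly what a random sample means. A minimal sketch, assuming a hypothetical pool of forty collected papers:

```python
import random

# Hypothetical pool of collected artifacts from one course.
artifacts = [f"paper_{i:03d}" for i in range(1, 41)]  # 40 papers

random.seed(42)  # fixed seed only so this sketch is reproducible
sample = random.sample(artifacts, k=10)  # 10 papers, no hand-picking
```

Because `random.sample` draws without replacement from the whole pool, no one gets to quietly select only the strongest students' work.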

The world of online education will eventually move toward real-time assessment. In this process, professors will both grade key papers and assess them at the same time. This will create a quite accurate database for overall assessment. Every student in every version of a particular course will be evaluated every time, and it will become quite clear whether students are achieving intended outcomes.

Rubric
Artifacts are evaluated according to a rather simple rubric. For example, it might be a four-option scale: 1) the artifact does not demonstrate the outcome, 2) it shows a little evidence of the outcome, 3) it basically achieves the outcome, and 4) it knocks the outcome out of the park. So you would hope that the average score across all the artifacts will come somewhere in the 3-4 range.
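The arithmetic here is nothing more than a mean over the rubric scores. A minimal sketch, using made-up scores on the 1-4 scale described above:

```python
# Hypothetical rubric scores (1-4 scale) for one outcome's sampled artifacts.
scores = [3, 4, 2, 3, 4, 3, 3, 4, 2, 3]

average = sum(scores) / len(scores)   # mean rubric score for this outcome
meets_target = 3.0 <= average <= 4.0  # the hoped-for 3-4 range
print(average)  # prints 3.1
```

If the average falls below 3, that is the signal to look at the pedagogy or the assignment design, as discussed under "Adjustments" below.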

Adjustments
The process of setting up such a system may lead to all sorts of adjustments. For example, if you can't find any artifacts relating to outcomes you have stated are part of your degree, you will need to redesign some assignments. Maybe there are some that come close but aren't quite on target. Redesign them. Certainly if students are completely failing to achieve some outcome, the pedagogy needs to be examined.

In a perfect world, this sort of assessment is redundant. If professors have been intentional all along, if they have been informally adjusting all along, this process will simply show that an institution is doing what it says it is. Students and the public, however, have a right to see the evidence. After all, they're paying good money to get an education.

1 comment:

Jake Schell said...

This was very informative. I finished the AS in General Studies Online at IWU this year and will be starting the Biblical Studies program in February. There have been classes where improvements were clearly needed and others that exceeded my learning expectations. Overall, I consider my experience at IWU invaluable. Keep up the good work!
Jake