At this time of year, educators’ minds often turn to grading, and to what a despised part of academia it is. Whether the instrument is an exam or the ubiquitous “final project” that characterizes so many GIS courses, I’m curious about how we assess student learning of GIS in general. I don’t often see “learning outcomes” on GIS syllabi. More often I see “learning objectives,” along with evidence that instructors are not making a distinction between the two.
But that distinction doesn’t even matter right now. I am wondering how people manage to feel confident that their assessment approaches actually align with their learning outcomes (and/or objectives). I often see language like “students will understand the structure and function of a GIS,” which could be a pretty big task, depending on how you plan to measure “understand.” We know “understand” is a compelling but problematic word, and one to be avoided. Avoided, that is, IF we care about actually finding a way to determine whether students are learning what we *say* we want them to learn.
Distinguishing between content knowledge and skills knowledge within introductory GIS courses becomes relevant here. This throws us into the middle of the technological pedagogical content knowledge (TPCK) vortex. Personally, I believe that GIS firmly belongs there, and that we don’t fully understand the implications. *Especially* when we’re following what we think is a teaching-with-GIS approach, so that students are magically learning biology or history while the GIS they’re using is an invisible scaffold. There’s the mantra of online GIS: that adolescents may have far fewer problems than adults with software interfaces, that the infamous steep learning curve of desktop GIS is so 1980s-2000s, and so on. But are we really to believe that ArcGIS Online is the Easy Button for all teaching with GIS?
Which brings me back to learning outcomes. What are students learning, and how are we measuring that? Head over to our Forums for chances to discuss this further.