the taxonomy was based on the “observable behaviours of learners”. that’s right, the taxonomy classifies and organises the behaviours that learners outwardly demonstrate.
bloom mentions something (which again, as an ID, made me feel self-conscious and guilty!): there is no better or worse here. it's not as though the higher levels are superior and the lower levels map to middling student performance. rather, he points out that the taxonomy tries to mirror reality, and that the arrangement reflects a range from simple to complex behaviours.
as IDs, we lament not having opportunities to do really deep teaching, of rarely moving beyond the application level in elearning. while i very much share that frustration, i must admit it may be a remnant of academic snobbery or bias, of the schoolroom value system where the ability to handle complexity is evidence of intelligence. i'd still say "not that it isn't…! but is it the only kind, or even the main kind, of intelligence?" and considering i work for the corporate training market and not k12, i should probably reconsider what a more valued form of intelligence would be for my learners in their context.
so bloom says the "emphasis in the handbook is on obtaining evidence on the extent to which desired and intended behaviours have been learned by the student". i find this to be based on a very conventional model of teaching, where the teacher decides what is desirable or appropriate and then imparts the information to the student, who must conform and obey, no questions asked. (and we know by now what a BIG problem "no questions asked" poses for me! :D) also, note that "desired behaviour" bit. what if the student has a perfectly valid reason not to conform, to challenge the convention? education for social reform and transformation, anyone?
add to that, medium-specific considerations. in our context, can we really claim that we adequately offer learners ways to demonstrate their knowledge? at least at the present moment, i really don't think so. most of us, even khan academy, survive on mcq/mrq mechanisms.
also, what's up with the linearity? some of us don't get a deep view of things until they've fermented in our minds for a while. only then can we draw out all kinds of crazy analysis, not right off the bat. and it's not a case of the whole always being reducible to the sum of its parts, y'know?
then another thing. bloom adds a disclaimer. he says the taxonomy "cannot be used to classify educational plans which are made in such a way that either the student behaviours cannot be specified or only a single (unanalyzed) term or phrase such as "understanding" or "desirable citizen" is used to describe the outcomes. only those educational programs which can be specified in terms of intended student behaviours can be classified." meaning, there must be enough depth in the description to support further analysis and breakdown. and this flies in the face of the corporate obsession with SMART goals (which i think is another grossly misapplied concept, but that's a story for another day…).
that raises an interesting question for us in practice. how often do we get rich descriptions of learner behaviour in corporate contexts? do we need to reconsider when we pull the taxonomy out of our bag of tricks? at least at first glance, organisation-wide compliance training would be an area to exclude right off. the requirement usually seems to be "people just need to know about this because it's regulatory information. they'll learn later how it matters for their particular jobs." or worse, "i just need to roll out a training because it's a mandatory topic, and i need that off my checklist this year". but i want to dwell on instructional design considerations and perspectives separately.
coming up next, the taxonomy itself. 🙂