Wednesday, June 13, 2012

Framing the Semester, Finding the Model

I had to fudge a bit on my assessment scheme several times this year.  For each model, I identified "core skills," "proficiency indicators," and "advanced indicators."  For example, it seemed like a no-brainer to me that identifying when the model applies was a non-negotiable line in the sand - you are "not proficient" (the lowest of the five or six levels) if you can't do that.

Reality intervened a bit - there's definitely a spot along the learning continuum where students could apply conservation of momentum fairly well, even though determining why momentum's conserved for this collision but not for that one can be quite difficult.  So I didn't always enforce that skill as an automatic NP (not proficient), though missing it certainly didn't tell me that you were proficient.
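For concreteness (this is just the standard criterion, nothing particular to my scheme), the judgment students have to make comes down to the impulse-momentum theorem for the system:

    % momentum changes only through the net external impulse
    \[ \Delta \vec{p}_{\text{sys}} = \vec{J}_{\text{ext}} = \int \vec{F}_{\text{ext}} \, dt \]

so momentum is conserved exactly when the net external impulse is negligible over the interval you care about - easy to state, genuinely hard to judge for a messy real collision.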

My first (and probably second) response was to move that skill up a level, so that you can be "developing" without being able to determine every case for which the model applies, but you can't be "proficient" without it.
While that's probably still what I'll do - it represents a different perspective on the mental development of the models than I had last year - I think there's a bigger opportunity here.

Part of the problem with developing those skills is the nature of building one model, then the next, then the next: for the first half of the term, there's not much suspense about which model will apply, so students don't get an authentic experience of discerning which model applies.  Even if you put in some time having them check whether the one or two models that they know apply, the suspense is still mostly missing, because they generally know that they'll be able to solve almost all of the problems that you give them.

Here's where my idea comes in: I'm going to use a recitation problems system similar to Kelly O'Shea's, but we're not just going to look at them a few weeks before the end of the term.  We're going to start them on day one (ish).  We'll look at a big list of situations, most of which we have no idea how to attack.  We'll identify why the model(s) that we know so far don't describe them, or we'll try to apply those models and recognize where they fail.  We'll really learn to identify when our model(s) will work.  We'll motivate the construction of new models - "hey, we still can't do anything with those colliding cars, because we don't know the force acting between them, and it's not going to be constant anyway.  We need something that can deal with that - let's crash some carts and see if we can model them!"  I think that this could be the game-changer for my students' big-picture understanding of models and their ability to solve the really sticky, ill-posed problems (you know, like life).  At the end of the term, we can look back and have a really tangible reminder of how far we've come.  Seems like I have three sets of recitation problems to write.
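A quick sketch of the payoff there (standard textbook physics, not anything new): by Newton's third law, the force each cart exerts on the other cancels in the system total, so, assuming negligible external impulse during the crash,

    % internal forces cancel pairwise, so the unknown F(t) drops out entirely
    \[ m_1 \vec{v}_1 + m_2 \vec{v}_2 = m_1 \vec{v}_1' + m_2 \vec{v}_2' \]

holds no matter what the force between the carts looks like, constant or not - which is exactly the tool those stuck problems are begging for.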

2 comments:

  1. Hey Josh,

    That sounds really cool! I do a way lamer attempt at that kind of thing by including some of Matt's problems in the early units and by putting in both goal-less problems and out-of-unit problems throughout. I know that I need to do a less lame version of it, too, though.

    Thinking about your plan, I wonder whether it would be better to use goal-less prompts for your set of problems or to use goal-ed problems that ask specific questions. The goal-less variety might be a little weirder for them to understand starting on day 1 (though it doesn't take much to explain, obviously), but it could be neat to have several layered solutions to each problem. It would make it even less of a matching activity because some situations could be solved in multiple ways; a response that seemed adequate before momentum, for example, might now hugely open up with the perspective of a new fundamental principle. So the "solutions" would keep evolving and expanding as new tools were built, and it wouldn't be as though you'd finished any of the problems until you got to the end of the year.

    Okay, I think I might be jumping on board with you for this in the regular class, at least. I'm adding it to my summer list.

    Replies
    1. I think this is a really cool idea too. I'm not sure about making them all goal-less problems though. From your perspective as a teacher, it's obvious and cool that the same problem can have different solutions once you start applying different models, but I would expect at least some of the students to just say "look, we already knew how to do stuff on this problem. We're done with it," and not see that they could do more with it later. I mean, if you told them to solve it again, I'm sure they would, but will they recognize on their own that it's worth doing? I'm sure some of them will, but my gut feeling is that most won't. If there is a specific answer that they don't know how to get, that might be a more compelling reason for them to want to apply the newer model.

      I think the idea of students learning to apply multiple models to the same situation to get different information about it is a good one, and that's what the goal-less problems help them learn. But in my mind at least, that is a different teaching goal than looking at a bunch of different problems and figuring out which *one* model is best to apply to each problem. I think both are valuable things to do with students, but neither will be as well accomplished if one task is meant to accomplish both. But maybe I'm wrong, or misinterpreting what the goal is?
