I had to fudge a bit on my assessment scheme several times this year. For each model, I identified "core skills," "proficiency indicators," and "advanced indicators." For example:
Reality intervened a bit: there's definitely a spot along the learning continuum where students can apply conservation of momentum fairly well, but the process of determining why momentum is conserved for this collision and not that one can actually be quite difficult. So I didn't always enforce that skill as an automatic NP, though missing it certainly didn't tell me that a student was proficient.
My first (and probably second) response was to move that skill up a level, so that a student can be "developing" without being able to determine every case in which the model applies, but can't be "proficient" without it.
Part of the problem with developing those skills is the nature of building one model, then the next, then the next: for the first half of the term, there's not much suspense about which model will apply, so students don't get an authentic experience of discerning which model fits. Even if you put in some time having them discern which of the one or two models they know applies, there's still not much suspense most of the time, because they generally know that they'll be able to solve almost every problem you give them.
Here's where my idea comes in: I'm going to use a recitation problems system similar to Kelly O'Shea's, but we're not going to wait until a few weeks before the end of the term to look at the problems. We're going to start them on day one (ish). We'll look at a big list of situations, most of which we have no idea how to attack. We'll identify why the model(s) we know so far don't describe them, or try to apply them and recognize their failure. We'll really learn to identify when our models will work. We'll motivate the construction of new models - "hey, we still can't do anything with those colliding cars, because we don't know the force acting between them, and it's not going to be constant anyway. We need something that can deal with that - let's crash some carts and see if we can model them!" I think this could be the game-changer for my students' big-picture understanding of models and their ability to solve the really sticky, ill-posed problems (you know, like life). At the end of the term, we can look back and have a really tangible reminder of how far we've come. Seems like I have three sets of recitation problems to write.