Saturday, June 30, 2012

...and another thing...

I tweeted recently about Sal Khan's (of Khan Academy) response in the Chronicle of Higher Education to the MTT2K tweet-project-happening-movement-thing, in which teachers present critiques/parodies of Khan's videos, à la Mystery Science Theater 3000.

I don't have a subscription to that journal, so I admittedly only commented on the excerpt on dy/dan.

Something in there's still bothering me, though:

     With procedural, worked problems: That’s how I learned, that’s how everyone I knew learned.

In addition to Dan's (and others') already-written critiques of procedural learning and of lecture's general ineffectiveness (a point that Khan still seems not to acknowledge or even fully appreciate - the scary part, especially for someone being handed the keys to the educational kingdom by so many), there's another issue here.

One of the reasons that physics reform efforts (which are some of the most highly advanced, in terms of purposeful development and research support) have veered sharply away from lecture is that lecture has been shown to be ineffective.  One of the reasons that the shift has been a difficult sell to many (teachers, administrators, parents, students) is that lecture and procedural approaches were how they learned.

For most students (and for parents and administrators as past students, unless they went into scientific/mathematical fields - but maybe not even then, if they didn't go into physics), it doesn't even work! It's familiar, though, and there's almost an idea that you're not supposed to understand physics and math.  It's a hoop that you jump through and then forget right after. We teachers see that mentality all the time, and standards-based grading (SBG) is a good way to start to stem that tide, but that's not my point here.

For physics teachers, it's a difficult sell precisely because lecture probably did work for them. I took a course similar to AP Physics B in high school (we didn't have AP-designated courses at my school, but it was basically the same course content) as my intro course, and a calculus-based course that was about the same as the two AP C courses for my second course (and I had the good fortune to be able to take electronics and modern physics courses, too).  After that first course, full of a crazy amount of content, I really did have a solid grasp of the concepts beneath pretty much all of that physics.

That's a story that's been around forever - this worked for me, so it'll work for you. If it doesn't, then you're not working hard enough, or you're not smart enough, or whatever.  One of the historical failures of lecture was its inability to really reach anyone other than the folks who become teachers, engineers, etc.  Everyone else was left out.

So if that's an old story, what's left to say? I think that one of the reasons that it could work for me was that I had time to engage with the material, even just inside my own head. I had downtime and time alone.  Quite a bit, really. There were no cell phones, text messages, Facebook, Twitter, etc.  I had email and FTP and Gopher, etc., but those weren't 24/7 sources of instant gratification like students have today.  I built stupid stuff, did silly projects, took apart things that I couldn't put back together, etc.  I wasn't out doing a Westinghouse project; I was just engaging with the world and, with some space and time, I was able to connect the real world to all of those things that I learned about in physics class.  I was interested in all of that stuff before I took physics, so I had a leg up (especially on the many kids that I see who don't know how to use a wrench or don't understand what are and aren't good uses for duct tape), but the space to do that experimentation before, during, and after my physics education was important, even if it didn't look like it all the time.

Regardless of the reform method, the key is stimulating this engagement.  Whether it's SBG, modeling, peer instruction, etc., that's really always the linchpin. In the past, there was opportunity for that thought and experimenting (building things, etc.) outside of class, even if the lecture itself didn't convince everyone that they should do it (and even if it didn't really train them to do it well, or at all).

Today, kids don't have (mostly through choices that they make, though there's great societal pressure to make those choices) the long stretches of attention to devote to turning these ideas over in their minds.  There's always a text message to reply to or some other distraction.  They're just never alone and their time is increasingly sucked into reactive communication.  Never before have so many been able to talk so easily and had so little to say.

If they're sitting in class listening, almost all of them are not really engaging with that material.  Outside of class, almost all are trying to get through whatever assignment or test prep they're doing so that they can get on to the next thing or the next text message, interrupted all the while so that most can't even do that bit efficiently.

What reform education methods do is actually carve out time for that reflection to happen. 

The key is to walk into the class and see what the kids are doing.  If it's something, it's probably a lot more effective than doing nothing while sitting and listening. 

Yes, it'd be great for kids to be able to do more of this outside class, but the deck's stacked against us.  Even when kids did have time, the success rate of lecture was abysmally low. If we teach them how to think and reflect and analyze, then we know that they're actually doing some of that (because we're there when it happens), and we may even stimulate them to do more of it while they're not with us. Win, win, win.

Thursday, June 21, 2012

Art Class, Day 1

My school has a great program of faculty grants, wherein teachers apply for grants to do things during the summer that are enriching but have nothing to do with their jobs.  I love this program for a lot of reasons.  In the past, I've taken a guitar-making class and drum set lessons. This summer, I'm taking a drawing class at the Delaware Art Museum.

This is an interesting experience for me because I've never taken an art class before. I'm not particularly artistic, but I thought that it'd be good for me. I'd like to draw some cartoons for Anders's walls depicting some fundamentals of physics, so this'd be a good prep for that. I'm not a good artist, so there's that.

Most interestingly, it's a chance for me to be a student again.  Nervous, unsure, decidedly non-expert - all things that our students experience constantly, but that we're far removed from now.  It's a great chance to build up that empathy and to (re)gain perspective on what it's like to be learning something for the first time.

I didn't really even know what all of the tools were that I was directed to buy, but there I was, with my shopping bag full of 15 different kinds of dark stuff to smear on paper. Of all of the things that I learned tonight, the most important one probably has a lot to do with learning physics:

     Draw what you see.

Our teacher kept saying that, and it was really true.  Don't draw what you know.  Don't draw what you remember.  Don't draw what you think should be there.  Don't draw what you think will look good with what you already have.  Draw what you see.

In physics, I see that as not trying to shoehorn a new problem into the mold of an old one, not trying to sound artificially science-y when describing a situation, not trying to figure out what the teacher wants you to write, not worrying when you're not sure how what you're doing will lead to the answer. Just physics what you see.

I want to be public with the process here, because I want to be upfront with my students: it's cool to struggle with something that's difficult, it's fine to try in an area that you're not "into," it can be fun to work hard, and being terrified of something doesn't mean that you shouldn't attack it head-on.  So, we'll have five more weeks of class. Hopefully, I'll improve :)

Some pics from tonight!

Wednesday, June 13, 2012

Framing the Semester, Finding the Model

I had to fudge a bit on my assessment scheme several times this year.  For each model, I identified "core skills," "proficiency indicators," and "advanced indicators."  It seemed like a no-brainer to me that identifying when the model applied was a non-negotiable line in the sand - you are "not proficient" (the lowest of five/six levels) if you can't do that.

Reality intervened a bit - there's definitely a spot along the learning continuum where students can apply conservation of momentum fairly well, even though the process of determining why momentum's conserved for this collision but not that one can be quite difficult. So I didn't always enforce that skill as an automatic NP, though missing it certainly didn't tell me that you were proficient.

My first (and probably second) response was to move that skill up a level, so that you can be "developing" without being able to determine every case for which the model applies, but not "proficient."
While that's probably still what I'll do - it represents a different perspective on the mental development of the models than I had last year - I think there's a bigger opportunity here.

Part of the problem with developing those skills is the nature of developing one model, then the next, then the next: for the first half of the term, there's not much suspense as to which model will apply, so students don't necessarily get an authentic experience of discerning which model applies.  Even if you put in some time having them check whether the one or two models that they know apply, there's still little suspense, because they generally know that they'll be able to solve almost all of the problems that you give them.

Here's where my idea comes in: I'm going to use a recitation problems system similar to Kelly O'Shea's, but we're not just going to look at them a few weeks before the end of the term.  We're going to start them on day one (ish).  We'll look at a big list of situations, most of which we have no idea how to attack.  We'll identify why the model(s) that we know so far don't describe these, or try to apply them and recognize their failure.  We'll really learn to identify when our model(s) will work. We'll motivate the construction of new models - "hey, we still can't do anything with those colliding cars, because we don't know the force acting between them, and it's not going to be constant anyway.  We need something that can deal with that - let's crash some carts and see if we can model them!"  I think that this could be the game-changer for my students' big-picture understanding of models and their ability to solve the really sticky ill-posed problems (you know, like life).  At the end of the term, we can look back and have a really tangible reminder of how far we've come.  Seems like I have three sets of recitation problems to write.

Wednesday, June 6, 2012

Reassessment Management

Lots of folks (including me) have posted about reassessment management for SBG: it's a real logistical issue, and it can really reduce the amount of time that you get to spend on thinking about how you run class or your time to help kids or (usually) personal time.

I have three and a half things to say about the process:

1. Start using ActiveGrade/BlueHarvest
I haven't actually used BH, but it has many of the features of ActiveGrade, for which we were a pilot school this year, plus some others.  The key here is that it makes the process of storing feedback and creating a record of the change in understanding for a standard workable.  Before this, all of the feedback was lost, stuck on that paper that maybe was thrown away.  I had a system before for keeping track of the progression of scores (crazy Excel macro action), but it didn't get the feedback.  I hear that there may be some document uploading added to ActiveGrade before next year, too!

Last year (yeah, inches of paperwork), I had the Excel sheets themselves plus the process of PDFing and posting individual grade summaries. This year, I had only ActiveGrade, with the added functionality of all of that feedback.  Big win there.  They also started doing an automatic alert to parents and advisors when scores were entered.  There's a report that you can cook up yourself, but it takes some time, and the incremental notification might be nice, particularly when there's an alternate system for grade reporting that every other teacher uses (like we have here) - it avoids the two-login hassle a bit and means that there aren't surprises at the end of the term, though I fear that it might trigger the helicopter-parent assaults early in the year.

2. Use GoogleDocs to Schedule Reassessments
I started using a Google Form this year to have students sign up for reassessments (which I also restricted to only M, W, F, in order to give me some prep time/breathing room).
It looked like this:

This really helped the process, gave me an easily sortable list in order to prepare the reassessments efficiently, and was well-received by the students.  I can't believe that I went through a whole year last year having kids wander in, show me some problems or whatever, and me making up reassessment problems on the fly for each one.  That was nuts.  Now they're ready to go, sitting in each student's personal folder in the reassessment box on reassessment day!
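
If you want to go one step further, a short script can do the sorting for you. Here's a minimal sketch of the idea in Python - the file name and column headers below are hypothetical stand-ins for whatever your exported Form spreadsheet actually contains:

    import csv
    from collections import defaultdict

    # Hypothetical file name and column headers for the exported Form CSV -
    # your form's actual headers will differ.
    SIGNUP_FILE = "reassessment_signups.csv"

    def load_signups(path):
        """Group Form responses by reassessment day (M, W, or F)."""
        by_day = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                by_day[row["Reassessment Day"]].append(
                    (row["Name"], row["Section"], row["Standard"])
                )
        return by_day

    if __name__ == "__main__":
        for day, entries in sorted(load_signups(SIGNUP_FILE).items()):
            print(day)
            # Sort by standard so that similar reassessments get prepped together.
            for name, section, standard in sorted(entries, key=lambda e: e[2]):
                print(f"  {standard:<10} {name} ({section})")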


2.5. Make the Standards Coarser... if you want to
I had fairly 'grainy' standards last year: constant acceleration motion was like 5 standards, maybe.  This year, I made (basically) each model a standard, so I had something like (for honors - regular is a completely different set of models): CVPM (split into graphical and algebraic the first term), CAPM (split the first term, CVPM and CAPM combined into 'Motion' in subsequent terms), BFPM, UFPM, Friction (these combined as 'Forces' in subsequent terms), UCM, GM (gravity), CoEM, ETM (work), CopM, pTM (impulse), SEM (static elec.), CEM (current electricity), Algebra, Units, Modeling, Writing, Error Analysis (the last three are tricky, and maybe the topic of a later post).  Anyway, this reduces the number of reassessments by reducing the number of standards, makes their preparations more rigorous (because there's more material on them), and forces connections among different parts of each model without being (for my taste) too muddled.  One key is how you communicate the scores, though.  I used a scale of NP (not proficient), De-, De (developing), P-, P (proficient), A (advanced - again, a sticky issue to deal with), and gave feedback with a series of tables on the last page of each assessment (or reassessment).  That looks something like this:
This keeps the feedback specific and obvious for the students, without making me keep track of 50 standards per term.  I also type most of what's written into the ActiveGrade report when I enter a score.
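
A table like that is also easy to generate programmatically, which pays off in item 3 below. Here's a minimal sketch - the scale is the one above, but the layout and wording are placeholders rather than my actual template:

    # Sketch only: render one per-standard feedback table as LaTeX.
    SCALE = ["NP", "De-", "De", "P-", "P", "A"]

    def feedback_table(standard, score, comment=""):
        """Return a small LaTeX tabular marking the earned level for one standard."""
        marks = [r"$\bullet$" if level == score else "" for level in SCALE]
        rows = [
            r"\begin{tabular}{|l|" + "c|" * len(SCALE) + "}",
            r"\hline",
            standard + " & " + " & ".join(SCALE) + r" \\ \hline",
            "Score & " + " & ".join(marks) + r" \\ \hline",
            r"\multicolumn{%d}{|l|}{%s} \\ \hline" % (len(SCALE) + 1, comment),
            r"\end{tabular}",
        ]
        return "\n".join(rows)

    print(feedback_table("CopM", "P-", "Watch those elastic collisions."))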

I also started letting students make a screencast if they had a P- because of only a single issue (with my permission - 'nice job conserving momentum, but you need to work on using the speed of approach/retreat to analyze those elastic collisions'), which seemed to give good results in terms of what they produced and how much they reported that it helped them solidify their understanding.  I felt that they needed to be at a relatively high general level of understanding with the standard already to keep these 'honest,' and to make sure (for my sake and theirs) that the ability was there to transfer a description of what's happening ('energy's being transferred from Ug to K') into effective problem-solving.

I get the arguments for grainy standards, and I've done that myself, too, but I think that I'm sticking with this for all of these reasons.


3. I'm going a step further
Even with the Google form, there's still some hassle, and I devote a couple of hours per week just to printing, copying, and filing reassessments, not to mention creating them.  I started developing a couple of Python applications to help me with this.  I'm posting the code on github (I'm new to that, so I think that was the proper link!), if you're interested. [Share, attribute, and give me a shout if you use it.] Of course, I'm no star programmer, and these are my first two Python programs, so suggestions are welcome.


The two programs do this (so far):
  • reassess.py: it opens the saved sheet (CSV) from the Google reassessment form and creates headers with the reassesser's name, the date, the standard, and the section.  There's a spot for the questions to be imported, but I'm still working on the DB (next program).  Most exciting, it creates a LaTeX document (using a common header) and a PDF for each student with all of that, including a graph-paper watermark (thanks, Mark Hammond!).  Once the questions and standard tables (see 2.5 above) are in there, it'll make its own reassessments and send them to the printer (hopefully with the double-siding, stapling, and punching!).  One button, one day's worth of reassessments.  This took me about three days, and there's a stripped-down sketch of the idea after this list.  Here's a sample output PDF (so far).
 
  • problemdb.py: it creates the problem database from a set of LaTeX documents (sans headers), each with a unique numerical filename, a second line which is a comment containing a list of standards covered in the problem, and a third line which is a comment containing a brief description.  It searches the problem database for any combination of standards that you enter and returns a list with the problem numbers, descriptions, and standard sets (and images, eventually) for matching problems.  This will let me populate a column in the Google spreadsheet assigning problems for each kid's reassessment, which reassess.py will add to their LaTeX documents.  I'll also make it print out a PDF of search results (with solutions), so that I'll be able to make keys easily.  This one took me about two hours (so far).  That's a nice learning curve!  I've run a sample single-standard search and a two-standard search, with only three problems in the DB so far; sketches of both programs are below.
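
If you're curious about the shape of reassess.py without digging through github, here's a heavily stripped-down sketch of the pipeline.  The file names, column headers, and LaTeX template here are placeholder assumptions, not what the real program uses:

    import csv
    import subprocess
    from pathlib import Path

    # Placeholder names - the real program's files and columns differ.
    SIGNUPS = "reassessment_signups.csv"
    HEADER = "\\documentclass{article}\n\\usepackage[margin=1in]{geometry}\n"

    def make_reassessment(name, date, standard, section, outdir="reassessments"):
        """Write a per-student LaTeX file and compile it to a PDF with pdflatex."""
        Path(outdir).mkdir(exist_ok=True)
        tex = Path(outdir) / f"{name.replace(' ', '_')}.tex"
        tex.write_text(
            HEADER
            + "\\begin{document}\n"
            + f"\\noindent\\textbf{{{name}}} \\hfill {date}\\\\\n"
            + f"Standard: {standard} \\hfill Section: {section}\n\n"
            + "% questions from the problem DB get inserted here\n"
            + "\\end{document}\n"
        )
        subprocess.run(
            ["pdflatex", "-interaction=nonstopmode", "-output-directory", outdir, str(tex)],
            check=True,
        )

    if __name__ == "__main__":
        with open(SIGNUPS, newline="") as f:
            for row in csv.DictReader(f):
                make_reassessment(row["Name"], row["Date"], row["Standard"], row["Section"])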
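
And a similarly stripped-down sketch of the problemdb.py idea - scan the comment lines of each problem file for standards and a description, then match a query against them.  Again, this is the concept, not the actual implementation:

    from pathlib import Path

    def load_problems(dbdir="problems"):
        """Map problem number -> (set of standards, description) from LaTeX files."""
        db = {}
        for tex in sorted(Path(dbdir).glob("*.tex")):
            lines = tex.read_text().splitlines()
            # Second line: comment listing standards; third line: comment description.
            standards = {s.strip() for s in lines[1].lstrip("% ").split(",")}
            description = lines[2].lstrip("% ")
            db[tex.stem] = (standards, description)
        return db

    def search(db, *wanted):
        """Return (number, description) for problems covering every queried standard."""
        return [(num, desc) for num, (stds, desc) in sorted(db.items())
                if set(wanted) <= stds]

    if __name__ == "__main__":
        db = load_problems()
        for num, desc in search(db, "CopM", "CoEM"):  # a two-standard search
            print(num, "-", desc)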