I have three and a half things to say about the process:
1. Start using ActiveGrade/BlueHarvest
I haven't actually used BH, but it has many of ActiveGrade's features (we were a pilot school for ActiveGrade this year), plus some others. The key is that it makes it workable to store feedback and keep a record of how a student's understanding of each standard changes over time. Before this, all of that feedback was lost, stuck on a paper that may well have been thrown away. I had a system before for keeping track of the progression of scores (crazy Excel macro action), but it didn't capture the feedback. I hear that document uploading may be added to ActiveGrade before next year, too!
Here's where all of this lived last year (yeah, inches of paperwork), on top of the Excel sheets themselves and the process of PDFing and posting individual grade summaries...
2. Use GoogleDocs to Schedule Reassessments
I started using a Google Form this year to have students sign up for reassessments (which I restricted to Mondays, Wednesdays, and Fridays to give myself some prep time and breathing room).
It looked like this:
This really helped the process, gave me an easily sortable list for preparing the reassessments efficiently, and was well received by the students. I can't believe I went through a whole year last year having kids wander in, show me some problems or whatever, and making up reassessment problems on the fly for each one. That was nuts. Now the reassessments are ready to go, sitting in each student's personal folder in the reassessment box on reassessment day!
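The "easily sortable list" part takes only a few lines of Python on the form's CSV export. This is just a sketch: the column names here are invented, since they depend on what the form actually asks.

```python
import csv
from io import StringIO

# Hypothetical rows standing in for the Google Form's CSV export;
# the real export's columns depend on the form's questions.
SAMPLE = """Timestamp,Name,Section,Standard,Reassessment Day
2012-05-01 10:02,Alice,1,CAPM,Wednesday
2012-05-01 11:15,Bob,2,BFPM,Monday
2012-05-01 12:30,Carol,1,CAPM,Monday
"""

rows = list(csv.DictReader(StringIO(SAMPLE)))

# Group by day, then by standard, so all of one day's reassessments
# on the same standard can be prepared in a single batch.
rows.sort(key=lambda r: (r["Reassessment Day"], r["Standard"], r["Name"]))

for r in rows:
    print(r["Reassessment Day"], r["Standard"], r["Name"])
```

Sorting by (day, standard) means you write one version of a CAPM reassessment and copy it for everyone signed up for it that day.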
2.5. Make the Standards Coarser... if you want to
I had fairly 'grainy' standards last year: constant acceleration motion alone was something like 5 standards. This year, I made (basically) each model a standard, so for honors (regular is a completely different set of models) I had something like: CVPM (split into graphical and algebraic the first term), CAPM (split the first term; CVPM and CAPM combined into 'Motion' in subsequent terms), BFPM, UFPM, Friction (these combined as 'Forces' in subsequent terms), UCM, GM (gravity), CoEM, ETM (work), CopM, pTM (impulse), SEM (static elec.), CEM (current electricity), Algebra, Units, Modeling, Writing, and Error Analysis (the last three are tricky, and maybe the topic of a later post).

This reduces the number of reassessments by reducing the number of standards, makes students' preparation more rigorous (because there's more material on each one), and forces connections among different parts of each model without being (for my taste) too muddled.

One key is how you communicate the scores, though. I used a scale of NP (not proficient), De-, De (developing), P-, P (proficient), and A (advanced - again, a sticky issue to deal with), and gave feedback with a series of tables on the last page of each assessment or reassessment. That looks something like this:
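One of those per-standard feedback tables might be sketched in LaTeX like this; the row labels, layout, and sample feedback are my guesses from the description above, not the actual tables.

```latex
% Sketch of a per-standard feedback table (layout assumed, not the original).
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
CAPM & NP & De- & De & P- & P & A \\
\hline
This assessment & & & & X & & \\
\hline
\multicolumn{7}{|l|}{Feedback: graphs look solid; watch signs on the velocity axis.} \\
\hline
\end{tabular}
```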
This keeps the feedback clearly specific for the students without making me keep track of 50 standards per term. I also type most of what's written into the ActiveGrade report when I enter a score.
I also started letting students make a screencast (with my permission) if they had a P- because of only a single issue ('nice job conserving momentum, but you need to work on using the speed of approach/retreat to analyze those elastic collisions'). That seemed to give good results, both in what they produced and in how much they reported it helped them solidify their understanding. I felt that they needed to be at a relatively high general level of understanding of the standard already to keep these 'honest,' and to make sure (for my sake and theirs) that they could transfer a description of what's happening ('energy's being transferred from Ug to K') into effective problem-solving.
I get the arguments for grainy standards, and I've done that myself, too, but I think that I'm sticking with this for all of these reasons.
3. I'm going a step further
Even with the Google Form, there's still some hassle: I devote a couple of hours per week just to printing, copying, and filing reassessments, not to mention creating them. So I started developing a couple of Python applications to help with this. I'm posting the code on GitHub (I'm new to that, so I think that was the proper link!). [Share, attribute, and give me a shout if you use it.] Of course, I'm no star programmer, and these are my first two Python programs, so suggestions are welcome.
The two programs do this (so far):
- reassess.py: it opens the saved sheet (CSV) from the Google reassessment form and creates headers with the reassessor's name, the date, the standard, and the section. There's a spot for questions to be imported, but I'm still working on the database (see the next program). Most exciting: it creates a LaTeX document (using a common header) and a PDF for each student with all of that, including a graph-paper watermark (thanks, Mark Hammond!). Once the questions and standards tables (see 2.5 above) are in there, it'll build its own reassessments and send them to the printer (hopefully with double-siding, stapling, and hole-punching!). One button, one day's worth of reassessments. This took me about three days. Here's a sample output PDF (so far):
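The core move in reassess.py (CSV row in, per-student LaTeX file out) can be sketched like this. The column names, file naming, and header are assumptions for illustration, not the real script:

```python
import csv
from io import StringIO
from pathlib import Path

# Invented sign-up rows standing in for the Google Form's CSV export.
SIGNUPS = """Name,Section,Standard,Date
Alice,1,CAPM,2012-05-04
Bob,2,BFPM,2012-05-04
"""

# A minimal common LaTeX header; the real one adds the graph-paper watermark.
HEADER = "\\documentclass{article}\n\\begin{document}\n"

outdir = Path("reassessments")
outdir.mkdir(exist_ok=True)

for row in csv.DictReader(StringIO(SIGNUPS)):
    tex = (HEADER
           + "\\section*{Reassessment: %s}\n" % row["Standard"]
           + "%s \\hfill Section %s \\hfill %s\n\n"
             % (row["Name"], row["Section"], row["Date"])
           + "% questions would be pulled in from the problem database here\n"
           + "\\end{document}\n")
    # One .tex file per student per standard; pdflatex would then compile each.
    (outdir / ("%s_%s.tex" % (row["Name"], row["Standard"]))).write_text(tex)
```

From there it's a `subprocess` call to pdflatex and a print command away from the "one button" goal.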
- problemdb.py: it builds the problem database from a set of LaTeX documents (sans headers), each with a unique numerical filename, a second line that is a comment listing the standards covered in the problem, and a third line that is a comment giving a brief description. It searches the database for any combination of standards that you enter and returns a list of matching problems with their numbers, descriptions, and standard sets (and images, eventually). That will let me populate a column in the Google spreadsheet assigning problems for each kid's reassessment, which reassess.py will then add to their LaTeX documents. I'll also make it print a PDF of search results (with solutions), so that I'll be able to make keys easily. This one took me about two hours (so far). That's a nice learning curve! Here's a sample of a single-standard search and a two-standard search, with only three problems in the DB so far:
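The parse-and-search idea behind problemdb.py can be sketched in a few lines. The file contents and standard tags here are invented examples in the format described above (second line lists standards, third line a description):

```python
# A tiny in-memory stand-in for the directory of problem .tex files.
problems = {
    "001.tex": "\\begin{problem}\n% standards: CAPM\n% description: cart on a ramp\n...\n",
    "002.tex": "\\begin{problem}\n% standards: CAPM, BFPM\n% description: elevator on a scale\n...\n",
    "003.tex": "\\begin{problem}\n% standards: CoEM\n% description: pendulum energy bar charts\n...\n",
}

# Parse each file's comment lines into (standards, description).
db = {}
for name, text in problems.items():
    lines = text.splitlines()
    standards = {s.strip() for s in lines[1].split(":", 1)[1].split(",")}
    description = lines[2].split(":", 1)[1].strip()
    db[name] = (standards, description)

def search(*wanted):
    """Return the problems that cover every requested standard."""
    return sorted(name for name, (stds, _) in db.items()
                  if set(wanted) <= stds)

print(search("CAPM"))          # single-standard search
print(search("CAPM", "BFPM"))  # two-standard search: must cover both
```

The subset test (`<=`) is what makes multi-standard searches work: a problem matches only if its tag set contains everything you asked for.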