Friday, February 25, 2011

Quick Labs

Some more quick lab results:
  • Alex and Mike let an air track glider accelerate down an inclined track and fly off of the table, predicting where it would land (and placing a catch box there to protect my expensive gliders!).

 
Here are some videos of their attempts:
Success! (They realized that the end of the track has no air holes and slows the cart down a bit)
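A prediction like Alex and Mike's can be sketched with basic kinematics. The numbers below are hypothetical, not their actual measurements, and the sketch ignores the slowdown at the unventilated end of the track that they noticed:

```python
import math

# Hypothetical numbers for illustration -- not the actual lab values.
g = 9.8                      # m/s^2
theta = math.radians(5.0)    # track's incline angle
d = 1.2                      # m, distance the glider accelerates along the track
h = 0.90                     # m, height of the table edge above the floor

# Frictionless air track: v^2 = 2*(g*sin(theta))*d at the end of the track
v = math.sqrt(2 * g * math.sin(theta) * d)

# The glider leaves along the incline, so it has a small downward velocity component
vx = v * math.cos(theta)
vy0 = v * math.sin(theta)    # downward

# Fall time from h = vy0*t + (1/2)*g*t^2 (taking down as positive)
t = (-vy0 + math.sqrt(vy0**2 + 2 * g * h)) / g

# Horizontal landing distance from the table edge -- where the catch box goes
x = vx * t
print(f"launch speed: {v:.2f} m/s, landing distance: {x:.2f} m")
```

With these made-up numbers the box would go about 60 cm from the table's edge; the real prediction, of course, depends on the measured angle, distance, and table height.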
  • Haley and RJ tried to predict the amount that an air track bumper would compress while reversing the cart's motion.  They figured out that the rubber band isn't a linear spring, which colors their results a bit.

     A screenshot from them using Logger Pro and a video of the cart's turnaround to find the amount that the rubber band bent:
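The simplest prediction treats the bumper as an ideal linear spring, which is exactly the assumption that Haley and RJ found breaks down for a rubber band. With made-up values (not their data):

```python
import math

# Hypothetical values -- not the actual lab numbers.
m = 0.20    # kg, glider mass
v = 0.50    # m/s, speed just before hitting the bumper
k = 40.0    # N/m, assumed (linear!) spring constant of the rubber-band bumper

# Ideal-spring prediction: all kinetic energy is briefly stored in the "spring":
#   (1/2) m v^2 = (1/2) k x^2   ->   x = v * sqrt(m / k)
x = v * math.sqrt(m / k)
print(f"predicted compression: {x * 100:.1f} cm")
```

Since a real rubber band stiffens as it stretches, this linear model will over- or under-predict the compression depending on where k was measured, which is just what colored their results.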


Wednesday, February 23, 2011

Quick Labs

Honors physics did some quick labs today - labs conceived, designed, executed, and presented on a white board within a single period.

Here's what they did:
  • Sandra and Jason - found the coefficient of kinetic friction between an air track and its glider, sans air!
  • Danica and Steven - predicted how far a cart would roll down the ramp before being stopped by a spring
  • Kati and Jayme - also determined the coefficient of kinetic friction between an air track glider and an air track, without the air on.
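The friction-coefficient labs boil down to one idea: on a level track with the air off, friction is the only horizontal force, so the glider's deceleration gives μ directly. A sketch with invented velocity data (the kind Logger Pro's video analysis produces, but not either group's actual numbers):

```python
g = 9.8  # m/s^2

# Hypothetical Logger Pro data: speed of a coasting glider, air turned off
times  = [0.0, 0.1, 0.2, 0.3, 0.4]          # s
speeds = [0.80, 0.60, 0.40, 0.20, 0.00]     # m/s

# Least-squares slope of v(t) gives the (roughly constant) deceleration
n = len(times)
t_bar = sum(times) / n
v_bar = sum(speeds) / n
slope = sum((t - t_bar) * (v - v_bar) for t, v in zip(times, speeds)) \
        / sum((t - t_bar) ** 2 for t in times)

# Level track, friction only: m*a = mu*m*g, so the mass cancels
mu = -slope / g
print(f"deceleration: {-slope:.2f} m/s^2, mu_k: {mu:.3f}")
```

Note that the mass drops out entirely, which is part of what makes this a clean one-period lab.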
 

Extra fun: why do you think that Jason and Sandra's coefficient of friction was higher than Kati and Jayme's?  Both are perhaps surprisingly high, but then again, the track really did stop the carts quickly!  Add your favorite theory in the comments!

Tuesday, February 15, 2011

Swish!

Zach and Chris did a "quick lab" (conceived, executed, analyzed, and presented during a single class period) this week on the forces exerted on the basketball as Chris shoots a (perfect) shot.

In their own words:
The Question - "How much force (friction) is used to spin the basketball compared to the normal force used to propel it? "

The Design - "We recorded Chris taking a shot just short of the free throw line.  We analyzed the video on Logger Pro to determine the initial velocity and the acceleration of the ball.  We also were able to determine the angular velocity and acceleration of the ball. "



The Physics - "We used the known information to calculate the normal force on the ball from Chris's hand, and also the frictional force used to spin the ball."
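Their calculation can be sketched like this. The numbers are hypothetical stand-ins (not Zach and Chris's measurements), the push is treated as brief enough to ignore gravity, and the ball is modeled as a thin spherical shell:

```python
# Hypothetical values -- not the actual measurements from the video.
m = 0.62      # kg, basketball mass
r = 0.12      # m, basketball radius
a = 30.0      # m/s^2, linear acceleration of the ball during the push
alpha = 80.0  # rad/s^2, angular acceleration during the push

# Pushing ("normal") force along the line of motion, ignoring gravity
# during the brief push: F = m*a
F_push = m * a

# Thin spherical shell: I = (2/3) m r^2
I = (2.0 / 3.0) * m * r ** 2

# Friction at the fingertips supplies the spinning torque: f * r = I * alpha
f = I * alpha / r

print(f"spin force is {f / F_push:.0%} of the pushing force")
```

With these invented values the ratio lands near 20%, consistent with the takeaway below, though that agreement is a coincidence of the numbers chosen.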

The Whiteboard:


The Takeaway - "About 20% of the total force used to shoot the ball was used to spin it, and Chris's jumper is silky smooth."

Friday, February 11, 2011

On A Roll

Kyle and Joseph did a "quick lab" (conceived, executed, analyzed, and presented during a single class period) this week on the rotational inertia of a roll of masking tape.

In their own words:
The Question - "What is the moment of inertia of a roll of masking tape? What is the coefficient of the moment of inertia for the tape as compared to a uniform annulus?"

The Design - "2 ramps of different and changeable angles, one with the masking tape rolling and one with a frictionless object.  Angles are changed so that the two objects have the same acceleration."

The race is shown here: pretty much the desired tie.  The video's a little dark because it was shot at 300 fps.

The Physics - "We used the net torque and net force equations to solve for the inertia.  We used the linear acceleration to find the angular acceleration.  The coefficient of the moment of inertia is the moment of inertia divided by the product of the mass and the radius squared."
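The matched-angle trick has a tidy closed form. For rolling without slipping, a = g sin θ / (1 + c) with c = I/(mr²); a frictionless slider has a = g sin φ. Setting the accelerations equal and solving for c needs only the two angles. The angles below are invented, not Kyle and Joseph's data:

```python
import math

# Hypothetical angles at which the two objects tied -- not the actual lab data.
theta = math.radians(30.0)  # ramp angle for the rolling tape
phi   = math.radians(18.0)  # ramp angle for the frictionless slider

# Rolling without slipping:  a_roll  = g*sin(theta) / (1 + c),  c = I/(m r^2)
# Frictionless sliding:      a_slide = g*sin(phi)
# a_roll = a_slide  ->  c = sin(theta)/sin(phi) - 1   (g cancels)
c = math.sin(theta) / math.sin(phi) - 1
print(f"I/(m r^2) = {c:.2f}")
```

A uniform annulus has c = (r_in² + r_out²)/(2 r_out²), somewhere between 0.5 and 1, so a value in that range is the sanity check on the tape result.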

The Whiteboard: 


The Takeaway - "Our moment of inertia calculation was a little bit off as compared to an annulus of the same dimensions, but not too far off. One of our assumptions was that the tape was of uniform density. Since the tape also had cardboard to keep its shape, our experimental moment of inertia was not completely accurate."

Monday, November 8, 2010

Assessments - Depth and Frequency

A few unsorted observations that I've had on assessment, now that the first term using SBG is nearing an end:
  • Assessing a standard 3, 4, or 5 or more times in a week is silly (especially because I report scores only weekly - those would all turn into one score).  Working with those standards that many times is great, but pick one to be an assessment, preferably the hardest.
  • It's great (almost required) to assess each content standard before the test.  For me, this has meant more frequent small quizzes.
  • It's tricky to deal with assessing a standard with a shallow-level question, in the absence of other data.  Getting it right doesn't really mean a 4, though getting it wrong could easily mean 0 or 1.
  • Frequent assessments dealing with lots of standards are a pain to keep track of, and take longer to grade.  Sometimes, it's unavoidable.
  • Sometimes, I feel like I need a "knew what the hell was going on in lab" standard, apart from any sort of content or skill.
  • Broad and shallow assessments seem to give me the most pause when determining the scores to give.  Tests can be deep enough (and varied enough) that I can get a good picture of many standards at once.  Narrowly-focused quizzes, however, seem to be the best kind, because I can get the same kind of insight that I can on a test, just not over so many standards.  Lots of shallower questions (or a few questions smearing a variety of topics together) can tell me what you don't know anything about, but they're not great at telling me what you know at a 3.5 or 4 level.
  • Even though the understanding of some overarching concepts is key in the overall prosecution of a problem, I tend towards mostly assessing them via "conceptual" questions, which can come as 'clicker' questions or ranking tasks, or which can come as the same types of questions on tests.  I'm not sure if that's good or not, but that's just how it seems that it has gone.
  • Fewer kids than I've expected have come back to reassess.  It'll surely pick up towards the end of the term, but the "one standard per day" reassessment rule will keep it manageable for me.

Tuesday, October 19, 2010

Keeping Track

OK, so now we don't have a gradebook of points on assignments, but instead one of scores on standards, each of which can change multiple times during the term, and not even necessarily the same number of times for each student.  How do we keep track?


Method #1: just rewrite the old scores with new ones every time a new assessment is done

  • Pros: relatively easy to do, compatible (mostly) with the FAWeb gradebook
  • Cons: lots of entry into the gradebook software; FAWeb doesn't do the grading function correctly, though it could average the standard scores; doesn't track the history of the scores; you have no record of a student's score on a particular assignment

Method #2: Have a different spreadsheet/record book for each kid, with a column or row for each standard

  • You could potentially track scores by day or by assignment
  • Pros: Lots of historical data
  • Cons: tracking by assignment means writing the assignment name on many sheets; you can't see the whole class at once; so many sheets!; not compatible with FAWeb (or only so if you update each student each week or so in parallel with your other grade sheets)


Method #3 (mine): Track student scores on a weekly tally sheet (maybe up to 3 per standard per student).  Each class gets a grid with students in rows and standards in columns, and scores are entered in the boxes, color-coded by assignment.  Come up with an average or something of those, by whatever algorithm or judgment you'd like to apply (I use judgment, looking at the different scores and their assignment types), and enter those scores for each student for the week into the Excel gradebook.  The gradebook uses the calculation method specified to come up with the current score for each student on each standard, and then creates a report for each student tracking historical and current scores on each standard, displaying those on graphs.

  • Pros: Not as much work to implement as it sounds (really!); tracks history to a reasonable degree in the gradebook, while tracking history completely in the accumulated weekly tally sheets, if needed; historical graphs and reports for each student; class averages for each standard - overall and each week - so that I can adjust instruction to the current state of knowledge
  • Cons: Takes maybe 30-45 minutes/week to enter grades, export spreadsheets [I'm planning to write an Excel macro to make that much quicker, actually], and upload them to wikiphys (so that families and students can access them) [I don't think that's so bad, so maybe it's a pro or neutral]; not compatible with FAWeb at all [I post links to their documents in the comments on FAWeb]; takes some work to set up the sheets, etc. at the beginning of the year/term, and a little to print the summary sheets each week
You see which I'm partial to, but observations and suggestions are welcome!
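The "calculation method specified" is left open above (I use judgment), but to make the idea concrete, here's one common SBG option sketched in Python: a decaying average that weights recent evidence more heavily than old scores. This is only an illustration, not the method actually used:

```python
# One possible "current score" calculation for a standard: a decaying average.
# This is an illustrative sketch, not the gradebook's actual method.
def current_score(weekly_scores, recent_weight=0.6):
    """weekly_scores: oldest-to-newest scores (0-4 scale) on one standard."""
    score = weekly_scores[0]
    for s in weekly_scores[1:]:
        # Each new week counts for recent_weight of the running score
        score = recent_weight * s + (1 - recent_weight) * score
    return round(score, 2)

# A (made-up) student who started shaky on a standard but improved:
print(current_score([1, 2, 3.5, 4]))
```

The appeal of a scheme like this is the same as the SBG philosophy below: old low scores fade as new evidence comes in, but a late stumble still pulls the current score down.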


Here are sample documents for a class, with names redacted:
  • Week 3 tally sheet
  • Week 3 page from gradebook
  • Summary page from gradebook (calculating the 'counting' scores for each student for each standard)
  • One student's report page, showing current ('counting') scores and historical graphs

Sunday, October 3, 2010

Endings

The big test on a unit isn't necessarily the last time that a standard is assessed - that's one of the awesome things about SBG - so students aren't off of the hook after the test is over.  That 3.5/4 that you got on acceleration goes away as soon as you show that your understanding's shaky now, and I love that.

That said, the content standards don't come up as often after their initial big unit tests.  In this way, the unit test is a kind of "final look" at content standards - nothing's ever really final, but some of those scores may not come up again, and those may stick.  It's at this point that kids would reassess individually, if they didn't quite get a concept or two on the test.

What about the skills standards?
  • We're talking about algebra, 'no numbers 'til the end', units, graphical and algebraic models, making predictions, presentation, concept synthesis, etc. here
  • These come up a lot - maybe they should be averaged over the term, or maybe the last, say, three scores should be averaged.  I don't quite know how to do that last one in my Excel gradebook (we'll look at that later!)
  • They're so universal that I'm never 'closing the book' on them, so there's not so much of an end to these
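For the "average the last three scores" idea above, here's one way it could work, sketched in Python rather than Excel (the scores, and the None placeholders for weeks a standard wasn't assessed, are made up):

```python
def last_n_average(scores, n=3):
    """Average the most recent n recorded scores on a standard.

    scores: oldest-to-newest weekly scores; None marks a week with no assessment.
    """
    recorded = [s for s in scores if s is not None]
    recent = recorded[-n:]          # fewer than n scores recorded? use what's there
    return sum(recent) / len(recent)

# A made-up skills standard assessed in four of six weeks:
print(last_n_average([2, None, 3, 3.5, None, 4]))   # averages 3, 3.5, and 4
```

In Excel, the same idea can be built from AVERAGE over an OFFSET/COUNT window on the row of scores, though blank cells make the formula fiddly.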
How about the lab standards?
  • We're talking about experimental design, lab writing, lab report format, error analysis, measurement here
  • Perhaps there should be a final 'big' lab planned each term, so that I'm pretty sure that the last look at these standards is a good one
  • I'd love to get some of these 'in the raw' - that is, not in a group setting.  That's easy for writing and reports, but measurement and especially experimental design are usually found in group settings.  I need to get them isolated, but that's hard in a class of 17.  Perhaps on the last test of the term, somehow?