If you haven't seen this - whether you're a student or a teacher - I'd certainly advise that you watch it; even the first 20 minutes gets you the idea:
I'll record some of the content and reflections on it, mostly for my own benefit!
- He's using a web applet that his company developed, mostly for word clouds (so far). There are valid criticisms of word clouds, but the first big idea is about learning through doing, not through being told. Check. We also have some mini-clickers that we'll be using later. They seem to have all of the capabilities of the first-generation Senteo/SmartResponse clickers. They're branded "Response Card." They're smaller and look cheaper, but their website doesn't give prices. Perhaps a more cost-effective way to clickify a larger school?
- It looks like we're getting a recap of the deprecation of "sage on the stage" teaching (direct instruction, lecture, etc.) from the famous video above. It's like going to a concert - I'm comparing the little deviations from the recording I've seen so many times! :) He must have said this sooooo many times. All good points, for sure.
- I also love that his spiel includes a discussion of how student evaluation results are generally not correlated with how much students have learned. Instead, high marks are usually associated with the ease of the class, the average grades, or how much students "like" the teacher. Here's what appears to be a good summary of some of these issues. I couldn't find the paper I was looking for, which showed that students who reported "liking" the teacher tended to rate that teacher highly even on objective matters like starting and ending class on time, and vice versa.
- It only took 19 minutes for the "flipped classroom" to be brought up. I have some axes to grind there. How'd we get to the point where learning on your own outside the class is the "flipped" way? I seem to remember being held responsible for learning at least the basics of the material before class, with class being for application, extension, and higher-level connections. Still not a bad concept.
- I sum it up like this (it's something that I try to keep in mind always): lecture is about the transfer of information, but "active engagement" (modeling, inquiry, peer instruction, whatever your route is) is about the construction of understanding.
- About 20 minutes into the recap of "Confessions," I'm ready for some new material (it's really, really important material - watch "Confessions" if you haven't! It's just that I've seen it two dozen times. :)
- Learning certainly is about being able to use that understanding in a new context. That's why you always get problems that are totally new to you on those pesky physics tests!
- 29 minutes in - now we're talking about the FCI. Uh oh, no new material coming for the next 15 minutes, I'll wager.
- Wow - that's a Father Guido Sarducci clip. Nice.
- Now we're into Peer Instruction - 44 minutes in.
- I wish that it weren't such a pain to write new clicker questions. The questions aren't as much the hassle as using Notebook software is. I have not found Notebook to be robust enough or quick enough to use to supplant OneNote and other software that I use in class. Maybe I'll devote the summer to increasing my cache of clicker questions.
- He's absolutely right that it's painful to give up your well-designed lectures and the thrill of a slick derivation on the board. To do so in exchange for students (initially, at least) being less enamored of your teaching is even worse. To see increased understanding on the FCI and other tests makes up for it.
- Peer instruction demo: 10 seconds lecture on thermal expansion, followed by the hole-in-the-plate problem [I saw that one coming!]. We answered using his Learning Catalytics web software, discussed, and people could change their answers. 23% initially correct, 33% after discussion. Well, pobody's nerfect. PI definitely depends on a higher proportion of initially correct answers than 23%. You're supposed to pair yourself with someone with a different answer. Not all of those will result in conversion of an incorrect answer to a correct one, and many folks out there were missing someone with a correct answer. He did a good job of stressing that the mental model is the important part, and the answer is secondary - if you're not keeping in mind the overriding condition that the metal atoms all push away from each other when the metal's heated, but try to generalize surface observations, you're toast on this one. I wish that I'd been a part of a conversation (or listened to one) between novices here.
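(For my own notes, a quick sanity check of that mental model. Nothing here is from the talk - the numbers are made up, and alpha is roughly the linear expansion coefficient of brass: if every length in the plate scales by the same factor when it's heated, the hole has to grow right along with the outer dimensions.)

```python
# Sanity check: a uniformly heated plate expands uniformly, so a hole in it
# expands too (it does NOT shrink).  Linear expansion: L -> L * (1 + alpha*dT).
# Illustrative numbers only - not from the talk.

alpha = 19e-6          # per degree C, roughly brass
dT = 200.0             # temperature increase, degrees C
plate_width = 0.10     # m
hole_diameter = 0.02   # m

scale = 1 + alpha * dT
print(f"plate width:   {plate_width:.6f} m -> {plate_width * scale:.6f} m")
print(f"hole diameter: {hole_diameter:.6f} m -> {hole_diameter * scale:.6f} m")
# Every length in the plate (including the hole's diameter) grows by the same
# factor - the "atoms all push away from each other" model in numbers.
```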
- Interesting that he just pointed folks to his "Confessions" on Youtube. They'll find it familiar!
- 75 minutes in: Peer Instruction 2.0.
- He's looking at features of his site (a bit of a commercial, really), and how they're improvements on traditional clickers. One helpful bit is that they can be used with any device, and don't require the capital investment of clickers. The big issue for me is how the responses are collated and presented, so that you can make some sense of the answers in the moment. Without that, it would be pretty weak.
- Long answer, short answer, etc. - semantic analysis is mentioned, but he doesn't go into much depth. The word cloud is the only tangible piece of it I've seen at this point, which isn't super useful for me in particular.
- There's a neat vector-drawing capability - you can put up a drawing and ask students to draw a direction, and then see all of the responses collated very quickly on top of each other. Nice feature - I like this one.
- Graph sketching works like this, too. Awesome.
- Text can be highlighted (highlit?), too, with a "heat map" of what students identified as important.
- You can select a region on an image, too.
- You can key the input devices at the beginning of the session to where they're sitting in the room - where are the right/wrong answers coming from? The system can also track those trends historically. If one student has correct answers, but keeps being dissuaded or keeps not being able to persuade his/her partner, then you can track that. That's great, but tracking seems like more work than almost anybody has time to do.
- Overall, the system seems to have a lot of promise. It's still in beta, so it's free for now. Who knows what it'll cost when they start charging for it, though? I'd guess that it'd be much less than buying even one set of clickers.
- After a bagel break, we're back. Me: 2, bagels: 0 for the day.
- From some of the questions submitted to Mazur during the break, it looks like all of that time spent on interactive engagement and the failure of lecture in the first half didn't get everyone on board: "How do you balance time between the peer instruction discussion and covering material?" That presupposes exactly the normal lecture paradigm and that students learn what the teacher says - they learn what they construct for themselves, instead. Me rushing through a few more things doesn't mean that they learn a few more things, only that I've said a few more things. His results show increased performance even on problem-solving when using PI, even though they don't do any problem-solving in class! This isn't an addition - it's a substitution (at least a partial one) for the lecture/problem-solving paradigm.
- Now we're back into more on Peer Instruction. I like that he's being explicit about using data from students (clickers, etc.) to shape how the class evolves. He's also talking about 30-70% initially correct being a 'sweet spot' for PI, but having plans for the other cases!
- PI Trial #2: some buoyancy questions - rock in a boat, then remove the rock: what happens to the water level? Also: rock in a boat, then the rock goes to the bottom of the pool: what happens to the water level? [I thought that it would be the box-at-the-bottom-of-the-pool question, which is also great.] The starting percentage for the first was more in the wheelhouse for PI, but the second went only from 17% to around 35%, similar to the first. There wasn't much instruction before or during, so it's no big deal, but Mazur did report that "physics instructors average around 30% on the first time," which seems to me to be at least as big a problem as what we're doing pedagogically. How does our system let you become a physics teacher while missing lots of fundamental conceptual understanding? (It's not just this question that I'm reacting to - I've heard this type of setup a lot, mostly as an argument for how important it is for us to teach conceptual understanding.) It's a rhetorical question - certainly there are many factors that press folks into service as physics teachers against their training, etc., but getting successfully through a physics degree program without wide-ranging conceptual understanding ought not to be possible.
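(To spell out the physics for my own benefit - none of this is from the talk, and the numbers are invented: floating in the boat, the rock is supported by buoyancy, so it displaces its weight in water; sitting on the bottom, it displaces only its volume. Because rock is denser than water, the displaced volume shrinks, so the level drops. A tiny Archimedes check:)

```python
# Archimedes check for the rock-thrown-overboard question (illustrative numbers).
rho_water = 1000.0   # kg/m^3
rho_rock = 2700.0    # kg/m^3 - denser than water, so it sinks
m_rock = 5.0         # kg

# In the boat: the rock floats with the boat, so it displaces its weight in water.
V_in_boat = m_rock / rho_water
# On the pool bottom: the rock displaces only its own volume.
V_on_bottom = m_rock / rho_rock

print(f"displaced while in the boat:  {V_in_boat * 1000:.2f} L")
print(f"displaced on the pool bottom: {V_on_bottom * 1000:.2f} L")
# The displaced volume decreases when the rock goes overboard,
# so the water level in the pool drops.
```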
- PI Trial #3: this one was more in the ethics arena (about retouching photos and how much is too much), with Mazur demonstrating how PI can be used in non-technical fields as a framer and catalyst of discussion (I suppose that's why Mazur calls his company Learning Catalytics).
- He has only addressed a couple of the audience-submitted questions, but promises that his post-docs will answer them all and email them to us. That should be an interesting document; I'll add it to the post when I receive it, or make a new post, depending on the time delay.
- His presentation of results is a good model - he shows the axes first, discusses what they mean and what information is being presented in what way, and then adds the data. That's a great way to present graphs.
- A bit now about non-tech methods to do the same polling (fingers against the chest here) - the key's the engagement, not the tech that makes it happen.
- Ending with a bit about resistance from students - the initial drop in performance/confidence upon changing methods. This is very similar to my post about the transition from unconscious incompetence to conscious incompetence, which is unfortunately the only way to get to any level of competence. This is a big one. You have to stick to your guns, and acculturate students towards the idea that making mistakes is the most important part of learning - they're not to be feared! We have to run towards the gunfire. For teachers, that also means the much less fun gunfire from students and parents when they see something new, when they have difficulty for the first time (because they're reasoning much more heavily for the first time), when their grades have no "fluff" to boost them to compensate for lack of understanding, etc. Hold the line - it's only about learning. Learning can be measured. That evidence is what you can use to confront these concerns (and the only thing that really verifies whether what you're doing is working, no matter what you're doing!), but you have to personally argue and really believe that you can change how somebody thinks about learning and how well they can learn.
- While talking in the lobby afterward, we ended up being the only two left and got a little chat with Mazur and a handshake! :)
Josh,
This is super helpful—thanks so much for writing the play-by-play of Mazur's talk. I've wanted to try out LC, but the cost is a bit steep IMO, given that PollEverywhere and others are free.
I couldn't find a cost anywhere online, and Mazur said that it was still free (for now) - what do you know about costs?