My “spring conference series” just ended. NARST (the conference formerly known as the National Association for Research in Science Teaching) was in early April (in Puerto Rico!) and AERA (American Educational Research Association) was about a week ago (here in San Francisco). Here are my notes and thoughts from the two conferences.
NARST
The big topic of the conference was, of course, the Next Generation Science Standards (NGSS), which were officially released at the tail end of the conference. Most people referred to them during their presentations, even though we didn’t know exactly what they contained yet. (Some people were conflating the new Framework with the new NGSS, but that’s a different story.)
There were a few presentations about one of the large studies that I am working on, an efficacy study of a middle school science curriculum. These presentations on some of our preliminary findings went well and I am really looking forward to next year’s conference when we will have even more results to report on and some awesome graphs to show.
I learned a lot of new acronyms at NARST. My favorite was one session in particular with an enormously high density of complicated acronyms. I’m not even kidding: someone had a slide that said, “Teachers’ CK, PCKCx, and TPCKCx differ according to ICT type” (and they were defining ICT much more broadly than I would). There was a whole session devoted to TK, PCK, TPCKCx, and the like. Apparently technological pedagogical content knowledge is a thing now, which is pretty interesting. The session looked at different models of TPACK. It seems the prevalent model includes four sub-components: CK (content knowledge), PCKCx (PCK that is context dependent), TK (technological knowledge), and TPCKCx. The same session also had a presenter looking at developing a line of inquiry around the nature of technology (NoT), similar to NOS (the nature of science).
There was a technology-enhanced assessments symposium that included many of the usual crowd. Most of it was material I had seen at previous conferences. But Phil Bell did mention that the NRC is releasing an Assessment Consensus report in a few months that will focus on what future/new assessments for the NGSS should/could look like.
There was also a session on games, simulations, and visualizations. I was super excited about it, even though it was at 8:30 in the morning. It turned out to be a bit more uneven than most sessions. One of the presentations was great, since it was about Phil Stewart’s dissertation work, which was based on my dissertation work with the physics game Surge (I think Phil is the only person besides my committee who has read my dissertation). So it was cool to see how that worked out. There was another good presentation about work that Len Annetta and his students (especially Rich Lamb) are doing on validating and better understanding the Torrance Test of Creative Thinking, used with a science-based video game design workshop. Of the remaining presenters, one didn’t show up; the other did, and his talk was basically an example of a) how not to make a learning game and b) how not to present (or perform) statistical analyses. Let’s just say that there were badges involved in the “game”, their “results” said that kids got better at playing the game (sort of), and the phrase “effect size” was not something he was familiar with.
Small rant about presenting qualitative case study work (this is based on a number of presentations I saw). Listen. If you’re going to do a qualitative study, that’s great. I’m even fine with a case study approach, assuming that you have taken care in choosing your cases, that there is a rationale behind those choices, and that it’s not what I would call a ‘convenience sample’ without good cause. But if you’re going to present an individual case study as research, you need to tell us about the selection of that case, what other cases might have been available, and how the selection of that case might constrain the interpretations and implications of the research. I would hope this would be included in the accompanying paper, but at least mention it in the presentation. It’s important. This is part of what distinguishes your research from the telling of anecdotes.
And while I’m in rant mode, please include error bars on your graphs! Or confidence intervals. [I wrote “NO ERROR BARS!” in my notes at some point.] Please. Just think about what you’re trying to communicate to your audience and what you’re saying when you don’t do basic things like this. It says a lot.
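(For what it’s worth, adding error bars is usually a one-argument change in whatever plotting tool you use. Here’s a minimal sketch in Python with matplotlib; the group names, means, and standard errors are made up purely for illustration, not from any study mentioned above.)

    import numpy as np
    import matplotlib.pyplot as plt

    # Made-up numbers purely for illustration -- not from any real study.
    groups = ["Treatment", "Comparison"]
    means = np.array([0.42, 0.18])   # e.g., mean gain scores
    sems = np.array([0.07, 0.06])    # standard errors of those means

    # Approximate 95% confidence intervals (normal approximation: 1.96 * SEM)
    ci95 = 1.96 * sems

    fig, ax = plt.subplots()
    ax.bar(groups, means, yerr=ci95, capsize=6)
    ax.set_ylabel("Mean gain score")
    ax.set_title("Group means with 95% confidence intervals")
    plt.show()

Swap in whatever measure of uncertainty fits your analysis (SEM, a bootstrapped interval, etc.); the point is just that the audience can see it.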
Overall, it was a great conference. I met lots of awesome researchers, had lots of good food (mofongo!!), and renewed my love of science education research.
My pictures from NARST can be found on flickr (click here!). Lots of beautiful sunsets, rainforest trekking, and the beach.
AERA
I know that I swore off AERA a couple years ago, but it was in my almost-backyard this time so it was easy to be there but not officially be there (I wasn’t presenting, didn’t register, and only went to one session). If you’re going to do AERA, that is the way to do it for sure. The main good thing about AERA is that everyone goes to AERA. So, I was able to take advantage of this by having lots of meetings, hanging out with past colleagues and grad student friends, and going to the social events to meet new friends and collaborators.
I have less substance to report from AERA because I only went to the one session. But it was a good session. Jonathan Osborne, Rich Lehrer, Brian Reiser, and Mark Wilson had a session on “Building Learning Progressions for Science and Math Learning”. I didn’t go for the learning progressions, because I’m not really sure about their existence or usefulness, but it turned out that there wasn’t a lot of explicit talk about learning progressions anyway. So it worked out well. One thing mentioned by many of the speakers was how to think about integrating content and practices, as the NGSS are asking educators to do (and to assess). Brian argued that learning about the scientific practices is essential because otherwise, scaffolding instruction without understanding why you’re doing something in a certain way can make certain behaviors (like experimentation) become rote or routine. Which is not what we want. For example, when students are creating or using models, they should think about what the model needs to be able to do and for whom it needs to do it. When constructing knowledge, students should think about who needs to use that knowledge and what purpose it is being used for.
I think there is still a lot of work to be done around assessing content knowledge/learning together with scientific practices. The general consensus seems to be (and is supported by the vision of the Framework and NGSS) that these are/should be linked together and shouldn’t be assessed separately. Developing appropriate tools to do this should be high on our list of priorities.