
Eclipse pinhole projector

Yesterday I posted some step-by-step videos on Instagram showing how to build your own pinhole projector to safely view the eclipse on August 21st. To make the instructions easier to share, I’ve compiled them all here (well, screenshots from them at least) to help you turn an ordinary cardboard box into a pinhole projector.

For more information on the eclipse check out the NASA website eclipse2017.nasa.gov.

Step 1: Find a cardboard box and cut a white piece of paper to fit the bottom. 


Step 2: Tape or glue the white paper to the inside bottom of the box.

Step 3: Cut two holes on opposite sides of the top of the box.


Step 4: Tape the top of the box closed.

Step 5: Cover one of the holes with some foil and tape down. 


Step 6: Put a pinhole in the foil.


Step 7: Use the projector with the sun behind your back, looking through the non-foil-covered hole. The sun should be projected onto the white paper in your box. You might need to move it around a bit to get the sun lined up right.


And that’s it! It’s very simple to make. Let me know if you have any questions. Hope you safely enjoy the eclipse!

Traveling 2016

I flew about 75,000 miles this year. That’s a lot. For comparison, that’s about 1/3 of the way to the moon (238,855 miles on average). I went to:

  • LA (6 times – once for work, the rest for family stuff)
  • DC (3 times for work, including a memorable Snowpocalypse adventure)
  • New York, NY (for work, but managed to see Hamilton!)
  • Baltimore, MD (work conference)
  • Bloomington, Indiana (workshop)
  • Pittsburgh, PA (workshop)
  • Wichita, KS (work stuff)
  • Madison, WI (for a friend’s wedding!)
  • Arecibo, Puerto Rico (radio telescope!)
  • Carpinteria, CA (annual family Labor Day fun time)
  • Paris and Lyon, France (work conference)
  • Edinburgh, Scotland (work conference)
  • Singapore (work conference)
  • Hanoi and Ha Long Bay, Vietnam (vacation!)

So yeah, I’m a bit tired. That was probably too much. I’m going to try really hard not to repeat that in 2017, but who knows what will happen. I love traveling, but it is exhausting.

Here are some photo highlights from my year of travel.


Multi-modal Learning Data Collection at (Small) Scale

subtitle: even the best-laid plans…

Last year (spring 2015) we collected a really nice set of data of students collaborating in groups of three. The data collection process wasn’t entirely smooth or perfect, but it generally went off without any major technical or logistical problems. We ended up with a really nice dataset of almost 150 students with high quality audio data (four channels per group), video recordings (one per group), and computer log files (ideally one per group, practically more than one). [NB: The annotated audio from this first phase of data collection will be made available soon to other researchers. You can read the paper about the data set (presented at Interspeech 2016) here.]

In the spring of 2016 we set off to do our second phase of data collection, in classrooms during a regular class period. So unlike the first phase where we had just two groups at a time with kids who had volunteered and were excited to try out some math problems (a.k.a. the best kids), we had up to 10 groups at once with varying levels of excitedness and/or willingness to follow directions. We mostly wanted to test out how well the audio capture worked with all of the background noise in a typical classroom environment and see if our speech models still held up.


Going Deep with David Rees

Today on the blog: a TV show recommendation. Season 2 of Going Deep with David Rees started last week and I think it’s a really good show. The basic idea of each episode is that David is trying to figure out how to do something simple, like how to make an ice cube, because it turns out that even simple things are actually really complex and interesting when you break them down. While that premise is immediately interesting to me, one of the things I like best about the show is its warm sense of humor and its open, sincere quest for knowledge about everyday life. It’s this same sense of wonder and propensity for questioning the things around me that initially made me want to be a scientist (and now, to study how people learn science).

David Rees is a well-known artisanal pencil sharpener. Ok, maybe not well-known to a large number of people, but still, if you send him a pencil he will sharpen it by hand for you. He wrote a book, How To Sharpen Pencils, so he probably knows what he’s talking about. He is probably actually better known for being the person responsible for the political cartoon Get Your War On which, at least for me, made the post-9/11 George W Bush years slightly more bearable.

Season 1 of GDDR focused on important questions like How to Open a Door, How to Flip a Coin, How to Shake Hands, and How to Dig a Hole. Those might sound like silly topics for a show, and they are to a certain extent, but that’s not really all each episode is about.

Sadly, season 1 is not available to stream anywhere at the moment, but it’s not too late to get on the bandwagon for season 2. The first episode was about How to Pet a Dog and tonight’s second episode was about How to Eavesdrop. Tonight’s episode was a really good example of how they can take a simple question and expand it into a really interesting and engaging sciencey show.

How to Eavesdrop is not really about eavesdropping per se. It is about sound, which is one of my favorite physics topics. As David asks in the episode, “how do sound waves get turned into something my brain recognizes as sound?” Even though he talks to a former CIA spy about actual eavesdropping, the heart of the episode (to me, at least) is talking to the audiologist and learning how the ear works, and talking to the cognitive scientist about how we interpret sound waves to understand speech. They even talked about the McGurk illusion, which is fascinating and is also something I wrote about on this very blog about four years ago. And, to make my little academic heart even happier, GDDR popped up a citation to the McGurk et al. paper when they talked about it!

If you’re looking for a fun and engaging bit of science on your TV (or computer), you should definitely check this show out.

Beethoven visualization

I came upon this amazing visualization of Beethoven’s 7th Symphony a little while ago. I found it when searching for this piece1 (one of my favorite classical music pieces) and realized it was also a super cool visualization.

Each color corresponds to one type of instrument, from the orange violins to the greenish flutes, the yellow trumpets, and the blue bassoons. I really like how it shows the complexity of the music and also allows you to see patterns in the piece as they develop over time and repeat.


  1. I was trying to remember if this was the music that was in Mr. Holland’s Opus when he was talking about Beethoven not being born deaf. It was.

R Basics

This is the second part in an ongoing series I’m doing about why I think R is awesome and why you should be using it. (Check out part one!)

So now that you have downloaded and installed RStudio and have some data you want to play with, what are the next steps? How do you get started really working with your data? In this post I’ll give an overview of the basics of working with R. Future posts will have more details on some of these topics.

Project spaces and working directories

So RStudio has you create a “project” when you get started. You tell it where you want the project to live and then it creates a file ending in “.Rproj”. The location where this project resides is also your working directory. This will be relevant when trying to load in data.

You can have more than one project (in different places if you want) and I have found creating multiple projects is mostly helpful for keeping different R projects separate. For instance, I have a main R project called “R Stuff” and then also separate projects for a couple of the bigger research projects that I work on. Things not attached to one of those two bigger research projects go in R Stuff and then I sort them out later and move them if they grow into their own thing.

My suggestion is to write most of your code/scripts/whatever in an R script file (extension .R) instead of just typing commands into the console as you need them. You can open one of these in the main RStudio panel and type and edit your code there. Once you have some code/commands you like, you don’t need to copy them down into the console; you can just hit command-return (on a Mac; probably control-return on Windows) or use the “Run” button in the upper right corner of that main window.

This script will allow you to do a couple of things: first, you can see your whole data manipulation/analysis/graphing workflow all at once; second, you can make changes to one step (e.g., switching the size of your graphed data points) and then re-run the code easily; third, you can write comments.

Now, I am not always the best at writing comments. But I try. And it’s really important. Even if you don’t think anyone else is ever going to see your code, you might need to look at it later. And no matter how smart and clever you think you are (actually, if you’re super clever this is even more important, because on a future day you may not be having a super clever day), you will probably need to read your code again. You are always, at a minimum, collaborating with yourself. And you deserve to have well-commented and documented code. So do yourself a favor and write some sensible comments.
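Here’s a minimal sketch of what a commented script might look like. (It uses R’s built-in mtcars dataset so it runs anywhere; the derived variable is just for illustration.)

```r
# mileage.R -- a small, fully commented analysis script
data(mtcars)                        # load a built-in example dataset
mtcars$kpl <- mtcars$mpg * 0.4251   # derived variable: miles per gallon -> km per liter
summary(mtcars$kpl)                 # sanity-check the new variable before using it
```

Six months from now, the comments will tell you (or a collaborator) what each line was for, without anyone having to re-derive it.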

Loading and viewing your data

Ok, so you have a data file and you want to start working with it. You have a few options. Most likely it’s a .csv file, and I’m going to assume, to start, that it’s in your working directory, so you can use the command

d1 <- read.csv("MyDataFile.csv")

This will create a new dataset called d1 that is made up of what was in your csv file. Alternatively, you can use the “Import Dataset” button in the Environment panel. If your data file is in another location, you will have to give the correct file path instead.

For the rest of the examples here, I’m going to use one of the sample data sets that comes with some R packages. The mpg dataset is one of the typical datasets used for examples; it comes with the ggplot2 package, so run library(ggplot2) first if you want to play along at home. It is a dataset of car models and gas mileage data.

To start, load the dataset: data(mpg). This should create an entry in the Data section of the Environment panel on the right. It should tell you the name of the dataframe and that there are 234 observations of 11 variables. Alright, but what if we want to look at the data? If you type head(mpg), the console will output the header of the dataframe: the column names and the first six rows of data.

I prefer using glimpse(mpg), which is actually a command from the dplyr package. (If you haven’t already downloaded the dplyr package, now is a good time. We will be using it a lot in later posts.) Glimpse gives you a more compact view of more of the dataset and also tells you how R is interpreting each variable. For instance, R thinks that manufacturer is a factor (true) and that year is an integer (also true). displ shows up as “dbl”, which stands for double: for now, let’s just go with that it’s a special class of numerical variable that can hold decimal values. None of the text-based variables showed up as strings, which is good for our purposes with this dataset.

This is fine if you have a relatively small dataset, but it begins to get unwieldy if you have a lot of variables. The summary(mpg) call will give you a different view of your data. For the text-based variables, it gives you a count of them (up to a point) and for the numerical variables, it spits out the minimum, quartiles, mean, and maximum values. Pretty handy for a quick check.

If you want to see the whole dataset (or at least, a lot more of it, depending on how big it is) in a format more closely resembling that which you’re used to in Excel or something, you can use View(mpg). This will pop up a “normal” looking dataset in the main window for you to peruse.
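Putting the inspection commands above together in one place (this assumes the ggplot2 package is installed, since that’s where mpg lives, plus dplyr for glimpse):

```r
library(ggplot2)   # provides the mpg example dataset
library(dplyr)     # provides glimpse()

data(mpg)        # 234 observations of 11 variables
head(mpg)        # column names plus the first six rows
glimpse(mpg)     # compact view that also shows each variable's type
summary(mpg)     # counts for text variables; min/quartiles/mean/max for numbers
# View(mpg)      # spreadsheet-style view (run interactively in RStudio)
```

Each one trades off detail against compactness, so it’s worth trying all of them on a new dataset.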

Alright, now that we have looked at our data, let’s talk about variables. To access a specific variable, you will use the dollar sign. So, if you want to look at (or refer to) the model variable in the dataframe, you will call it by mpg$model. This way R knows that you are looking in the dataframe mpg and you want the variable model. You can use this in combination with lots of other things. For instance, if you wanted to find the minimum year of car that is in the dataset, you could use min(mpg$year) and it should output 1999.
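For example (again assuming ggplot2 is installed for the mpg data):

```r
library(ggplot2)   # for the mpg dataset
data(mpg)

head(mpg$model)   # just the model column (first few values)
min(mpg$year)     # earliest model year in the data: 1999
table(mpg$cyl)    # a quick frequency count of another variable
```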

If you make some changes to your dataset (e.g., adding a variable, reshaping it, filtering it, etc.; all topics for a future post), you can also save your dataset in a recognizable format. So if your new dataframe is called mpg2, you can export a csv of it using write.csv(mpg2, file="mpg2.csv"). This will put a new csv file named mpg2.csv in your working directory.

Other things to think about with R

In order to maintain an up-to-date version of R within RStudio, there are three separate things you need to update: RStudio itself (the application), R (the base), and all of your packages.

Updating your packages is easy in RStudio. In the Packages tab in the lower right corner (using the default set-up), there is an “Update” button that will easily show you which packages have updates available and lets you download and install them. Super easy. (Updating RStudio is easy too: look in the Help menu (at least on Macs).)

When you start up RStudio, the console will give you a readout of the current version of R that you are running. As of today, that is version 3.2.2 (“Fire Safety”), but if you have an earlier version of R (as long as it’s not too old), most things should run fine. Updating R is sometimes a pain because you can’t do it directly in RStudio (which I think is confusing to people, because you can update your packages easily in RStudio). When you download a new version of R, RStudio will automatically detect it, so that’s not too bad. However, while RStudio tries to be helpful and store your downloaded packages in the correct place, a major version update to R actually creates a new package library location, and you would normally have to migrate all of your packages over to that new place. It’s a bit of a hassle, but there is an easy way around it: after upgrading, run this once to reinstall your packages for the new version of R.

update.packages(checkBuilt = TRUE, ask = FALSE, type = "binary")

RStudio also has support for version control. Woo! You can use either git or SVN. I have more experience with SVN, but I am in the midst of switching over to git, so maybe I’ll post about that at a later date. I’m not going to go into all of the details of how to set up and use version control, but I will say that it’s a good idea even if you don’t need it for collaboration or sharing purposes.


Next post: we’ll look at how to organize and manipulate your data using my favorite package dplyr!

Je t’aime Paris

What happened tonight in Paris is just horrible. I don’t even know what else to say about it except it’s awful.

Paris is a beautiful city. I visited there in 2004 and want to go back someday soon.


I have watched Casablanca probably 20 times. It is one of my favorite movies and one of the greatest movies ever made. And the most affecting scene for me, every time, is the part where the Nazis are in the bar and start singing their garbage Nazi song and then slowly the rest of the bar patrons (led by the freedom fighter Victor Laszlo) start singing La Marseillaise and eventually drown out the Nazis. It is a very powerful scene and it always always always makes me cry.

The terrorists only win if we are afraid. But we can show them with our voices that together we are stronger than them and we will drown them out.