(Part 2 in a series about teaching and iGEM; see part 1 here.)
Communication is one of the major themes of iGEM. So for the last few iGEM seasons I've been thinking pretty closely about how to help my students communicate their science clearly. And one of the things they always seem to have trouble with is the structure: what (and how much!) background to present, how to situate each result within the larger project, how to emphasize the big-picture takeaway from among the less-important details. How to take a bunch of disparate bits and tell a story.
That's not the focus of this post. It will come, though, trust me.
On the other side of the communications coin, my students also frequently struggle with reading scientific literature. Their first attempt usually looks like they read the introduction and the conclusion, skimmed the results, and pretty much took the authors' word for what they found. It takes them a long time to understand that there's narrative structure in a piece of primary literature, too: exactly the same narrative structure they will one day use to tell their own story.
And not only is the narrative structure the same between a talk and a paper, but it serves the same purpose: to help the audience understand what is going on. The details, the individual experiments and results, make much more sense when they're integrated into a scientific story. So much so that if you look for the narrative when reading a paper, you can frequently gloss over the experimental and domain-specific details and still retain the thrust of the authors' argument. And that, of course, is the key to both reading literature in a domain that's not familiar, and to presenting your work to a room full of otherwise intelligent non-specialists.
For me, the narrative arc of a scientific story is divided into five pieces:
- What is the question? Why did you do what you did? If you're "successful", what new knowledge will you have gained? What are you trying to convince your audience of?
- What did you do? (And why did you do it that way?) What experiments were performed? How did you go about trying to answer the question you posed? Why did you choose that particular approach over some other approach?
- What did you see? What were the "raw" results?
- What does it mean? What is your interpretation of the results? Does it answer the question from #1? If not, why not? Does it raise any new questions?
- What's next? What's the next study? The next experiment? The next question to ask?
What I particularly like about this structure is that it applies to many different levels of scientific discourse. At the whole paper level, it looks like the following:
- What is the question? This is covered in the Introduction. There should be enough information here to situate the current work in the broader field and convince the reader that the question being asked is interesting and important. It also gives a broad introduction to the approach the authors took to answer the question.
- What did you do? And why? You might say "oh yes of course this is the Methods section." Frankly, I (and most other scientists I know) pretty much skip the Methods section, because it answers the "what did you do" question in laborious detail without addressing the "why did you do it that way?" aspect. A well-written Results section, on the other hand, interleaves the actual experimental results with enough experimental detail to let you interpret them without necessarily referring back to the Methods section; and more importantly, it frequently discusses the rationale for choosing the experimental approach.
- What did you see? The Results section is, well, the results.
- What does it mean? and What's next? are the domain of the Conclusion section. Did the study answer the question posed in the Introduction? Does it raise more questions? How does it move the field forward?
However, the same structure applies to an individual experiment or a "result" in the Results section of a paper or a talk. Or it should! Sometimes you have to infer the answers to some of the questions:
- What is the question? What specifically were the authors trying to learn with this one particular experiment? And how does that relate to the larger question they're trying to answer?
- What did they do? And why? The question from #1 motivates the experimental approach. For example, if I'm looking for whether protein A binds to protein B, I might choose to do a co-immunoprecipitation: use an antibody to pull protein A out of a crude cell lysate, run the bound proteins on a gel, then do a Western blot and probe with an antibody against protein B. Is this the only possible approach? No, of course not. Why use this approach over, say, something mass-spec based? Or immunofluorescence and co-localization? Or surface plasmon resonance?
- What did they see? If the investigators ran a Western blot, here's the actual blot to look at. If it's not in the main text, check the Supplemental Info.
- What does it mean? Without context, a Western blot is just lanes and bands. Does the presence or absence of particular bands at particular molecular weights actually answer the question that was asked? What other explanations are there for the data?
- What's next? If the experiment raises other questions, let's go test them. If there are multiple explanations for the observed data, then let's go rule them out with additional experiments.
The reason this structure works is that it explicitly relates every piece of the paper to its context. By and large, humans don't learn by remembering random facts; instead, they learn by relating new material to what they already know. (That's the basis of constructivism.) And sometimes a paper is poorly written and some of the context is left implicit! All too frequently a paper reads as if the authors did one thing after another with no rhyme or reason. Looking at a paper this way forces you into the authors' shoes and makes you ask "why did they choose this approach, this experiment, this strategy?"
And that is where things get really interesting. Much of the primary literature on teaching with primary literature (heh) emphasizes critical thinking (and rightly so). But all too often, I feel like that criticism gets bogged down in the details of the experiments: niggling questions about experimental details, sample sizes, confidence intervals. Don't get me wrong! Technical correctness is important. But I think it's much more interesting to focus on the bigger structure: why did the authors answer their question using this approach instead of some other one? Are they asking questions that build on each other logically? Is there some other explanation for these results? This kind of lateral thinking, reasoning with information other than what was explicitly presented to you, is at the core of what it means to do good science.
And finally --- thinking about and presenting other people's science in this way will get my students used to the structure, so that when it comes time to present their own work, doing so in a similar narrative arc will be much more natural.
PS - I am well aware that this is not the first take on teaching students to read primary literature, or on science as storytelling. I doubt it's the first place they've been synthesized, either; if you know of another example, leave a comment below! This structure also draws heavily from my very favorite treatise on scientific communication, The Science of Scientific Writing. Seriously, if you haven't read it, go do so -- it's a long read, but so so worth it.