Simple but important paper on the personal microbiome


Quick post.  Just saw this paper:

PLOS ONE: The Personal Human Oral Microbiome Obscures the Effects of Treatment on Periodontal Disease:

Schwarzberg K, Le R, Bharti B, Lindsay S, Casaburi G, et al. (2014) The Personal Human Oral Microbiome Obscures the Effects of Treatment on Periodontal Disease. PLoS ONE 9(1): e86708. doi:10.1371/journal.pone.0086708

It has multiple things of interest (certainly to me – as we are doing some work on the oral microbiome). But I do not have time right now to dig through all of it.  I want to just point out one very important line in it:

Our results highlight how understanding interpersonal variability among microbiomes is necessary for determining how polymicrobial diseases respond to treatment and disturbance

This is consistent with what has been a gut feeling of mine (and something I say in lots of interviews and talks) but for which I did not have any obvious citation in mind.  Now I do.

A Torrents Site for Academics – Good idea – Wish We Had Thought of That (Oh wait …)

Just saw a Tweet from Jeff Ross-Ibarra at UC Davis

It refers to this: Academics Launch Torrent Site to Share Papers and Datasets | TorrentFreak.  Nice idea.  And I hope it finds some uses.  Though I wish they had mentioned that Morgan Langille, who was a post doc in my lab at the time, launched a version of exactly such a site in 2010.  See, for example:

Now, Biotorrents has kind of wilted a bit over the years so I can see why they might not want to mention it.  But still, I think they should have.  In fact, perhaps they could have talked to Morgan Langille about some of his experiences with trying to run Biotorrents over the years …

If anyone out there wants to track him down – Morgan is now an Assistant Professor at Dalhousie University.  See his Web site here.  Follow him on Twitter here.

UPDATE 1:15 PM 2/2/14

Just discovered that, a few days ago, Morgan Langille and others discussed this exact issue.

Quick post – Outbreaker and the "Bayesian Reconstruction of Disease Outbreaks by Combining Epidemiologic and Genomic Data"

Interesting new paper out: PLOS Computational Biology: Bayesian Reconstruction of Disease Outbreaks by Combining Epidemiologic and Genomic Data.

Full Citation:  Jombart T, Cori A, Didelot X, Cauchemez S, Fraser C, et al. (2014) Bayesian Reconstruction of Disease Outbreaks by Combining Epidemiologic and Genomic Data. PLoS Comput Biol 10(1): e1003457. doi:10.1371/journal.pcbi.1003457

Abstract:

Recent years have seen progress in the development of statistically rigorous frameworks to infer outbreak transmission trees (“who infected whom”) from epidemiological and genetic data. Making use of pathogen genome sequences in such analyses remains a challenge, however, with a variety of heuristic approaches having been explored to date. We introduce a statistical method exploiting both pathogen sequences and collection dates to unravel the dynamics of densely sampled outbreaks. Our approach identifies likely transmission events and infers dates of infections, unobserved cases and separate introductions of the disease. It also proves useful for inferring numbers of secondary infections and identifying heterogeneous infectivity and super-spreaders. After testing our approach using simulations, we illustrate the method with the analysis of the beginning of the 2003 Singaporean outbreak of Severe Acute Respiratory Syndrome (SARS), providing new insights into the early stage of this epidemic. Our approach is the first tool for disease outbreak reconstruction from genetic data widely available as free software, the R package outbreaker. It is applicable to various densely sampled epidemics, and improves previous approaches by detecting unobserved and imported cases, as well as allowing multiple introductions of the pathogen. Because of its generality, we believe this method will become a tool of choice for the analysis of densely sampled disease outbreaks, and will form a rigorous framework for subsequent methodological developments.

Check out the nice figure on a SARS outbreak:

Figure 5. Results of the analysis of the SARS data using outbreaker. This figure summarizes the reconstruction of the outbreak, showing putative transmissions (arrows) amongst individuals (rows). Arrows represent ancestries with at least 5% of support in the posterior distributions, while boxes correspond to the posterior distributions of the infection dates. Arrows are annotated by number of mutations and posterior support of the ancestries, and colored by numbers of mutations, with lighter shades of grey for larger genetic distances. The actual sequence collection dates are plotted as plain black dots. Bubbles are used to represent the generation time distribution, with larger disks used for greater infectivity. Shades of blue indicate the degree of certainty for inferring the origin of different cases, as measured by the entropy of ancestries (see methods and equation 12): blue represents conclusive identification of the ancestor of the case (low entropy), while grey shades are uncertain (high entropy).

And then the consensus transmission tree

Figure 6. Consensus transmission tree reconstruction of the SARS outbreak. This figure indicates the most supported transmission tree reconstructed by outbreaker. Cases are represented by spheres colored according to their collection dates. Edges are colored according to the corresponding numbers of mutations, with lighter shades of grey for larger numbers. Edge annotations indicate numbers of mutations and frequencies of the ancestries in the posterior samples.

Outbreaker is available here: http://cran.r-project.org/web/packages/outbreaker/index.html
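For anyone who wants to kick the tires, here is a minimal sketch of what a run might look like in R. To be clear, this is my own illustration, not code from the paper: the my.dna / my.dates objects are hypothetical placeholders, and the argument names are assumptions that should be checked against the package documentation (?outbreaker).

```r
## Minimal sketch (mine, not from the paper) of running outbreaker.
## Assumption: the package's main function outbreaker() takes aligned pathogen
## genomes plus collection dates; check ?outbreaker for the exact argument names.

install.packages("outbreaker")  # the CRAN package linked above
library(outbreaker)

## Hypothetical inputs: 'my.dna' would be a DNAbin alignment (one sequence per
## sampled case, e.g. read in with ape::read.dna) and 'my.dates' the matching
## collection dates.
res <- outbreaker(dna = my.dna, dates = my.dates)

## 'res' holds posterior samples of ancestries ("who infected whom") and
## infection dates, which can then be summarized or plotted along the lines of
## Figures 5 and 6 above.
str(res)
```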

I also like the 1st line of their Acknowledgements:

We are thankful to Sourceforge (http://sourceforge.net/) and CRAN (http://cran.r-project.org/) for providing great resources for developing and hosting outbreaker.

Definitely worth checking out.

The Quest for a Field Guide to the Microbes: talk at "Science in the River City"

I got invited a while back to give a talk at a “Science in the River City” workshop for 3rd – 12th grade science teachers.  I proposed the idea of talking about my “Quest for A Field Guide to the Microbes” (and they said yes).  I recorded the screen (slides) and audio from my talk using Camtasia and have now posted the slideshow and slides.  Here they are:

Talk slideshow with Audio on YouTube:

Slides on Slideshare

"Scientific Pride and Prejudice" in the @nytimes makes claims about sciences not using evidence correctly; alas no evidence presented

Well, I guess I can say I was not pleased to see this tweet from Carl Zimmer.

It is not that I have a problem with what Carl wrote. It is just that I then went and read the article he referred to: Scientific Pride and Prejudice in the New York Times, by Michael Suk-Young Chwe. And it just did not make me happy. I reread it. Again and again. And I was still unhappy.

What bugs me about this article? Well, alas, a lot. The general gist of the article is that “natural” scientists are not aware enough of how their own preconceptions might bias their work, and furthermore that literary criticism is the place to look for such self-awareness. An interesting idea, I guess, but alas, the irony is that this essay presents no evidence that literary criticism handles evidence any better than natural science does. Below are some of the lines / comments in the article that I am skeptical of:

  • “Scientists now worry that many published scientific results are simply not true.”
  • Scientists, eager to make striking new claims, focus only on evidence that supports their preconceptions. Psychologists call this “confirmation bias”: We seek out information that confirms what we already believe.
    • This statement is misleading. Confirmation bias according to all definitions I could find is something more subtle. For example Oxford Dictionaries defines it as “the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories.” That is, it is a tendency – a leaning – a bias of sorts.
    • I would very much like to see evidence behind the much more extreme claim of this author that scientists focus “only on evidence that supports their preconceptions”. 
    • In my readings of actual research on confirmation bias I can find no evidence for this claim. For example, see the paper “Confirmation bias: a ubiquitous phenomenon in many guises,” which states:
    • As the term is used in this article and, I believe, generally by psychologists, confirmation bias connotes a less explicit, less consciously one-sided case-building process. It refers usually to unwitting selectivity in the acquisition and use of evidence. The line between deliberate selectivity in the use of evidence and unwitting molding of facts to fit hypotheses or beliefs is a difficult one to draw in practice, but the distinction is meaningful conceptually, and confirmation bias has more to do with the latter than with the former.
  • “Despite the popular belief that anything goes in literary criticism, the field has real standards of scholarly validity”
    • This is a red herring to me. I can find no evidence that there is a popular belief that “anything goes” in literary criticism. So the author here sets a very low bar, and then basically any presentation of standards is supposed to impress us.
  • “Rather, ‘the important thing is to be aware of one’s own bias.’”
    • The author then goes on to discuss how those in the humanities are aware of the issue of confirmation bias and, rather than trying to get rid of it, just deal with it, as implied in the quote.
    • The author then writes “To deal with the problem of selective use of data, the scientific community must become self-aware and realize that it has a problem. In literary criticism, the question of how one’s arguments are influenced by one’s prejudgments has been a central methodological issue for decades.”
    • Again, this implies that scientists have not been thinking about this at all, which is just wrong.
  • And then the author uses the Arsenic-life story as an example of how scientists suffer from “confirmation bias.”  If you do not know about the arsenic life story, see here.  What is the evidence that this was “confirmation bias”?  I think more likely this was a case of purposeful misleading, overhyping, and bad science.
  • Then, as an example of how science actually is prone to confirmation bias, the author presents a discussion of Robert Millikan’s notebooks in relation to his classic “oil drop” experiment.  Apparently, these notebooks show that the experiments got better and better over time and closer to the truth.  And in the notebooks Millikan annotated them with things like “Best yet – Beauty – Publish”.  The author then concludes this means “In other words, Millikan excluded the data that seemed erroneous and included data that he liked, embracing his own confirmation bias.”  I don’t see evidence that this is confirmation bias.  I think better examples of confirmation bias would be cases where we have since concluded the research conclusions were wrong.  But instead, Millikan was, and still is as far as I know, considered to have been correct.  He won the Nobel Prize in 1923 for his work.  Yes, there has been some criticism of his work, but as far as I can tell there is no evidence that he had confirmation bias.
  • I am going to skip commenting on the game theory claims in this article.
  • Then the author writes “Perhaps because of its self-awareness about what Austen would call the ‘whims and caprices’ of human reasoning, the field of psychology has been most aggressive in dealing with doubts about the validity of its research.”  Again – what is the evidence for this? Is there any evidence that the field of psychology is somehow different?

I could go on and on.  But I won’t.  I will just say one thing.  I find it disappointing and incredibly ironic that an article that makes claims about how some fields deal better with evidence and confirmation bias than other fields does not present any actual evidence to back up its claims.  And many of the claims pretty clearly run counter to available evidence.

UPDATE 9:20 AM 2/2/2014: Storify of discussions on Twitter

Workshop at #UCDavis 3/24 From Science to Storytelling: Effective Communication for Policy Change

From Science to Storytelling: Effective Communication for Policy Change

Conveying the results of your scientific research to legislators and government agencies can help influence policy – but only if you communicate effectively! If you are motivated in part by applying your research to environmental and social issues such as climate change, this workshop will give you some useful concepts and tools. Using discussion, role plays, expert advice from media consultants and individual exercises, we will explore the challenges of translating your scientific findings to policymakers and advocates, and will offer some tools for improving your communications skills. We will focus on simplifying your messages and honing your presentations.

Invited participants:

Academic researchers working on climate change and agriculture. Registered participants may also invite one graduate student.

Organizers:

California Climate and Agriculture Network (CalCAN) and Resource Media

Date, time, location:

March 24, 1:00 – 4:00

Big Hanna Room, Asmundson Hall, UC Davis

Cost: Free

To register: Email Renata Brillinger at CalCAN (renata@calclimateag.org) with your name, department, institution and phone number. If you would like to bring one graduate student, please include their information as well. Space is limited.

Science Communication workshop 3-24-14.pdf

Visualization of fecal transplants – well – thankfully – of the microbial community data not the actual transplant

I love things like this. A simple Youtube video from Antonio González Peña and Rob Knight’s group. The video shows data from a study of microbial communities and how they respond to a fecal transplant. Simple. Short. And the visualization is nice.

Wanted – examples of ways to get DOIs for blog posts & how they are used

For many years I have been wondering about the best way to get more formal credit for blog posts I have written.  It seems like the simplest way to do this would be to get a DOI for a blog post under some sort of publishing system and to use that DOI as the citable unit for the post.  I remember a while back Titus Brown wrote about this exact idea: Posting blog entries to figshare – Living in an Ivory Basement, but I have not seen much else out there on ways to do this and what the implications are.  Anyone else out there know of examples of how people have gotten DOIs for blog posts and whether this has been useful?  Thanks.

Pete Seeger, RIP, on women in engineering …

When I was growing up, we went to Pete Seeger and Arlo Guthrie concerts every year at Wolf Trap. I have loved Seeger since then and thus was very sad to hear he died a few days ago. My mom is visiting right now and she and I have been talking about “Women in Science” issues (and, for example, she brought me some nice presents, which I posted about on Monday).

And in talking to my mom today she reminded me of this song Seeger used to sing every year when we saw the show. It is by his sister. And it is a good rallying cry for “Women in STEM” fields, I think. Here it is:

Interested in #OpenAccess publishing?: definitely take a look at this working paper from CREATe

Well, this is certainly the most comprehensive treatise I have ever seen on Open Access publishing: Open Access Publishing: A Literature Review | CREATe.  It was written by “Giancarlo Frosio under the supervision of Estelle Derclaye (2014)” and comes from the Centre for Copyright and New Business Models in the Creative Economy (CREATe).  It is VERY comprehensive and has discussion, review, and comments on just about every issue associated with Open Access publishing that one could think of. I do not know if there is any particular “angle” to the writings here.  What I looked at (not all of the document) seemed to be a relatively objective assessment of various OA issues.  Anyway, it is definitely worth a look for anyone interested in scholarly publishing or Open Access publishing or related issues.