Worth a read: A simple proposal for the publication of journal citation distributions

This paper in bioRxiv is definitely worth checking out.

Abstract is below:

Although the Journal Impact Factor (JIF) is widely acknowledged to be a poor indicator of the quality of individual papers, it is used routinely to evaluate research and researchers. Here, we present a simple method for generating the citation distributions that underlie JIFs. Application of this straightforward protocol reveals the full extent of the skew of distributions and variation in citations received by published papers that is characteristic of all scientific journals. Although there are differences among journals across the spectrum of JIFs, the citation distributions overlap extensively, demonstrating that the citation performance of individual papers cannot be inferred from the JIF. We propose that this methodology be adopted by all journals as a move to greater transparency, one that should help to refocus attention on individual pieces of work and counter the inappropriate usage of JIFs during the process of research assessment.

Source: A simple proposal for the publication of journal citation distributions | bioRxiv
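To make the abstract’s point about skew concrete, here is a minimal sketch – with entirely made-up citation counts, not data from the paper – of how a JIF-style mean can sit far above what a typical paper in the same journal actually receives:

```python
import statistics
from collections import Counter

# Hypothetical citation counts for one journal's papers over a
# two-year window (invented numbers, skewed the way the paper
# reports real journal distributions are).
citations = [0, 0, 0, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 20, 45, 90]

# A JIF-style figure is just the mean: total citations / number of items.
jif_like = sum(citations) / len(citations)
median = statistics.median(citations)

print(f"mean (JIF-like): {jif_like:.1f}")  # pulled up by a few highly cited papers
print(f"median:          {median}")        # what a typical paper gets

# The full distribution the authors propose journals publish:
for count, n in sorted(Counter(citations).items()):
    print(f"{count:>3} citations: {'#' * n}")
```

With these toy numbers the mean lands around 11 while the median is 3 – exactly the kind of gap that makes the JIF a poor proxy for any individual paper.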

 

Today’s Open Science Reading: the Open Science Reviewer’s Oath

Well, this certainly is interesting: The Open Science Peer Review Oath – F1000Research.  This apparently emerged from the AllBio: Open Science & Reproducibility Best Practice Workshop.  The “Oath” is summarized in the following text from a box in their paper:

Box 1. While reviewing this manuscript:

  1. I will sign my review in order to be able to have an open dialogue with you
  2. I will be honest at all times
  3. I will state my limits
  4. I will turn down reviews I am not qualified to provide
  5. I will not unduly delay the review process
  6. I will not scoop research that I had not planned to do before reading the manuscript
  7. I will be constructive in my criticism
  8. I will treat reviews as scientific discourses
  9. I will encourage discussion, and respond to your and/or editors’ questions
  10. I will try to assist in every way I ethically can to provide criticism and praise that is valid, relevant and cognisant of community norms
  11. I will encourage the application of any other open science best practices relevant to my field that would support transparency, reproducibility, re-use and integrity of your research
  12. If your results contradict earlier findings, I will allow them to stand, provided the methodology is sound and you have discussed them in context
  13. I will check that the data, software code and digital object identifiers are correct, and the models presented are archived, referenced, and accessible
  14. I will comment on how well you have achieved transparency, in terms of materials and methodology, data and code access, versioning, algorithms, software parameters and standards, such that your experiments can be repeated independently
  15. I will encourage deposition with long-term unrestricted access to the data that underpin the published concept, towards transparency and re-use
  16. I will encourage central long-term unrestricted access to any software code and support documentation that underpin the published concept, both for reproducibility of results and software availability
  17. I will remind myself to adhere to this oath by providing a clear statement and link to it in each review I write, hence helping to perpetuate good practice to the authors whose work I review.

I note – I reformatted the presentation a tiny bit here.   The Roman numerals in the paper annoyed me.  Regardless of the formatting, this is a pretty long oath.  I think it is probably too long.  Some of this could be reduced.  I am reposting the Oath below with some comments:

  1. I will sign my review in order to be able to have an open dialogue with you.  I think this is OK to have in the oath. 
  2. I will be honest at all times. Seems unnecessary.
  3. I will state my limits. Not sure what this means or how it differs from #4.  I would suggest deleting or merging with #4.
  4. I will turn down reviews I am not qualified to provide.  This is good though not sure how it differs from #3. 
  5. I will not unduly delay the review process. Good. 
  6. I will not scoop research that I had not planned to do before reading the manuscript. Good. 
  7. I will be constructive in my criticism. Good. 
  8. I will treat reviews as scientific discourses.  Not sure what this means or how it is different from #9. 
  9. I will encourage discussion, and respond to your and/or editors’ questions.  Good though not sure how it differs from #8. 
  10. I will try to assist in every way I ethically can to provide criticism and praise that is valid, relevant and cognisant of community norms. OK though this seems to cancel the need for #7. 
  11. I will encourage the application of any other open science best practices relevant to my field that would support transparency, reproducibility, re-use and integrity of your research.  Good.  Seems to cancel the need for #13, #14, #15, #16. 
  12. If your results contradict earlier findings, I will allow them to stand, provided the methodology is sound and you have discussed them in context. OK, though I am not sure why this rises to the level of being part of the oath over other things that should be part of a review. 
  13. I will check that the data, software code and digital object identifiers are correct, and the models presented are archived, referenced, and accessible.  Seems to be covered in #11. 
  14. I will comment on how well you have achieved transparency, in terms of materials and methodology, data and code access, versioning, algorithms, software parameters and standards, such that your experiments can be repeated independently. Seems to be covered in #11. 
  15. I will encourage deposition with long-term unrestricted access to the data that underpin the published concept, towards transparency and re-use. Seems to be covered in #11. 
  16. I will encourage central long-term unrestricted access to any software code and support documentation that underpin the published concept, both for reproducibility of results and software availability. Seems to be covered in #11. 
  17. I will remind myself to adhere to this oath by providing a clear statement and link to it in each review I write, hence helping to perpetuate good practice to the authors whose work I review.  Not sure this is needed.

The paper then goes on to provide what they call a manifesto.  I very much prefer the items in the manifesto over those in the oath:

  • Principle 1: I will sign my name to my review – I will write under my own name
  • Principle 2: I will review with integrity
  • Principle 3: I will treat the review as a discourse with you; in particular, I will provide constructive criticism
  • Principle 4: I will be an ambassador for good science practice
  • Principle 5: Support other reviewers

In fact, I propose here that the authors consider reversing the Oath and the Manifesto.  What they call the Manifesto should be the Oath.  It is short.  And it works as an Oath.  The longer, somewhat repetitive list of specific details would work better as the basis for a Manifesto.

Anyway – the paper is worth taking a look at.  I support the push for more consideration of Open Science in review though I am not sure if this Oath is done right at this point.

10 things you can do to REALLY support #OpenAccess #PDFTribute

I wrote a post earlier today in relation to the #PDFTribute movement: Ten simple ways to share PDFs of your papers #PDFtribute.  I wrote it largely to give people an outlet, and information and ideas about how to better share PDFs of their academic work.  I think the more people share, the better.

However, I also got shit on Twitter from my brother Michael – co-founder of PLoS – about how this is partly a “feel good” action.  I do think he underestimates the surge of anger over the death of Aaron Swartz and the momentum right now in the semi-civil disobedience of the #PDFTribute movement.  But I also think he is right in part.  So, I thought I would follow up with suggestions for what people can do in the future to really support full and open access to the academic literature.

  1. Only publish in fully open access journals.  See DOAJ — Directory of Open Access Journals.
  2. Do not do ANY work for non open access journals. That includes reviewing, suggesting reviewers, etc. 
  3. Cancel all subscriptions to closed access journals. The subscription model is part of the problem. 
  4. Work for open access journals. 
  5. Embrace openness in other aspects of your academic work. See for example Open science – Wikipedia, the free encyclopedia and Open Humanities Alliance
  6. Learn the difference between “open” and “freely available.” See Peter Suber, Open Access Overview (definition, introduction) and Open Access | PLOS
  7. Reward people in job hiring, merits and promotions for their level of openness.  Do not reward them for closed activities.
  8. Lobby for more open access requirements at the Federal, State, and Institutional levels.  Make sure they are not mealy-mouthed or mediocre. See What the UC “open access” policy should say for example.
  9. Embrace other changes in scientific publishing such as post-publication review that enable more rapid sharing of publications (see The Glacial Pace of Change in Scientific Publishing). 
  10. Read up on what else you can do (e.g., Peter Suber, What you can do to promote open access) and come up with your own ideas.  Oh and share them.  Openly.

Related posts from The Tree of Life



Other ideas? Please post in comments.


Next up for Science in Congress: HR3433 – the Grant Reform and Transparency Act

Just got pointed to this by Mark Martin. There is a new bill making its way through Congress – HR 3433, the Grant Reform and New Transparency Act of 2011. Its subtitle is “To amend title 31, United States Code, to provide transparency and require certain standards in the award of Federal grants, and for other purposes.”
The full text of the bill and other information is available here.
I personally don’t know much about this bill, but I found some discussion of it here.
I have not formed an opinion of the act but thought I would share the information, since it does not seem to be getting much attention but seems like it could have an impact.  I note that one group I respect deeply supports the act: the Sunlight Foundation, which involves people like Esther Dyson and Lawrence Lessig.  Any opinions or insight on the bill would be welcome.

Stop deifying "peer review" of journal publications

Peer review.  It is a critical part of scientific research and scientific progress.  Without it, science as a field might look like Fox News stories or postings on Jenny McCarthy’s web site, where people’s ideas are given gravitas regardless of how ludicrous they are.  But somehow, many in the public and press – and many, many scientists, alas – have deep misconceptions about peer review.

The most recent example of such misconceptions involves the arsenic life saga.  If you are not familiar with this story – here is a summary (for some fine scale details on the early parts of the story see Carl Zimmer’s post here).

In November 2010 NASA announced that in a few days they would hold a press conference discussing a major finding about life in the universe.  On December 2, 2010, they held their press conference and discussed a paper that was in press in Science from multiple NASA funded authors including Felisa Wolfe-Simon.  The paper was of interest because it claimed to have shown that a bacterium was able to replace phosphate in its macromolecules, including its DNA, with arsenic.  The press conference made claims that were very grandiose, like that textbooks would have to be rewritten, and the study of life on Earth and elsewhere would have to be completely rethought.

After a few days of mostly very glowing press reports, a few critiques began to emerge, including in particular one from Rosie Redfield, a microbiologist at the University of British Columbia.  The critiques then snowballed, and the general consensus appeared to be that the paper had fundamental flaws.  Some of the critiques got way too personal in my opinion, and I begged everyone to focus on the science, not personal attacks.  This seemed to work a little, and we could focus on the science, which still seemed dubious.  And many, including myself, expressed the opinion that the claims made by the authors in the paper – and by the authors and NASA in the press conference and in comments to the press – were misleading at best.

Now, critiques of new findings are not unusual.  We will get back to that in a minute.  But what was astonishing to me and many others was how NASA and the authors responded.  They said things like:

… we hope to see this work published in a peer-reviewed journal, as this is how science best proceeds.

and

It is one thing for scientists to “argue” collegially in the public media about diverse details of established notions, their own opinions, policy matters related to health/environment/science. 

But when the scientists involved in a research finding published in scientific journal use the media to debate the questions or comments of others, they have crossed a sacred boundary [via Carl Zimmer]

and the kicker for me was a letter Zimmer posted

Mr. Zimmer, 

I am aware that Dr. Ronald Oremland has replied to your inquiry. I am in full and complete agreement with Dr. Oremland’s position (and the content of his statements) and suggest that you honor the way scientific work must be conducted. 

Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated. You can see many examples in the journals Science and Nature, the former being where our paper was published. This is a common practice not new to the scientific community. The items you are presenting do not represent the proper way to engage in a scientific discourse and we will not respond in this manner. 

Regards,
Felisa

This was amazing since, well, they were the ones who held the overhyped press conference.  And then I (and others) found it appalling that they would, in essence, not respond to critiques because the critiques were not “peer reviewed.”  I told Zimmer:

Whether they were right or not in their claims, they are now hypocritical if they say that the only response should be in the scientific literature.

Zimmer had a strong defense of scientists “discussing” the paper:

Of course, as I and others have reported, the authors of the new paper claim that all this is entirely inappropriate. They say this conversation should all be limited to peer-reviewed journals. I don’t agree. These were all on-the-record comments from experts who read the paper, which I solicited for a news article. So they’re legit in every sense of the word. Who knows–they might even help inform peer-reviewed science that comes out later on.

(I note – yes I am quoting a lot from Zimmer’s articles on the matter and there are dozens if not hundreds of others – apologies to those out there who I am not referencing – will try to dig in and add other references later if possible).

And so the saga continued.  Rosie Redfield began to do experiments to test some of the work reported in the paper.  Many critiques of the original paper were published.  The actual paper finally came out.  And many went about their daily lives (I keep thinking of the Lord of the Rings whisper “History became legend. Legend became myth. And for two and a half thousand years, the ring passed out of all knowledge.”).  Alas, the arsenic story did not go away.

And now, skipping ahead about a year: the arsenic story came back into our consciousness thanks to the continued work of Rosie Redfield.  And, amazingly and sadly, Wolfe-Simon’s response to Rosie’s work included a claim that they never said that arsenic was incorporated into the bacterium’s DNA.  (I have posted a detailed refutation of this new “not in DNA” comment here.)

But that is not what I am writing about here.  What is also sad to me are the continued statements by the paper’s authors that they will not discuss any critiques or work of others unless they are published in a peer reviewed article.

For example, see Elizabeth Pennisi’s article in Science:

But Wolfe-Simon and her colleagues say the work on arsenic-based life is just beginning. They told ScienceInsider that they will not comment on the details of Redfield’s work until it has been peer reviewed and published.

So – enough of an introduction.  What is it I wanted to write about peer review?  What I want to discuss here is that the deification of a particular kind of journal peer review by the arsenic-life authors is, alas, not unique.  Many seem to have similar feelings (e.g., see this defense of the Wolfe-Simon position).  I believe this attitude toward peer review is bad for science.  Fortunately, many others agree (e.g., see this rebuttal of the defense mentioned above), and there is a growing trend to expand the concepts of what peer review is and what it means (see, for example, David Dobbs’ great post about peer review and open science from yesterday).

Though much has been written about peer review already (e.g., see Peer review discussion at Nature as one example), I would like to add my two cents now, focusing on the exalted status some give to peer-reviewed journal articles.  I have three main concerns with this attitude, which can be summarized as follows:

  1. Peer review is not magic
  2. Peer review is not binary
  3. Peer review is not static.

I suppose I could stop here but I should explain.

Regarding #1: “Peer review is not magic”
What I mean by this is that peer review is not something that one can just ask for and “poof” it happens.  Peer review of articles (or any other kind of peer review, for that matter) frequently does not work as sold – work that is poor can get published, and work that is sound can get rejected.  While it may pain scientists to say this (and brings up fears of Fox News abusing findings), it is alas true.  It is not surprising, however, given the way articles get reviewed.

In summary this is how the process works.  People write a paper.  They then submit it to a journal. An editor or editors at the journal decide whether or not to even have it reviewed.  If they decide “no” the paper is “sent back” to the authors and then they are free to send it somewhere else.  If they decide “yes” to review it, the editors then ask a small number of “peers” to review the article (the number usually ranges from 2-3 in my field).  Peers then send in comments to the editor(s) and the editor(s) then make a “decision” and relay that decision to the authors.  They may say the paper is rejected.  Or they may say it is accepted.  Or they may say “If you address the comments of the reviewers, we would consider accepting it”.  And then the authors can make some revisions and send it back to the editors.  Then it is reviewed again (sometimes just by the editors, sometimes by “peers”).  And it may be accepted or rejected or sent back for more revisions.  And so on.
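For what it’s worth, the workflow above can be caricatured as a tiny state machine.  This is a purely illustrative sketch – the names and decision rules here are invented, and real editorial policies vary widely:

```python
from enum import Enum, auto

class Decision(Enum):
    DESK_REJECT = auto()  # editors decline to even send it out
    REJECT = auto()
    REVISE = auto()       # "address the comments and resubmit"
    ACCEPT = auto()

def review_round(editor_interested, reviewer_verdicts):
    """One round of the caricatured workflow.

    reviewer_verdicts: list of 'accept' / 'revise' / 'reject' strings
    from the 2-3 peers. All rules here are invented for illustration."""
    if not editor_interested:
        return Decision.DESK_REJECT
    if reviewer_verdicts and all(v == "accept" for v in reviewer_verdicts):
        return Decision.ACCEPT
    if any(v == "reject" for v in reviewer_verdicts):
        return Decision.REJECT
    # Mixed or lukewarm reviews loop the paper back to the authors,
    # possibly many times, before a final accept/reject.
    return Decision.REVISE

print(review_round(False, []))                    # Decision.DESK_REJECT
print(review_round(True, ["accept", "revise"]))   # Decision.REVISE
print(review_round(True, ["accept", "accept"]))   # Decision.ACCEPT
```

Even this toy version makes the key point: the whole process bottoms out in the judgment calls of an editor and two or three reviewers.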

In many cases, the review by peers is insightful, detailed, useful and in the best interests of scientific progress.  But in many cases the review is flawed.  People miss mistakes.  People are busy and skim over parts of the paper.  People have grudges and hide behind anonymity.  People can be overly nice in review if the paper is from friends.  People may not understand some of the details but may not let the editors know.  Plus – the editors are not completely objective in most cases either.  Editors want “high profile” papers in many cases.  They want novelty.  They want attention.  This may lead them to ignore possible flaws in a paper in exchange for the promise that it holds.  Editors also have friends and enemies.  And so on.  In the end, the “peer review” that is being exalted by many is at best the potentially biased opinion of a couple of people.  At worst, it is a steaming pile of … Or, in other words, peer review is imperfect.  Now, I am not saying it is completely useless, as peer review of journal articles can be very helpful in many ways.  But it should be put in its rightful place.

Regarding #2: “Peer review is not binary”
The thumbs up / thumbs down style of peer review of many journal articles is a major flaw.  Sure, it would be nice if we could apply such a binary metric.  And it would make discussing science with the press and the public so much easier: “No ma’am, I am sorry, but that claim did not pass peer review, so I cannot discuss it.”  “Yes sir, they proved that because their work cleared peer review.”  But in reality, papers are not “good” or “bad.”  They have good parts and bad parts and everything in between.  Peer review of articles should be viewed as a sliding scale and not a “yes” vs. “no.”

Regarding #3: “Peer review is not static”
This is perhaps the most important issue to me in peer review of scientific work.  Peer review of journal articles (as envisioned by many) is a one time event.  Once you get the thumbs up – you are through the gate and all is good forever more.  But that is just inane. Peer review should be – and in fact with most scientists is – continuous.  It should happen before, during and after the “peer review” that happens for a publication.  Peer review happens at conferences – in hallways – in lab meetings – on the phone – on skype – on twitter – at arXiv – in the shower – in classes – in letters – and so on.  Scientific findings need to be constantly evaluated – tested – reworked – critiqued – written about – discussed – blogged – tweeted – taught – made into art – presented to the public – turned inside out – and so on.
Summary:
In the end – what people should understand about peer review is that though it is not perfect, it can be done well.  And the key to doing it well is to view it as a continuous, nuanced activity and not a binary, one time event.  


Interesting take on peer review & openness from outside the sciences in @nytimes

I assume many supporters of open science may have seen this already but if not it is worth a look.  The New York Times had an interesting article on Monday by Patricia Cohen: For Scholars, Web Changes Sacred Rite of Peer Review.

The article starts off with a familiar refrain

For professors, publishing in elite journals is an unavoidable part of university life. The grueling process of subjecting work to the up-or-down judgment of credentialed scholarly peers has been a cornerstone of academic culture since at least the mid-20th century.

It follows with a very important discussion focusing on how the web can transform scholarly publishing.  For example:

… scholars have begun to challenge the monopoly that peer review has on admission to career-making journals and, as a consequence, to the charmed circle of tenured academe. They argue that in an era of digital media there is a better way to assess the quality of work. Instead of relying on a few experts selected by leading publications, they advocate using the Internet to expose scholarly thinking to the swift collective judgment of a much broader interested audience.

This likely will sound very familiar to those who have read my blog, those who follow the discussions on peer review, or those with a pulse in the scientific community.  But there is a catch that caught me off guard here and might surprise many of you.  This catch is highlighted by the fact that the article was in the Arts section of the Times.  You see, the article was about transformation in the humanities.  Seems as though there is an almost completely parallel universe there where peer review and publishing and sharing are all getting re-evaluated.

This is yet another case of why we need more cross talk between the arts/humanities and the sciences.  For example, the article discusses how the journal Shakespeare Quarterly is becoming the first humanities journal to “open its reviewing to the World Wide Web.” They even recently conducted an experiment in fully open review where four preprints were posted on the web and feedback was solicited.  The feedback was then used by editors to guide the revision of the preprints to become published articles.  Sounds a lot like Biology Direct.  Note however, that they are not talking about publishing the final articles in an open access manner (see discussion of this on BigThink here) – but more about engaging the broader audience in commentary before an article is published.

The article does suggest that perhaps the humanities are lagging a bit behind the sciences in experimenting with new forms of peer review but I think that is OK.  We desperately need new experiments and ideas in this arena.  Peer review, at least the way it operates right now, has many problems.  I think there must be many better ways to go about things.  And thus cross pollination across fields from arts and humanities to economics to physics to life sciences is a good thing.

Interestingly Cohen identifies what she considers to be the most daunting obstacle to opening up review:

peer-review publishing is the path to a job and tenure, and no would-be professor wants to be the academic canary in the coal mine.

I think this is the same main obstacle in the sciences.  That is, it is our system of promotion and tenure and hiring that is the main roadblock.

Finally, I note that though the article focuses mostly on opening up peer review, it does have some interesting bits on openness in general. In particular, there is a great line at the end from Dan Cohen of George Mason:

“There is an ethical imperative to share information,” said Mr. Cohen, who regularly posts his work online, where he said thousands read it. Engaging people in different disciplines and from outside academia has made his scholarship better, he said.

I could not agree more.  It seems the arts and humanities and the sciences actually have much, much more in common than many might think.

For some related posts from the web see

Some recent web stuff on peer review in the sciences

Creating Mitochondria and a sign that we need open peer review

Well, since everyone else is posting about this, I figured I should too (see, for example, Steven Salzberg’s Blog, The Harvard Crimson, Pharyngula). If you have not heard yet, there is an article in the journal Proteomics arguing that mitochondria must have been created by an intelligent designer.

For example, on p. 8 the authors say:

“Alternatively, instead of sinking in a swamp of endless debates about the evolution of mitochondria, it is better to come up with a unified assumption that all living cells undergo a certain degree of convergence or divergence to or from each other to meet their survival in specific habitats. Proteomics data greatly assist this realistic assumption that connects all kinds of life. More logically, the points that show proteomics overlapping between different forms of life are more likely to be interpreted as a reflection of a single common fingerprint initiated by a mighty creator than relying on a single cell that is, in a doubtful way, surprisingly originating all other kinds of life.”

Say what you want about the journal Proteomics, but boy did they screw this one up. I think they probably should have caught this without much effort, but who knows exactly what happened. In all fairness to them, it is possible for weird things to slip through at any journal. Reviewers are busy. Editors are busy. Everyone is busy. How can we prevent this from happening again? There is a simple change we could make that would help: open peer review. That is, if reviewers’ names were publicly attached to the papers they reviewed, and their reviews were published, we would be less likely to see things like this happen. Then, if someone agreed to review a paper, they would be careful about it. Sure, we would probably have a harder time getting reviewers, but that would be better than publishing crap.

What are the risks of open peer review? Well, some people might feel afraid to criticize others, especially people with power. I find this sad. Scientists criticize their collaborators and friends ALL the time in private. Why not be public about it? Aren’t we supposed to be searching for the truth? If we are, shouldn’t we be willing to give our opinions in public forums?