Some notes on "Citations for Sale" about King Abdulaziz University offering me $$ to become an adjunct faculty member

There is a news story in the Daily Cal titled “Citations for Sale” by Megan Messerly about King Abdulaziz University (KAU) trying to pay highly cited researchers to become adjunct professors there to boost the university's rankings.  This article stemmed from a blog post by Lior Pachter.  I was interviewed by the Daily Cal reporter about this because I had sent Lior some of the communications I had had with people from KAU where they tried to get me to do this.
I am posting here some of the email discussions / threads that I shared with Lior and Megan.
Thread #1.
Here is one thread of emails in which KAU tried to get me to become an Adjunct Professor.  I have pulled out the text of the emails and removed the sender's identity just in case this would get him in trouble.
Received this email 3/6/14

Dear Prof Jonathan, 

How are you? I hope every thing is going well. I hope to share work with you and initiate a collaboration between you and me in biology department, Kind Abdulaziz university . My research focuses on (redacted detail). I hope that you would agree . 

Redacted Name,Ph.D
Assistant Professor, Faculty of Sciences, King Abdulaziz University, Jeddah, KSA.

My response:

What kind of collaboration are you imagining?

His response:

Hi Prof Jonathan, 

Let me to explain that the king abdulaziz university initiated a project which is called Highly Cited Professor ( HiCi). This project can provide a contract between you and our university and from this contract you can get 7000 US$ as a monthly salary .So  this project will allow you to generate two research proposal between you and a research stuff here in order to make an excellent publications in high quality journals as you always do. 

I hope that I was clear. I’ m looking forward to hear from you. Finally, I think that a very good chance to learn from you. 

 Another email from him:

Dear prof Jonathan, 

I’ d like to tell that Prof Inder Verma will come tomorrow to our university  as a highly cited professor and he also signed a contract with us. At March 28 Prof Paul Hebert will come to our university and we actually generated two projects with Prof Paul. I hope you trust me and you call Prof  Inder and  Paul to be sure. 

From me:

I trust you – just am very busy so not sure if this is right for me
Sent from my iPhone

From him:

You will come to our university for just two visits annually and each visit will take only  one week. Take you time to think. Bye

Another email from him

Seat Dr Jonathan, 

What is your decision?

My response:

You have not really provided me with enough information about this.

From him:

Well, you will sign a contract as a highly cited professor between you and KAU. if it happen you will get 7,000 US$ per month for one year as a salary.  From  this project you would be able to generate two proposal with around 200,000 US$ and you will get incentives from each one. In the further we can initiate a mega project with 1.5 million US$.   Is that clear? 

From me:

I could use a formal , legal description of the agreement that one is expected to sign 

From him:

You can ask Prof Dr. Inder Verma he is now in my department and he did two presentation today. Also you can ask my professor prof  Paul Hebert, biodiversity institute of Ontario who will come to my department in March 28,2014.

From him:

if you would agree . Coul you please provide me with your CV with list of publication? 

From him:

Are you agree or no?

From me:

No 

You have not provided me with anywhere near enough info to evaluate this 

Do you have any legal agreement I can look at?

From him:

Agreement from KAU
without providing me with your CV I could not be able to talk to university administration. I told you before ask under verma or Paul Hebert both of them have contract. Dr verma ” editor in chief of PNAS who is left KAS since 4 hours ago. Finally, its up to.

From me:

No thanks
Not interested from what you have told me

Thread #2
Received this email on 12/17/13

Dr. Mansour Almazroui
12/17/13
to jaeisen
Dear Prof. Jonathan Eisen ,

I am Dr. Mansour Almazroui, Highly Cited Program Manager, at King Abdulaziz University (KAU), Jeddah, Saudi Arabia. On behalf of KAU with great pleasure, I would like to invite you to join our innovative collaboration program that is called “International Affiliation program”.

KAU is considered as the largest university in the region serving more than 150,000 students, with around 4,000 faculty members and 30 colleges. For more information please locate us at:  http://www.kau.edu.sa.

The envisaged program aims to elevate our local research activities in various fields. We only extend our invitation to highly ranked researchers like you, with a solid track record in research and publications to work with KAU professors.

Joining our program will immediately put you on an annual contract, as a Distinguished Adjunct Professor. In this regard, you will only be required to work at KAU premises for three weeks in each year of your contract.

We hope you to accept our invitation and looking forward to welcome you.  Please don’t hesitate to contact me for any further query or clarification.

Sincerely,
Mansour

————————————————————————————–
Dr. Mansour Almazroui
Highly Cited Program Manager,
Office of the Vice President for Graduated Studies and Research,
King Abdulaziz University (KAU).
&
Director, Center of Excellence for Climate Change Research
King Abdulaziz University
P. O. Box 80234, Jeddah 21589,
Saudi Arabia

I wrote back

I am intrigued but need more information about the three weeks of time at KAU and the details on the contract. 

Jonathan Eisen  

Sent from my iPhone

Got this back

Dear Prof. Jonathan Eisen , 

Hope this email finds you in good health. Thank you for your interest. Please find below the information you requested to be a “Distinguished Adjunct Professor” at KAU. 

1. Joining our program will put you on an annual contract initially for one year but further renewable. However, either party can terminate           its association with one month prior notice.
2. The Salary per month is $ 6000 for the period of contract.
3. You will be required to work at KAU premises for three weeks in each contract year. For this you will be accorded with expected three         visits to KAU.
4. Each visit will be at least for one week long but extendable as suited for research needs.
5. Air tickets entitlement will be in Business-class and stay in Jeddah will be in a five star hotel. The KAU will cover all travel and living             expenses of your visits.
6. You have to collaborate with KAU local researchers to work on KAU funded (up to $100,000.00) projects.
7. It is highly recommended to work with KAU researchers to submit an external funded project by different agencies in Saudi Arabia.
8. May submit an international patent.
9. It is expected to publish some papers in ISI journals with KAU affiliation.
10. You will be required to amend your ISI highly cited affiliation details at the ISI highlycited.com web site to include your employment and         affiliation with KAU.   

Kindly let me know your acceptance so that the official contract may be preceded.
Sincerely,
Mansour

I promptly forwarded this to my brother with a note:

One way to make some extra money … Sell your reputation / ISI index  

Sent from my iPhone

And my brother eventually shared this with Lior  …
UPDATE 1: 12/5/2014

One key question is – what are the rules, guidelines, and ethics of listing affiliations on papers?  Here are some tidbits on this

From Nature Communications:

The primary affiliation for each author should be the institution where the majority of their work was done.

From Taylor and Francis

The affiliations of all named co-authors should be the affiliation where the research was conducted.

From SAGE

Present the authors’ affiliation addresses (where the actual work was done) below the names.

UPDATE 2: Some other posts of relevance

UPDATE 3: A Storify

A year ago, I came up with a great acronym for our project – ICIS – what do I do now?

About a year ago, when we started meeting and discussing our new project on the future of scholarly communications, we decided it would be important to have a title and, if possible, a brief acronym. Now – I love coming up with such project names and acronyms and think I have done a good job with such tasks in the past.  Somehow I wanted to find a cool acronym for a project on scholarly communication, and publishing, and openness, and social media and more.  And I just could not come up with anything great.  

And this was starting to get semi urgent since we had a meeting coming up and needed to make a web site and needed names and titles and such.  So the group of us involved in the project (myself, Mackenzie Smith and Mario Biagioli) started sending around ideas to each other. Among the ideas first proposed:

  • IFHA Innovating Academic Publishing Project
  • “The UC Davis IFHA project on Innovating Academic Publishing” with a project acronym of IAP

But I did not like this so I started doodling:

[Two screenshots of my acronym doodles, 2014-10-23]

So I compiled some ideas and sent around a list

  • ISP: Innovating Scholarly Publishing
  • iSP: Innovating Scholarly Publishing
  • i2SP: Innovations in Scholarly Publishing  (with the 2 being a superscript)
  • i2AP: Innovations in Academic Publishing  (with the 2 being a superscript)
  • i2SP2: Innovations in Scholarly Publishing Project  (with the 2 being a superscript)
  • i2AP2: Innovations in Academic Publishing Project  (with the 2 being a superscript)
  • IN-A-PUB: Innovating Academic Publishing
  • SP2.0: Scholarly Publishing 2.0

Other ideas circulated (some mine, some others)

  • LIAP: Lab for Innovating Scholarly Communication, pronounced LEAP?
  • Innovating Academic Communication – IACOM
  • Innovating Academic Publishing – IAPUB
  • Innovating Scholarly Publishing – ISPUB
  • Innovating Scholarly Communication – ISCOM or INSCOM
  • COMMIS: communications innovation in scholarship

And then I came up with one I loved and sent around a suggestion:

“ok here is my favorite so far – innovating communication in scholarship – ICIS (pronounced isis —)”

Other ideas circulated:

  • pubs 2.0
  • innopubs
  • scholar 2.0

But I said I still liked ICIS best and, well, it won.  Yay.  We had a project name.   And we used it for our meetings and our website: icis.ucdavis.edu.  I was so proud of this name.  It sounded nice.  It conjured up images of ISIS the Goddess.  And every time we discussed the project I could remember the struggle to come up with a name and how happy I was when it came to me.

And then the ICIS name got, well, polluted.  Or, at least, the sound of the name got polluted by the group known variously as the Islamic State of Iraq and the Levant (aka ISIL) or the Islamic State of Iraq and Syria (aka ISIS).  I would hope it is clear that our ICIS project is not connected in any way to ISIL/ISIS.  But when I mention the project name to people, I invariably get some comment.  And Allison Fish, the first post doc working on the project, just told me that she also gets some murmuring when she mentions the name during talks.

So we are in the midst of our annual retreat for the project now.  We have six people working on the project – myself, Mario Biagioli, Mackenzie Smith, Allison Fish, Alessandro Delfanti, and Alexandra Lippman.  We all like the ICIS name but are not sure what to do now.  So – in the interest of openness and communication I proposed to everyone (and they at least seemed to agree) that it might be good to ask the broader community for input here.

So – any suggestions?  What do you recommend we do?  Should we change the name?  Change how we say it?  Should we stick to our name and pronunciation?  Any ideas, thoughts, or suggestions would be welcome.

Curating a Storify About this

Suggestion of the week: Create Project Specific Pages on ImpactStory #AltMetrics

So – I have been doing a little “hacking” of the ImpactStory system to create pages specific to individual projects rather than to me or other researchers.  I did this last week for my microBEnet project: Made a project page (hack?) for microBEnet on ImpactStory.  And I have been playing around with the concept some more.

For example see this page I made for the “iSEEM2: Environmental Niche Atlas” project that is a collaboration between my lab and the lab of Katie Pollard at UCSF (supported by the Gordon and Betty Moore Foundation).  To do this, I registered a new account in ImpactStory (with the first name i and last name SEEM2, using an alternative email address I have). I then used the “upload individual products” option and loaded up Pubmed IDs, DOIs, Github web addresses, Slideshare web addresses and more.  And voila – I get a nice page with altmetrics for our project rather than for myself.

Now I have not loaded everything done on this project yet, but already this is a helpful way to post results from our project and look at some of their metrics. I also updated the website for the project: http://iseem2.wordpress.com.

I think making such project specific pages will end up being useful in many ways. I discovered one this AM in an email I got from ImpactStory.  I have appended it below.  Turns out they give weekly updates on how your metrics have changed that week.  This is the best thing I have seen regarding “altmetrics” anywhere.  Very very useful.  Still not sure if this is an “acceptable” use of ImpactStory but I figure they should be OK with it.



Your new research impacts this week

i SEEM2 – impactstory.org/iSEEM2

20+ profile SlideShare downloads

on https://impactstory.org/iSEEM2

One or more of the 31 products on your profile attracted a combined 8 new SlideShare downloads this week, bringing you to 22 total.
Congrats on passing the 20 mark!
profile milestone

Welcome to the SlideShare favorites club!

on https://impactstory.org/iSEEM2

Congratulations, you just got your first SlideShare favorites!
profile milestone
That brings this video up to 232 YouTube views total.
It marks your 1st product to get this many views on YouTube. Nice work!
video new metrics
That brings this video up to 221 YouTube views total.
It marks your 2nd product to get this many views on YouTube. Nice work!
video new metrics
This article attracted 4 new Scopus citations this week, bringing it up to 40 total.
It marks your 1st product to get this many citations on Scopus. Nice work!
article milestone
That brings this article up to 29 Scopus citations total.
Impressive! Only 1% of 2012 article have reached that many citations.
It marks your 2nd product to get this many citations on Scopus. Nice work!
article new metrics
This slides attracted 83 new SlideShare views this week, bringing it up to 83 total.
It marks your 3rd product to get this many views on SlideShare. Nice work!
slides milestone

First Delicious bookmarks

on Systematic identification of gene families for use as “markers” for phylogenetic and phylogeny-driven ecological studies of bacteria and archaea and their major subgroups.

This article attracted 1 new Delicious bookmarks this week, bringing it up to 1 total.
It marks your 4th unique product to get a bookmarks on Delicious. Nice work!
article milestone

First SlideShare downloads

on Phylogeny-Driven Approaches to Genomics and Metagenomics – talk by Jonathan Eisen at Fresno State May 6, 2013

This slides attracted 7 new SlideShare downloads this week, bringing it up to 7 total.
It marks your 3rd unique product to get a downloads on SlideShare. Nice work!
slides milestone

First SlideShare favorites

on Phylogeny-Driven Approaches to Genomics and Metagenomics – talk by Jonathan Eisen at Fresno State May 6, 2013

This slides attracted 2 new SlideShare favorites this week, bringing it up to 2 total.
It marks your 1st unique product to get a favorites on SlideShare. Nice work!
slides milestone

Some additional details of my discussion w/ reporter John Bohannon for his Science story on Google Scholar

There is a story in today’s Science magazine on Google Scholar by John Bohannon.  Entitled “Google Scholar Wins Raves, But Can It Be Trusted,” the article discusses some of the pros and cons of using Google Scholar.  The author of the article interviewed me on and off over a few weeks about Google Scholar because I have written multiple blog posts on how I use it.  For example see:

And also a diverse array of posts on Twitter which I will not rehash here.

Anyway – the new article covers some interesting points but is very very short (ahh — the fun with page length restrictions).  So I thought I would post here some of the comments I made about Google Scholar in emails with the reporter.

Bohannon wrote to me on December 1, 2013

Dear Dr. Eisen-

I’m writing a news story for Science about Google Scholar. Have you continued using their article recommendation engine? (I saw your blog post about it from last year.)

(and then he wrote some details about what he was working on which I am not sure he would want me to post here and I have not asked so I am leaving them out)

I’d very much like to hear your thoughts on how Google Scholar has developed, and how well it works as a replacement for traditional library/proprietary/non-open literature databases.

cheers and thanks in advance,
John Bohannon

I wrote back, that same day (unusual for me)

Well, was just hacking around with Google Scholar this AM.

Have you dug into their new function – Scholar Library? I am playing around with it but have not quite figured it out. What I am hoping to do is to figure out how to get recommendations based on lists in the Library. Currently, the recommendation engine has one very very big limitation. It bases recommendations on one’s own publications. And if you are trying to move into a new area – well that is pretty useless.

So – some comments

1. I find the recommendation engine to be very very useful still. One of the best ways to find out about new papers. With the limitation mentioned above.

2. I use automated Google Scholar searches to find all sorts of papers of interest. This helps cover topics more broadly than the recommendation engine.

3. There are other recommendation systems out there – but I have not used them too much.

4. As for “free and open” – I don’t think I would use such terminology here. Yes, Google Scholar is free. But is it open? I don’t think so. For example, I am not sure if they publish / release all their code that works behind the scenes. And I am not sure how open the results are (not saying it is not open – I just don’t know).

5. The citation information is a wonderful tool – and it is great to have this information be freely available. I use it routinely for all sorts of purposes including getting around the massive limitations of IF. One issue however with GS is that it takes citations from ALL sorts of sources including non peer reviewed material and even material that people may not have been aware was even publicly available. So – citation counts are generally higher – sometimes much higher – in GS than with other metrics. I note – I put info for my Citations based on GS on my CV and various other places.

And this allows GS to be seriously gamed in terms of citation counts.

6. GS is still clunky in a few ways and I hope that Google puts more effort into it. It could become THE tool for academic scholarship searches and tracking but it has some bugs and minor annoyances still.

Just some quick thoughts. Let me know if you have any other questions.

Jonathan Eisen

 

He then wrote back with some questions.  Again, I won’t share the whole email here at this time but I will share the specific questions he asked.  And I then wrote back with some answers

Question:

** Care to share a recent example? Something nicely illustrates its usefulness.

Answer:

 

Question regarding my comment on other systems

Me: There are other recommendation systems out there –  but I have not used them too much.

Bohannon: ** Are there?  For example?  I should take a gander.

Me: Mendeley has one – called recommended or something like that

Question regarding my comment on free vs. open

Bohannon: Right, good point. Free but not really open. Do you feel like this is a worrying limitation? Is it realistic to make it open?  Or at least, more open?

Me: Well, I am certainly not one to tell others how to run a business.  So I am thankful for any free or open material released / produced by for profits.  But there is a major worry here which is – if we do not know how the system works – we do not know if it is biased or how it can be gamed, etc.  And if it is not fully open then if we invest in making use of it, Google could simply kill it at any time and we would have no source material to use for other purposes.

Question regarding my comment on Impact Factor

Bohannon: What is an example of a massive limitation of IF that GS solves? And I wonder why GS can’t tell the difference between peer-reviewed and non-peer-reviewed sources. I’ll add that to questions below.

Me: GS helps with article level metrics as opposed to journal level metrics like IF.  I can look up citations to all my papers quickly.  And I can track metrics like H index and i10 and others.  IF – being a journal level metric – is not really that informative in my opinion (and that of many others).

Question on other things GS could do:

Bohannon: What do you see as the most important things that need to be fixed before it could really take over?

Me:

1. Transparency in how it works and in what they are planning
2. Make it open source
3. Ability to create reference collections easily (e.g., like Mendeley or Zotero or CiteULike or Endnote).

Questions for Google

Bohannon: * Any other questions you’d like Google to answer, or features to request?

Me:

Well many features to request.

1. Better hot linking of all authors of a paper. Right now they only seem to link the 1st author to their google scholar page.

2. Better handling of long author lists.

3. Better ability to upload collections exported from other tools.

And many more …

After this Bohannon then wrote me another email with additional questions which I answered, briefly

Bohannon: Questions I should have asked but forgot:
Bohannon: When was the last time you exclusively used paper journals to find articles? (Ever?)
Answer from me:
never – or a very very very long time ago
since I started using email / the web 20+ years ago I have tried to use electronic / digital searches to find papers
Bohannon: What electronic services did you use before GS?
Answer from me:
Pubmed searches of course
And pubmed had some automated email alerts that I used to use
I also used lots of searches of journal web sites
And some journals offered automated searches too
The best thing out there was probably “related articles” in Pubmed
Bohannon: Have you switched entirely to GS?
Answer from me:
No – I still use pubmed searches quite a bit – partly because they are less cluttered than GS searches
I still occasionally search journal web sites but that is rare
And I actually use straight google searches a lot

And then another question

Bohannon: Oh, and when did you start using GS?  And when did it become your main search service?

(Sorry for the shotgun interrogation!)

Answer 1 from me:

heh – I have no idea …

Answer 2 from me:

I note – I think right now Twitter is the best source of information about new papers — far better than GS …

And I looked through my email and found that for a few years I used F1000 automated searches

And then in response to a follow up comment about Twitter from Bohannon I said

Yes, did not say Twitter was fun … but if you follow the right people, they can crowdsource for you all of the right literature better than GS

 

And then on Dec 6 Bohannon wrote me about visualizations

Hi Jonathan-

We’re sorting out some art for the article about Google Scholar.  I’m thinking that the best option would be a cropped screen shot of one of your Google Scholar automatic article recommendations.  Would you be OK with that?
Do they come as an email or appear in browser? If email, you can just fwd it.  If browser, could you take a screen shot and send it?
The idea is to show people what GS article recommendations look like, using your own experience for a visual.

 

So I sent a few things including these

 

[Two screenshots of Google Scholar article recommendations, 2013-12-06]

We had a few other discussions about related topics but I thought it might be useful to share the general gist of the discussion and some of my thoughts on Google Scholar in more detail.  I note that at our meeting in February on the Future of Scholarly Publishing and Careers here at UC Davis, Anurag Acharya from Google Scholar will be talking and will be on a discussion panel focused broadly on “metrics” for scholars, so it should be interesting.

How Open Are You? Part 1: Metrics to Measure Openness and Free Availability of Publications

For many many years I have been raising a key question in relation to open access publishing – how can we measure how open someone's publications are?  Ideally we would have a way of expressing this in some sort of index.  A few years ago I looked around and asked around and did not find anything of obvious direct relevance to what I wanted, so I started mapping out ways to do this.

When Aaron Swartz died I started drafting some ideas on this topic.  Here is what I wrote (in January 2013) but never posted:

With the death of Aaron Swartz on Friday there has been much talk of people posting their articles online (a short term solution) and moving more towards open access publishing (a long term solution).  One key component of the move to more open access publishing will be assessing people on just how good a job they are doing of sharing their academic work.

I have looked around the interwebs to see if there is some existing metric for this and I could not find one.  So I have decided to develop one – which I call the Swartz Openness Index (SOI).

Let A = # of objects being assessed (could be publications, data sets, software, or all of these together). 

Let B = # of objects that are released to the commons with a broad, open license. 

A simple (and simplistic) metric could be simply 

OI = B / A


This is a decent start but misses out on the degree of openness of different objects. So a more useful metric might be the one below.

A and B as above. 

Let C = # of objects available free of charge but not openly 

OI = ( B + (C/D) ) / A  

where D is the “penalty” for making material in C not openly available


This still seems not detailed enough.  A more detailed approach might be to weight diverse aspects of the openness of the objects.  Consider for example the “Open Access Spectrum.”  This divides objects (publications in this case) into six categories in terms of potential openness: reader rights, reuse rights, copyrights, author posting rights, automatic posting, and machine readability.  And each of these is given different categories that assess the level of openness.  Seems like a useful parsing in ways.  Alas, since, bizarrely, the OAS is released under a somewhat restrictive CC BY-NC-ND license, I cannot technically make derivatives of it.  So I will not.  Mostly because I am pissed at PLoS and SPARC for releasing something in this way.  Inane.

But I can make my own openness spectrum.

And then I stopped writing because I was so pissed off at PLOS and SPARC for making something like this and then restricting its use.  I had a heated discussion with people from PLOS and SPARC about this but am not sure if they updated their policy.  Regardless, the concept of an Openness Index of some kind fell out of my head after this buzzkill.  And it only just now came back to me. (Though I note – I did not find the draft post I made until AFTER I wrote the rest of this post below … ).

To get some measure of openness in publications maybe a simple metric would be useful.  Something like the following

  • P = # of publications
  • A = # of fully open access papers
  • OI = Openness index

A simple OI would be

  • OI = 100 * A/P

However, one might want to account for relative levels of openness in this metric.  For example:

  • AR = # of papers with an open but somewhat restricted license
  • F = # of papers that are freely available but not with an open license
  • C = some measure of how cheap the non freely available papers are

And so on.
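These index variants are all simple ratios.  As a rough illustration (the function names are my own, and the draft above leaves the penalty D unspecified, so the value used here is an arbitrary choice), they could be computed like this:

```python
def openness_index(n_open, n_total):
    """Simple index: percent of works released under a broad open license."""
    if n_total == 0:
        raise ValueError("no works to assess")
    return 100.0 * n_open / n_total

def weighted_openness_index(n_open, n_free_only, n_total, penalty=2.0):
    """Percent version of the draft's weighted OI = (B + C/D) / A.

    B (n_open)      = fully open works
    C (n_free_only) = free-to-read but not openly licensed works
    A (n_total)     = all works assessed
    D (penalty)     = penalty for free-but-not-open availability;
                      unspecified in the post, 2.0 is an arbitrary illustration.
    """
    if n_total == 0:
        raise ValueError("no works to assess")
    return 100.0 * (n_open + n_free_only / penalty) / n_total

# Hypothetical example: 10 papers, 6 fully open, 2 free but not openly licensed.
print(openness_index(6, 10))              # 60.0
print(weighted_openness_index(6, 2, 10))  # 70.0
```

The weighted version gives partial credit for free-to-read papers; with everything fully open, both indices reach 100.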
Given that I am not into library science myself and not really familiar with playing around with this type of data, I thought a much simpler metric would be to just go to Pubmed (which of course works only for publications in the arenas covered by Pubmed).

From Pubmed one can pull out some simple data:

  • # of publications (for a person or institution)
  • # of those publications in PubMed Central (a measure of free availability)

Thus one could easily measure the “Pubmed Central index” as

PMCI = 100 * (# publications in PMC / # of publications in Pubmed)
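Once you have the two counts, the calculation itself is trivial (a sketch; the function name is mine, and the counts shown are the ones reported for the first two authors in the table that follows):

```python
def pmci(n_pmc, n_pubmed):
    """PubMed Central index: 100 * (# papers in PMC / # papers in PubMed)."""
    if n_pubmed == 0:
        raise ValueError("no PubMed records")
    return 100.0 * n_pmc / n_pubmed

# Counts reported in the post (PMC / PubMed) for two authors:
print(round(pmci(224, 269), 1))  # Eisen JA
print(round(pmci(76, 104), 1))   # Eisen MB → 73.1
```

To get the numerator, a PubMed search limited to the PMC subset (the filter tag is, if I recall correctly, `pubmed pmc[sb]`) is one option; the denominator is the same author search without the filter.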
Some examples of the PMCI for various authors including some bigger names in my field, and some people I have worked with.
Name            #s        PMCI
Eisen JA        224/269   83.2
Eisen MB        76/104    73.1
Collins FS      192/521   36.8
Lander ES       160/377   42.4
Lipman DJ       58/73     79.4
Nussinov R      170/462   36.7
Mardis E        127/187   67.9
Colwell RR      237/435   54.5
Varmus H        165/408   40.4
Brown PO        164/234   70.1
Darling AE      20/27     74.0
Coop G          23/39     59.0
Salzberg SL     107/162   61.7
Venter JC       53/237    22.4
Ward NL         24/58     41.4
Fraser CM       78/262    29.8
Quackenbush J   95/225    42.2
Ghedin E        47/82     57.3
Langille MG     10/14     71.4

And so on.  Obviously this is of limited value / accuracy in many ways.  Many papers are freely available but not in Pubmed Central.  Many papers are not covered by Pubmed or Pubmed Central.  Times change, so some measure of recent publications might be better than measuring all publications.  Author identification is challenging (until systems like ORCID get more use).  And so on.

Another thing one can do with Pubmed is to identify papers with free full text available somewhere (not just in PMC).  This can be useful for cases where material is not put into PMC for some reason.  And then with a similar search one can narrow this to just the last five years.  As open access has become more common, maybe some people have shifted to it more and more over time (I have — so this search should give me a better index).

Let's call the % of publications with free full text somewhere the “Free Index” or FI.  Here are the values for the same authors.

Name            PMC/Pubmed   PMCI   Free/Pubmed (5 yr)   FI-5   # Free (all)   FI-ALL
Eisen JA        224/269      83.2   178/180              98.9   237            88.1
Eisen MB        76/104       73.1   32/34                94.1   83             79.8
Collins FS      192/521      36.8   104/128              81.3   263            50.5
Lander ES       160/377      42.4   78/104               75.0   200            53.1
Lipman DJ       58/73        79.4   20/22                90.9   59             80.8
Mardis E        127/187      67.9   90/115               78.3   135            72.2
Colwell RR      237/435      54.5   31/63                49.2   258            59.3
Varmus H        165/408      40.4   21/28                75.0   206            50.5
Brown PO        164/234      70.1   20/21                95.2   185            79.0
Darling AE      20/27        74.0   18/21                85.7   21             77.8
Coop G          23/39        59.0   16/20                80.0   28             71.8
Salzberg SL     107/162      61.7   54/58                93.1   128            79.0
Venter JC       53/237       22.4   20/33                60.6   85             35.9
Ward NL         24/58        41.4   18/27                66.6   30             51.7
Fraser CM       78/262       29.8   9/13                 69.2   109            41.6
Quackenbush J   95/225       42.2   54/75                72.0   131            58.2
Ghedin E        47/82        57.3   30/36                83.3   56             68.3
Langille MG     10/14        71.4   11/13                84.6   11             78.6

Very happy to see that I score very well for the last five years. 180 papers in Pubmed.  178 of them with free full text somewhere that Pubmed recognizes. The large number of publications comes mostly from genome reports in the open access journals Standards in Genomic Sciences and Genome Announcements.  But most of my non genome report papers are also freely available.

I think in general it would be very useful to have measures of the degree of openness.  Such metrics should also take into account sharing of other material like data, methods, etc.  In a way this could become part of the altmetrics calculations already going on.

But before going any further I decided to look again into what has been done in this area.  When I first thought of doing this a few years ago I searched and asked around and did not see much of anything.  (Although I do remember someone out there, maybe Carl Bergstrom, saying there were some metrics that might be relevant, but I can't pin down who or what that half-memory refers to.)

So I decided to do some searching anew.  And lo and behold, there was something directly relevant: a paper in the Journal of Librarianship and Scholarly Communication called “The Accessibility Quotient: A New Measure of Open Access,” by Mathew A. Willmott, Katharine H. Dunn, and Ellen Finnie Duranceau of MIT.

Full Citation: Willmott, MA, Dunn, KH, Duranceau, EF. (2012). The Accessibility Quotient: A New Measure of Open Access. Journal of Librarianship and Scholarly Communication 1(1):eP1025. http://dx.doi.org/10.7710/2162-3309.1025
Here is the abstract:

Abstract
INTRODUCTION The Accessibility Quotient (AQ), a new measure for assisting authors and librarians in assessing and characterizing the degree of accessibility for a group of papers, is proposed and described. The AQ offers a concise measure that assesses the accessibility of peer-reviewed research produced by an individual or group, by incorporating data on open availability to readers worldwide, the degree of financial barrier to access, and journal quality. The paper reports on the context for developing this measure, how the AQ is calculated, how it can be used in faculty outreach, and why it is a useful lens to use in assessing progress towards more open access to research.
METHODS Journal articles published in 2009 and 2010 by faculty members from one department in each of MIT’s five schools were examined. The AQ was calculated using economist Ted Bergstrom’s Relative Price Index to assess affordability and quality, and data from SHERPA/RoMEO to assess the right to share the peer-reviewed version of an article.
RESULTS The results show that 2009 and 2010 publications by the Media Lab and Physics have the potential to be more open than those of Sloan (Management), Mechanical Engineering, and Linguistics & Philosophy.
DISCUSSION Appropriate interpretation and applications of the AQ are discussed and some limitations of the measure are examined, with suggestions for future studies which may improve the accuracy and relevance of the AQ.
CONCLUSION The AQ offers a concise assessment of accessibility for authors, departments, disciplines, or universities who wish to characterize or understand the degree of access to their research output, capturing additional dimensions of accessibility that matter to faculty.

I completely love it.  After all, it is directly related to what I have been thinking about and, well, they actually did some systematic analysis of their metric.  I hope more things like this come out and are readily available for anyone to calculate.  Just how open someone is could be yet another metric used to evaluate them …

And then I did a little more searching and found the following, which also seem directly relevant:

So – it is good to see various people working on such metrics.  And I hope there are more and more.
Anyway – I know this is a bit incomplete but I simply do not have time right now to turn this into a full study or paper and I wanted to get these ideas out there.  I hope someone finds them useful …

Playing with Impact Story to look at Alt Metrics for my papers, data, etc

The future of science will include in part better evaluations of the impact of individual scientists, individual papers and individual other units such as data sets, software, presentations, etc.

There are many efforts in this area of “alt metrics,” and one I have been playing around with recently is Impact Story.  It used to be called Total Impact, but they changed their name and some of their focus.  It is pretty easy to use.

One thing you can do is create “A Collection.”  To do this you go to their site, register, select “Create Collection,” and add some information there.

Among the information you can include: 
  • ORCID ID: ORCID is a new system for unique author IDs.  Once you get your unique ID you can curate / update your papers at the site (the site needs some work … some issues there with duplication).  I have gotten my ORCID ID and am updating my publications there.
  • Articles from Google Scholar profile.  This allows one to upload a BibTeX file of one’s publication list from Google Scholar.  To get this, you need a Google Scholar page (I have one here).  I have been playing a lot with Google Scholar recently (The Tree of Life: Wow – Google Scholar “Updates” a big step forward … and The Tree of Life: Thank you Google Scholar Updates for finding me …) but did not realize it had a BibTeX export function until now.  From the drop-down menu one selects “Export” and can then export one’s publications (in the screen capture below the default option is Actions).  Once you have a BibTeX file you can upload it to ImpactStory.
  • Article and Dataset IDs.  Here one can add PubMed IDs or DOIs for other publications or datasets.  Since most / all of my papers are in my BibTeX export and ORCID record, what I imagine using this for is data from places like Figshare and DataDryad.
  • Webpage URLs.  One can include URLs here.  But so far my experience has been that they do not have a good system of assessing webpages.
  • Slideshare username.  If you are not posting slides and other materials on Slideshare, get with the program.  I post all my talks there.  And other things.  
  • Github Username.  A good place to post code/software.  We are doing this more and more in my lab.  I have a username though I don’t do much there myself.  
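As an aside, the Google Scholar BibTeX export mentioned above is easy to mine for identifiers before pasting them into the “Article and Dataset IDs” box.  Here is a minimal sketch; the regex and the sample entry are my own illustration, not an actual Scholar export:

```python
# Sketch: pull DOIs out of a BibTeX export (e.g., from Google Scholar).
# The sample entry below is invented for illustration.
import re

def extract_dois(bibtex_text):
    """Return the unique DOIs found in doi = {...} or doi = "..." fields."""
    dois = re.findall(r'doi\s*=\s*[{"]([^}"]+)[}"]', bibtex_text, re.IGNORECASE)
    # Preserve order while removing duplicates.
    return list(dict.fromkeys(d.strip() for d in dois))

sample = """
@article{hypothetical2011entry,
  title = {A hypothetical entry},
  doi   = {10.1371/journal.pone.0018011},
}
"""
print(extract_dois(sample))   # ['10.1371/journal.pone.0018011']
```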
And then give your collection a name and click go.  It takes a bit of time to finish the initial collection creation with my list of materials.  But it is fascinating and very useful once done.  Here is a link to a collection “Jonathan Eisen try #3” I recently made.  I have not added everything to it but it is still a good record of how many of my contributions are being used.
My favorite thing to do so far is to click “expand all” from the menu which then shows the detailed Alt Metrics for everything.  

  • PDF views.
  • HTML Views. 
  • Facebook shares.  
  • Twitter shares.  
  • And much more. 
It does not seem perfect – not sure how the metrics are quantified for things like Twitter and Facebook.  But it gives a decent indication of how much chatter and use there is of various materials.
And you can export all the information for your own private use.  I can imagine this being VERY useful for promotion/tenure/other review actions.
I also sniffed around the site and found some nice features on their API page.  I especially like the embed function for specific DOIs: you copy their text, change the DOI, and you get a nice graphical summary of alt metrics for that DOI.  See an example at the bottom of the post.  I am probably going to add this to my publication lists on the web.

It is important to realize this is a BETA version. Still needs some work. But LOTS of cool things to play with. The future is here and I like it. Time to end reliance on indirect measures of the impact of papers and data (e.g., Journal Impact Factor). Time to measure actual impact. And this is a good tool to help do that.

[Embedded ImpactStory widget (http://impactstory.org/static/js/total-impact-item.js) for doi:10.1371/journal.pone.0018011]


More playing around with Total Impact

I am continuing to play around with Total Impact (see for example total-impact: Jonathan Eisen). This is a new (beta) system for tracking individual impact of scientific productivity including papers, presentations, data, etc.

So far I like the general things I am seeing there.  They ask for feedback on their site, and in the interest of openness I am posting some things I would love to see:

1. Sorting by publication date or by any of the metadata categories (e.g., citations, downloads)
2. A better way of saving DOI lists, such that if you get a new publication you can just add it to the list
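Item 2 is essentially an append-and-deduplicate workflow.  Something like this sketch is what I have in mind (the file name and one-DOI-per-line format are my own invention, not anything Total Impact supports):

```python
# Sketch: keep a local DOI list you can append new publications to,
# deduplicating on each save.  File format is invented for illustration.
from pathlib import Path

def add_dois(path, new_dois):
    """Merge new DOIs into the saved list (one per line), skipping dupes."""
    p = Path(path)
    existing = p.read_text().split() if p.exists() else []
    merged = list(dict.fromkeys(existing + list(new_dois)))
    p.write_text("\n".join(merged) + "\n")
    return merged

# Usage: add_dois("my_dois.txt", ["10.1371/journal.pone.0018011"])
```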

Lots of other things obviously but it is an early beta version so I am willing to be patient.  Definitely worth playing around with.