Some of you may remember oaDOI, a tool that searches for an Open Access (OA) version of an article when provided with a digital object identifier (DOI). Impactstory, the team behind oaDOI, has just announced the upcoming release of Unpaywall, a new tool built on the oaDOI API, and it looks to be very useful indeed.
Unpaywall operates on the same principle as oaDOI: it hooks onto an article’s DOI and searches a number of sources for an OA version of that article. The main difference here is that, while oaDOI requires you to enter a DOI into a search box, Unpaywall is a Google Chrome extension and performs that search in the background automatically when you visit the landing page for an article. An unobtrusive tab will pop up on the page, as can be seen in the example to the right, which will be green if an OA link is found and gray if not. Users can then click the tab to be taken to an OA version of the article (if one was found). Finding a DOI and entering it into a search box isn’t exactly an onerous task, but users are often unwilling to take those few extra steps; the streamlining that Unpaywall provides makes it far more likely that researchers will bring it into their usual workflow.
Unpaywall officially releases on April 4th, but Impactstory has made the extension available now and I have yet to run into any issues. There are still a few limitations, though, to how useful Unpaywall might prove to be:
As I pointed out when talking about oaDOI, not all articles hosted online have DOIs and most that do will not be available on an OA platform; a more detailed discussion of these issues can be found here. Another important point, one that I failed to notice last time, is that neither oaDOI nor Unpaywall provide information on what version of the article users will be linked to. Many OA versions of published articles are either pre-prints (i.e. the authors’ original manuscript before any revisions suggested by reviewers) or post-prints (i.e. the final accepted version of the manuscript, but often not reflecting final typesetting or copyediting done by the publisher). As you might expect, these versions can differ from the final published version of an article, sometimes substantially.
Most researchers understand this and are familiar with institutional repositories and pre-print servers. Those who aren’t, though, may not know that the version of the article they arrive at through Unpaywall might differ from the published version in some substantive way. If I have one complaint about Unpaywall, it’s that I’d like to see them implement some system for letting users know up front where the article they are being linked to is coming from. This would make it a bit easier for users to follow up on sources and check whether they are indeed getting a pre- or post-print; their only option currently is to play around with the provided URL to try to arrive at an info page for the article or repository. Jason Priem, one of the founders of Impactstory, has assured me that this is on their radar, though, and that they plan to implement it in future versions of Unpaywall.
Regardless, this is a minor complaint about an overall great product, and I’m very excited to try to get users here at Wayne to start using the extension. This is two great tools in a row from Impactstory, as well, so I’m looking forward to seeing what they do next.
For years now, Jeffrey Beall’s list of predatory Open Access publishers has served as an important resource for librarians and scholars alike. Though his methodology has always been fairly opaque (as I touched on in my previous blog post), being able to check publishers against Beall’s list made it easier for scholars and librarians to avoid being swindled. Apparently, sometime on or before January 15th, Beall’s website https://scholarlyoa.com/ (and the list as well) was scrubbed of information and now exists only as a shell without any content. The Support for Open Access Publishing blog discussed the takedown and its ramifications a bit in a recent post.
There are rumors on Twitter that Cabell’s, a subscription-based directory of journals, may be subsuming Beall’s list. These rumors have not been confirmed but, to add to the confusion, Lacey E. Earle (Cabell’s VP of Business Development) tweeted
— Lacey E. Earle (@lacey_earle) January 17, 2017
No word from Beall himself yet, as reported in Inside Higher Ed. Beall’s list has been replicated and is still available (and easily discoverable) elsewhere online.
Over the winter break, the New York Times ran an article titled A Peek Inside the Strange World of Fake Academia in which the author, Kevin Carey, discusses a number of topics likely familiar to many working in and around institutions of higher learning. The first (and most egregious) example of “fake academia” called out by Carey is the OMICS group, which you may remember from a blog post I wrote back in September. I’m not sure why exactly this article is surfacing now, since the FTC filing was months ago; I assume it has something to do with the prevalence of reports on “fake news” sources. Carey’s description essentially mirrors what I wrote there: that OMICS accepts articles with little to no screening, charges exorbitant fees, and lies about who is serving as editors or speakers for their journals and conferences. A fairly amusing example of this article screening process (or lack thereof) comes from Christoph Bartneck, a professor in New Zealand, who used the autocomplete feature on Apple’s iOS to write a paper on Atomic Physics. This paper was accepted only three hours later by the International Conference on Atomic and Nuclear Physics, an OMICS-run conference.
A slightly more interesting point comes later in the article, though. Carey brings up the World Conference on Special Needs Education (WCSNE), which seems to occupy a sort of limbo between legitimate and predatory. The fees for WCSNE attendance are quite high, even for presenters, ranging from $380 to $650. The conference also requires that submitted research papers be between 4 and 6 pages (including tables and figures), with a $30-$50 per-page fee for longer submissions. It has also operated similarly to OMICS conferences, claiming several high-profile speakers who, when contacted, said they were in no way involved with the WCSNE. Still, Carey discovered something interesting when asking around about the WCSNE: many defended it as a legitimate academic conference.
One of the founders, Richard Cooper, director of disability services at Harcum College in Pennsylvania, claimed that the conference is worthwhile to its (primarily international) attendees. Barba Patton, a professor at the University of Houston-Victoria in Texas, also defended the WCSNE; she has attended the conference year after year and has no complaints. Indeed, even Carey admits that the papers presented at the WCSNE are “well within the bounds of what gets published in many scholarly journals that, while not prestigious, have never been called a fraud.” This is, I feel, more than anything else an example of how the publish-or-perish mentality has affected scholars, especially those working outside of the hard sciences. Scholars need outlets for their work; they need to be published and to attend conferences in order to retain their positions. Article or conference attendance fees seem like a small price to pay when compared to the prospect of losing one’s job.
There is, as a result, some grey area here. This also underlines one of the major issues with Beall’s List, the list of so-called “predatory” open access publishers maintained by Jeffrey Beall. The list is that and nothing else: no context, no real explanations, nothing but the names of publishers which fit Beall’s posted guidelines. Unfortunately, for many scholars, their need to publish does not provide them the luxury of being so black-and-white in how they view publishers.
Before diving into CiteScore, it’s a good idea to briefly discuss the current journal metric it most closely resembles, the Impact Factor. Those of you who work with scholarly journals are surely familiar with the Impact Factor, a metric which ranks scientific journals based on (roughly speaking) the average number of citations received by articles in that journal. It has been more or less an industry standard since it was first introduced by Eugene Garfield in a 1972 paper. The Impact Factor was originally based on the Science Citation Index, but now relies on citation information harvested from the Web of Science database. As the library’s own guide on measuring research impact explains, a journal’s Impact Factor for a given year is, roughly, the number of citations received that year by items the journal published in the previous two years, divided by the number of citable items it published in those same two years.
Elsevier’s CiteScore is broadly similar to the Impact Factor, with a few key differences:
- CiteScore pulls data from the Scopus database and covers about 22,000 titles, roughly double the number covered by the Impact Factor
- CiteScore pulls citation data from the three previous years, as opposed to Impact Factor’s two
- Impact Factor only looks at what it considers to be citable items, meaning articles or reviews. CiteScore, on the other hand, pulls citation data from any available items in the journal, including front matter
- CiteScore is provided free of charge, and is openly available on the web
- CiteScore metrics are calculated monthly, whereas Impact Factors are calculated annually
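The arithmetic behind both metrics can be sketched in a few lines. The citation and document counts below are invented for a hypothetical journal, and the real calculations involve database-specific rules about which citations and documents count; this is only meant to make the structural differences in the list above concrete.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Impact Factor (roughly): citations received this year to items
    published in the previous two years, divided by the number of
    citable items (articles and reviews) published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years


def citescore(citations_to_prior_three_years, all_items_prior_three_years):
    """CiteScore (roughly): the same idea, but over a three-year window
    and counting every document in the journal, including front matter."""
    return citations_to_prior_three_years / all_items_prior_three_years


# Invented numbers for a hypothetical journal:
print(impact_factor(600, 200))  # 3.0
print(citescore(900, 450))      # 2.0
```

Note how the second calculation illustrates the criticism discussed below: a journal that publishes many editorials and news items inflates its denominator without gaining citations, dragging its CiteScore down.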
Impact Factor (and journal metrics in general) have never received total acceptance from the scientific community (with good reason), but the reaction to CiteScore has been a bit more hostile than might be expected. The openness and transparency of its methods have generally been praised, as has the fact that it is provided at no charge. Many, however, criticize the third point above, the fact that CiteScore pulls citation data from any and all available documents. This can be a problem because many prestigious journals include non-citable items, like editorials, letters from researchers, or subject-specific news, which increase the number of items appearing in the journal without providing any additional citations.
Since CiteScore is calculated on a monthly basis, Elsevier hopes, perhaps, that it can provide a bit more currency in subject areas where this is important. I’m not convinced, though, that this is necessary; monthly updates seem more frequent than is useful. Perhaps if a journal produces one or two very impactful articles, or if a journal adjusts its publication schedule or practices, CiteScore will reflect this a bit sooner than Impact Factor would. Aside from situations like that, most metrics change only gradually, and as many journals publish four or fewer issues a year, this is to be expected.
In the end, the greatest strength of CiteScore is that it is free. Journal Citation Reports, the service that provides Impact Factor data, is an expensive subscription service that is out of reach of many. CiteScore provides an alternative that is accessible by all, and is (I think) to be commended for that if nothing else. For further discussion, see posts on the NFAIS blog and on the Scholarly Kitchen.
I’ve written before about the Center for Open Science (COS), and they’ve been busy since first opening up submissions for SocArXiv back in August. In addition to the social sciences, they’ve also launched PsyArXiv (for the psychological sciences) and engrXiv (for engineering), along with the broader implementation of the Open Science Framework (OSF). The OSF allows interested parties to develop their own preprint archives.
Possibly more interesting, though, is the search functionality built into OSF|Preprints, the platform that brings together SocArXiv, PsyArXiv, and engrXiv. It uses SHARE to aggregate search results from a wide array of open preprint servers, not just those built on OSF. Popular archives searchable through OSF|Preprints include arXiv, bioRXiv, PeerJ, and CogPrints. The OSF|Preprints search platform is very attractively-designed, and allows users to filter results by subject area and provider (i.e. source repository). It is still early on in its implementation, though, and it shows. There is no advanced search function, and the number of results shown next to each provider isn’t updated when a search is performed.
Still, like oaDOI, it is a powerful tool that brings together open content in a wide range of subject areas. It certainly appears that the COS has been making great strides in ensuring the discoverability of OA preprints, and I expect that they’ll continue to do so in the future.
oaDOI, a new tool for locating the Open Access version of an article (when available), announced at the end of last week that it is live, and initial reactions to the service have been very positive. It was created by Heather Piwowar and Jason Priem, two of the co-founders of Impactstory, an altmetrics tracking site, and uses a host of data sources to locate openly-accessible versions of articles based on their DOIs. This looks to be an incredibly powerful tool for researchers and librarians alike for a few different reasons. No tool is perfect, however, so I will outline the main pros and cons of oaDOI below:
First, and probably most obvious, is that oaDOI provides researchers with an easy way to determine if there is an openly accessible version of an article available. You paste the DOI on the page, perform your search, and oaDOI either provides you a link to an OA version of the article or lets you know it couldn’t find one. It crawls through well-known sources of OA content, such as the Directory of Open Access Journals and the arXiv, but also checks institutional repositories (like our own DigitalCommons@WayneState or the University of Michigan’s Deep Blue) and other resources that might otherwise require piecemeal investigation.
oaDOI also provides an openly available API for their service, meaning that librarians (and others) can build tools that make use of oaDOI’s search system. This seems especially helpful when it comes to processing inter-library loan (ILL) requests. If an ILL request is made for an article that is openly available in some form, that open version can be provided to patrons immediately. Though ILL requests don’t necessarily take a long time to process, this can eliminate that wait time entirely in certain situations.
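To make the idea concrete, here is a minimal Python sketch of the kind of lookup an ILL tool might perform against the API. The endpoint URL and the response fields (`results`, `free_fulltext_url`) are assumptions for illustration only; the actual URL and response schema should be checked against oaDOI’s API documentation.

```python
import json
from urllib.request import urlopen

# Illustrative base URL; consult oaDOI's API documentation for the real one.
API_BASE = "https://api.oadoi.org/"


def request_url(doi):
    """Build the API request URL for a given DOI."""
    return API_BASE + doi


def open_access_url(record):
    """Pull an OA link out of a parsed API response, if one was found.

    Assumes a response shaped like {"results": [{"free_fulltext_url": ...}]};
    these field names are an assumption, not a documented contract."""
    for result in record.get("results", []):
        url = result.get("free_fulltext_url")
        if url:
            return url
    return None


def find_oa_version(doi):
    """Query the API and return an OA link for the DOI, or None."""
    with urlopen(request_url(doi)) as response:
        return open_access_url(json.load(response))
```

An ILL workflow could call `find_oa_version()` as soon as a request comes in and hand the patron the open copy immediately whenever one exists, falling back to the normal ILL process otherwise.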
oaDOI’s responsiveness to issues with the platform has also been impressive. Problems pointed out on Twitter were acknowledged and worked on in short order, which is always a good sign when it comes to a new and exciting tool like this one.
There are two glaring issues with oaDOI, but both are actually issues with the systems upon which oaDOI is built.
First, oaDOI’s search keys off of DOIs, Digital Object Identifiers. These are URL-like strings of characters that are given to published articles in order to uniquely identify them. A more robust description of DOIs can be found in my previous post on the scholarly publisher Wiley, but what is important for this discussion is that not every published article has a DOI. Registering with CrossRef and creating DOIs involves a fee and, as a result, many smaller and society publishers opt not to do so. Any such articles will not be searchable in oaDOI.
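As a quick illustration (using a made-up DOI, not a real article), a DOI consists of a registrant prefix, which always begins with “10.”, and a publisher-assigned suffix, and any DOI becomes a clickable link through the doi.org resolver:

```python
def doi_parts(doi):
    """Split a DOI into its registrant prefix (always starting with '10.')
    and the publisher-assigned suffix."""
    prefix, suffix = doi.split("/", 1)
    return prefix, suffix


def doi_to_url(doi):
    """Any DOI becomes a resolvable link via the doi.org resolver."""
    return "https://doi.org/" + doi


# A made-up DOI for illustration:
print(doi_parts("10.1234/example.5678"))   # ('10.1234', 'example.5678')
print(doi_to_url("10.1234/example.5678"))  # https://doi.org/10.1234/example.5678
```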
Second, as oaDOI themselves will tell you, the vast majority of scholarly articles in existence are not available via any OA platform. Scholars, librarians, and others have been calling for a shift to OA for years now but there is still a great deal of ground to be covered. Until OA becomes the norm, a service like oaDOI will serve more often as an intermediate step in the process of searching for an article than as the finish line.
A final con is that oaDOI seems to have some problems functioning on mobile platforms. As their interface prohibits users from typing in a DOI and instead requires the DOI to be pasted, this doesn’t play all that well with (for example) the current version of iOS. As mentioned above, however, their responsiveness to issues has been great so far, and I expect this to be resolved in the near future.
For me and, I would imagine, for most, the pros far outweigh the cons when it comes to oaDOI. That many articles do not have a DOI is not as problematic as it may seem since almost all large publishers do provide DOIs for their articles, and the OA movement continues to grow. I personally look forward to incorporating oaDOI into the library services that I work with, and am very excited that we now have such a powerful OA tool at our disposal.
On August 25, the Federal Trade Commission (FTC) filed a complaint against three related academic publishers, OMICS Group, iMedPub, and Conference Series, along with their president and director, Srinubabu Gedela. The complaint provides a laundry list of extremely concerning behaviors on the part of the publishers, most of which involve lying to submitting authors. After outlining these claims, I’ll take a brief look at what this filing means for the world of scholarly publication.
The FTC filing claims that Gedela participated in deceptive business practices in order to solicit academic articles from authors. These publishers claimed that their journals had academic experts on editorial boards and serving as peer reviewers, had high impact factors, and were indexed in reputable databases such as PubMed Central. Authors whose work was accepted by these journals, operating under the impression that they had submitted to legitimate academic publishers, would then be informed of previously undisclosed fees that needed to be paid before publication. These fees would range from a few hundred to a few thousand dollars, and authors attempting to withdraw their manuscripts from publication would not be allowed to do so. Once an article has been accepted for publication, it is against academic practice to submit that article elsewhere, meaning that articles submitted to these journals were essentially stuck.
The claims made by the publishers were, in this case, false. Academics listed as editors or peer reviewers had no affiliation with the journals, the impact factors provided by the publishers were not calculated by Thomson Reuters, and the journals did not show up in PubMed Central or other reputable databases. The publishers were, in essence, luring academics in under false pretenses and trapping their articles in limbo until exorbitant fees were paid.
This behavior was not limited to publications, however. The FTC filing also alleges that Gedela would organize conferences and claim that certain leading academics would be in attendance or participating in some way. Unsuspecting academics would register for these conferences, often paying large registration fees, only to discover that none of these experts had ever agreed to participate.
So what does this mean in the larger world of scholarly publishing? First and foremost, it indicates that the FTC is growing more willing to pursue legal action against so-called “predatory publishers,” publishing companies that claim to adhere to usual academic standards but do not, in fact, do so. Though this problem is not a new one, the FTC’s reaction is. As Ioana Rusu, a staff attorney for the FTC, stated in an interview, this filing serves as a sort of announcement that the commission will be paying closer attention to the field of scholarly publishing. Though it does not have the resources to pursue action against all unscrupulous publishers in operation, the FTC does plan to target key offenders in order to set a precedent.
Though OMICS, iMedPub, and Conference Series were ostensibly Open Access (OA) publishers, it should be kept in mind that Gedela and his ilk are not representative of OA as a whole. Many OA publishers are indexed in reputable and well-known databases and many do have impact factors. Smaller OA publications that are not indexed in large databases or do not have impact factors can nonetheless implement thorough peer review. This FTC action should, in fact, allow authors to feel more secure submitting to OA publications, as those publishers operating under false pretenses may no longer feel that it’s worth running their scam under the threat of federal legal action.
I’ll end here for now, but look for another post soon that will provide some simple actions that can help authors avoid falling prey to publishers like OMICS.
It would seem that I am doomed to continue writing about Elsevier. It was announced yesterday that the academic publishing giant had been awarded a patent for an “online peer review system and method” by the United States Patent and Trademark Office. The full patent is available here, but the abstract for the patent reads about as vaguely as possible:
“An online document management system is disclosed. In one embodiment, the online document management system comprises: one or more editorial computers operated by one or more administrators or editors, the editorial computers send invitations and manage peer review of document submissions; one or more system computers, the system computers maintain journals, records of submitted documents and user profiles, and issue notifications; and one or more user computers; the user computers submit documents or revisions to the document management system; wherein one or more of the editorial computers coordinate with one or more of the system computers to migrate one or more documents between journals maintained by the online document management system.”
This patent is concerning for a few reasons. First and foremost, I am reminded of the case of Soverain Software in the mid-2000s to early-2010s. Soverain was (and perhaps still is) a “patent troll,” a company whose entire business model relies on the filing of patents in order to extract money from other entities who are using technologies covered by these patents. In the case of Soverain, the company owned a patent on the online shopping cart, a near-ubiquitous bit of online shopping technology. Soverain would make its money by suing any company whose online store used an online shopping cart, including such giants as Amazon. In the end, Soverain bit off more than they could chew when pursuing legal action against online retailer NewEgg, whose lawyers essentially showed that some of the key patents behind the suit were invalid. You see, a patent is only valid if the technology being patented is new; if someone came up with it before you (which is known as the existence of “prior art”) then you can’t legally patent it. NewEgg showed that another entity had come up with the idea of an online shopping cart before Soverain’s patent was filed, thereby invalidating it.
What does this have to do with Elsevier’s patent? Well, as you may suspect, many in the scholarly publishing community have reacted to the patent with claims that prior art for online peer review exists. Martin Paul Eve (Professor of Literature, Technology, and Publishing, Birkbeck, University of London) scoffed at the notion that no prior art exists, and David Crotty (Editorial Director, Journals Policy, Oxford University Press) replied to Eve’s tweet by pointing out that much of what is claimed to be innovative in Elsevier’s patent is covered by the system developed by the Neuroscience Peer Review Consortium. And, though Eve indicated that he thinks the patent may be legally unenforceable, he is also concerned that other entities may not have the resources to legally challenge Elsevier’s claims.
Therein lies the problem. Even if the patent isn’t legally enforceable, Elsevier is a very large academic publisher that is not afraid to use its lawyers when it feels that such action is necessary. Much of the innovation happening in peer review workflows is a result of smaller entities, entities that do not have the resources to fight a legal battle against Elsevier even if it was likely that they would win. The difference between this case and the Soverain case above is that the scholarly publishing world does not have a NewEgg to push back against Elsevier’s claims. Elsevier can essentially run roughshod over any other scholarly publishing entity who wishes to implement online peer review. Whether it does remains to be seen but, as I mentioned above, Elsevier’s track record is cause for concern.
There is another, possibly more concerning issue, though, one which Brandon Butler (Director of Information Policy, University of Virginia Library) called out on Twitter and one that has been a recurring theme in this blog as of late. Elsevier has begun to hedge its bets in the event that Open Access (OA) publishing becomes standard practice for academics. Since a movement towards OA will presumably make control over the end result of the publishing process less profitable, Elsevier is seeking to profit off of the rest of the scholarly publishing pipeline. Several months ago, Elsevier acquired the OA repository SSRN; the depositing of pre- and post-prints into SSRN has been an essential step in the publishing process for authors in a wide range of subject areas. Now Elsevier hopes to profit off of the peer review process as well. And, as was the case with their acquisition of SSRN, this latest move by Elsevier has me worried as to what they might do next.
In my last post for the Scholars Cooperative, I gave a brief overview of a major issue occurring as a result of Elsevier’s takeover of SSRN, that many papers on SSRN were being taken down due to “copyright concerns.” It seems that articles uploaded without a statement indicating explicit permission from the copyright holder to deposit the article in SSRN are being taken down without warning. At the end, I indicated that a possible alternative to SSRN, SocArXiv, was in the process of beginning operations. Philip Cohen of SocArXiv recently gave an interview with the scholarly communication blog In the Open discussing the project and its future.
The entire interview is worth a read, but it should be noted that the Center for Open Science has set up a temporary site where authors can begin submitting articles to SocArXiv.