

Jan 18 / Clayton Hayes

Beall’s List Taken Down

For years now, Jeffrey Beall’s list of predatory Open Access publishers has served as an important resource for librarians and scholars alike. Though his methodology has always been fairly opaque (as I touched on in my previous blog post), being able to check publishers against Beall’s list made it easier for scholars and librarians to avoid being swindled. Apparently, sometime on or before January 15th, Beall’s website https://scholarlyoa.com/ (including the list itself) was scrubbed of its content and now exists only as an empty shell. The Support for Open Access Publishing blog discussed the takedown and its ramifications a bit in a recent post.

There are rumors on Twitter that Cabell’s, a subscription-based directory of journals, may be subsuming Beall’s list. These rumors have not been confirmed, but a tweet from Lacey E. Earle (Cabell’s VP of Business Development) has only added to the confusion.

No word from Beall himself yet, as reported in Inside Higher Ed. Beall’s list has been replicated and is still available (and easily discoverable) elsewhere online.

Jan 4 / Clayton Hayes

New York Times Reports on “Fake Academia”

Over the winter break, the New York Times ran an article titled A Peek Inside the Strange World of Fake Academia in which the author, Kevin Carey, discusses a number of topics likely familiar to many working in and around institutions of higher learning. The first (and most egregious) example of “fake academia” called out by Carey is the OMICS Group, which you may remember from a blog post I wrote back in September. I’m not sure why exactly this article is surfacing now, since the FTC filing was months ago; I assume it has something to do with the prevalence of reports on “fake news” sources. Carey’s description essentially mirrors what I wrote there: that OMICS accepts articles with little to no screening, charges exorbitant fees, and lies about who is serving as editors or speakers for its journals and conferences. A fairly amusing example of this screening process (or lack thereof) comes from Christoph Bartneck, a professor in New Zealand, who used the autocomplete feature on Apple’s iOS to write a paper on atomic physics. The paper was accepted only three hours after submission by the International Conference on Atomic and Nuclear Physics, an OMICS-run conference.

A slightly more interesting point comes later in the article, though. Carey brings up the World Conference on Special Needs Education (WCSNE), which seems to occupy a sort of limbo between legitimate and predatory. The fees for WCSNE attendance are quite high, even for presenters, ranging from $380 to $650. The conference also indicates that submitted research papers must be between 4 and 6 pages (including tables and figures), with a $30-$50 per-page fee for longer submissions. It has also operated similarly to OMICS conferences, claiming several high-profile speakers who, when contacted, said they were in no way involved with the WCSNE. Still, Carey discovered something interesting when asking around about the WCSNE: many defended it as a legitimate academic conference.

One of the founders, Richard Cooper, the director of disability services at Harcum College in Pennsylvania, claimed that the conference is worthwhile to its (primarily international) attendees. Barba Patton, a professor at the University of Houston-Victoria in Texas, also defended the WCSNE; she has attended the conference year after year and has no complaints. Indeed, even Carey admits that the papers presented at the WCSNE are “well within the bounds of what gets published in many scholarly journals that, while not prestigious, have never been called a fraud.” This is, I feel, more than anything else an example of how the publish-or-perish mentality has affected scholars, especially those working outside of the hard sciences. Scholars need outlets for their work; they need to be published and to attend conferences in order to retain their positions. Article or conference attendance fees seem like a small price to pay when compared to the prospect of losing one’s job.

There is, as a result, some grey area here. This also underlines one of the major issues with Beall’s List, the list of so-called “predatory” open access publishers maintained by Jeffrey Beall. The list is that and nothing else: no context, no real explanations, nothing but the names of publishers which fit Beall’s posted guidelines. Unfortunately, for many scholars, their need to publish does not provide them the luxury of being so black-and-white in how they view publishers.

Dec 19 / Clayton Hayes

CiteScore, a new journal metric from Elsevier

Before diving into CiteScore, it’s a good idea to briefly discuss the current journal metric it most closely resembles, the Impact Factor. Those of you who work with scholarly journals are surely familiar with the Impact Factor, a metric which ranks scientific journals based on (roughly speaking) the average number of citations received by articles in that journal. It has been more or less an industry standard since it was introduced by Eugene Garfield in a 1972 paper. The Impact Factor was originally based on the Science Citation Index, but now relies on citation information harvested from the Web of Science database. Below is an explanation of how a journal’s Impact Factor is calculated, borrowed from the library’s own guide on measuring research impact:

[Image: Impact Factor Calculation]
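In other words, the standard two-year Impact Factor for a journal J in year Y works out as follows (a sketch following Garfield’s definition; the library guide may present it slightly differently):

```latex
% Impact Factor of journal J for year Y (two-year window)
\mathrm{IF}_Y(J) =
  \frac{\text{citations received in } Y \text{ to items } J \text{ published in } Y\!-\!1 \text{ and } Y\!-\!2}
       {\text{number of citable items } J \text{ published in } Y\!-\!1 \text{ and } Y\!-\!2}
```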

 

Elsevier’s CiteScore is broadly similar to the Impact Factor, with a few key differences:

  1. CiteScore pulls data from the Scopus database and covers about 22,000 titles, roughly double the number covered by the Impact Factor
  2. CiteScore pulls citation data from the three previous years, as opposed to Impact Factor’s two
  3. Impact Factor only looks at what it considers to be citable items, meaning articles or reviews. CiteScore, on the other hand, pulls citation data from any available items in the journal, including front matter
  4. CiteScore is provided free of charge, and is openly available on the web
  5. CiteScore metrics are calculated monthly, whereas Impact Factors are calculated annually

The Impact Factor (and journal metrics in general) has never enjoyed total acceptance from the scientific community (with good reason), but the reaction to CiteScore has been a bit more hostile than might be expected. The openness and transparency of its methods have generally been praised, as has the fact that it is provided at no charge. Many critics, however, take issue with item 3 above, the fact that CiteScore pulls citation data from any and all available documents. This can be a problem because many prestigious journals include non-citable items, like editorials, letters from researchers, or subject-specific news, which increase the number of items appearing in the journal without providing any additional citations.
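To see why, it helps to write out the calculation that differences 2 and 3 imply. Here is a sketch of the CiteScore calculation for a journal J in year Y, based on the description above (Elsevier’s own documentation is the authoritative source):

```latex
% CiteScore of journal J for year Y (three-year window, all document types)
\mathrm{CiteScore}_Y(J) =
  \frac{\text{citations received in } Y \text{ to documents } J \text{ published in } Y\!-\!1,\ Y\!-\!2,\ Y\!-\!3}
       {\text{all documents (including front matter) } J \text{ published in } Y\!-\!1,\ Y\!-\!2,\ Y\!-\!3}
```

Every editorial, letter, or news item adds to the denominator but rarely attracts citations, so a journal with substantial front matter will see its CiteScore diluted relative to its Impact Factor.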

Since CiteScore is calculated on a monthly basis, Elsevier perhaps hopes that it can provide a bit more currency in subject areas where this is important. I’m not convinced, though, that this is necessary; monthly updates seem to be more frequent than is useful. Perhaps if a journal publishes one or two very impactful articles, or adjusts its publication schedule or practices, CiteScore will reflect this a bit sooner than the Impact Factor would. Aside from situations like that, most metrics change only gradually, which is to be expected given that many journals publish four or fewer issues a year.

In the end, the greatest strength of CiteScore is that it is free. Journal Citation Reports, the service that provides Impact Factor data, is an expensive subscription service that is out of reach of many. CiteScore provides an alternative that is accessible by all, and is (I think) to be commended for that if nothing else. For further discussion, see posts on the NFAIS blog and on the Scholarly Kitchen.

Dec 6 / Clayton Hayes

The Center for Open Science launches OSF|Preprints

I’ve written before about the Center for Open Science (COS), and they’ve been busy since first opening up submissions for SocArXiv back in August. In addition to the social sciences, they’ve launched PsyArXiv (for the psychological sciences) and engrXiv (for engineering), along with the broader implementation of the Open Science Framework (OSF). The OSF allows interested parties to develop their own preprint archives.

Possibly more interesting, though, is the search functionality built into OSF|Preprints, the platform that brings together SocArXiv, PsyArXiv, and engrXiv. It uses SHARE to aggregate search results from a wide array of open preprint servers, not just those built on OSF. Popular archives searchable through OSF|Preprints include arXiv, bioRxiv, PeerJ, and CogPrints. The OSF|Preprints search platform is attractively designed, and allows users to filter results by subject area and provider (i.e., source repository). It is still early in its implementation, though, and it shows. There is no advanced search function, and the number of results shown next to each provider isn’t updated when a search is performed.

Still, like oaDOI, it is a powerful tool that brings together open content in a wide range of subject areas. It certainly appears that the COS has been making great strides in ensuring the discoverability of OA preprints, and I expect that they’ll continue to do so in the future.

Oct 25 / Clayton Hayes

oaDOI: A New Tool for Discovering OA Content

oaDOI, a new tool for locating the Open Access version of an article (when available), announced at the end of last week that it is live, and initial reactions to the service have been very positive. It was created by Heather Piwowar and Jason Priem, two of the co-founders of Impactstory, an altmetric tracking site, and uses a host of data sources to locate openly accessible versions of articles based on their DOIs. This looks to be an incredibly powerful tool for researchers and librarians alike for a few different reasons. No tool is perfect, however, so I will outline the main pros and cons of oaDOI below:

Pros

First, and probably most obvious, is that oaDOI provides researchers with an easy way to determine if there is an openly accessible version of an article available. You paste the DOI on the page, perform your search, and oaDOI either provides you a link to an OA version of the article or lets you know it couldn’t find one. It crawls through well-known sources of OA content, such as the Directory of Open Access Journals and the arXiv, but also checks institutional repositories (like our own DigitalCommons@WayneState or the University of Michigan’s Deep Blue) and other resources that might otherwise require piecemeal investigation.

oaDOI also provides an openly available API for their service, meaning that librarians (and others) can build tools that make use of oaDOI’s search system. This seems especially helpful when it comes to processing interlibrary loan (ILL) requests. If an ILL request is made for an article that is openly available in some form, that open version can be provided to patrons immediately. Though ILL requests don’t necessarily take a long time to process, this can help to eliminate that wait time in certain situations.
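As a rough illustration of how such a tool might call the API, here is a minimal sketch in Python. The endpoint pattern (https://api.oadoi.org/v2/<doi>), the email parameter, and the best_oa_location response field are assumptions drawn from the service’s documentation at the time of writing and may have changed, so check oaDOI’s own API docs before building on this.

```python
# Minimal sketch of an oaDOI lookup. The endpoint, the email parameter, and
# the response field names are assumptions and may differ from the live API.
import requests

OADOI_URL = "https://api.oadoi.org/v2/{doi}"  # assumed endpoint pattern


def find_oa_copy(doi, email="you@example.edu"):
    """Return a link to an OA copy of the article, or None if oaDOI finds nothing."""
    response = requests.get(OADOI_URL.format(doi=doi), params={"email": email}, timeout=10)
    response.raise_for_status()
    record = response.json()
    best = record.get("best_oa_location") or {}  # assumed field name
    return best.get("url")


if __name__ == "__main__":
    # Replace with the DOI from an incoming ILL request.
    print(find_oa_copy("10.1234/example.5678"))
```

An ILL workflow could run a check like this before placing a request and hand the patron the returned link immediately whenever one exists.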

oaDOI’s responsiveness to issues with the platform has also been impressive. Problems pointed out on Twitter were acknowledged and worked on in short order, which is always a good sign when it comes to a new and exciting tool like this one.

Cons

There are two glaring issues with oaDOI, but both are actually issues with the systems upon which oaDOI is built.

First, oaDOI’s search keys off of DOIs, Digital Object Identifiers. These are URL-like strings of characters assigned to published articles in order to uniquely identify them. A more robust description of DOIs can be found in my previous post on the scholarly publisher Wiley, but what is important for this discussion is that not every published article has a DOI. Registering with CrossRef and creating DOIs involves a fee and, as a result, many smaller and society publishers opt not to do so. Any such articles will not be searchable in oaDOI.
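For anyone unfamiliar with the format, a DOI is just a registrant prefix (always beginning with 10.) plus a publisher-assigned suffix, and any DOI can be turned into a clickable link through the doi.org resolver. A quick sketch, using a made-up DOI purely for illustration:

```python
# The DOI below is made up for illustration; real DOIs follow the same shape.
doi = "10.1234/example.5678"

prefix, suffix = doi.split("/", 1)        # "10.1234" identifies the registrant
resolver_link = f"https://doi.org/{doi}"  # the standard DOI resolver

print(prefix)         # registrant prefix, always starting with "10."
print(suffix)         # suffix assigned by the publisher
print(resolver_link)  # resolves to the publisher's landing page for the article
```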

Second, as oaDOI themselves will tell you, the vast majority of scholarly articles in existence are not available via any OA platform. Scholars, librarians, and others have been calling for a shift to OA for years now but there is still a great deal of ground to be covered. Until OA becomes the norm, a service like oaDOI will serve more often as an intermediate step in the process of searching for an article than as the finish line.

A final con is that oaDOI seems to have some problems functioning on mobile platforms. Because the interface prohibits users from typing in a DOI and instead requires the DOI to be pasted, it doesn’t play all that well with (for example) the current version of iOS. As mentioned above, however, their responsiveness to issues has been great so far, and I expect this to be resolved in the near future.

For me and, I would imagine, for most, the pros far outweigh the cons when it comes to oaDOI. That many articles do not have a DOI is not as problematic as it may seem since almost all large publishers do provide DOIs for their articles, and the OA movement continues to grow. I personally look forward to incorporating oaDOI into the library services that I work with, and am very excited that we now have such a powerful OA tool at our disposal.

Sep 6 / Clayton Hayes

FTC Files Complaint Against Publisher OMICS Group, Among Others

On August 25, the Federal Trade Commission (FTC) filed a complaint against three related academic publishers, OMICS Group, iMedPub, and Conference Series, along with their president and director, Srinubabu Gedela. The complaint provides a laundry list of extremely concerning behaviors on the part of the publishers, most of which involve lying to submitting authors. After looking at these claims, I’ll briefly consider what this filing means for the world of scholarly publication.

The FTC filing claims that Gedela participated in deceptive business practices in order to solicit academic articles from authors. These publishers claimed that their journals had academic experts serving on editorial boards and as peer reviewers, had high impact factors, and were indexed in reputable databases such as PubMed Central. Authors whose work was accepted by these journals, operating under the impression that they had submitted to legitimate academic publishers, would then be informed of previously undisclosed fees that needed to be paid before publication. These fees would range from a few hundred to a few thousand dollars, and authors attempting to withdraw their manuscripts from publication would not be allowed to do so. Once an article has been accepted for publication, it is against academic practice to submit that article elsewhere, meaning that articles submitted to these journals were essentially stuck.

The claims made by the publishers were, in this case, false. Academics listed as editors or peer reviewers had no affiliation with the journals, the impact factors provided by the publishers were not calculated by Thomson Reuters, and the journals did not show up in PubMed Central or other reputable databases. The publishers were, in essence, luring academics in under false pretenses and trapping their articles in limbo until exorbitant fees were paid.

This behavior was not limited to publications, however. The FTC filing also alleges that Gedela would organize conferences and claim that certain leading academics would be in attendance or participating in some way. Unsuspecting academics would register for these conferences, often paying large registration fees, only to discover that none of these experts had ever agreed to participate.

So what does this mean in the larger world of scholarly publishing? First and foremost, it indicates that the FTC is growing more willing to pursue legal action against so-called “predatory publishers,” publishing companies that claim to adhere to usual academic standards but do not, in fact, do so. The problem is not a new one, but the FTC’s reaction is. As Ioana Rusu, a staff attorney for the FTC, stated in an interview, this filing serves as a sort of announcement that the commission will be paying closer attention to the field of scholarly publishing. Though it does not have the resources to pursue action against all unscrupulous publishers in operation, the FTC does plan to target key offenders in order to set a precedent.

Though OMICS, iMedPub, and Conference Series were ostensibly Open Access (OA) publishers, it should be kept in mind that Gedela and his ilk are not representative of OA as a whole. Many OA publishers are indexed in reputable and well-known databases, and many do have impact factors. Smaller OA publications that are not indexed in large databases or do not have impact factors can nonetheless implement thorough peer review. This FTC action should, in fact, allow authors to feel more secure submitting to OA publications, as publishers operating under false pretenses may no longer feel that it’s worth running their scam under the threat of federal legal action.

I’ll end here for now, but look for another post soon that will provide some simple actions that can help authors avoid falling prey to publishers like OMICS.

Aug 31 / Clayton Hayes

Elsevier: Patent Troll as well as Publisher?

It would seem that I am doomed to continue writing about Elsevier. It was announced yesterday that the academic publishing giant had been awarded the patent for “online peer review system and method” by the United States Patent and Trademark Office. The full patent is available here, but the abstract for the patent reads about as vaguely as possible:

“An online document management system is disclosed. In one embodiment, the online document management system comprises: one or more editorial computers operated by one or more administrators or editors, the editorial computers send invitations and manage peer review of document submissions; one or more system computers, the system computers maintain journals, records of submitted documents and user profiles, and issue notifications; and one or more user computers; the user computers submit documents or revisions to the document management system; wherein one or more of the editorial computers coordinate with one or more of the system computers to migrate one or more documents between journals maintained by the online document management system.”

This patent is concerning for a few reasons. First and foremost, I am reminded of the case of Soverain Software in the mid-2000s to early 2010s. Soverain was (and perhaps still is) a “patent troll,” a company whose entire business model relies on the filing of patents in order to extract money from other entities who are using technologies covered by these patents. In the case of Soverain, the company owned a patent on the online shopping cart, a near-ubiquitous bit of online shopping technology. Soverain would make its money by suing any company whose online store used a shopping cart, including such giants as Amazon. In the end, Soverain bit off more than it could chew when pursuing legal action against online retailer NewEgg, whose lawyers essentially showed that some of the key patents behind the suit were invalid. You see, a patent is only valid if the technology being patented is new; if someone came up with it before you (which is known as the existence of “prior art”), then you can’t legally patent it. NewEgg showed that another entity had come up with the idea of an online shopping cart before Soverain’s patent was filed, thereby invalidating it.

What does this have to do with Elsevier’s patent? Well, as you may suspect, many in the scholarly publishing community have reacted to the patent with claims that prior art for online peer review exists. Martin Paul Eve (Professor of Literature, Technology, and Publishing, Birkbeck, University of London) scoffed at the notion that no prior art exists, and David Crotty (Editorial Director, Journals Policy, Oxford University Press) replied to Eve’s tweet by pointing out that much of what is claimed to be innovative in Elsevier’s patent is covered by the system developed by the Neuroscience Peer Review Consortium. And, though Eve indicated that he thinks the patent may be legally unenforceable, he is also concerned that other entities may not have the resources to legally challenge Elsevier’s claims.

Therein lies the problem. Even if the patent isn’t legally enforceable, Elsevier is a very large academic publisher that is not afraid to use its lawyers when it feels such action is necessary. Much of the innovation happening in peer review workflows comes from smaller entities, entities that do not have the resources to fight a legal battle against Elsevier even if they were likely to win. The difference between this case and the Soverain case above is that the scholarly publishing world does not have a NewEgg to push back against Elsevier’s claims. Elsevier can essentially run roughshod over any other scholarly publishing entity that wishes to implement online peer review. Whether it will remains to be seen but, as I mentioned above, Elsevier’s track record is cause for concern.

There is another, possibly more concerning issue, though, one which Brandon Butler (Director of Information Policy, University of Virginia Library) called out on Twitter and one that has been a recurring theme in this blog as of late. Elsevier has begun to hedge its bets in the event that Open Access (OA) publishing becomes standard practice for academics. Since a movement towards OA will presumably make control over the end result of the publishing process less profitable, Elsevier is seeking to profit from the rest of the scholarly publishing pipeline. Several months ago, Elsevier acquired the OA repository SSRN; depositing pre- and post-prints into SSRN has been an essential step in the publishing process for authors in a wide range of subject areas. Now Elsevier hopes to profit from the peer review process as well. And, as was the case with the acquisition of SSRN, this latest move by Elsevier has me worried about what they might do next.

Aug 18 / Clayton Hayes

SocArXiv Begins Accepting Article Submissions

In my last post for the Scholars Cooperative, I gave a brief overview of a major issue occurring as a result of Elsevier’s takeover of SSRN: many papers on SSRN were being taken down due to “copyright concerns.” It seems that articles uploaded without a statement indicating explicit permission from the copyright holder to deposit the article in SSRN are being taken down without warning. At the end, I indicated that a possible alternative to SSRN, SocArXiv, was in the process of beginning operations. Philip Cohen of SocArXiv recently gave an interview to the scholarly communication blog In the Open discussing the project and its future.

The entire interview is worth a read, but it should be noted that the Center for Open Science has set up a temporary site where authors can begin submitting articles to SocArXiv.

Jul 19 / Clayton Hayes

SSRN Begins Removing Papers Over “Copyright Concerns”

In May of this year, academic publishing giant Elsevier acquired the Social Sciences Research Network (SSRN), an extremely popular Open Access (OA) repository for the social sciences, economics, law, and the humanities. I wrote then about some concerns I had with Elsevier, historically a profit-oriented and anti-OA company, operating a popular OA platform. It seems that, almost exactly two months in, we have caught our first glimpses of just how well-founded these concerns were:

Howard Wasserman, a professor of law at Florida International University, posted on the site Prawfs Blawg about an email that had gone out to a ListServ for law professors. In it, Stephen Henderson (a law professor at the University of Oklahoma) detailed a recent experience he had had with SSRN in which the site had, completely unannounced, taken down the PDF of a paper he and his co-author had posted to the site. Despite the fact that the authors had retained copyright to the article, and that their contract included explicit permission to post the article to SSRN, they received this message as a comment on SSRN’s back end:

It appears that you do not retain copyright to the paper, and the PDF has been removed from public view. Please provide us with the copyright holder’s written permission to post. Alternatively, you may replace this version with a working paper or preprint version, if you so desire.

From the rest of Henderson’s discussion with SSRN, and from comments on the blog post linked above, we can gather the following pieces of information:

  1. SSRN has begun taking down the full-text postprint or published versions of articles when it has not been provided with explicit proof of permission from the copyright holder.
  2. This is being done more or less without warning to the authors.
  3. When attorney and academic Andrew Selbst asked about the policy, SSRN stated that it had not previously (i.e., pre-Elsevier) been enforcing its copyright policy correctly.

This is in stark contrast to how almost all other academic repositories function, relying on submitting authors to do their due diligence in determining which version of their article they may archive, if any.

This isn’t exactly surprising coming from Elsevier, though. It famously served Academia.edu with thousands of takedown notices and adjusted the sharing policies of its journals specifically to prevent authors from posting the published versions of their articles to sites like Academia.edu and ResearchGate. Those social/academic sites have their own issues, of course, which I’ve talked about on this blog before, but this still establishes a pattern of behavior on the part of Elsevier: it does not seem to trust academics to be responsible for their own work. Everything it has done, in fact, indicates that it does not think academics should be responsible for their own work.

In the case of SSRN, we see the ideology of Elsevier (a commercial entity and chiefly concerned with its profit) clashing with the ideology of an OA repository (which exists to serve the needs of authors and of scholarship as a whole). Securing copyright permissions for all of the articles on SSRN is so essential in the eyes of Elsevier because it is a commercial concern: if there is no copyright agreement for an article, then the article’s publisher may not be getting its money’s worth for that article as a result. This is so important in their eyes that, without any warning, they have opted to remove the full-text version of any article for which no agreement was provided. Even if the authors provide a copyright agreement in the future, any download counts for those articles will have been eliminated, a fact which is of genuine concern for faculty (especially non-tenured faculty) members working in subject areas which view SSRN as a primary space for tracking scholarly output.

Perhaps this is just a bump in the road, so to speak, and the relationship between SSRN, Elsevier, and scholars will smooth as time goes on. It has only been a few months, after all. Some scholars are not so patient, however, and (perhaps prophetically) it was announced two weeks ago that SocArXiv, a new open platform for social science research, is in development and will soon be providing the kind of author-focused service that scholars had seen in SSRN before the Elsevier takeover.

Jun 6 / Clayton Hayes

Dissecting the EU’s Open Access resolution

Over the long Memorial Day weekend, member states of the European Union agreed on a resolution that all scientific research papers produced in the EU would be Open Access by the year 2020. This is obviously welcome news both for OA advocates and for scientific researchers the world over, as they can ostensibly look forward to broader access to research. It bears a bit of further scrutiny, though, especially since write-ups from The Guardian and Science Magazine get quite a bit wrong about OA. I also want to quickly say that, though the rest of this post is critical of the resolution, I don’t think it’s a bad thing. I’m happy to see awareness of OA being raised abroad, and I hope to see scholarship in the US follow suit. I just hope to explore what this resolution does and does not mean for OA.

There is some skepticism as to whether or not achieving this goal is possible, and that skepticism is justified. As reported in the Science Magazine article, even The Netherlands, considered by many to be the EU’s OA frontrunner, had targeted 2024 for its own attempt at going 100% OA for scientific articles. The League of European Research Universities, while enthusiastically supportive of the 2020 goal, says that it will not be easy to achieve; the EU’s Competitiveness Council, the group of science, trade, and industry ministers responsible for the resolution, provided little concrete information on moving towards this goal.

On the surface, it does seem as though this has the potential to be impactful for scientific researchers all over the world, not just in Europe. Stevan Harnad of the University of Québec, an advocate for OA, told Science Magazine that he sees Green OA methods, such as deposit in institutional repositories (IRs) like our own DigitalCommons@WayneState, as the best way for the EU to achieve its goal. Green OA has long been the preferred method for many libraries, including here at WSU, as it does not require the authors or institutions to pay fees in order to make the work OA. If the EU pushes for IR deposit to be the primary means of achieving this OA resolution, that will certainly spell significant change for much of the scholarly publishing world. Why? Well, that requires a quick aside to talk about how IR deposits and copyright interact.

In brief, an author can only deposit their work in an IR (or other repositories) if the copyright holder permits it. Many (though not all) academic publishers require authors to sign over copyright (or at least to give the publisher an exclusive license to distribute), and hence it is often the publisher who gets to decide if an author can deposit their work in an IR. There is currently no standard among publishers, and it can in fact be quite a chore to determine the specifics of a publisher’s policies with regards to IR deposits. Were all EU researchers required to publish in journals that permitted IR deposit, this would cause a significant shift in the practices of these academic publishers. They would be forced to re-evaluate their policies regarding IR deposits (and possibly copyright), or risk missing out on submissions coming from the EU.

There are, of course, some very important caveats:

Most importantly, Green OA is not currently the preferred OA method for much of the research emanating from the EU. As pointed out in the Science Magazine article, the resolution did not express any preference as to OA method, and governments such as that of The Netherlands have long supported Gold OA methods instead. Putting things simplistically, Gold OA is a system whereby the author or authors of an article pay an article processing charge (APC) to the publisher to offset the perceived loss of subscription revenue on the publisher’s part. In the EU (and in the US), these APCs can be quite expensive and are often written into grant proposals. It is hard to know what the scholarly publishing landscape would look like were the EU to push Gold OA, but it would likely put an increased financial strain on any researchers unable to secure grant funding for their work.

Possibly just as important is the fact that the Competitiveness Council has been very vague about this resolution. Anyone with some familiarity with the ins and outs of OA knows that many publishers who do allow authors to deposit works in IRs require that they be embargoed for a certain amount of time, anywhere from several months to several years. The Council’s statement, as reported in Science Magazine, specified that scientific research should be published “without embargoes or with as short as possible embargoes.” This sadly leaves a lot of wiggle room, and it remains to be seen whether the Council will specify something more concrete in the future. It is perhaps a bit telling that a representative of the Council said specifically that the resolution “[…] is not a law, but it’s a political orientation for the 28 governments.”

Finally, as a bit of a postscript, this resolution seems to be concerned solely with scientific research and does not apply to research outside of the STEM fields. This is not entirely surprising, since the Competitiveness Council is made up of ministers of science and industry, and drafting resolutions on research in the arts and humanities may be outside of its purview. Still, it is unfortunate that no similar resolution has been released for research in non-STEM fields.