Iteration

The classic image of the Mandelbrot set above represents one of the best-known attempts to ride the boundary between order and chaos in a complex, iterating system. When Linda chose the image as a way of representing the work of this Journal Research Data Policy Bank project, she talked about the need to find patterns in the ways that journals deal with data. But it was iteration and complexity that became the confounding issues which mean that – while there are other things we can do to help move this area forward – now is not the time for a full journal policy registry.

In developing a proof-of-concept for a journal data policy registry, complexity and iteration were expected in what is, after all, a fast-developing field. Similar attempts around OA research policy (the SHERPA toolset) were able to rely on policy being set primarily at a publisher level, and on there being only a small set of underlying decisions (can you archive? – if so, when?). In terms of the benefit to the end user (the researcher, clasping a newly written paper in one hand and a research grant in the other) this provided a “good enough” approximation that could help them in making publication decisions.

Our project started by looking at 250 journal policies, based on a candidate data model and question set developed in consultation with the sector which was iterated repeatedly as issues were identified with the data we were collecting.

The majority of these issues related to a lack of standard definitions of terms. The NISO definition for supplementary material makes the distinction between integral and additional content. The first relates to material which is ‘essential for full understanding of the work’; the second relates to that which ‘provides additional, relevant and useful expansion of the work’. But these guidelines have not been widely adopted by journals. A similar problem exists with regard to terms such as ‘data sharing’, the ‘dataset’ and ‘peer review of data’. These terms are commonly used in research data policies but are often defined by community practice or via domain or subject-area norms with respect to particular types of data. This creates an enormous problem when trying to codify information at the generic policy level. (While there are very few commonly applied definitions, there are moves towards the development of common principles via the UK Draft Concordat on Open Research Data, and of standards via initiatives like the TOP Guidelines.)

Without common definitions, it became apparent that much of the data we were collecting was based on the individual interpretations of the (excellent) team we were working with. Whilst this was a valuable exercise in itself, it would not meet the needs of researchers seeking a SHERPA-style quick reference.

Just over half of the journals we surveyed had a research data policy (65% of the science journals, 40% of the social science journals). Slightly under a third of journals mandated “data sharing” (deposit in a public repository), with 45.8% of science journals and only 10.5% of social science journals doing so. This is a small (but not significant) change since the JoRD survey.

But the overall driving force behind the initiative was to provide researchers with access to clear guidelines on an academic journal’s expectations regarding the deposit of, and access to, supporting data, especially in light of the increasing specificity from funders around data sharing.

So now what? Clearly the time is not right for a policy registry, but it is an idea that we may well want to come back to. With such a variation of practice and definitions, we feel that the best use of what we have learned from the project so far would be to support good practice and work towards a greater standardisation both of definitions and of policy.

This is the way forward that has been agreed by our Expert Advisory Group. We therefore intend to:

  • carry out further consultation with stakeholders to agree what the immediate priorities should be
  • develop and document exemplars
  • develop checklists and vocabularies, and eventually policy templates
  • use a case-study approach to investigate disciplinary barriers to data sharing
  • continue to monitor the possibility of a central, SHERPA-like service for journal data policies. Ideally, standardised policies and the use of templates will make it easier to automate the collection of information – meaning that we can provide a reliable and up-to-date service for researchers.

I’ll be posting updates here as this work progresses.

Parts of this post are glosses of a paper by Linda (primarily) and myself that is currently in press at UKSG Insights.
