FAIR dues to the Research Data Alliance

It has been a while since we’ve blogged about the Research Data Alliance (RDA), and as an organisation it has come into its own since its beginnings in 2013. One can count on discovering the international state of the art in a range of data-related topics covered by its interest groups and working groups, which meet at its plenary events held every six months. That is why I attended the 13th RDA Plenary, held in Philadelphia earlier this month, and I was not disappointed.

I arrived Monday morning in time for the second day of a pre-conference sponsored by CODATA on FAIR and Responsible Research Data Management at Drexel University. FAIR is a popular concept amongst research funders for illustrating data management done right: by the time you complete your research project (or shortly after) your data should be Findable, Accessible, Interoperable and Reusable.

Fair enough, but we data repository providers also want to know how to build the ecosystems that will make it super-easy for researchers to make their data FAIR, so we need to talk to each other to compare notes and decide exactly what each letter means in practice.

Image borrowed from OpenAIRE

Amongst the highlights were some tools and resources for researchers or data providers, mentioned by various speakers:

  • The Australian Research Data Commons (ARDC) has created a FAIR self-assessment tool.
  • For those who like stories, the Danish National Archives have created a FAIRytale to help understand the FAIR principles.
  • ARDC with Library Carpentry conducted a sprint that led to a disciplinary smorgasbord called Top Ten Data and Software Things.
  • DataCite offers a Repository Finder tool, built in partnership with re3data.org, to help you find the most appropriate repository in which to deposit your data.
  • Resources for “implementation networks” from the EU-funded project GO FAIR, including training materials under the rubric of GO TRAIN.
  • The geoscience-focused Enabling FAIR Data Project is signing up publishers and repositories to commitment statements, and has a user-friendly FAQ explaining why researchers should care and what they can do.
  • A brand new EU-funded project, FAIRsFAIR (Fostering FAIR Data Practices in Europe), is taking things to the next level, building new networks to certify learners and trainers, researchers and repositories in FAIRdom.

That last project’s ambitions are described in this blog post by Joy Davidson at DCC. Another good blog post I found about the FAIR pre-conference event is by Rebecca Springer at Ithaka S+R. If I get a chance I’ll add another brief post for the main conference.

Robin Rice
Data Librarian & Head of Research Data Support
Library & University Collections


Research Data Workshop Series 2019

Over the spring of 2019 the Research Data Service (RDS) is holding a series of workshops with the aim of gathering feedback and requirements from our researchers on a number of important Research Data topics.

Each workshop will consist of a small number of short presentations from researchers and research support staff who have experience of the topic. These will then be followed by guided discussions so that the RDS can gather your input on the tools we currently provide, the gaps in our services, and how you go about addressing the challenges and issues raised in the talks.
The workshops for 2019 are:

Electronic Notebooks 1
14th March at King’s Buildings (Fully Booked)

DataVault
1200-1400, 10th April at 6301 JCMB, King’s Buildings
Booking Link – https://www.events.ed.ac.uk/index.cfm?event=book&scheduleID=34308
The DataVault was developed to offer UoE staff a long-term retention solution for research data from projects that are reaching completion. Each ‘Vault’ can contain multiple files associated with a research project, which will be securely stored for a defined period, such as ten years. It is designed to fill gaps left by existing research data services such as DataStore (our active data storage platform) and DataShare (our open access online data repository). The service enables you to comply with funder and University requirements to preserve research data for the long term, and to store your data confidently for retrieval at a future date. This workshop is intended to gather the views of researchers and support staff in schools, to explore the utility of the new service and to discuss practicalities around its roll-out and long-term sustainability.

Sensitive Data Challenges and Solutions
1200-1430, 16th April in Seminar Room 2, Chancellor’s Building, BioQuarter
Booking Link – https://www.events.ed.ac.uk/index.cfm?event=book&scheduleID=34321
Researchers face a number of technical, ethical and legal challenges in creating, analysing and managing research data, including pressure to increase transparency and conduct research openly. But for those who have collected or are re-using sensitive or confidential data, these challenges can be particularly taxing. Tools and services can help to alleviate some of the problems of using sensitive data in research. But cloud-based tools are not necessarily trustworthy, and services are not necessarily geared towards highly sensitive data. Those that are may not be very user-friendly or efficient for researchers, and often restrict the types of analysis that can be done. Those attending this workshop will have the opportunity to hear from experienced researchers on related topics.

Electronic Notebooks 2
1200-1430, 9th May at Training & Skills Room, ECCI, Central Area
Booking Link – https://www.events.ed.ac.uk/index.cfm?event=book&scheduleID=34287
Electronic notebooks, both computational and lab-based, are gaining ground as productivity tools for researchers and their collaborators. Electronic notebooks can help facilitate reproducibility, longevity and controlled sharing of information. There are many different notebook options available, both commercial and free. Each application has different features and will have different advantages depending on a researcher’s or lab’s requirements. Jupyter Notebook, RSpace, and Benchling are some of the platforms used at the University, and all will be represented by researchers who use them on a daily basis.

Data, Software, Reproducibility and Open Research
Due to unforeseen circumstances this event has been postponed. We will update with the new event details as soon as they are confirmed.
In this workshop we will examine real-life use cases in which datasets are combined with software and/or notebooks to provide a richer, more reusable and longer-lived record of Edinburgh’s research. We will also discuss user needs and wants, capturing requirements for future development of the University’s central research support infrastructure in line with, for example, the LERU Roadmap for Open Science (against which the Library Research Support team has sought to map its existing and planned provision) and domain-oriented Open Research strategies within the Colleges.

Kerry Miller
Research Data Support Officer
Library & University Collections


DwD2018 – Videos now on Media Hopper

Dealing with Data 2018 was once again a great success in November last year, with over 100 University staff and postgraduate students joining us to hear presentations on topics as diverse as sharing data in clinical trials and embedding sound files in linguistics research papers.

As promised, the videos of each presentation have now been made publicly available on Media Hopper (https://media.ed.ac.uk/channel/Dealing%2BWith%2BData%2BConference/82256222), while the PDFs can be found at https://www.era.lib.ed.ac.uk/handle/1842/25859. You can also read Martin Donnelly’s reflections on the day at http://datablog.is.ed.ac.uk/2018/11/28/dealing-with-data-2018-summary-reflections/.

We hope that these will prove both useful and interesting to all of our colleagues who were unable to attend.

We look forward to seeing you at Dealing with Data 2019.


Dealing With Data 2018: Summary reflections

The annual Dealing With Data conference has become a staple of the University’s data-interest calendar. In this post, Martin Donnelly of the Research Data Service gives his reflections on this year’s event, which was held in the Playfair Library last week.

One of the main goals of open data and Open Science is reproducibility, and our excellent keynote speaker, Dr Emily Sena, highlighted the problem of translating research findings into real-world clinical interventions that can be relied upon to actually help humans. Other challenges were echoed by participants over the course of the day, including the relative scarcity of negative results being reported. This is an effect of policy, and of well-established and probably outdated reward/recognition structures. Emily also gave us a useful slide on obstacles, which I will certainly want to revisit: examples cited included a lack of rigour in grant awards, and a lack of incentives for doing anything different to the status quo. Indeed, Emily described some of what she called the “perverse incentives” associated with scholarship, such as publication, funding and promotion, which can draw researchers’ attention away from the quality of their work and its benefits to society.

However, Emily reminded us that the power to effect change does not just lie in the hands of the funders, governments, and at the highest levels. The journal of which she is Editor-in-Chief (BMJ Open Science) has a policy commitment to publish sound science regardless of positive or negative results, and we all have a part to play in seeking to counter this bias.

A collage of the event speakers, courtesy Robin Rice (CC-BY)

In terms of other challenges, Catriona Keerie talked about the problem of transferring/processing inconsistent file formats between health boards, causing me to wonder whether it was a question of open vs closed formats, and how such a situation might have been averted, e.g. via planning, training (and awareness raising, as Roxanne Guildford noted), adherence to the 5-star Open Data scheme (where the third star is awarded for using open formats), or something else. Emily earlier noted a confusion about which tools are useful – and this is a role for those of us who provide tools, and for people like myself and my colleague, Digital Research Services Lead Facilitator Lisa Otty, who seek to match researchers with the best tools for their needs. Catriona also reminded us that data workflow and governance are iterative processes: we should always be fine-tuning these, and responding to new and changing needs.

Another theme of the first morning session was the question of achieving balances and trade-offs in protecting data while keeping it useful. A question from the floor noted the importance of recording and justifying how these balancing decisions are made. David Perry and Chris Tuck both highlighted the need to strike a balance, for example, between usability/convenience and data security. Chris spoke about dual testing of data: is it anonymous? Is it useful? In many cases it will ideally be both, but that may not always be possible.

This theme of data privacy balanced against openness was taken up in Simon Chapple’s presentation on the Internet of Things. I particularly liked the section on office temperature profiles, which was very relevant to those of us who spend a lot of time in Argyle House where – as in the Playfair Library – ambient conditions can leave something to be desired. I think Simon’s slides used the phrase “Unusual extremes of temperatures in micro-locations.” Many of us know from bitter experience what he meant!

There is of course a spectrum of openness, just as there are grades of abstraction between the thing we are observing or measuring and the data that represents it. Bert Remijsen’s demonstration showed that access to sound recordings, which, compared with transcriptions and phonetic renderings, are much closer to the data source (what Kant would call the thing-in-itself, das Ding an sich, as opposed to the phenomenon, the thing as it appears to an observer), is hugely beneficial to linguistic scholarship. Reducing such layers of separation or removal is both a subsidiary benefit of, and a rationale for, openness.

What it boils down to is the old storytelling adage: “Don’t tell, show.” And as Ros Attenborough pointed out, openness in science isn’t new – it’s just a new term, and a formalisation of something intrinsic to Science: transparency, reproducibility, and scepticism. We support these by providing access to our workings and the evidence behind publications, and by joining these things up – as Ewan McAndrew described, linked data is key (this is the fifth star in the aforementioned 5-star Open Data scheme). Open Science, and all its various constituent parts, supports this goal, which is after all one of the goals of research and of scholarship. The presentations showed that openness is good for Science; our shared challenge now is to make it good for scientists and other kinds of researchers. Because, as Peter Bankhead says, Open Source can be transformative – Open Data and Open Science can be transformative. I fear that we don’t emphasise these opportunities enough, and we should seek to provide compelling evidence for them via real-world examples. Opportunities like the annual Dealing With Data event make a very welcome contribution in this regard.

PDFs of the presentations are now available in the Edinburgh Research Archive (ERA). Videos from the day are published on Media Hopper.


Martin Donnelly
Research Data Support Manager
Library and University Collections
University of Edinburgh
