DataVault is now live

After extended development, the Research Data Service’s DataVault system is now operational, offering principal investigators and their funders a long-term retention solution for important datasets.

DataVault is a companion service to DataShare, the institutional digital repository for researchers to openly license and share datasets and related outputs via the Web. DataVault comprises an online interface connected to the university’s data centre infrastructure and cloud storage.

Each research project can store data in a single vault made up of any number of deposits. DataVault is currently able to accept individual deposits (groups of files) of up to 2 TB each; this will increase over time as project development continues.

DataVault sprint meeting before launch

Immutable

DataVault is designed for long-term retention of research data, to meet funder requirements and ensure future access to high value datasets. It meets digital preservation requirements by storing three copies in different locations (two on tape, one in the cloud) with integrity checking built-in, so that the data owner can retrieve their data with confidence until the end of the retention period (typically ten years).

Secure

The DataVault interface guides users in how to deposit personal and sensitive data, using anonymisation or pseudonymisation techniques whenever possible, as prescribed by the University’s Data Protection Officer (DPO). Because all data are encrypted before deposit, they are protected from unauthorised disclosure. Only the data owner or their nominated delegate may retrieve data during the retention period. Any decisions about allowing access to others are made by the data owner and are carried out outside the DataVault system, once the data have been retrieved into a private area on DataStore and decrypted.
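To illustrate what “encrypted before deposit” means in practice, here is a minimal sketch of symmetric file encryption using AES-256-GCM via the Python cryptography package. It is an illustration only, not DataVault’s actual implementation: the cipher choice, the file handling and (crucially) the key management shown here are assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_deposit(path: str) -> bytes:
    """Encrypt a file before it leaves the researcher's machine.

    Returns the key, which must be stored safely and separately
    from the ciphertext; without it, the deposit is unreadable.
    """
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for GCM
    with open(path, "rb") as f:
        plaintext = f.read()  # fine for a sketch; stream for large files
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    with open(path + ".enc", "wb") as f:
        f.write(nonce + ciphertext)  # keep the nonce with the ciphertext
    return key
```

Whatever the exact scheme, the effect is the same: without the key, the stored copies are unreadable, which is what protects them from unauthorised disclosure.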

Discoverable

Although DataVault offers a form of closed archive, the design encourages good research data management practice by requiring a metadata record for each vault in Pure. These records are discoverable on the Web, and linked to the respective data creators, projects and publications.

In exchange for creating this high-level public metadata record, the Principal Investigator benefits from the assignment of a unique digital object identifier (DOI) which can be used to cite the data in publications.

The open nature of the metadata means that any reader may make a request to access the dataset. The data owner decides who may have access and under what conditions. Advice can be provided by the Research Data Support team and the DPO.

University data assets

DataVault’s workflow takes into account the likelihood that the original data owner will have left the university by the time the retention period comes to an end. Each vault will be reviewed by representatives of the university in schools, colleges or the Library, acting as the data owner, who will decide on disposal or on further retention and curation. If kept, the vault contents become university data assets.

Plan ahead for data archiving

The Research Data Support team encourages researchers to plan ahead for data archiving, right from the earliest conception stages of a project, so that appropriate costs are included in bids and the necessary steps can be taken to prepare data for either open or closed long-term archiving.

The team can be contacted through the IS Helpline and offers assistance with writing data management plans and making archival decisions. See our service website and contact information at https://www.ed.ac.uk/is/research-data-service or go straight to the DataVault page to learn more about it, get instructions for use, or look up charges. An introductory demo video is available at https://media.ed.ac.uk/media/Getting+started+with+the+DataVault/1_h4r4glf7.

Robin Rice
Data Librarian and Head, Research Data Support
Library & University Collections

Dealing With Data 2018: Summary reflections

The annual Dealing With Data conference has become a staple of the University’s data-interest calendar. In this post, Martin Donnelly of the Research Data Service gives his reflections on this year’s event, which was held in the Playfair Library last week.

One of the main goals of open data and Open Science is that of reproducibility, and our excellent keynote speaker, Dr Emily Sena, highlighted the problem of translating research findings into real-world clinical interventions which can be relied upon to actually help humans. Further challenges were echoed by participants over the course of the day, including the relative scarcity of negative results being reported. This is an effect of policy, and of well-established and probably outdated reward and recognition structures. Emily also gave us a useful slide on obstacles, which I will certainly want to revisit: examples cited included a lack of rigour in grant awards, and a lack of incentives for doing anything different from the status quo. Indeed, Emily described some of what she called the “perverse incentives” associated with scholarship, such as publication, funding and promotion, which can draw researchers’ attention away from the quality of their work and its benefits to society.

However, Emily reminded us that the power to effect change does not lie only in the hands of funders, governments and those at the highest levels. The journal of which she is Editor-in-Chief (BMJ Open Science) has a policy commitment to publish sound science regardless of positive or negative results, and we all have a part to play in seeking to counter this publication bias.

A collage of the event speakers, courtesy Robin Rice (CC-BY)

In terms of other challenges, Catriona Keerie talked about the problem of transferring and processing inconsistent file formats between health boards, causing me to wonder whether it was a question of open versus closed formats, and how such a situation might have been averted, e.g. via planning, training (and awareness raising, as Roxanne Guildford noted), adherence to the 5-star Open Data scheme (where the third star is awarded for using open formats), or something else. Emily earlier noted confusion about which tools are useful – and this is a role for those of us who provide tools, and for people like myself and my colleague, Digital Research Services Lead Facilitator Lisa Otty, who seek to match researchers with the best tools for their needs. Catriona also reminded us that data workflow and governance are iterative processes: we should always be fine-tuning these, and responding to new and changing needs.

Another theme of the first morning session was the question of achieving balances and trade-offs in protecting data while keeping it useful. A question from the floor noted the importance of recording and justifying how these balancing decisions are made. David Perry and Chris Tuck both highlighted the need to strike a balance, for example, between usability and convenience on the one hand and data security on the other. Chris spoke about dual testing of data: is it anonymous? Is it useful? Ideally it will be both, but being both may not always be possible.

This theme of data privacy balanced against openness was taken up in Simon Chapple’s presentation on the Internet of Things. I particularly liked the section on office temperature profiles, which was very relevant to those of us who spend a lot of time in Argyle House where – as in the Playfair Library – ambient conditions can leave something to be desired. I think Simon’s slides used the phrase “Unusual extremes of temperatures in micro-locations.” Many of us know from bitter experience what he meant!

There is of course a spectrum of openness, just as there are grades of abstraction between the thing we are observing or measuring and the data that represent it. Bert Remijsen’s demonstration showed that access to sound recordings – which, compared with transcriptions and phonetic renderings, are much closer to the data source (what Kant would call the thing-in-itself, das Ding an sich, as opposed to the phenomenon, the thing as it appears to an observer) – is hugely beneficial to linguistic scholarship. Reducing such layers of separation or removal is both a subsidiary benefit of, and a rationale for, openness.

What it boils down to is the old storytelling adage: “Don’t tell, show.” And as Ros Attenborough pointed out, openness in science isn’t new – it’s just a new term, and a formalisation of something intrinsic to science: transparency, reproducibility, and scepticism. We achieve it by providing access to our workings and the evidence behind publications, and by joining these things up – as Ewan McAndrew described, linked data is key (this is the fifth star in the aforementioned 5-star Open Data scheme). Open Science, and all its various constituent parts, supports this goal, which is after all one of the goals of research and of scholarship. The presentations showed that openness is good for science; our shared challenge now is to make it good for scientists and other kinds of researchers. Because, as Peter Bankhead says, Open Source can be transformative – and so can Open Data and Open Science. I fear that we don’t emphasise these opportunities enough, and we should seek to provide compelling evidence for them via real-world examples. Opportunities like the annual Dealing With Data event make a very welcome contribution in this regard.

PDFs of the presentations are now available in the Edinburgh Research Archive (ERA). Videos from the day will be published on MediaHopper in the coming weeks.

Martin Donnelly
Research Data Support Manager
Library and University Collections
University of Edinburgh

“Archiving Your Data” – new videos from the Research Data Service

In three new videos released today, researchers from the University of Edinburgh talk about why and how they archive their research data, and the ways in which they make their data openly available using the support, tools and resources provided by the University’s Research Data Service.

Professor Richard Baldock from the MRC Human Genetics Unit explains how he’s been able to preserve important research data relating to developmental biology – and make it available for the long term using Edinburgh DataShare – in a way that was not possible by other means, owing to the large volume of histology data produced.

Dr Marc Metzger from the School of GeoSciences tells how he saves himself time by making his climate mapping research data openly available so that others can download it for themselves, rather than him having to send out copies in response to requests. This approach represents best practice – making the data openly available is also more convenient for users, removing a potential barrier to the re-use of the data.

Professor Miles Glendinning from Edinburgh College of Art talks about how his architectural photographs of social housing are becoming more discoverable as a result of being shared on Edinburgh DataShare. And Robin Rice, the University’s Data Librarian, discusses the difference between the open (DataShare) and restricted (DataVault) archiving options provided by the Research Data Service.

For more details about Edinburgh’s Research Data Service, including the DataShare and DataVault systems, see:

https://www.ed.ac.uk/is/research-data-service

Pauline Ward
Research Data Service Assistant
Library and University Collections
University of Edinburgh

DataShare 3.0: The ‘Download Release’ means deposits up to 100 GB

With the DataShare 3.0 release, completed on 6 October 2017, the data repository can manage data items of up to 100 GB. This means a single dataset of up to 100 GB can be cited with a single DOI, viewed at a single URL, and downloaded through the browser with a single click of our big red “Download all files” button. We’re not saying the system cannot handle datasets larger than this, but 100 GB is what we’ve tested for and can offer with confidence. This release connects the DSpace asset store to our managed filestore (DataStore), which is what makes this milestone possible.

How to deposit up to 100 GB

In practice, what this means for users is:

– You can still upload up to 20 GB of data files as part of a single deposit via our web submission form.

– For sets of files over 20 GB, depositors may contact the Research Data Service team at data-support@ed.ac.uk to arrange a batch import. The key improvement is that all the files can now form a single deposit, displayed together on one page with their descriptive metadata, rather than being split into five separate deposits.

Users of DataShare can now also benefit from MD5 integrity checking

The MD5 checksum of every file in DataShare is now displayed on the Full Item view, including for historic deposits. This allows users downloading files to check their integrity.

For example, suppose I download Professor Richard Ribchester’s fluorescence microscopy of the neuromuscular junction from http://datashare.is.ed.ac.uk/handle/10283/2749. N.B. the “Download all files” button works differently in this release: one visible difference is that the zip file it downloads is now named with the two numbers from the deposit’s handle identifier, separated by an underscore rather than a forward slash. So I’ve downloaded the file “DS_10283_2749.zip”.

I want to ensure there was no glitch in the download – I want to know the file I’ve downloaded is identical to the one in the repository. So, I do the following:

  • Click on “Show full item record”.
  • Scroll down to the red button labelled “Download all files”, where I see “zip file MD5 Checksum: a77048c58a46347499827ce6fe855127” (see screenshot). I copy the checksum (highlighted in yellow).

    Screenshot: DataShare displays the MD5 checksum hash

  • On my PC, I generate the MD5 checksum hash of the downloaded copy, and then check that it matches the hash on DataShare. There are a number of free tools available for this task: I could use the Windows command line, or an MD5 utility such as the free “MD5 and SHA Checksum Utility” (a scripted alternative is sketched below). In the case of the Checksum Utility, I do this as follows:
    • I paste the hash I copied from DataShare into the desktop utility (ignoring the fact that the program displays the checksum hashes in upper case: MD5 hashes are hexadecimal, so case does not matter).
    • I click the “Verify” button.

In this case they are identical – I have a match. I’ve confirmed the integrity of the file I downloaded.

Screenshot: the MD5 checksum hashes match each other.
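For anyone who prefers to script this check rather than use a desktop utility, here is a minimal Python sketch of the same verification, reusing the file name and the published checksum from the example above; any MD5 tool will give the same answer.

```python
import hashlib

def md5_of_file(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks
    so that multi-gigabyte downloads don't exhaust memory."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# The zip is named from the handle 10283/2749, with the slash
# replaced by an underscore, as described above.
local_hash = md5_of_file("DS_10283_2749.zip")

# Checksum copied from the Full Item view on DataShare.
published_hash = "a77048c58a46347499827ce6fe855127"

# Compare case-insensitively: MD5 hashes are hexadecimal, so
# "A77048..." and "a77048..." are the same value.
if local_hash.lower() == published_hash.lower():
    print("Match - the downloaded file is intact.")
else:
    print("Mismatch - try downloading the file again.")
```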

More confidence in request-a-copy for embargoed files

Another improvement we’ve made is to give depositors confidence in the request-a-copy feature. If the files in your deposit are under temporary embargo, they will not be available for users to download directly. However, users can send you a request for the files through DataShare, which you’ll receive via email, as described in an earlier blogpost. If you then agree to the request using the form and the “Send” button in DataShare, the system will attempt to email the files to the user. However, as we all know, some files are too large for email servers.

If the email server refuses to send the message because the attachment is too large, DataShare 3.0 will immediately display an error in the browser saying “File too large”, allowing you to make alternative arrangements to get those files to the user. Otherwise, the system moves on to offer you the chance to change the permissions on the file to open access. So if you see no error after clicking “Send”, you can be confident the files have been sent successfully.
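As a rough sketch of the kind of failure handling described above (not DataShare’s actual code; the server name, addresses and size-limit behaviour are assumptions), an oversized attachment typically comes back from the mail server as an SMTP error that the application can catch and report:

```python
import smtplib
from email.message import EmailMessage

def email_requested_files(zip_path: str, requester: str) -> None:
    """Email requested files to a user, reporting oversized
    attachments instead of failing silently."""
    msg = EmailMessage()
    msg["Subject"] = "Your requested DataShare files"
    msg["From"] = "data-support@ed.ac.uk"  # address from the post above
    msg["To"] = requester
    with open(zip_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="zip", filename=zip_path)
    try:
        with smtplib.SMTP("localhost") as smtp:  # assumed mail relay
            smtp.send_message(msg)
    except smtplib.SMTPResponseException as exc:
        # Mail servers commonly reject oversized messages with a
        # 552 reply; surface it so the depositor can arrange an
        # alternative way to transfer the files.
        if exc.smtp_code == 552:
            print("File too large")
        else:
            raise
```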

Pauline Ward, Research Data Service Assistant
EDINA and Data Library
