
DDI is a flexible standard, and different users are free to implement it in slightly different ways. This is a strength, but when moving between implementations some adjustments need to be made.

Data files should meet certain criteria so that SledgeHammer produces a standard output, which is then lightly edited to provide a consistent structure for ingest into Colectica Repository.

Variable metadata workflow

Data Files

Studies should be encouraged to generate data files that meet the standards recommended by the UK Data Service; a quick automated check along these lines is sketched after the list:

  • use meaningful and self-explanatory variable names, codes and abbreviations
  • variable and value labels must be clear and consistent, avoiding truncation of variable and value labels
  • non-compliant characters, such as &, @ and <>, should be removed
  • ensure no repetition of variables, especially redundancy in derived variables
  • ensure consistent treatment and labelling of missing values
  • extraneous information, such as Document entries, should be stripped out

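As an illustration only, the sketch below uses the pyreadstat package (an assumption; any SPSS reader could be substituted) to scan a file's variable and value labels for missing labels and the non-compliant characters listed above. The file name is the example file used later on this page.

# Illustrative check only: flag missing labels and non-compliant characters.
# Assumes the pyreadstat package is installed; the file name is the example used below.
import pyreadstat

BAD_CHARS = set("&@<>")

df, meta = pyreadstat.read_sav("bcs_1975_masc.sav", metadata_only=True)

# Variable names and labels.
for name, label in zip(meta.column_names, meta.column_labels):
    if not label or not label.strip():
        print(f"{name}: missing variable label")
    elif BAD_CHARS & set(label):
        print(f"{name}: variable label contains non-compliant characters")

# Value labels.
for name, values in meta.variable_value_labels.items():
    for code, vlabel in values.items():
        if BAD_CHARS & set(str(vlabel)):
            print(f"{name} [{code}]: value label contains non-compliant characters")
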
The workflow expects a simple mapping of dataset to questionnaire, so the name of the data file should be consistent with that of the questionnaire from which it is collected.

Data files should ideally be in SPSS format. Some guidance on SPSS file preparation is given below.

SPSS File Preparation

SPSS files hold a lot of hidden information; SledgeHammer will try to use this, which can lead to issues when outputting the DDI-L XML. We recommend syntax like the following to get rid of this extraneous information: it replaces the file label (often the location of the original file) with the bundle name and drops any documents:

* Open the original data file.
get file="G:\DB\closer_data\bcs70\bcs_1975\bcs_1975_masc.sav".
* Replace the existing file label with the bundle name.
FILE LABEL "bcs_75_msc".
* Remove any document text stored in the file.
DROP DOCUMENTS.
EXECUTE.
* Display the file's dictionary information.
sysfile info file="G:\DB\closer_data\bcs70\bcs_1975\bcs_1975_masc.sav".
* Save the cleaned file back over the original.
save outfile="G:\DB\closer_data\bcs70\bcs_1975\bcs_1975_masc.sav".

Use of SledgeHammer

SledgeHammer is a product from Metadata Technology North America (MTNA) that allows the extraction of metadata from a wide range of data formats. Although it can be run interactively, the project uses batch files to allow consistent generation of output.

The project uses a restricted set of its command-line options:

Command   Examples                     Explanation
-ag       uk.cls.mcs, uk.alspac        Agency
-rename   alspac_00_ayc, nshd_46_tcs   The name of the bundle with which the data is associated
-ddi      Always 3.2-RP                DDI version; this should not be changed
-ddipd    Always proprietary           Only output the proprietary format, not ASCII
-har      No options                   Creates unified code lists, i.e. a single Yes/No
-ddilang  Always en-GB                 This should stay as en-GB, the default setting we will be using
-ddiref   Always URN                   Internal DDI URN definition
-ddiurn   Always canonical             Canonical; this should not be changed
-pretty   No options                   Pretty-prints the XML so it is readable if inspected by hand
-stats    min, max, valid, invalid     Statistics generated per variable; stddev and freq are optional
-opt      Always full                  Optimised output
-scan     No options                   Scans the data file and reports the number of cases and variables

../bcs70/bcs_1970.sav                  Name and path of the input data file; this is always the last line

Batch File

Each dataset should have a batch or command file which calls sledgehammer-cl.bat and lists the options above. An example is:

sledgehammer-cl.bat ^
-ag uk.cls.bcs70 ^
-rename bcs_75_mcs ^
-ddi 3.2-RP ^
-ddipd proprietary ^
-har ^
-ddilang en-GB ^
-ddiref urn ^
-ddiurn canonical ^
-pretty ^
-opt full ^
-scan ^
-stats max,min,mean,mode,valid,invalid,freq,stdev ^
../bcs70/bcs_1975/bcs_1975_masc.sav
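
Since most of these options are fixed, the per-dataset batch files can also be generated rather than written by hand. The following is a minimal sketch, assuming a hypothetical mapping of bundle names to agencies and data files; it is not part of the project's tooling.

# Sketch only: generate one SledgeHammer batch file per dataset.
# The dataset mapping below is illustrative, not real project configuration.
datasets = {
    "bcs_75_mcs": ("uk.cls.bcs70", "../bcs70/bcs_1975/bcs_1975_masc.sav"),
}

TEMPLATE = """sledgehammer-cl.bat ^
-ag {agency} ^
-rename {bundle} ^
-ddi 3.2-RP ^
-ddipd proprietary ^
-har ^
-ddilang en-GB ^
-ddiref urn ^
-ddiurn canonical ^
-pretty ^
-opt full ^
-scan ^
-stats max,min,mean,mode,valid,invalid,freq,stdev ^
{datafile}
"""

for bundle, (agency, datafile) in datasets.items():
    with open(bundle + ".bat", "w") as out:
        out.write(TEMPLATE.format(agency=agency, bundle=bundle, datafile=datafile))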

Metadata Edits

For display purposes and for ease of navigation and ingest, a consistent set of names should be applied to the SledgeHammer output prior to ingest, through a series of edit scripts. These are written in Python and, if they cannot be run at the study, can be run at CLOSER prior to ingest.

Edit script   Explanation
fandr.py      Inserts <r:String> where absent from the output
fandr2.py     Names the DDI Instance
fandr3.py     Names the Physical Instance
fandr4.py     Names the Logical Product
fandr5.py     Names the Code List Scheme
fandr6.py     Names the Data Product
fandr7.py     Adds the dataset URI and whether it is public
fandr8.py     Adds Title and Alternate Title to the DDI Instance
fandr9.py     Corrects Valid to be ValidCases
fandr10.py    Corrects Invalid to be InvalidCases
fandr11.py    Adds naming to the DataRelationship
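
The scripts themselves are not reproduced here. In spirit they are simple find-and-replace passes over the SledgeHammer output; the sketch below shows the general shape of such a pass (the exact text being substituted is an assumption, not the real fandr9.py).

# Sketch of a fandr-style pass: rename "Valid" summary statistics to "ValidCases".
# The file pattern matches the output naming described under "Outputs" below;
# the substituted text is an assumption and will differ in the real scripts.
import glob

for path in glob.glob("*.ddi32.rp.xml"):
    with open(path, encoding="utf-8") as f:
        xml = f.read()
    xml = xml.replace(">Valid<", ">ValidCases<")
    with open(path, "w", encoding="utf-8") as f:
        f.write(xml)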

The scripts use a tab-delimited file called rename_list.txt which includes the following:

Short Name - the name of the metadata bundle with which the dataset is associated

Long Name - the name you want to display as a human-readable description

DOI - if available, this allows the user to navigate to the DOI and the relevant citation. If a DOI is not available, a website address where the data can be accessed can be used instead.

Public - 1 is to be used when a website address or DOI has been provided.

An example of this file is shown below. 

Short name     Long Name                          DOI                                        Public
mcs_03_na      MCS2 Neighbourhood Assessment      http://dx.doi.org/10.5255/UKDA-SN-5350-3   1
mcs4_teacher   MCS4 Teacher Survey                http://dx.doi.org/10.5255/UKDA-SN-6848-1   1
mcs5_sc        MCS5 Child Paper Self-Completion   http://dx.doi.org/10.5255/UKDA-SN-7464-2   1
mcs5_teacher   MCS5 Teacher Survey                http://dx.doi.org/10.5255/UKDA-SN-7464-2   1
ncds8_sc       NCDS8 Paper Self-Completion        http://dx.doi.org/10.5255/UKDA-SN-6137-2   1
pms            Perinatal Mortality Study          http://dx.doi.org/10.5255/UKDA-SN-5565-2   1
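
A minimal sketch of reading this file, assuming one tab-delimited record per line in the column order above (and no header row), might be:

# Sketch only: read rename_list.txt as described above.
# Assumes no header row; column order is short name, long name, DOI, public flag.
import csv

with open("rename_list.txt", encoding="utf-8") as f:
    for short_name, long_name, doi, public in csv.reader(f, delimiter="\t"):
        print(short_name, long_name, doi, public)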

Control File

A control file can be used to run the individual batch files in sequence and then apply the edit scripts across all of the resulting files:

call pms.bat
call ncds8_sc.bat
call mcs_03_na.bat
call bcs_1970.bat
call mcs4_teacher.bat
call mcs5_teacher.bat
call mcs5_sc.bat
call bcs_75_mcs.bat
python g://db//bin//fandr.py
python g://db//bin//fandr2.py
python g://db//bin//fandr3.py
python g://db//bin//fandr4.py
python g://db//bin//fandr5.py
python g://db//bin//fandr6.py
python g://db//bin//fandr7.py
python g://db//bin//fandr8.py
python g://db//bin//fandr9.py
python g://db//bin//fandr10.py

Outputs

For each dataset a DDI 3.2 file called [shortname].ddi32.rp.xml will be generated.

Checking

If the edits have been run, the file can be imported into Colectica Designer to check that it is well formed. Alternatively, you can check that DDI-Flavour has been run by confirming that the XML has the new title at the beginning of the file.
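
If Colectica Designer is not available, a quick well-formedness check can be run with a standard XML parser, for example:

# Quick well-formedness check of the edited output files.
import glob
import xml.etree.ElementTree as ET

for path in glob.glob("*.ddi32.rp.xml"):
    try:
        ET.parse(path)
        print(path + ": well formed")
    except ET.ParseError as err:
        print(path + ": " + str(err))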


Please see the step-by-step guide for other software requirements and example files.