Enter Metadata into Archivist
There are multiple ways to enter questionnaire metadata into Archivist: Manual Entry and Automated Entry (including Parsed Entry, Tagged Documents, Copies, and Tables). Each method is summarised below. Each requires a different approach to entering and editing questionnaire metadata in Archivist, and each is explained in detail on its own page.
The Manual Entry method is the most common questionnaire metadata entry method at CLOSER, and it is also the most hands-on. It involves entering the questionnaire metadata into Archivist from scratch: creating all the elements of the questionnaire (code lists, response domains, question items/constructs, sequences, conditions, loops, and statements) and positioning them in the correct order.
In some instances, a study will provide a version of a questionnaire in a structured format (e.g. XML, HTML). These questionnaires can be processed using GitLab to produce tables, which are then loaded directly into Archivist. This speeds up the questionnaire metadata entry process and reduces human error, as much less information is entered manually. As with manually entered questionnaires, parsed questionnaires should be checked to ensure that all content is included and in the right order, and to identify and correct errors. Particular attention should be paid to the quirks of each parser.
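As an illustration of this kind of pipeline only, the sketch below flattens a structured XML questionnaire into a table of elements. The tag names ("question", "label", "response") and the output columns are hypothetical, invented for this example; they are not the schema of any real study export or CLOSER parser.

```python
# Minimal sketch: flatten a structured (XML) questionnaire into a CSV table.
# The element and column names here are illustrative assumptions, not the
# layout of any actual study file or CLOSER tool.
import csv
import xml.etree.ElementTree as ET

def questionnaire_to_rows(xml_path):
    """Yield one (order, type, name, text, response) row per question element."""
    tree = ET.parse(xml_path)
    for order, question in enumerate(tree.iter("question"), start=1):
        name = question.get("name", "")
        label = question.findtext("label", default="")
        response = question.findtext("response", default="")
        yield [order, "QuestionItem", name, label, response]

def write_table(xml_path, csv_path):
    """Write the flattened elements to a CSV file for downstream loading."""
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order", "type", "name", "text", "response"])
        writer.writerows(questionnaire_to_rows(xml_path))
```

Because the source is already structured, the element order and names come straight from the file, which is what makes this route faster and less error-prone than manual entry.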
The most recently developed method of entering questionnaire metadata into Archivist is the tagging method, which was initially devised for entering large CAPI questionnaires. It involves tagging the questionnaire elements in an unstructured document (e.g. PDF) before using a parser to generate the XML required to load the questionnaire into Archivist. This method differs from the other parsing methods in that the bulk of the work occurs before the questionnaire is parsed rather than after.
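To make the idea concrete, the sketch below shows how tagged lines in a plain-text export might be collected into structured records before any XML is generated. The bracketed tag syntax ("[[QI:...]]" and so on) is invented for illustration; the real tagging conventions are defined by CLOSER's tagging guidelines and its parser.

```python
# Minimal sketch: gather tagged lines from an unstructured document into
# (type, name, text) records. The "[[TYPE:name]]" syntax is a hypothetical
# stand-in for the actual CLOSER tagging conventions.
import re

TAG_RE = re.compile(r"^\[\[(?P<type>QI|CC|SEQ):(?P<name>[^\]]+)\]\]\s*(?P<text>.*)$")

def parse_tagged(lines):
    """Collect one record per tagged line, skipping untagged text."""
    elements = []
    for line in lines:
        match = TAG_RE.match(line.strip())
        if match:
            elements.append((match["type"], match["name"], match["text"]))
    return elements

tagged = [
    "[[SEQ:Household]] Household grid",
    "[[QI:HAge]] How old are you?",
    "[[CC:CAgeCheck]] If HAge >= 16",
]
print(parse_tagged(tagged))
# [('SEQ', 'Household', 'Household grid'),
#  ('QI', 'HAge', 'How old are you?'),
#  ('CC', 'CAgeCheck', 'If HAge >= 16')]
```

The effort sits in applying the tags consistently by hand; once the document is tagged, the parsing step itself is mechanical, which is why the bulk of the work happens before parsing rather than after.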
Sometimes questionnaires are very similar, or even identical, from one wave of data collection to another. In these cases, we can make a copy of a verified questionnaire and then make targeted changes where differences have been identified between questionnaires.
An intermediate solution was used temporarily to generate the tables used for archivist_insert. These tables were created manually, either by copying PDFs into Excel and writing formulas to pull out the metadata and add the appropriate tags, or by reformatting the CSV output from REDCap. This approach has since been replaced by the tagging method and the REDCap parser respectively, both of which produce the tables needed for archivist_insert.
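For the REDCap route, the sketch below shows the kind of reshaping involved: reading a REDCap data dictionary CSV and writing out a simpler element table. The input column names follow the standard REDCap data dictionary export but should be checked against the actual file, and the output columns are hypothetical, not the table layout expected by archivist_insert.

```python
# Minimal sketch: reshape a REDCap data dictionary CSV into a simpler
# element table. Input columns assume the standard REDCap export; output
# columns are illustrative only.
import csv

def reformat_redcap(in_path, out_path):
    """Copy selected data dictionary columns into a flat element table."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["name", "type", "label", "choices"])
        for row in reader:
            writer.writerow([
                row["Variable / Field Name"],
                row["Field Type"],
                row["Field Label"],
                row.get("Choices, Calculations, OR Slider Labels", ""),
            ])
```

Scripting this reshaping, rather than doing it by hand in Excel, is essentially what the REDCap parser now automates.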