
Once set up, you may need to re-fork or pull the latest version of the GitLab pipeline to ensure you are up to date. See Updating your GitLab pipeline for how to do this.

Set up a GitLab account

a. If you don’t have a GitLab account, please register at https://gitlab.com/

b. Once you have signed in to GitLab, go to https://gitlab.com/jli755/archivist_insert_workaround

c. Fork the repository on gitlab.com by clicking “Fork” in the top right-hand corner of the window (please see the screenshot below).

d. You now need credentials for the Heroku and Archivist logins (please contact Hayley Mills to obtain the variables).

i. First make sure you are in the right part of GitLab, i.e. in your own account; the URL should look like https://gitlab.com/(your account name)/archivist_insert.



ii. In GitLab, go to Settings → CI/CD → Variables → Expand → Add Variable to add the variables.


(Make sure you tick both boxes: 1. Protect variable and 2. Mask variable.)
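The variables can also be added from the command line via the GitLab project-variables API instead of the web UI. A minimal sketch, assuming a project ID, a personal access token, and an example variable name (the real variable names come from Hayley Mills); `echo` makes this a dry run:

```shell
#!/bin/sh
# Sketch: add a protected, masked CI/CD variable via the GitLab API.
# PROJECT_ID, GITLAB_TOKEN and the variable name/value are placeholders.
PROJECT_ID="12345678"
GITLAB_TOKEN="glpat-example"

add_variable() {
  key="$1"; value="$2"
  # protected=true and masked=true match the two tick boxes in the UI.
  # Drop the leading "echo" to actually send the request.
  echo curl --request POST \
    --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
    "https://gitlab.com/api/v4/projects/${PROJECT_ID}/variables" \
    --form "key=${key}" --form "value=${value}" \
    --form "protected=true" --form "masked=true"
}

add_variable "EXAMPLE_KEY" "example-value"
```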



Upload tables to database via pipeline


a. Tables – you can use CSV or TSV table formats, but not a mixture of them. (If you are using TSV tables, please also see step b below.)

i. Copy your tables into "archivist_tables"

1. Open the “archivist_tables” folder in Gitlab

2. Click “Upload file” to copy the files to the folder. (Files must be uploaded individually, so repeat the process for each of your files.)

3. By default, the pipeline runs automatically every time you add or update one of these files. You should only run the pipeline once all the files have been added, so to stop it running before you are ready, add [skip ci] to the commit message.
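If you are committing from the command line rather than the web UI, the same [skip ci] marker goes in the commit message. A sketch using a throwaway repository (the file name sequence.csv is just an example):

```shell
#!/bin/sh
# Sketch: commit a table with [skip ci] so the pipeline does not start yet.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Example User"
mkdir -p archivist_tables
printf 'id,label\n1,Example\n' > archivist_tables/sequence.csv
git add archivist_tables/sequence.csv
# [skip ci] anywhere in the message tells GitLab not to run the pipeline.
git commit -q -m "Add sequence.csv [skip ci]"
git log -1 --pretty=%s
```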


b. Using TSV tables instead of CSV tables

i. You need to change the delimiter in the db_temp.sql file.

ii. You may need to specify the encoding of the file, for example:

\COPY temp_sequence FROM 'archivist_tables/sequence.tsv' DELIMITER E'\t' CSV HEADER encoding 'windows-1251';
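Alternatively, you may find it simpler to re-encode the file to UTF-8 before uploading, so no encoding clause is needed in db_temp.sql. A sketch using iconv (file names are examples; the Cyrillic sample data is only there to make the conversion visible):

```shell
#!/bin/sh
# Sketch: convert a windows-1251 encoded TSV to UTF-8 before upload.
set -e
cd "$(mktemp -d)"
# Build a small windows-1251 sample file for demonstration.
printf 'id\tметка\n1\tпример\n' | iconv -f utf-8 -t windows-1251 > sequence.tsv
# The actual conversion step:
iconv -f windows-1251 -t utf-8 sequence.tsv > sequence_utf8.tsv
head -n 1 sequence_utf8.tsv
```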

iii. If the pipeline passes, great. If not, click the cross marks (1 & 2) in the Stages column to see what went wrong (please see the screenshot below).



Correct errors in the CSV files

a. If a CSV file has formatting problems, it will not pass stage 1 (run_tests). Make sure all the tables comply with the correct format; see Tables structure.
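One quick local check you can run before uploading is to verify that every row has the same number of fields as the header. This is only a rough sketch; the pipeline's run_tests stage is the authoritative validation, and the check below assumes no commas inside quoted fields (the file name and contents are examples):

```shell
#!/bin/sh
# Sketch: flag CSV rows whose field count differs from the header's.
set -e
cd "$(mktemp -d)"
# Example file with a deliberately broken third line.
printf 'id,label,parent\n1,Example,0\n2,Broken\n' > sequence.csv
awk -F',' 'NR==1 {n=NF; next}
           NF!=n {printf "line %d has %d fields, expected %d\n", NR, NF, n}' sequence.csv
```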


b. Extra spaces in an uploaded table cause issues. Delete the extra spaces and run the pipeline again to fix this error.
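Trailing spaces can also be stripped locally before re-uploading. A sketch using GNU sed (on macOS/BSD the in-place flag is `sed -i ''`); the file name and contents are examples:

```shell
#!/bin/sh
# Sketch: strip trailing spaces from every line of a CSV.
set -e
cd "$(mktemp -d)"
# Example file with trailing spaces on both lines.
printf 'id,label  \n1,Example \n' > sequence.csv
sed -i 's/ *$//' sequence.csv   # GNU sed; use sed -i '' on macOS
cat sequence.csv
```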


Download the XML file and import it into Archivist

a. If the pipeline passes, the XML file is available as an "artifact" (a zip file containing the generated XML) for 10 days. The XML can be viewed temporarily in temp Archivist, and needs to be loaded into Archivist via import. If you have permission to do so, add it to Archivist yourself; if not, please ask Hayley Mills to import it.
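The artifact zip can also be fetched from the command line via the GitLab job-artifacts API instead of the web UI. A sketch in which the project ID, token, branch and job name are all placeholders (the pipeline's real job name will differ); `echo` makes this a dry run:

```shell
#!/bin/sh
# Sketch: download a pipeline's artifact zip via the GitLab API.
# All values below are placeholders; drop the leading "echo" to
# actually download.
PROJECT_ID="12345678"
GITLAB_TOKEN="glpat-example"
REF="master"
JOB="generate_xml"   # placeholder job name

echo curl --location --output artifacts.zip \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  "https://gitlab.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${REF}/download?job=${JOB}"
```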



