# Study Setup
Brace yourself, this will be a lot. We need to make this easier in the future... :(
NOTE: make sure the study folder and everything in it is owned by clevis and the group kimel_data!
- Add a folder in `/archive/data/` with the acronym of the new study
- Inside this, create the following subfolders: `bin`, `data`, `docs`, `logs`, `metadata`, `pipelines`, and `qc` (see the shell sketch at the end of this list)
- Add a `README.md` file directly inside the study folder describing the study, who the PI is, and any other relevant / useful info
- Add any new accounts for the PI and any research assistants/collaborators they're known to have (this can be skipped till later)
- Create a new project on XNAT
  - Go to https://xnat.imaging-genetics.camh.ca and, under 'New' on the menu bar, select 'Project'
  - Make sure to note the Project ID you use, since you'll need it for the settings file later (this should usually be the same as the study acronym / archive folder name)
  - Set the correct investigator, creating a new entry for the PI if they're not already present
  - Set the correct permissions:
    - The default project permission should be 'private'
    - Add 'clevis', the PI, and any employees in the lab who will help manage the study as 'Owners'
    - Add any RAs from other sites as 'Members' (they can update and add data but can't delete it)
- Go to https://edc.camhx.ca/redcap/
  - For the 'Scan Completed' survey, add the new study to the options for study and add any research assistants to the list
- To fill in the settings file (added in the next step) you will need the following:
  - A list of expected scan types for each site in the study
  - Knowledge of what will be in the actual SeriesDescription fields for the dicoms received
  - (CAMH/TONI only) An FTP username and password, as well as the name of the folder(s) associated with the project on the MR FTP server
- In `/archive/code/config` add a file named `$YOURSTUDY_settings.yml`
  - For detailed info on filling this file see here. See the example at the bottom of that page for a template (or copy one from another study) to make life easier.
- Inside `/archive/code/config/tigrlab_config.yml`, in the 'Projects' section, add your study to the list
- You can check how scans are named by running datman's `archive-manifest.py` on a scan and seeing whether the `PatientName` follows the proper convention; if not, request a mapping (or generate one; see "Configuration" below)
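To make the folder setup above concrete, here is a minimal shell sketch. `NEWSTUDY` is a placeholder for the new study's acronym, and you may need an admin (or `sudo`) to run the `chown`:

```bash
# Placeholder acronym; replace with the real study acronym.
STUDY=NEWSTUDY

# Create the study folder and its standard subfolders.
mkdir -p /archive/data/${STUDY}/{bin,data,docs,logs,metadata,pipelines,qc}

# Start the README describing the study, the PI, and any other useful info.
touch /archive/data/${STUDY}/README.md

# Everything in the study folder must be owned by clevis and the group kimel_data.
chown -R clevis:kimel_data /archive/data/${STUDY}
```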
## Configuration
- Add a key-value pair under `Projects` in `/archive/code/config/tigrlab_config.yaml` with this format: `<project>: <project>_settings.yml` (see the sketch at the end of this section)
- Create `/archive/code/config/<project>_settings.yml`, and fill out all project information
  - This involves knowing the study name, description, PI, and scan types (which you obtained above)
  - Look at settings files for other projects for examples of how to format the expected scan types correctly
- Create `/archive/code/config/<project>_management.sh`; you can copy this from another project and just need to change the `STUDYNAME=<project>` line
- Most project directories will be created for you automatically, but you must create `/archive/data/<project>/metadata` yourself
- Within the metadata directory, create a `scans.csv` file. This is a space-delimited file mapping each subject's name in the DICOM header to a name that follows the TIGRLab naming convention. The first line of this file should read: `source_name target_name dicom_PatientName dicom_StudyID`. Each subsequent line should follow this format: `<name of archive file, minus file extension> <subject name, following TIGRLab convention> <PatientName field in DICOM header> <StudyID field in DICOM header>`.
  - If you were provided this mapping by the people responsible for the study, copy and paste it here (editing it to follow the format as necessary)
  - If you were not provided this, a tool was created to generate it automatically. However, it is NOT 100% accurate in all cases (for example, it assumes two sessions with the same PatientName are two different timepoints, with the newest session being "timepoint 2", when they might actually represent a "repeat"). When in doubt, always check with those responsible for the study.
    - The tool is found at `/archive/code/datman/datman/generate_scanslist.py`, with the following usage: `generate_scanslist.py <archive_dir> <study_name> <site_name>`. It will output a `scans.csv` file in the current directory (see the sketch below).
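To pull the configuration-file steps above together, here is a minimal sketch. `NEWSTUDY` and `OLDSTUDY` are placeholders (the new study's acronym and an existing study you're copying boilerplate from); adjust names and paths to your setup:

```bash
# 1) Register the study in tigrlab_config.yaml by adding one line under 'Projects':
#
#      Projects:
#        ...
#        NEWSTUDY: NEWSTUDY_settings.yml

# 2) Start the settings file from an existing study's and edit it to describe the new
#    study (name, description, PI, sites, expected scan types):
cp /archive/code/config/OLDSTUDY_settings.yml /archive/code/config/NEWSTUDY_settings.yml

# 3) Copy a management script and point it at the new study:
cp /archive/code/config/OLDSTUDY_management.sh /archive/code/config/NEWSTUDY_management.sh
sed -i 's/^STUDYNAME=.*/STUDYNAME=NEWSTUDY/' /archive/code/config/NEWSTUDY_management.sh

# 4) Create the metadata directory (most other project directories are made automatically):
mkdir -p /archive/data/NEWSTUDY/metadata
```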
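For `scans.csv` itself, a sketch of what the file might look like and how to draft it automatically. The subject ID, PatientName, StudyID, archive directory, and site name below are invented for illustration; check the real values against your study's naming convention and DICOM headers:

```bash
# Hand-written version: the header line plus one space-delimited entry per session.
# The values in the second line are placeholders, not a real session.
cat > /archive/data/NEWSTUDY/metadata/scans.csv <<'EOF'
source_name target_name dicom_PatientName dicom_StudyID
some_archive_name NEWSTUDY_CMH_0001_01_01 somePatientName someStudyID
EOF

# Alternative: auto-generate a draft. The tool writes scans.csv to the current
# directory, so run it from metadata/. /path/to/archive_dir and CMH are placeholders
# for the real archive directory and site name. Double-check the timepoint vs.
# "repeat" assignments before trusting the output.
cd /archive/data/NEWSTUDY/metadata
/archive/code/datman/datman/generate_scanslist.py /path/to/archive_dir NEWSTUDY CMH
```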
## XNAT
- Create the new study in XNAT, using the same `<project>` name
- Ensure 'clevis' is given ownership of the new study; this is necessary for automatic upload of data from the MR server
## REDCap
- Add study name and RA name(s) to 'Scan Completed' survey
## Dashboard
- To add the new study, run the `add_study_info.py` script (once the settings file exists):
  - `$ module load /archive/code/dashboard`
  - `$ /archive/code/dashboard/add_study_info.py`
## Restart webserver
- In order for session pages to be viewed on the Dashboard, `srv-dashboard` must be restarted (this appears to be a caching issue involving the Dashboard reading an old `tigrlab_config.yaml` file if not restarted)
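How `srv-dashboard` actually gets restarted depends on the local setup, so treat the following as a rough sketch only: it assumes you can ssh to that host, have sudo there, and that the dashboard is served by a systemd service (the service name below is a guess). Check with an admin for the real procedure:

```bash
# The service name 'apache2' is an assumption; use whatever actually serves the dashboard.
ssh -t srv-dashboard 'sudo systemctl restart apache2'
```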
## Getting the data
- Unless you have admin privileges, you will need to wait for the nightly run scripts to fetch the data and then check that everything was set up properly. Come back tomorrow and ensure all data is in the `<project>/data` folder, all QC pages are generated in `<project>/qc`, and all data was uploaded to XNAT. If there were any problems, look at the project-specific logfile in `/archive/logs` to try to diagnose what went wrong.
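A few quick checks for the morning after, as a sketch (`NEWSTUDY` is again a placeholder, and the log file glob is illustrative; look in `/archive/logs` for the actual project-specific logfile):

```bash
ls /archive/data/NEWSTUDY/data           # fetched/converted data should be appearing here
ls /archive/data/NEWSTUDY/qc             # QC pages should be generated here
grep -i error /archive/logs/*NEWSTUDY*   # scan the project's log(s) for problems
```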
Once the data is on our system...
## QC training
- A GitHub account must be created for all RAs associated with the study (if they don't have one yet)
- A Dashboard admin must give the RAs permissions to access the new study
## Gold Standards
- Put the first subject in the study's `metadata/standards` folder
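A sketch of populating the gold standards, assuming they are simply the first good session's DICOMs copied into `metadata/standards` (the subject ID and the location of the session's dicoms below are hypothetical; adjust to wherever the data actually lives):

```bash
mkdir -p /archive/data/NEWSTUDY/metadata/standards
# Subject ID and source path are placeholders for the first subject's dicom folder.
cp -r /archive/data/NEWSTUDY/data/dicom/NEWSTUDY_CMH_0001_01_01 \
      /archive/data/NEWSTUDY/metadata/standards/
```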
The study should now be fully set up. Congratulations!