Automated Drift Correction GUI
User-friendly drift-correction automation built around the motion-correction software originally implemented and described in http://www.nature.com/nmeth/journal/v10/n6/full/nmeth.2472.html. Although the originally released software from Yifan Cheng's lab contains a graphical user interface, we implemented automation software that is optimized for our hardware setup, consisting of an FEI Titan Krios and a Gatan K2 Summit.
We have successfully installed the software on various Linux systems. Currently no other operating system is supported.
- Fast multicore processor, e.g. Intel Core i7-5820K
- At least 16 GB of fast RAM, better 32 GB, e.g. DDR4 (2133 MHz)
- High-end NVIDIA GPU with at least 2 GB of RAM, e.g. NVIDIA GeForce GTX 780 Ti
- Small but fast SSD used for caching
- 10GbE point-to-point connection from the K2 computer to the drift-corrector
- Enough RAM to create a RAM disk for file conversion
- You can use your preferred Linux distribution; Windows and OS X are not supported. We successfully tested the software on the most recent Ubuntu and Fedora distributions
- Install the most recent NVIDIA graphics drivers
For our setup we installed a point-to-point connection between the K2 computer and the drift-corrector. We added a 10GbE card to the drift-corrector and made use of the spare 10GbE plug on the K2 computer. Our users store raw data on the fast Y-partition of the Gatan computer. This partition is exported to the local network, such that the drift-corrector can monitor the raw data directory directly. Whenever a new image is added, it is copied to the drift-corrector (via the fast connection), and later the drift-corrected stack and average are moved automatically from the drift-corrector to our regular storage server.
Alternatively, you can store raw data from the K2 computer directly to network-attached storage (NAS). In this case the drift-corrector has to monitor a directory on your NAS. Logically there is no difference from the solution described above, but in practice you may suffer from slower transfers, as you are most likely not using a fast 10GbE network.
Download and install the motion correction software developed in Yifan Cheng's lab at UCSF from http://daa6.msg.ucsf.edu:8080/MotionCorrection/. Note that it has some limitations concerning the supported GPUs (NVIDIA only, with a substantial amount of memory). For the motion-correction automator it is important that the executable is called motioncorr and that it is available on the terminal. Copying the executable to /usr/bin and renaming it is the easiest way to fulfill this constraint.
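If the downloaded executable has a different name (for example dosefgpu_driftcorr in some releases; the exact name here is an assumption), the copy-and-rename step could look like this:

```shell
# "dosefgpu_driftcorr" is a placeholder; use the name of the binary you downloaded.
sudo cp dosefgpu_driftcorr /usr/bin/motioncorr
sudo chmod +x /usr/bin/motioncorr

# Check that the shell now finds the executable.
command -v motioncorr
```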
The motion correction GUI needs EMAN2 for some subtasks, e.g. file format conversions. Please install and source the latest version of EMAN2 from http://blake.bcm.edu/emanwiki/EMAN2.
EDIT: The EMAN2 dependency is in the process of being removed to improve portability.
Make sure that the following Python dependencies (package names valid for Ubuntu 13.10) are installed on your system:
- python-matplotlib
- python-tk
- python-imaging
- python-imaging-tk
- python-unipath
- python-numpy
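On Ubuntu the whole list can be installed in one step (package names may differ on newer releases or on Fedora):

```shell
sudo apt-get install python-matplotlib python-tk python-imaging \
    python-imaging-tk python-unipath python-numpy
```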
Download the following files from the 2dx repository and store them in a new folder:
2dx_automator/motioncorrect_gui.py
2dx_automator/MotionCorrectionWatcher.py
2dx_automator/WatcherBaseClass.py
2dx_automator/driftplotter.py
Go to your new folder in a terminal and launch the motion correction automator by typing motioncorrect_gui.py into the console. Make sure that EMAN2 is sourced in the console session.
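A launch session could look like the following sketch; the EMAN2 installation path is an assumption, so adjust it to wherever you installed EMAN2:

```shell
# Source EMAN2 so that e2proc2d.py is available (example path).
source /opt/EMAN2/eman2.bashrc

# Go to the folder holding the downloaded scripts and start the GUI.
cd /path/to/your/new/folder
python motioncorrect_gui.py
```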
On startup the automator asks you for two directories sequentially. First, the software needs an input directory where you store your raw images; navigate into the corresponding directory and click on OK. Second, you are asked for an output directory where the software stores the aligned averages.
Launch the automation by clicking on the Launch Automation button.
Once a new image is added to the input folder on the storage server, the automation software copies the new image to the /tmp folder of the processing machine. Subsequently the image format is converted using e2proc2d.py so that the motion-correction software is able to deal with the raw stack. The drift correction is accelerated by means of general-purpose GPU computing. Once the motion correction is done, the resulting drift-corrected average is stored in the selected output folder. Additionally, all diagnostic graphs and figures displayed by the GUI are generated.
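The monitoring-and-processing loop can be sketched as follows. This is a simplified illustration, not the actual MotionCorrectionWatcher code; the raw-file suffix and the motioncorr arguments are assumptions that you should adapt to your setup:

```python
import os
import shutil
import subprocess

def find_new_images(watch_dir, seen, suffix=".dm4"):
    """Return raw images in watch_dir that have not been processed yet.

    The .dm4 suffix is an assumption; use whatever your K2 setup writes.
    """
    current = {f for f in os.listdir(watch_dir) if f.endswith(suffix)}
    new = sorted(current - seen)
    seen.update(new)
    return new

def process_image(src_path, out_dir, tmp_dir="/tmp"):
    """Copy one raw image to the local cache, convert it, and drift-correct it."""
    local = shutil.copy(src_path, tmp_dir)
    mrc = os.path.splitext(local)[0] + ".mrc"
    # Convert to MRC with EMAN2 so motioncorr can read the raw stack.
    subprocess.check_call(["e2proc2d.py", local, mrc])
    # "-fcs" (corrected-sum output) is an assumed flag; check your motioncorr build.
    avg = os.path.join(out_dir, os.path.basename(mrc))
    subprocess.check_call(["motioncorr", mrc, "-fcs", avg])
```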
Starting Frame Number
First frame of the stack used for the motion correction. Numbering starts with zero.

Ending Frame Number
Last frame of the stack used to correct for drift. Can be used to get rid of beam-induced damage on the sample. Zero means "use all frames up to the end".

Frame Offset
Offset used for the correlation function. We suggest using 1/4 of the total number of frames. For a detailed description of the algorithm we refer to the supplementary material of http://www.nature.com/nmeth/journal/v10/n6/full/nmeth.2472.html.

BFactor
B-factor applied by Yifan Cheng's executable; empirically, we use 220 for 4k pixel stacks.
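Put together, the four parameters map onto a motioncorr call roughly as sketched below. The flag names (-nst, -ned, -fod, -bft, -fcs) are assumptions based on common builds of the UCSF executable; verify them against the version you installed:

```python
def motioncorr_args(input_mrc, output_mrc, n_frames, first=0, last=0, bfactor=220):
    """Build a motioncorr command line from the GUI parameters (flag names assumed)."""
    fod = n_frames // 4  # suggested frame offset: 1/4 of the total frame count
    return ["motioncorr", input_mrc,
            "-nst", str(first),    # starting frame, numbering starts with zero
            "-ned", str(last),     # ending frame; 0 means "use all frames"
            "-fod", str(fod),      # frame offset for the correlation function
            "-bft", str(bfactor),  # B-factor; 220 works well for 4k stacks
            "-fcs", output_mrc]    # drift-corrected average output file
```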
In cases where the automation crashes, you can try to fix the process by means of the Troubleshoot button. If this does not help, just restart the GUI.
To export images to a particular folder you can use the provided export mechanism: select an export location (by clicking on Change Export Location), select the image to export from the list of image names, and click on Export Image.
On a K2 Summit you can record images over an uncommonly long exposure time, resulting in an image of an over-exposed (burned) sample. Enabling this option will generate two drift-corrected averages: (i) a corrected average where the selected parameters are applied, and (ii) a drift-corrected average of the entire over-exposed stack.
The high-dose average can, for instance, be used for easier particle picking, whereas the 3D refinement is run on the low-dose (high-resolution) data.
Select this option when recording data in "Super-resolution mode". In this mode Gatan's K2 Summit records images with 8k x 8k pixels in size. Following Yifan Cheng's publications we bin these images by Fourier-space cropping and correct for drift on the binned raw stack. The final 4k aligned average is stored in the selected output directory.
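The Fourier-space cropping step itself is simple; a minimal NumPy sketch of the 2x binning (an illustration independent of the actual GUI code, with simplified Nyquist handling) is:

```python
import numpy as np

def fourier_crop_bin2(img):
    """Bin a 2D image by 2x via Fourier-space cropping.

    Keeping only the central half of the spectrum in each dimension halves
    the image size without the aliasing introduced by real-space binning.
    """
    ny, nx = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = ny // 2, nx // 2
    cropped = spectrum[cy - ny // 4:cy + ny // 4, cx - nx // 4:cx + nx // 4]
    # Divide by 4 so the mean intensity is preserved after the inverse FFT.
    return np.fft.ifft2(np.fft.ifftshift(cropped)).real / 4.0
```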