Introduction and Operation
Each X-ray spectrum will be analyzed to identify the element peaks present in the spectrum and to quantify each of those elements. The net peak area is found by subtracting the background and fitting the known peak shape to each spectral peak. This net area is converted to elemental abundance using a physics-based model (the fundamental parameters method of X-ray fluorescence spectroscopy), aided by a suite of calibration standards and element calibration factors (ECFs). The ECFs are determined by measuring a suite of standard reference materials with well-known compositions, and they can be checked and adjusted on Mars using the PIXL Calibration Target.
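For illustration, the relationship between a fitted net peak intensity and the reported abundance can be sketched as below. The function and variable names are hypothetical, and the real fundamental-parameters calculation is iterative because the predicted intensity depends on the full composition of the target.

```python
def abundance_from_net_intensity(net_intensity, fp_intensity_per_unit_fraction, ecf):
    """Sketch: convert a fitted net peak intensity to an element mass fraction.

    fp_intensity_per_unit_fraction: intensity the fundamental-parameters model
    predicts for this element at unit mass fraction in the assumed matrix.
    ecf: element calibration factor derived from the standards suite.
    Illustrative only; in practice the calculation iterates because the
    predicted intensity depends on the full composition of the target.
    """
    return net_intensity / (fp_intensity_per_unit_fraction * ecf)
```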
The PIXL instrument is designed to collect XRF spectra over a rectangular area by maneuvering the sensor head across the surface of the sample. The sensor head is maneuvered by a hexapod mounting structure with six actuators that control the lengths of its legs. Each spectrum is integrated for several seconds, so one operational science observation can yield many X-ray spectra. PIQUANT will handle this large number of spectra by producing sums, deriving predicted spectrum profiles, and processing each dataset to yield composition information. The output is in a format suitable for visualizing the spectra, the resulting fits and diagnostic information, and the elemental composition from each spectrum. This information can also be plotted as maps or grids showing the spatial dependence of rock composition as well as data quality.
X-ray spectrum analysis can be automated for spectra that have a large number of X-ray counts and thus a good signal-to-noise ratio from Poisson counting statistics. This will typically be true for the bulk sum spectra and for spectra taken in grid or line scans. However, it may not be true for high-density maps, where the integration time per spectrum is limited because of the large number of spectra collected. In this case the spectra will have to be categorized in some fashion that preserves as much of the scientific information as possible. Spectra from each category may then be summed to obtain adequate signal-to-noise for quantitative abundances to be determined. For example, rock components can be identified from a coarse composition of the major elements, from visual inspection, or using nonparametric approaches such as principal component analysis or t-distributed stochastic neighbor embedding. Separately summing the spectra from a single rock component gives good abundances for that component with little cross-contamination from other components, retaining the element abundance information about each individual component.
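As a concrete sketch of that grouping step, low-count map spectra could be reduced with principal component analysis and then clustered, with the raw spectra summed within each cluster. The use of k-means and scikit-learn here is an arbitrary illustrative choice, not something specified above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sum_spectra_by_component(spectra, n_components=5, n_clusters=4):
    """Sketch: group low-count map spectra by shape and sum each group.

    spectra: 2-D array with one row per spectrum (counts per channel).
    Returns a dict mapping cluster label -> summed spectrum.
    """
    spectra = np.asarray(spectra, dtype=float)
    # Normalize each spectrum so the grouping reflects shape, not total counts
    norm = spectra / spectra.sum(axis=1, keepdims=True)
    scores = PCA(n_components=n_components).fit_transform(norm)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    # Sum the raw (un-normalized) spectra within each cluster
    return {k: spectra[labels == k].sum(axis=0) for k in range(n_clusters)}
```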
PIQUANT's predecessors were developed under previous NASA projects, and its core components have been tested thoroughly. The automated portions of the analysis are implemented in a C++ command line tool that can be run via an automated pipeline. A graphical user interface, written in Python, facilitates human interaction in cases where human judgment is necessary and for manual processing. The interface calls the same command line tool so that the data analysis is always consistent.
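The pattern of a GUI driving the same command line tool might be sketched as below; the executable name and argument layout are hypothetical placeholders, not PIQUANT's actual command line interface.

```python
import subprocess

def run_piquant(args, log_file=None):
    """Sketch: invoke the command line tool and optionally append its output to a log.

    args: list of command line arguments; "piquant" and the argument layout
    are hypothetical placeholders for the real tool's interface.
    """
    result = subprocess.run(["piquant"] + list(args),
                            capture_output=True, text=True, check=True)
    if log_file is not None:
        with open(log_file, "a") as log:   # append, keeping earlier records
            log.write(result.stdout)
    return result
```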
All processing steps generate output that documents what processing was done, when it was completed, which files were used as inputs, and which files were produced. Human-generated inputs from the graphical user interface are written to files before being used for actual processing. These outputs can optionally be appended to a log file to provide saved documentation of the processing performed.
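Such a record might resemble the following sketch; the field names and the one-record-per-line JSON layout are illustrative assumptions, not PIQUANT's actual log format.

```python
import json
import time

def append_processing_record(log_path, action, input_files, output_files):
    """Sketch: append one record describing a processing step to a log file."""
    record = {
        "action": action,
        "completed": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "inputs": list(input_files),
        "outputs": list(output_files),
    }
    with open(log_path, "a") as log:       # append so earlier records are kept
        log.write(json.dumps(record) + "\n")
```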
The result of the spectral and image processing will be a large set of data hypercubes, each of which will contain a specific type of information (such as X-ray spectra, net peak intensities, elemental abundances, etc.) with a spatial location for each measurement. Once these data are processed and saved, they are used to produce maps, grids, line scans, or other desired displays for the science team. In addition, diagnostic information for human and/or automated validation of results can be displayed on the same scale as the visual and X-ray measurements for rapid identification of problems. This consistent display is very helpful in assessing the quality of measurements and identifying problems with either the individual spectra or their processing steps. Such problems often occur in only a few spectra and it is important to be able to tell at a glance which spectra are affected.
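One way to picture such a hypercube is a set of arrays that share a common measurement-point index and carry the spatial location of each measurement alongside the derived quantities. The names and shapes below are illustrative, not PIQUANT's actual data products.

```python
import numpy as np

n_points, n_channels, n_elements = 1000, 4096, 12    # example sizes only

hypercube = {
    "location_mm": np.zeros((n_points, 3)),           # spatial location of each measurement
    "spectra":     np.zeros((n_points, n_channels)),  # X-ray counts per channel
    "net_peaks":   np.zeros((n_points, n_elements)),  # fitted net peak intensities
    "abundances":  np.zeros((n_points, n_elements)),  # elemental abundances
    "fit_quality": np.zeros(n_points),                # diagnostic for validation displays
}
```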
PIQUANT follows the Model-View-Controller (MVC) organization. Each part of this organization is described in the subsections that follow.
The model comprises the mathematical calculations and spectrum-processing algorithms that perform the actual spectrum analysis. It is centered on a physics-based model commonly referred to as the fundamental parameters method of X-ray fluorescence spectroscopy. This model predicts the emitted intensity of X-rays at the characteristic energy of each element for a specific composition of the target. Inverting this model by adjusting the composition to match the measured intensity of the corresponding peak in the spectrum gives the measured target composition. The intensity of each peak is found from a fit to the peak using the detailed output of the model. The fundamental parameters model also includes such things as the primary excitation spectrum, the detector quantitative response and peak shape, and X-ray interactions in the target rock material. An important step is removal of the background under the peaks to obtain their actual intensity. The background is found empirically using the SNIP algorithm of van Espen [Van Grieken and Markowicz, Handbook of X-Ray Spectrometry 2nd Ed., 2001].
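For reference, a generic SNIP background estimate (not necessarily PIQUANT's exact implementation or parameter choices) can be sketched as:

```python
import numpy as np

def snip_background(counts, iterations=24):
    """Sketch of a generic SNIP background estimate for a 1-D spectrum of counts."""
    counts = np.asarray(counts, dtype=float)
    # LLS transform compresses the dynamic range so peaks clip cleanly
    y = np.log(np.log(np.sqrt(counts + 1.0) + 1.0) + 1.0)
    bg = y.copy()
    n = len(bg)
    for w in range(iterations, 0, -1):          # decreasing clipping window
        clipped = bg.copy()
        for i in range(w, n - w):
            clipped[i] = min(bg[i], 0.5 * (bg[i - w] + bg[i + w]))
        bg = clipped
    # Invert the LLS transform to return to counts
    return (np.exp(np.exp(bg) - 1.0) - 1.0) ** 2 - 1.0
```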
The peaks contributed to the spectrum from each element are calculated separately using the physics model. The expected intensity of each emission line is calculated from the given composition (for a standard) or a guess at the composition (for an unknown). Lines are grouped by principal atomic energy level (K, L, M, or N) and summed to obtain a component of the spectrum arising from that element. The components plus the background are then fit to the spectrum using linear least squares. To obtain a good fit, and thus accurate peak intensities, the energy calibration of the spectrum must match the calculation. Linear least squares cannot adjust the energy calibration because it enters the peak calculations in a nonlinear fashion. The energies of the element emission lines are assumed to be known accurately, and are thus in the right place in the calculation, but the energy calibration of the spectrum may not be accurate enough to obtain a good fit via linear least squares. The energy calibration is adjusted between each iteration of the linear least squares to obtain the best fit but it is not allowed to change by more than the width of the element peaks to avoid misidentification of peaks. The energy-dependent detector energy resolution (peak width) is also adjusted within certain limits to obtain a good fit. This procedure yields accurate intensities for the element peaks, which directly translates into quantitative accuracy for the element abundance.
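A much-simplified sketch of this fit is shown below. The bounded grid search over small energy offsets stands in for the nonlinear calibration adjustment described above, and make_components is a hypothetical callable that recalculates the element components for a trial offset.

```python
import numpy as np

def fit_with_energy_adjustment(spectrum, background, make_components,
                               offsets_eV=(-20, -10, 0, 10, 20)):
    """Sketch: linear least-squares fit of element components to a spectrum,
    with a bounded search over small energy-calibration offsets.

    make_components(offset_eV) is a hypothetical callable returning a 2-D array
    of calculated element components (one row per element) on the channel grid.
    """
    y = np.asarray(spectrum, dtype=float) - np.asarray(background, dtype=float)
    best = None
    for offset in offsets_eV:                       # keep the shift well under a peak width
        A = np.asarray(make_components(offset)).T   # one column per element component
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = float(np.sum((y - A @ coeffs) ** 2))
        if best is None or residual < best[0]:
            best = (residual, offset, coeffs)
    return best   # (residual, chosen offset, per-element scale factors)
```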
Quantitative analysis is aided by a suite of calibration standards and element calibration factors (ECFs). The model reads the information on standards from a standards input file. This information is used to fit the spectra from the standards; by comparing the fit results to the given compositions, a calibration factor is obtained for each element in the suite of standards. These element calibration factors are written to a file and used to quantify unknown targets.
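A sketch of how per-element calibration factors could be derived from a suite of standards follows. The exact definition and weighting PIQUANT uses are not given here, so the simple ratio and averaging below are assumptions.

```python
import numpy as np

def element_calibration_factors(standards):
    """Sketch: derive one calibration factor per element from a standards suite.

    standards: list of dicts mapping element -> (given_fraction, fitted_fraction),
    where fitted_fraction is the uncalibrated result of fitting that standard.
    The data layout, ratio, and simple averaging are illustrative assumptions.
    """
    ratios = {}
    for standard in standards:
        for element, (given, fitted) in standard.items():
            if given > 0 and fitted > 0:
                ratios.setdefault(element, []).append(given / fitted)
    return {element: float(np.mean(values)) for element, values in ratios.items()}
```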
The model is implemented as a command line tool in C++ to provide fast calculations and easy inclusion in automated workflow pipelines.
The View module displays the results of the analysis performed by the model. The main role of the view component is to plot the spectrum vs. X-ray energy together with the fit and any other energy-dependent information to help evaluate the spectrum and its analysis. This plot shows the peaks in the spectrum and their shape. The plot also displays the model results in the form of the individual peaks from each element, the overall fit, and the background. The residual and the expected statistical variance can also be plotted. The viewer displays these parts of the plot in different colors and allows each to be turned on and off in the display.
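A minimal plotting sketch along these lines, using matplotlib with arbitrary colors and labels (not PIQUANT's actual viewer), might be:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_fit(energy_keV, spectrum, fit, background, element_components):
    """Sketch: plot a spectrum with its fit, background, and per-element components.

    element_components: dict mapping an element label to its fitted component.
    """
    fig, ax = plt.subplots()
    ax.plot(energy_keV, spectrum, color="black", label="measured")
    ax.plot(energy_keV, fit, color="red", label="fit")
    ax.plot(energy_keV, background, color="gray", label="background")
    for label, component in element_components.items():
        ax.plot(energy_keV, component, alpha=0.6, label=label)
    ax.plot(energy_keV, np.asarray(spectrum) - np.asarray(fit),
            color="blue", label="residual")
    ax.set_xlabel("Energy (keV)")
    ax.set_ylabel("Counts")
    ax.legend()
    plt.show()
```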
The controller allows human input and interaction with the model and viewer. It lets the user choose the actions to be performed and the input and output files for the model. It also automatically invokes the viewer to plot the results if a single spectrum is processed. Finally, it allows the user to choose whether to write the text output to a log file and which file to use for this purpose.
Each type of processing requires only a certain subset of the inputs. All the inputs are visible, but for each chosen action only the required inputs are enabled. The inputs that are not used are disabled (which usually appears as grayed-out controls). Once the inputs are specified, the model is invoked via a GO! button at the bottom of the controller. If any required inputs have not been specified, a dialog indicating the missing input is displayed.
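A small sketch of that enable/disable and missing-input check, written here with tkinter and hypothetical action and input names (the real controller's actions and inputs may differ), might look like:

```python
import tkinter as tk
from tkinter import messagebox

# Hypothetical mapping of actions to the inputs each one requires
REQUIRED_INPUTS = {
    "Calibrate": ["standards file", "calibration file"],
    "Quantify":  ["calibration file", "spectrum file", "output file"],
}

def on_action_change(entries, action):
    """Enable only the inputs the chosen action needs; gray out the rest."""
    for name, entry in entries.items():
        entry.configure(state=tk.NORMAL if name in REQUIRED_INPUTS[action] else tk.DISABLED)

def on_go(entries, action):
    """Check for missing required inputs before invoking the model."""
    missing = [name for name in REQUIRED_INPUTS[action] if not entries[name].get().strip()]
    if missing:
        messagebox.showwarning("Missing input", "Please specify: " + ", ".join(missing))
        return
    # ... invoke the command line tool with the collected inputs here ...
```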
The inputs are in editable text boxes that allow the user to modify their contents and to specify input either by typing or by copying from an outside source. Buttons also allow selection of files using the system file dialogs.