/**
\page Technical_Reference Technical Reference
This section gives further technical details for the material covered in the previous documentation sections, and provides step-by-step guidance for the following CaPTk functionalities:
- \subpage tr_FeatureExtraction "Feature Extraction"
- \subpage tr_integration "Integrating your Application in CaPTk"
Training videos can be accessed in our <a href="https://www.youtube.com/channel/UC69N7TN5bH2onj4dHcPLxxA"><b>YouTube Channel</b></a>, specifically in the <a href="https://www.youtube.com/playlist?list=PLXdcXDD5czvjFFQGX9Jm3KouP0H9cWowu"><b>CaPTk Playlist</b></a>.
<b>NOTE:</b> The visualization of images is based on the physical coordinate system of each image (i.e., the origin and direction information from within the image file is used for rendering). In practice, this consistent use of the physical coordinate framework can make images with different origins appear misaligned (shifted) when compared to other neuro-imaging software packages that render based on the Cartesian coordinate information in the image.
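Since the origin and direction stored in the image header drive this rendering, it can help to inspect that metadata directly when two images appear shifted. The following minimal ITK sketch (illustrative only, not part of CaPTk) prints it:
\verbatim
// Illustrative only: print the physical-space metadata (origin/direction)
// that drives CaPTk's rendering, using plain ITK.
#include "itkImage.h"
#include "itkImageFileReader.h"
#include <iostream>

int main(int argc, char *argv[])
{
  if (argc < 2) { std::cerr << "Usage: checkGeometry image.nii.gz\n"; return 1; }

  using ImageType = itk::Image<float, 3>;
  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetFileName(argv[1]);
  reader->Update(); // reads header and pixel buffer

  std::cout << "Origin:    " << reader->GetOutput()->GetOrigin() << "\n";
  std::cout << "Direction:\n" << reader->GetOutput()->GetDirection();
  return 0;
}
\endverbatim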
*/
/**
\page tr_FeatureExtraction Feature Extraction
\section tr_fe_defaults Defaults
\subsection tr_fe_def_main Main Default Feature Extraction Parameters
For details, please see https://github.com/CBICA/CaPTk/blob/master/src/applications/FeatureExtraction/data/1_params_default.csv.
\subsection tr_fe_def_lattice Lattice Feature Extraction Parameters
For details, please see https://github.com/CBICA/CaPTk/blob/master/src/applications/FeatureExtraction/data/2_params_default_lattice.csv.
\section tr_fe_batch Batch Mode
An example of the list file for batch processing can be found in <b>${CaPTk_InstallDir}/share/featureExtractionBatch/batch_featureExtraction.csv</b> or https://github.com/CBICA/CaPTk/blob/master/src/applications/FeatureExtraction/data/batchMode/batch_featureExtraction.csv. An explanation of the inputs is given below (the headers are case-insensitive):
<table border="1">
<tr>
<td align="center"><b>Header</b></td>
<td align="center"><b>Description</b></td>
</tr>
<tr>
<td>PATIENT_ID</td>
<td>The subject ID of the images being processed</td>
</tr>
<tr>
<td>IMAGES</td>
<td>The co-registered input images from which features are to be extracted; separated by '|'</td>
</tr>
<tr>
<td>MODALITIES</td>
<td>The corresponding modalities used in the output CSV; separated by '|'</td>
</tr>
<tr>
<td>ROIFile</td>
<td>The mask file which defines the annotated regions for extracting features</td>
</tr>
<tr>
<td>SELECTED_ROI</td>
<td>The selected annotations from which to extract features; separated by '|'</td>
</tr>
<tr>
<td>ROI_LABEL</td>
<td>The corresponding annotation labels used in the output CSV; separated by '|'</td>
</tr>
<tr>
<td>OUTPUT (OPTIONAL)</td>
<td>The output file for the current subject</td>
</tr>
<tr>
<td>PARAM_FILE (OPTIONAL)</td>
<td>The parameter file for the current subject</td>
</tr>
</table>
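For illustration, a single-subject row of such a batch file could look like the following (the paths and labels here are hypothetical and are not shipped with CaPTk):
\verbatim
PATIENT_ID,IMAGES,MODALITIES,ROIFile,SELECTED_ROI,ROI_LABEL
Subject_001,/data/Subject_001_T1.nii.gz|/data/Subject_001_T2.nii.gz,T1|T2,/data/Subject_001_mask.nii.gz,1|2,ED|TC
\endverbatim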
\section tr_fe_math Extracted Features
<table border="1">
<tr>
<td align="center"><b>Feature Family</b></td>
<td align="center"><b>Specific <br> Features</b></td>
<td align="center"><b>Parameter<br>Name</b></td>
<td align="center"><b>Range</b></td>
<td align="center"><b>Default</b></td>
<td align="center"><b>Description, Formula and Comments</b></td>
</tr>
<tr>
<td align="center">Intensity Features<br>(First-Order Statistics)</td>
<td>
<ul>
<li>Minimum
<li>Maximum
<li>Mean
<li>Standard Deviation
<li>Variance
<li>Skewness
<li>Kurtosis
</ul>
<td>N.A.
<td>N.A.
<td>N.A.
<td>
<ul>
<li>Minimum Intensity = \f$ Min (I_{k}) \f$, where \f$ I_{k} \f$ is the intensity of the pixel or voxel at index k.
<li>Maximum Intensity = \f$ Max (I_{k}) \f$, where \f$ I_{k} \f$ is the intensity of the pixel or voxel at index k.
<li>Mean=\f$ \frac{\sum(X_{i})}{N} \f$ where N is the number of voxels/pixels.
<li>Standard Deviation =\f$ \sqrt{\frac{\sum(X-\mu)^{2}}{N}}\f$ where \f$\mu\f$ is the mean of the data.
<li>Variance =\f$ \frac{\sum(X-\mu)^{2}}{N} \f$ where \f$\mu\f$ is the mean intensity.
<li>Skewness =\f$ \frac{\sum_{i=1}^{N}(X_{i} - \bar{X})^{3}/N} {s^{3}} \f$ where \f$\bar{X}\f$ is the mean, s is the standard deviation and N is the number of pixels/voxels.
<li>Kurtosis =\f$ \frac{\sum_{i=1}^{N}(X_{i} - \bar{X})^{4}/N}{s^{4}} \f$ where \f$\bar{X}\f$ is the mean, s is the standard deviation and N is the number of pixels/voxels.
</ul>
All features in this family are extracted from the raw intensities (a small worked example is given at the end of this page).
</td>
</tr>
<tr>
<td align="center">Histogram<br>-based
<td>
<ul>
<li>Bin Frequency
</ul>
<td>Bins
<td>N.A.
<td>10
<td>
<ul>
<li>Takes the number of bins as input; the output is the number of pixels falling in each bin.
</ul>
All features in this family are extracted from the discretized intensities.
</tr>
<tr>
<td align="center">Volumetric
<td><ul><li>Volume/Area</ul>
<td>Dimensions <br>Axis
<td>2D:3D <br>x,y,z
<td>3D <br>z
<td><ul><li>Volume/Area (depending on image dimension) and number of voxels/pixels in the ROI. </ul>
</tr>
<tr>
<td align="center">Morphologic
<td>
<ul>
<li>Elongation
<li>Perimeter
<li>Roundness
<li>Eccentricity
<li>Ellipse Diameter
<li>Equivalent Spherical Radius
</ul>
<td>Dimensions <br>Axis
<td>2D:3D <br>x,y,z
<td>3D <br>z
<td>
<ul>
<li>Elongation =\f$ \sqrt{\frac{i_{2}}{i_{1}}} \f$ where \f$i_{n}\f$ are the second moments of the particle around its principal axes.
<li>Perimeter = \f$ 2 \pi r \f$ where r is the radius of the circle enclosing the shape.
<li>Roundness = \f$ As/Ac = (Area of a shape)/(Area of circle) \f$ where the circle has the same perimeter.
<li>Eccentricity =\f$ \sqrt{1 - \frac{a*b}{c^{2}}} \f$ where c is the longest semi-principal axis of an ellipsoid fitted on an ROI, and a and b are the 2nd and 3rd longest semi-principal axes of the ellipsoid.
</ul>
</tr>
<tr>
<td align="center">Local Binary<br>Pattern (LBP)
<td>
<td>Radius <br>Neighborhood
<td>N.A. <br>2:4:8
<td>N.A. <br>8
<td>
<ul>
<li>The LBP codes are computed using N sampling points on a circle of radius R and using a mapping table.
</ul>
</tr>
<tr>
<td align="center">Grey Level<br>Co-occurrence<br>Matrix<br> (GLCM)
<td>
<ul>
<li>Energy (Angular Second Moment)
<li>Contrast (Inertia)
<li>Joint Entropy
<li>Homogeneity (Inverse Difference Moment)
<li>Correlation
<li>Variance
<li>SumAverage
<li>Auto<br>Correlation
</ul>
<td>Bins <br><br>Radius <br><br>Dimensions <br><br>Offset <br><br>Axis
<td>N.A. <br><br>N.A. <br><br>2D:3D <br><br>Individual/Average/Combined<br><br>x,y,z
<td>10 <br><br>13 <br><br>2 <br><br>3D <br><br>Average <br><br>z
<td>
For a given image, a Grey Level Co-occurrence Matrix is created and \f$ g(i,j) \f$ represents an element of the matrix
<ul>
<li>Energy = \f$ \sum_{i,j}g(i, j)^2 \f$
<li>Contrast = \f$ \sum_{i,j}(i - j)^2g(i, j) \f$
<li>Joint Entropy = \f$ -\sum_{i,j}g(i, j) \log_2 g(i, j) \f$
<li>Homogeneity =\f$ \sum_{i,j}\frac{1}{1 + (i - j)^2}g(i, j) \f$
<li>Correlation =\f$ \sum_{i,j}\frac{(i - \mu)(j - \mu)g(i, j)}{\sigma^2} \f$
<li>Sum Average =\f$ \sum_{i,j}i \cdot g(i, j) = \sum_{i,j}j \cdot g(i, j)\f$(due to matrix symmetry)
<li>Variance = \f$ \sum_{i,j}(i - \mu)^2 \cdot g(i, j) = \sum_{i,j}(j - \mu)^2 \cdot g(i, j)\f$ (due to matrix symmetry)
<li>AutoCorrelation = \f$\frac{\sum_{i,j} i \cdot j \cdot g(i, j)-\mu_t^2}{\sigma_t^2}\f$ where \f$\mu_t\f$ and \f$\sigma_t\f$ are the mean and standard deviation of the row (or column, due to symmetry) sums.
</ul>
All features are estimated within the ROI in an image, considering 26-connected neighboring voxels in the 3D volume.
<b>Note</b> that the creation of the GLCM and the computation of its corresponding aforementioned features for all offsets are performed using an existing ITK filter. The <b>Individual</b> option gives features for each individual offset; <b>Average</b> estimates the average across all offsets and assigns a single value to each feature; and <b>Combined</b> combines the GLCM matrices generated across all offsets and calculates a single set of features from the combined matrix.
</tr>
<tr>
<td align="center">Grey Level<br>Run-Length<br>Matrix<br> (GLRLM)
<td>
<ul>
<li>SRE
<li>LRE
<li>GLN
<li>RLN
<li>LGRE
<li>HGRE
<li>SRLGE
<li>SRHGE
<li>LRLGE
<li>LRHGE
</ul>
<td>Bins <br><br>Radius <br><br>Dimensions <br><br>Axis <br><br>Offset
<td>N.A. <br><br>N.A. <br><br>2D:3D <br><br>x,y,z <br><br>Individual/Average/Combined
<td>10 <br><br>13 <br><br>2 <br><br>3D <br><br>z <br><br>Average <br><br>1
<td>
For a given image, a run-length matrix \f$ P(i,j)\f$ is defined as the number of runs with pixels of gray level i and run length j. Please note that some features are only extracted in DebugMode (by using the "-d" parameter from the command line); these define features that are mathematically formulated in previously published material but are not completely aligned with the Image Biomarker Standardisation Initiative (IBSI).
<ul>
<li>[COMPLETE MODE] Short Run Emphasis (SRE) = \f$ \frac{1}{n_r}\sum_{i,j}^{N}\frac{p(i,j)}{j^2} \f$
<li>[COMPLETE MODE] Long Run Emphasis (LRE) =\f$ \frac{1}{n_r}\sum_{j}^{N}p(i,j) \cdot j^2\f$
<li>[COMPLETE MODE] Grey Level Non-uniformity (GLN) =\f$ \frac{1}{n_r}\sum_{i}^{M}\Big(\sum_{j}^{N}p(i,j) \Big)^2 \f$
<li>[COMPLETE MODE] Run Length Non-uniformity (RLN) =\f$ \frac{1}{n_r}\sum_{j}^{N}\Big(\sum_{i}^{M}p(i,j) \Big)^2 \f$
<li>Low Grey-Level Run Emphasis (LGRE)=\f$ \frac{1}{n_r}\sum_{i}^{M}\frac{p_g(i)}{i^2} \f$
<li>High Grey-Level Run Emphasis (HGRE)=\f$ \frac{1}{n_r}\sum_{i}^{M}p_g(i) \cdot i^2 \f$
<li>Short Run Low Grey-Level Emphasis (SRLGE)= \f$\frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j)}{i^2 \cdot j^2} \f$
<li>Short Run High Grey-Level Emphasis (SRHGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j) \cdot i^2 }{j^2}\f$
<li>[COMPLETE MODE] Long Run Low Grey-Level Emphasis (LRLGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j) \cdot j^2 }{i^2} \f$
<li>[COMPLETE MODE] Long Run High Grey-Level Emphasis (LRHGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}p(i,j) \cdot i^2 \cdot j^2 \f$
</ul>
All features are estimated within the ROI in an image, considering 26-connected neighboring voxels in the 3D volume.
<b>Note</b> that the creation of the GLRLM and the computation of its corresponding aforementioned features for all offsets are performed using an existing ITK filter. The <b>Individual</b> option gives features for each individual offset; <b>Average</b> estimates the average across all offsets and assigns a single value to each feature; and <b>Combined</b> combines the GLRLM matrices generated across all offsets and calculates a single set of features from the combined matrix.
</tr>
<tr>
<td align="center">Neighborhood<br>Grey-Tone<br>Difference<br>Matrix<br> (NGTDM)
<td>
<ul>
<li>Coarseness
<li>Contrast
<li>Busyness
<li>Complexity
<li>Strength
</ul>
<td>Bins <br><br>Dimensions <br><br>Axis
<td>N.A. <br><br>2D:3D <br><br>x,y,z
<td>10 <br><br>13 <br><br>3D <br><br>N.A. <br><br>1
<td>
<ul>
<li>Coarseness =\f$ \Big[ \epsilon + \sum_{i=0}^{G_{k}} p_{i}s(i) \Big]\f$
<li>Contrast =\f$ \Big[\frac{1}{N_{s}(N_{s}-1)}\sum_{i}^{G_{k}}\sum_{j}^{G_{k}}p_{i}p_{j}(i-j)^2\Big]\Big[\frac{1}{n^2}\sum_{i}^{G_{k}}s(i)\Big] \f$
<li>Busyness =\f$ \Big[\sum_{i}^{G_{k}}p_{i}s(i)\Big]\Big/ \Big[\sum_{i}^{G_{k}}\sum_{j}^{G_{k}}i p_{i} - j p_{j}\Big] \f$
<li>Complexity =\f$ \sum_{i}^{G_{k}}\sum_{j}^{G_{k}} \Big[ \frac{(|i-j|)}{(n^{2}(p_{i}+p_{j}))} \Big] \Big[ p_{i}s(i)+p_{j}s(j) \Big]\f$
<li>Strength =\f$ \Big[\sum_{i}^{G_{k}}\sum_{j}^{G_{k}}(p_{i}+p_{j})(i-j)^{2}\Big]/\Big[\epsilon + \sum_{i}^{G_{k}} s(i)\Big]\f$
</ul>
Where \f$p_{i}\f$ is the probability of occurrence of a voxel of intensity i and \f$s(i)\f$ represents the NGTDM value of intensity i calculated as:
\f$ \sum |i - A_{i}| \f$, where \f$ A_{i} \f$ indicates the average intensity of the surrounding voxels, not including the central voxel.
</tr>
<tr>
<td align="center">Grey Level<br>Size-Zone<br>Matrix<br> (GLSZM)
<td>
<ul>
<li>SZE
<li>LZE
<li>GLN
<li>ZSN
<li>ZP
<li>LGZE
<li>HGZE
<li>SZLGE
<li>SZHGE
<li>LZLGE
<li>LZHGE
<li>GLV
<li>ZLV
</ul>
<td>Bins <br><br>Radius <br><br>Dimensions <br><br>Axis
<td>N.A. <br><br>N.A. <br><br>2D:3D <br><br>x,y,z
<td>10 <br><br>13 <br><br>2 <br><br>3D <br><br>z <br><br>4
<td>
For a given image, a size-zone matrix \f$ P(i,j)\f$ is defined as the number of connected zones with gray level i and zone size j.
<ul>
<li>Small Zone Emphasis (SZE) = \f$ \frac{1}{n_r}\sum_{i,j}^{N}\frac{p(i,j)}{j^2} \f$
<li>Large Zone Emphasis(LZE) =\f$ \frac{1}{n_r}\sum_{j}^{N}p(i,j) \cdot j^2\f$
<li>Gray-Level Non-uniformity (GLN) =\f$ \frac{1}{n_r}\sum_{i}^{M}\Big(\sum_{j}^{N}p(i,j) \Big)^2 \f$
<li>Zone-Size Non-uniformity (ZSN) =\f$ \frac{1}{n_r}\sum_{j}^{N}\Big(\sum_{i}^{M}p(i,j) \Big)^2 \f$
<li>Zone Percentage (ZP) =\f$ \frac{n_{r}}{n_p} \f$ where \f$ n_r \f$ is the total number of zones and \f$ n_p \f$ is the number of pixels in the image.
<li>Low Grey-Level Zone Emphasis (LGZE)=\f$ \frac{1}{n_r}\sum_{i}^{M}\frac{p_g(i)}{i^2} \f$
<li>High Grey-Level Zone Emphasis (HGZE)=\f$ \frac{1}{n_r}\sum_{i}^{M}p_g(i) \cdot i^2 \f$
<li>Short Zone Low Grey-Level Emphasis (SZLGE)= \f$\frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j)}{i^2 \cdot j^2} \f$
<li>Short Zone High Grey-Level Emphasis (SZHGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j) \cdot i^2 }{j^2}\f$
<li>Long Zone Low Grey-Level Emphasis (LZLGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}\frac{p(i,j) \cdot j^2 }{i^2} \f$
<li>Long Zone High Grey-Level Emphasis (LZHGE) =\f$ \frac{1}{n_r}\sum_{i}^{M}\sum_{j}^{N}p(i,j) \cdot i^2 \cdot j^2 \f$
</ul>
All features are estimated within the ROI in an image, considering 26-connected neighboring voxels in the 3D volume.
</tr>
<tr>
<td align="center">Lattice<br>-based
<td>
<ul>
<li>Selected features
<li>Feature Maps
</ul>
<td>
<ul>
<li>FullImage
<li>Window
<li>Step
<li>Boundary
<li>PatchBoundary
</ul>
</td>
<td>
<ul>
<li>0-1
<li>(mm)0:ImageSize
<li>(mm)0:ImageSize
<li>NoPadding:ZeroPadding
<li>Full:ROI:None
</ul>
</td>
<td>
<ul>
<li>0
<li>6.3
<li>6.3
<li>NoPadding
<li>Full
</ul>
</td>
<td>
Please see <b>${CaPTk_InstallDir}/data/features/2_params_default_lattice.csv</b> for detailed descriptions.
</td>
</tr>
</table>
The parameterization of the <b>lattice-based strategy</b> for feature extraction is defined by:
- The grid spacing representing the distance between consecutive lattice points (Default: 6.3mm).
- The size of the local region centered at each lattice point (Default: 6.3mm).
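To make the first-order (intensity) definitions in the table above concrete, the following minimal C++ sketch (illustrative only, not CaPTk's implementation) computes them for a small set of ROI intensities exactly as written, i.e., using the population (biased) moments:
\verbatim
// Illustrative only: first-order statistics as defined in the table above.
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
  std::vector<double> roi = {10, 12, 15, 18, 20, 22, 25}; // made-up ROI intensities
  const double n = static_cast<double>(roi.size());

  double sum = 0.0;
  for (double v : roi) sum += v;
  const double mean = sum / n;

  // central moments, not bias-corrected (matching the formulas above)
  double m2 = 0.0, m3 = 0.0, m4 = 0.0;
  for (double v : roi)
  {
    const double d = v - mean;
    m2 += d * d;
    m3 += d * d * d;
    m4 += d * d * d * d;
  }
  const double variance = m2 / n;
  const double stddev   = std::sqrt(variance);
  const double skewness = (m3 / n) / std::pow(stddev, 3);
  const double kurtosis = (m4 / n) / std::pow(stddev, 4); // not excess kurtosis

  std::cout << "mean=" << mean << " std=" << stddev << " variance=" << variance
            << " skewness=" << skewness << " kurtosis=" << kurtosis << "\n";
  return 0;
}
\endverbatim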
*/
/**
\page tr_integration Integrating your Application in CaPTk
CaPTk is developed and maintained by the Center for Biomedical Image Computing and Analytics (CBICA) at the University of Pennsylvania, and draws upon research from several groups within the Center.
New applications, written in any programming language, can be integrated into CaPTk at different levels. These applications can then run within CaPTk, while having direct access to the full breadth of CaPTk’s interactive capabilities.
<ul>
<li><b>Source level integration</b>: At this level, the new application source code (C++) is compiled alongside CaPTk, ensuring the most optimized integration. Source-level integration is straightforward (only requiring additions to relevant CMake files and minor additions to the interactive base) if the new application relies on a subset of CaPTk’s dependencies (i.e., ITK, VTK, OpenCV, Qt). </li>
<li><b>Executable level integration</b>: This level provides a graphical interface to an existing command-line application (not necessarily developed in C++), allowing users to leverage CaPTk’s functionality (e.g., interaction, feature extraction). Executable-level integration requires only minor additions to CaPTk to create a menu option for the new application. </li>
</ul>
Almost every application of CaPTk has an accompanying command-line executable (with more on the way). Those programs can be called directly, making the CaPTk applications available as components within a larger pipeline or for efficient batch processing of large numbers of images.
--------
This section provides the technical details of the Cancer Imaging Phenomics Toolkit (<b>CaPTk</b>) needed to integrate new applications into the overall framework, as well as to optimize/improve the code. For any questions/details, please feel free to email <a href="mailto:[email protected]">[email protected]</a>.
\image html 10_integration_resize.png "Different layers of application integration in CaPTk"
\image latex 10_integration_resize.png "Different layers of application integration in CaPTk"
Applications written in any language can be integrated with CaPTk via calls to stand-alone command line executables, but deeper integration (including in-memory data passing via objects and access to the full breadth of the interactive capabilities of the CaPTk Console) is only possible for applications written in C++ (https://isocpp.org/).
- LIBRA, in its current form (a MATLAB executable), has the least possible integration with the CaPTk Console; the console can call the executable, which launches its own UI for the user.
- ITK-SNAP is an example of integration where CaPTk Console communicates with the application using the provided API; in this case CaPTk ensures that the loaded images, ROIs, masks, etc. are all passed to ITK-SNAP and the result from the segmentation step there gets passed to the console for visualization after ITK-SNAP is closed.
- SBRT Lung is an example of integration with a stand-alone CLI application. The Console calls the executable in accordance with its CLI, passes the loaded images, ROIs, etc., and visualizes the result when the application finishes.
- EGFRvIII Surrogate Index provides the tightest possible integration with the CaPTk Console. All data (loaded images, ROIs, etc.) is passed in-memory and the visualization of results happens instantaneously.
- All functions available in native C++ libraries (ITK (https://itk.org/), OpenCV (http://opencv.org/), VTK (http://www.vtk.org/)) are available for native applications to use.
You need to compile CaPTk from source to proceed. The following sections give that information.
--------
\section tr_sourceBuid Building CaPTk from Sources
\subsection tr_sb_prerequisites Prerequisites
Source code for the CaPTk graphical interface and applications is distributed for sites that wish to examine the code, collaborate with CBICA on future development, or build it themselves for compatibility.
For end-to-end build examples for Windows/Linux (Ubuntu Xenial)/macOS (10.13), please see our [Azure Pipelines configuration file](https://github.com/CBICA/CaPTk/blob/master/azure-pipelines.yml).
Before building CaPTk, the following software libraries need to be installed. <strong>Please note</strong> that to build on Windows, CMake needs to be used with an appropriate compiler (the Win64 version of Visual Studio is recommended), and the selected solution platform needs to match that of the dependent libraries.
| Package | Version | Description |
|-----------------------------------|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------|
| Archiver or Git | N.A. | - **Windows**: Built-in unzip tool<br>- **Linux**: [gzip](https://www.gzip.org/) |
| C++ compiler | C++11 compliance needed | - **Windows**: Visual Studio 2015/2017<br>- **Linux**: GCC/4.9.2-7.4.0<br>- **macOS**: CLang 10.0.0, LLVM 6.0.1 |
| OpenGL | 3.2+ (hardware-dependent) | - **Windows/macOS**: Update graphics drivers<br>- **Ubuntu**: apt-get install libgl-dev<br>- **CentOS**: yum install mesa-libGL-devel<br>- [Details in FAQ](FAQ.html#gs_FAQ_3) |
| [CMake](https://cmake.org/files/) | 3.10 - 3.12 | To configure the CaPTk compilation along with its dependencies (via Superbuild). |
| X11 [Linux-only] | N.A. | - **Ubuntu**: apt-get install libxkbcommon-x11-0<br>- **CentOS**: yum install libXt-devel |
| dos2unix [Linux-only] | N.A. | - **Ubuntu**: apt-get install dos2unix<br>- **CentOS**: yum install dos2unix |
| [Doxygen](http://doxygen.nl/) | 1.8+ | OPTIONAL: For documentation only. |
Ensure all dependencies are met before proceeding.
--------
\subsection tr_sb_actualBuild Build
Please follow the commands below in a shell/terminal (e.g., Bash (http://www.gnu.org/software/bash/)). They will configure and build CaPTk using GNU Make (http://www.gnu.org/software/make/). The main CMake configuration file (CMakeLists.txt) is located in the root directory of the package. <b>Windows</b> users should follow the equivalent steps in the graphical interface or from PowerShell.
Fetch the source files and create the build directory:
\verbatim
git clone https://github.com/CBICA/CaPTk.git
git lfs fetch --all # downloads all the precompiled binaries for different applications and Qt
cd CaPTk
mkdir build # this is where we will build all the binaries
cd build
cmake .. # configure the superbuild first - builds ITK, VTK and OpenCV based on specific Qt version which is downloaded
make -j # multi-threaded compilation: use 'make -j${N}' to specify number of threads to use; on Windows, compile the ALL_BUILD project
cmake -DCMAKE_INSTALL_PREFIX=${path_to_where_you_want_to_install} .. # configure CaPTk
make -j # multi-threaded compilation: use 'make -j${N}' to specify number of threads to use; on Windows, compile the ALL_BUILD project
make install/strip # installs CaPTk and all its files to ${path_to_where_you_want_to_install}; on Windows, compile the INSTALL project
\endverbatim
--------
\section tr_cppIntegration Integrating your C++ application into CaPTk
Let’s say the name of your application is <b>yourSourceApp</b>. The following points highlight the steps required to integrate your application into CaPTk:
<ul style="list-style-type:disc">
<li>Input and Output of files is handled by the graphical layer, which takes into account file handling, directory sorting, etc. so you can worry about the important stuff, i.e., your algorithm. </li>
<li>We are currently developing actively on MSVC/12 - Visual Studio 2013 (https://www.microsoft.com/en-US/download/details.aspx?id=44914) and GCC/4.9.2. Plans are in motion to migrate to MSVC/14 - Visual Studio 2015 (https://www.visualstudio.com/en-us/products/visual-studio-express-vs.aspx) and GCC/5.x very soon, upon which all C++11 features (https://en.wikipedia.org/wiki/C%2B%2B11) will be enabled by default. </li>
<li>The graphical layer reads DICOM or NIfTI files and passes them on as ITK-Images (http://www.itk.org/Doxygen/html/classitk_1_1Image.html). The data type defaults to the same type in which the original image was written. This is done using the ITK-ImageIOBase class (http://www.itk.org/Doxygen/html/classitk_1_1ImageIOBase.html). </li>
<li>Your application should read either a single ITK-Image or a vector (http://www.cplusplus.com/reference/vector/vector/) of ITK-Images and give output in a similar format (either a single ITK-Image or a vector of ITK-Images). </li>
<li><b>yourSourceApp</b> should be structured as a single class, i.e., a collection of <b>yourSourceApp.h</b> and <b>yourSourceApp.cpp</b> (ensure that the extensions match, otherwise CMake won’t pick them up as valid applications). Put the class implementation in the folder <b>$PROJECT_SOURCE_DIR/src/applications</b> and let CMake pick your application up in the next compilation step; a minimal skeleton is sketched after this list. </li>
<li>If your algorithm does any kind of compilation or builds dependencies (libraries, executables, etc.), ensure that all the new files (non-source code files) are generated out-of-source. This is done to ensure that packaging the final application can happen in a concurrent manner.</li>
<li>CaPTk is able to handle application-specific dependencies well. For example, if you prefer the SVM implementation of LibSVM (https://github.com/cjlin1/libsvm) rather than that of OpenCV (http://docs.opencv.org/2.4/modules/ml/doc/support_vector_machines.html), you can simply create a new folder called /yourSourceApp_includes under <b>$PROJECT_SOURCE_DIR/src/applications</b> and let CMake take care of the configuration. <b>yourSourceApp</b> will see the includes as if they were present in the same folder (i.e., there is no need to specify the folder when including these dependencies). </li>
</ul>
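As a point of reference for the single-class structure described in the list above, a hypothetical minimal skeleton (all names are placeholders, not existing CaPTk code) could look like this:
\verbatim
// yourSourceApp.h -- hypothetical skeleton: one class that consumes ITK
// images handed over by the graphical layer and returns an ITK image.
#pragma once
#include "itkImage.h"
#include <vector>

class yourSourceApp
{
public:
  using ImageType = itk::Image<float, 3>;

  // Run the algorithm on one or more co-registered input images.
  ImageType::Pointer Run(const std::vector<ImageType::Pointer> &inputImages);
};

// yourSourceApp.cpp -- trivial stub so the skeleton compiles; the actual
// algorithm would replace the body of Run().
#include "yourSourceApp.h"

yourSourceApp::ImageType::Pointer
yourSourceApp::Run(const std::vector<ImageType::Pointer> &inputImages)
{
  if (inputImages.empty())
  {
    return ImageType::Pointer(); // null result when nothing was passed in
  }
  return inputImages.front(); // placeholder: return the first input unchanged
}
\endverbatim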
--------
\section tr_pyIntegration Integrating your Python application into CaPTk
Let’s say the name of your application is <b>yourPythonApp</b>. The following points give a brief, high-level overview of the steps required to integrate it with CaPTk:
<ul style="list-style-type:disc">
<li>Input and Output of files is handled by the graphical layer, which takes into account file handling, directory sorting, etc. so you can worry about the important stuff, i.e., your algorithm. </li>
<li>For interpreted languages such as Python, the graphical layer writes a NIfTI file to disk and passes it to your application; this incurs disk I/O and is admittedly inefficient, but there is currently no way around it. </li>
<li>Your application should be structured as a single <b>yourPythonApp.py</b> script (can be a class or pipeline). This should be present in the <b>$PROJECT_SOURCE_DIR/src/applications/py</b> directory. Your application should also have a file analogous to the setup.py (https://pythonhosted.org/an_example_pypi_project/setuptools.html) file used in Python projects by the name <b>yourPythonApp_setup.py</b>. </li>
<li>The Python configuration creates a virtual environment in the <b>$PROJECT_BINARY_DIR/py</b> directory, where all the dependencies are constructed. </li>
<li>Your application should expose a proper command-line interface (at the very least, verbose parameters throughout). </li>
<li>Add your new application under <b>APP_LIST_PY_GUI</b> in <code>CaPTk_source_code/src/applications/CMakeLists.txt</code> and then make the corresponding additions in the <code>fMainWindow</code> and <code>ui_fMainWindow</code> classes (take LIBRA as a starting point; a hypothetical sketch of such a hook follows this list).</li>
<li>Once you have a menu item and the corresponding function call for your application, you can recompile CaPTk and it will pick up your application.</li>
</ul>
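As an illustration of the kind of hook mentioned in the last two items, the following hypothetical Qt/C++ sketch (function name and command-line flags are made up; this is not existing CaPTk code) shows how a menu action's slot could launch a Python application through QProcess:
\verbatim
// Hypothetical sketch: launching a Python application from C++/Qt code,
// roughly the pattern a menu hook in the graphical layer would follow.
#include <QProcess>
#include <QString>
#include <QStringList>

void launchYourPythonApp(const QString &inputNifti, const QString &outputDir)
{
  QStringList args;
  // "-i"/"-o" are illustrative flags; use whatever CLI yourPythonApp.py exposes.
  args << "yourPythonApp.py" << "-i" << inputNifti << "-o" << outputDir;

  // Fire-and-forget launch; connect to QProcess signals instead if the
  // console needs to read results back when the script finishes.
  QProcess::startDetached("python", args);
}
\endverbatim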
--------
\htmlonly
<div align="right"><a href="Download.html"><b>Next (Download)</b></a></div>
\endhtmlonly
--------
*/