<table>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="deephyper2018">1</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash, R. Egele, M. Salim, V. Vishwanath, and S. M. Wild.
DeepHyper: Scalable Asynchronous Neural Architecture and
Hyperparameter Search for Deep Neural Networks, 2018.
DeepHyper is a scalable automated machine learning package with two
components: 1) reinforcement-learning-based neural architecture search for
automatically constructing high-performing deep neural network architectures;
2) asynchronous model-based search for finding high-performing
hyperparameters for deep neural networks.
[ <a href="pbalapra-software_bib.html#deephyper2018">bib</a> |
<a href="https://github.com/deephyper/deephyper">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="supuds2017">2</a>]
</td>
<td class="bibtexitem" align="justify">
P. Agarwal, P. Balaprakash, S. Leyffer, and S. M. Wild.
SPUDS: Smart Pipeline for Urban Data Science, 2017.
SPUDS is a machine-learning pipeline that builds classification models
to rank the food establishments most at risk for the types of violations most
likely to spread food-borne illness. The pipeline balances the training data
with resampling techniques, identifies the most important factors leading to
critical violations via variable importance and variable selection methods,
evaluates several state-of-the-art learning algorithms with cross-validation,
and finally combines the best-performing ones via bagging. It is a customized
version of AutoMOMML, designed exclusively for the City of Chicago Smart Data
Platform, and implemented in Python.
[ <a href="pbalapra-software_bib.html#supuds2017">bib</a> |
<a href="https://xgitlab.cels.anl.gov/uda/ml-city">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="automomml2016">3</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash, A. Tiwari, S. M. Wild, L. Carrington, and P. D. Hovland.
AutoMOMML: Automatic Multi-objective Modeling with Machine
Learning, 2016.
AutoMOMML is an end-to-end, machine-learning-based framework to build
predictive models for objectives such as performance and power. The
framework adopts statistical approaches to reduce the modeling complexity and
automatically identifies and configures the most suitable learning algorithm
to model the required objectives based on hardware and application
signatures.
[ <a href="pbalapra-software_bib.html#automomml2016">bib</a> |
<a href="https://xgitlab.cels.anl.gov/pbalapra/automomml">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="fpgaopt2016">4</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash, A. Mametjanov, C. Choudary, P. D. Hovland, S. M. Wild, and
G. Sabin.
FPGAtuner: Autotuning FPGA Design Parameters for Performance and
Power, 2016.
FPGAtuner is a machine-learning-based approach to tuning FPGA design parameters. It
performs sampling-based reduction of the parameter space and guides the
search toward promising parameter configurations.
[ <a href="pbalapra-software_bib.html#fpgaopt2016">bib</a> |
<a href="https://xgitlab.cels.anl.gov/pbalapra/fpgaopt">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="surf2015">5</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash.
SuRF: Search using Random Forest, 2015.
SuRF is a model-based search module for automatic performance tuning.
It adopts the random forest supervised learning algorithm to model
performance as a function of the input parameters within the search. SuRF
samples a small number of parameter configurations, empirically evaluates the
corresponding code variants to obtain their performance metrics, and fits a
surrogate model over the input-output space. The surrogate model is then
iteratively refined by obtaining new output metrics at unevaluated input
configurations that the model predicts to be high-performing. SuRF is
implemented in Python and available with the Orio autotuning framework.
[ <a href="pbalapra-software_bib.html#surf2015">bib</a> |
<a href="https://github.com/brnorris03/Orio/tree/master/orio/main/tuner/search/mlsearch">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="spapt2011">6</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash, S. M. Wild, and B. Norris.
SPAPT: Search Problems in Automatic Performance Tuning,
2011.
SPAPT is a set of extensible and portable search problems in automatic
performance tuning whose goal is to aid in the development and improvement of
search strategies and performance-improving transformations. It contains
representative implementations of a number of lower-level, serial
performance-tuning tasks in scientific applications and is available with the
Orio autotuning framework.
[ <a href="pbalapra-software_bib.html#spapt2011">bib</a> |
<a href="https://github.com/brnorris03/Orio/tree/master/testsuite/SPAPT">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="irace2010">7</a>]
</td>
<td class="bibtexitem" align="justify">
M. López-Ibáñez, J. Dubois-Lacoste, T. Stützle, M. Birattari, E. Yuan, and
P. Balaprakash.
The irace Package: Iterated Race for Automatic Algorithm
Configuration, 2010.
The irace package implements the iterated racing procedure, which is
an extension of the Iterated F-race procedure. Its main purpose is to
automatically configure optimization algorithms by finding the most
appropriate settings given a set of instances of an optimization problem. It
builds upon the race package by Birattari, and it is implemented in R.
[ <a href="pbalapra-software_bib.html#irace2010">bib</a> |
<a href="http://cran.r-project.org/web/packages/irace/">http</a> ]
</td>
</tr>
<tr valign="top">
<td align="right" class="bibtexnumber">
[<a name="els-ptsp2009">8</a>]
</td>
<td class="bibtexitem" align="justify">
P. Balaprakash, M. Birattari, and T. Stützle.
ELS-PTSP: Estimation-based Local Search for the Probabilistic
Traveling Salesman Problem, 2009.
This software package provides a high-performance implementation of
the estimation-based iterative improvement algorithm to tackle the
probabilistic traveling salesman problem. A key novelty of the algorithm is
that the cost difference between two neighboring solutions is estimated by
partial evaluation combined with adaptive and importance sampling. The package
is developed in C with the GNU Scientific Library under Linux.
[ <a href="pbalapra-software_bib.html#els-ptsp2009">bib</a> |
<a href="https://github.com/pbalapra/els-ptsp">http</a> ]
</td>
</tr>
</table>