This is an old implementation from 2013, provided as is; the package has not been updated since.
This software package provides a sample implementation of accelerated Proximal Stochastic Dual Coordinate Ascent with L1-L2 regularization, described in [1], for various loss functions. Please cite the paper if you find the software useful.
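For orientation, the L1-L2 regularized objective solved in [1] has the following general form (a sketch following the paper's setup; here phi_i is the loss on example i, lambda the L2 weight, and sigma the L1 weight):

```latex
\min_{w \in \mathbb{R}^d} \; P(w) \;=\;
  \frac{1}{n} \sum_{i=1}^{n} \phi_i\!\left(x_i^\top w\right)
  \;+\; \frac{\lambda}{2}\,\|w\|_2^2
  \;+\; \sigma\,\|w\|_1
```

The L1 term promotes sparse solutions, while the strongly convex L2 term is what the dual coordinate ascent analysis in [1] relies on.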
The code has been tested on Linux, but it should compile on other Unix systems with g++ and make.
git clone https://github.com/TongZhang-ML/sparseSDCA.git
The source files are located in the src directory.
To compile:
cd src/
make
This should produce two binary programs, train and predict:
- train: train and save the model;
- predict: apply an already trained model to test data.
Use train -h to see the command line options for the training program, and predict -h to see the command line options for the prediction program.
API documentation is in html/index.html.
To try the examples, go to the example1 and example2 subdirectories and type run.sh.
The software is distributed under the MIT license. Please read the file LICENSE.
[1] Shai Shalev-Shwartz and Tong Zhang. Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization, Mathematical Programming, 155:105-145, 2016.
[2] Shai Shalev-Shwartz and Tong Zhang. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization, JMLR 14:567-599, 2013.