
Distributed Logistic Regression Training in Shifu


Logistic Regression models are natively supported in Shifu. Training is based on gradient descent and follows the same process as Neural Network training in Shifu; for details on that process, please refer to Distributed Neural Network Training in Shifu.
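For orientation, here is a minimal sketch of the per-epoch gradient-descent update that logistic regression training is built on. The class and method names are illustrative only and do not correspond to Shifu's actual code; the intercept is assumed to be folded into the feature vector as a constant-1 column.

```java
// Minimal sketch: one full-batch gradient-descent epoch for logistic
// regression. Illustrative only; not Shifu's actual classes or API.
public class LogisticRegressionSketch {

    /** Standard logistic (sigmoid) function. */
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /**
     * One full-batch weight update.
     * @param weights      current model weights, updated in place
     * @param inputs       feature vectors, one row per record
     * @param targets      0/1 labels, one per record
     * @param learningRate corresponds to params::LearningRate
     */
    static void epoch(double[] weights, double[][] inputs,
                      double[] targets, double learningRate) {
        double[] gradient = new double[weights.length];
        for (int r = 0; r < inputs.length; r++) {
            // Linear score z = w . x for this record.
            double z = 0.0;
            for (int j = 0; j < weights.length; j++) {
                z += weights[j] * inputs[r][j];
            }
            // Cross-entropy gradient contribution: (prediction - label) * x.
            double error = sigmoid(z) - targets[r];
            for (int j = 0; j < weights.length; j++) {
                gradient[j] += error * inputs[r][j];
            }
        }
        // Descend along the averaged gradient.
        for (int j = 0; j < weights.length; j++) {
            weights[j] -= learningRate * gradient[j] / inputs.length;
        }
    }
}
```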

Configurations in Logistic Regression Model Training

  "train" : {
    "baggingNum" : 5,
    "baggingWithReplacement" : true,
    "baggingSampleRate" : 1.0,
    "validSetRate" : 0.1,
    "trainOnDisk" : false,
    "numTrainEpochs" : 1000,
    "algorithm" : "LR",
    "params" : {
      "LearningRate" : 0.1,
      "Propagation" : "Q",
      "L1orL2" : "NONE",
      "RegularizedConstant": 0
    }
  },
  • params::LearningRate: the learning rate; values between 0.1 and 2 are usually a good choice
  • params::Propagation: the propagation method used to update weights from computed gradients: Q for QuickPropagation, B for BackPropagation, R for ResilientPropagation; LR training reuses Shifu's NN propagation mechanisms
  • params::L1orL2: the regularization type: NONE, L1, or L2 (see the sketch after this list)
  • params::RegularizedConstant: the regularization constant; default value is 0
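As a rough illustration of how L1orL2 and RegularizedConstant could enter the weight update, the sketch below applies the standard L1 and L2 penalty terms to a single weight's gradient. Here lambda stands in for RegularizedConstant; the method name and structure are assumptions for illustration, not Shifu's exact internals.

```java
// Sketch: standard L1/L2 penalty terms added to a per-weight gradient.
// Illustrative only; Shifu's internal formulation may differ.
public class RegularizationSketch {

    static double regularize(double gradient, double weight,
                             String l1orL2, double lambda) {
        switch (l1orL2) {
            case "L1":
                // L1 adds lambda * sign(w): pushes small weights to zero.
                return gradient + lambda * Math.signum(weight);
            case "L2":
                // L2 adds lambda * w: shrinks all weights (weight decay).
                return gradient + lambda * weight;
            default: // "NONE"
                return gradient;
        }
    }
}
```

With RegularizedConstant left at its default of 0, both penalty terms vanish and the update reduces to plain gradient descent.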