
Shifu 0.2.7 Distributed LR Algorithm Improvement (Experimental)

Zhang Pengshan (David) edited this page Aug 31, 2015 · 1 revision
  1. Distributed LR training is added; enable it by setting 'Algorithm' in the Train step to 'LR'.

  2. 'LearningRate' and 'RegularizedConstant' are the two parameters in the LR configuration.

  3. LR is supported throughout the whole Shifu pipeline, except the 'export' step.

  4. TODO: performance comparison.
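The settings above map onto the train section of ModelConfig.json. A minimal sketch is below; the exact key names and nesting follow the usual Shifu ModelConfig layout, and the parameter values shown are placeholder assumptions, not recommended defaults:

```json
{
  "train": {
    "algorithm": "LR",
    "params": {
      "LearningRate": 0.1,
      "RegularizedConstant": 0.01
    }
  }
}
```

With 'algorithm' set to 'LR', running the Train step picks up the distributed LR trainer instead of the default NN trainer.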
