This page lists the parameters that affect the operation of the neural net (in no particular order), the operational modes each applies to, and the ways each can be set or modified:

Alpha momentum (TRAINING mode)

If the backpropagation algorithm calculates a weight change in the same direction as the most recent change, then the alpha parameter will amplify the change. This can be helpful during early stages of training, but may need to be lowered as the net learns. If not specified, the default is 0.1.
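
In the standard momentum formulation of the weight update, alpha scales the previous weight change and adds it to the new one. A minimal sketch (variable names here are illustrative, not neural2d's internals; eta is the learning rate, described below):

 // Consecutive changes in the same direction reinforce each other;
 // alpha = 0.0 disables momentum entirely.
 float newDeltaWeight = eta * gradient + alpha * oldDeltaWeight;
 weight += newDeltaWeight;
 oldDeltaWeight = newDeltaWeight;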

Programmatically -- You can change the alpha parameter at any time by setting the Net::alpha member, e.g.:

 myNet.alpha = 0.0;

GUI -- You can change this field at any time.

Changes to the alpha parameter will take effect at the next input sample.

Color Channel (TRAINING, VALIDATE, TRAINED modes)

This specifies how the color pixels of the input images are converted to floating point values for the input neurons. The current choices are to use the red, green, or blue channel exclusively, or to convert the color values to monochrome. If not specified, the default is monochrome.
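
As an illustration, a monochrome conversion is typically a weighted sum of the three channels. The sketch below assumes the common Rec. 601 luminance weights and a 0..1 output range; neural2d's exact weighting and scaling may differ:

 // Hypothetical BW conversion: weighted sum of R, G, B, scaled to 0..1.
 float toMono(unsigned char r, unsigned char g, unsigned char b)
 {
     return (0.299f * r + 0.587f * g + 0.114f * b) / 255.0f;
 }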

Topology config file -- Add the "channel" parameter to the line that defines the input layer. The options are R, G, B, or BW. For example:

 input size 64x64 channel BW

Programmatically -- The color conversion can be specified prior to calling Net::feedForward() for the first time by setting the Net::colorChannel member to NNet::R, NNet::G, NNet::B, or NNet::BW. For example:

 myNet.colorChannel = NNet::BW;

The results are undefined if the channel parameter is programmatically changed after the first call to Net::feedForward().

GUI -- You can select the input color channel. If the color channel is changed from the GUI, the neural2d program will flush any cached pixel data so all the images will be re-evaluated as needed.

Convolution filter matrix (TRAINING, VALIDATE, TRAINED modes)

Topology config file -- Any layer can be configured as a convolution filter by specifying the convolution matrix (kernel) with the "convolve" parameter.

For example, to apply a convolution matrix of

  1  0 -1  
  0  0  0  
 -1  0  1

specify the convolve parameter like this:

 layer2 size 32x32 from input convolve {{1,0,-1},{0,0,0},{-1,0,1}}

The convolve parameter cannot be combined with a radius or tf parameter.

Internally, connections are made from source neurons in a pattern defined by the convolution matrix dimensions. Connections are made only for nonzero kernel elements. The weights of the connections are initialized to the kernel element values, and remain constant throughout the life of the net.
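
The effect is equivalent to computing a weighted sum over the kernel footprint for each destination neuron. A minimal sketch, assuming a hypothetical source(x, y) accessor for the source layer's outputs:

 // Apply the 3x3 kernel above to the destination neuron at (x, y).
 // Zero kernel elements create no connection, so they are skipped.
 float kernel[3][3] = { { 1, 0, -1 }, { 0, 0, 0 }, { -1, 0, 1 } };
 float sum = 0.0f;
 for (int ky = 0; ky < 3; ++ky) {
     for (int kx = 0; kx < 3; ++kx) {
         if (kernel[ky][kx] != 0.0f) {
             sum += kernel[ky][kx] * source(x + kx - 1, y + ky - 1);
         }
     }
 }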

Convolution networking kernel size and depth (TRAINING, VALIDATE, TRAINED modes)

Topology config file -- The convolution layer depth is specified by an integer and an asterisk in front of the size parameter. The kernel size is specified as width and height after the convolve parameter. For example, to train 20 kernels of size 5x5:

 input size 64x64
 layer2 size 20*64x64 from input convolve 5x5
 . . .
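
In the usual convolutional-network interpretation of this configuration, each of the 20 depth planes is a feature map whose 64x64 neurons share one trainable 5x5 kernel, so each plane trains 25 weights rather than a separate set per neuron. A sketch, again assuming a hypothetical source(x, y) accessor:

 #include <cmath>

 // One of the 20 trainable kernels, shared by every neuron in its
 // feature map. Unlike the fixed "convolve {...}" filter above, these
 // weights are updated during backpropagation.
 float kernel[5][5];

 float activate(int x, int y)
 {
     float sum = 0.0f;
     for (int ky = 0; ky < 5; ++ky) {
         for (int kx = 0; kx < 5; ++kx) {
             sum += kernel[ky][kx] * source(x + kx - 2, y + ky - 2);
         }
     }
     return tanhf(sum);   // assuming the default tanh transfer function
 }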

Dynamic eta (TRAINING mode)

If enabled, the neural net program will attempt to modify eta automatically during training to achieve the best possible learning rate. The default is disabled.
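
The adjustment algorithm is internal to neural2d. Purely as an illustration of the idea, one common heuristic for adapting a learning rate looks like this (an assumption, not necessarily what neural2d does):

 // Speed up while the error is falling; back off sharply after an overshoot.
 if (recentError < previousError) {
     eta *= 1.05;
 } else {
     eta *= 0.5;
 }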

Programmatically -- You can enable or disable this option at any time by setting the Net::dynamicEtaAdjust parameter, e.g.:

 myNet.dynamicEtaAdjust = true;

GUI -- There is a checkbox for this parameter.

Changes to the dynamic eta parameter will take effect at the next input sample.

Eta learning rate (TRAINING mode)

The default is 0.1. Higher values cause greater weight adjustments during backpropagation. This can be helpful in the early stages of training, but the parameter may need to be adjusted (usually lowered) as the net learns.

Programmatically -- You can set the eta parameter at any time by changing the Net::eta member, e.g.:

 myNet.eta = 0.01;

GUI -- You can change the eta field at any time.

Changes in the eta parameter will take effect at the next input sample.

Lambda regularization factor (TRAINING mode)

This is an experimental implementation of a regularization algorithm. Use at your own risk. The optimal values are unknown. Setting this parameter to 0.0 disables the regularization calculation. If not specified, the default is 0.0.
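
For reference, standard L2 regularization folds a penalty proportional to each weight into the update; neural2d's experimental implementation may differ in detail:

 // Sketch of standard L2 weight decay. With lambda = 0.0 the extra term
 // vanishes and the weight updates normally.
 weight -= eta * (gradient + lambda * weight);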

Programmatically -- You can change the lambda parameter at any time by setting the Net::lambda member, e.g.:

 myNet.lambda = 0.0;

GUI -- You can change this field at any time.

Changes to the lambda parameter will take effect at the next input sample.

Pooling operator and size (TRAINING, VALIDATE, TRAINED modes)

Topology config file -- Specify the "pool" parameter, followed by "avg" or "max" and the XxY size of the pooling window. For example,

  layerPool size 20*16x16 from layerConv pool max 4x4

A pooling layer can take its input from a layer of any XxY size that has the same depth as the pooling layer. It may feed a pooling or convolution network layer of any XxY size with the same depth, or may fully forward-connect to a regular layer of any size with depth 1.
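
A sketch of the pooling operation for one destination neuron, assuming a hypothetical source(x, y) accessor; "avg" would average the window's values instead of taking the maximum:

 #include <algorithm>   // std::max

 // Max-pool a 4x4 window of the source layer into the destination
 // neuron at (x, y).
 float pooled = source(4 * x, 4 * y);
 for (int py = 0; py < 4; ++py) {
     for (int px = 0; px < 4; ++px) {
         pooled = std::max(pooled, source(4 * x + px, 4 * y + py));
     }
 }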

Projection shape (TRAINING, VALIDATE, TRAINED modes)

This parameter defines how sparse connections are made to a neuron from its source neurons. Sparse connections are specified with the radius parameter in the topology config file. If the projection shape is not specified, the radius parameter specifies the major and minor radii of an axis-aligned ellipse. If the projection shape is set to rectangular, the radius parameter defines a rectangle. If not changed, the default is elliptical.

See the README file for illustrations of various elliptical projection patterns.
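
In other words, a source neuron at offset (dx, dy) from the destination neuron is connected only if it falls inside the projection shape. A sketch of the two membership tests, assuming radii rx and ry greater than zero:

 // Elliptical (the default): inside an axis-aligned ellipse with radii rx, ry.
 bool insideEllipse = (float)(dx * dx) / (rx * rx)
                    + (float)(dy * dy) / (ry * ry) <= 1.0f;

 // Rectangular: inside an rx-by-ry axis-aligned rectangle.
 bool insideRectangle = dx >= -rx && dx <= rx && dy >= -ry && dy <= ry;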

Programmatically -- Currently this is a compile-time setting in Net::Net() and cannot be changed at runtime. A future enhancement may allow this parameter to be specified in the topology config file.

Recent average error smoothing factor (TRAINING, VALIDATE modes)

For some calculations, we use a running average of the net error, averaged over the last N input samples, where N (called the smoothing factor) defaults to 125. The running average helps us humans see how the net's training is progressing. For VALIDATE mode, however, you'll typically want to set this value to 1 so you can see the exact output error for every input sample. This parameter has no effect in TRAINED mode, because the target output values are unknown and the output errors cannot be reported.
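
One way to implement such a running average is an exponentially weighted update that reduces to the exact per-sample error when N is 1. A sketch (the variable names are illustrative):

 // Running average over roughly the last N samples, where N is the
 // smoothing factor. With N = 1 this reports exactly the current error.
 recentAverageError = (recentAverageError * (N - 1.0) + currentError) / N;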

Programmatically -- You can change the smoothing factor parameter at any time by setting the Net::recentAverageSmoothingFactor member, e.g.:

 myNet.recentAverageSmoothingFactor = 1.0;

GUI -- You can change this field at any time.

Changes to the smoothing factor parameter will take effect at the next input sample.

Repeat input samples (TRAINING, VALIDATE, TRAINED modes)

This parameter specifies what happens after the neural net has consumed all the input samples specified in the input sample config file. If repeat is enabled, the program will rewind the list and continue processing input images indefinitely. If disabled, the program will stop after the last input sample. Typically you'll want to repeat the input samples during TRAINING mode until the net is trained, but in VALIDATE or TRAINED mode, there's no reason to repeat the inputs.

Programmatically -- You can change the repeat flag by setting the Net::repeatInputSamples member at any time before exhausting the set of input samples, e.g.:

 myNet.repeatInputSamples = false;

GUI -- You can change this setting at any time by selecting the appropriate run mode.

Reporting interval (TRAINING, VALIDATE, TRAINED modes)

This parameter specifies whether the network outputs should be reported for every input sample or only for every Nth sample, which reduces screen clutter during TRAINING mode. During VALIDATE and TRAINED modes of operation, you'll usually want to set this parameter to 1 so you can see the results of every input sample.

Programmatically -- You can change the reporting interval parameter by setting the Net::reportEveryNth member at any time, e.g.:

 myNet.reportEveryNth = 1;

GUI -- You can change this field at any time.

Changes to this parameter will take effect at the next input sample.

Shuffle input samples (TRAINING, VALIDATE, TRAINED modes)

This parameter specifies whether the set of input samples should be randomized if the repeat parameter is enabled. The default is to enable shuffling. Also see the Repeat parameter.

Programmatically -- You can change the shuffle flag by setting the Net::shuffleInputSamples member at any time before the set of input samples is exhausted, e.g.:

 myNet.shuffleInputSamples = true;

GUI -- You can change this setting at any time by selecting the option "Run inputs; shuffle; repeat."

Changes to this parameter will take effect at the next input sample.

Transfer function (TRAINING, VALIDATE, TRAINED modes)

This parameter selects which transfer function the neurons in a layer will use. The choices are tanh, logistic, linear, ramp, or gaussian. See TransferFunctions for more information about each one. If not specified, the default is tanh.
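
As an illustration, the default tanh transfer function and its derivative (used during forward propagation and backpropagation, respectively) look like this:

 #include <cmath>

 float transferTanh(float x)      { return tanhf(x); }   // output in (-1, 1)
 float transferTanhDeriv(float x) { float t = tanhf(x); return 1.0f - t * t; }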

Topology config file -- The transfer function can be specified per-layer with the tf parameter, e.g.:

 output size 10 from layer6 tf linear

All the neurons in the layer will use the same transfer function. The tf parameter has no effect on the input layer. The tf parameter cannot be combined with a convolve parameter.