Dear Wiki user,
You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.
The "MultiLayerPerceptron" page has been changed by YexiJiang:
https://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=27&rev2=28
== How to use Multilayer Perceptron in Hama? ==
 MLP can be used for both regression and classification. For both tasks, we first need to initialize the MLP model by specifying its parameters.
 For training, the following things need to be specified (a code sketch follows the list):
  * The '''''model topology''''': the number of neurons in each layer (besides the bias neuron), whether the current layer is the final layer, and the type of squashing function.
  * The '''''learning rate''''': specifies how aggressively the model learns from the training instances. A large value can accelerate the learning process but decreases the chance of convergence. Recommended range: (0, 0.5].
  * The '''''momentum weight''''': similar to the learning rate, a large momentum weight can accelerate the learning process but decreases the chance of convergence. Recommended range: (0, 0.5].
  * The '''''regularization weight''''': a large value can decrease the variance of the model but increases the bias at the same time. As this parameter is sensitive, it is better to set it to a very small value, say 0.001.
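 As a minimal sketch, the settings above might be expressed as follows. The `addLayer` and setter method names, and the `FunctionFactory` helper, are assumptions based on Hama's ML ANN package and are not taken from this page:

{{{
// A minimal sketch; class and method names are assumptions based on the
// Hama ML ANN API (org.apache.hama.ml), not confirmed by this page.
SmallLayeredNeuralNetwork ann = new SmallLayeredNeuralNetwork();
// Topology: 3 input neurons, one hidden layer with 4 neurons, 1 output neuron;
// the boolean marks whether the layer is the final (output) layer.
ann.addLayer(3, false, FunctionFactory.createDoubleFunction("Sigmoid"));
ann.addLayer(4, false, FunctionFactory.createDoubleFunction("Sigmoid"));
ann.addLayer(1, true, FunctionFactory.createDoubleFunction("Sigmoid"));
ann.setLearningRate(0.1);           // recommended in (0, 0.5]
ann.setMomemtumWeight(0.1);         // recommended in (0, 0.5]; spelling assumed to match Hama's API
ann.setRegularizationWeight(0.001); // keep very small, e.g. 0.001
}}}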
 ||<rowbgcolor="#DDDDDD"> Parameter || Description ||
 || model path || The path specifying the location to store the model. ||
 || learningRate || Controls the aggressiveness of learning. A big learning rate can accelerate the training speed,<<BR>> but may also cause oscillation. Typically in range (0, 1). ||
 || regularization || Controls the complexity of the model. A large regularization value can make the weights between<<BR>> neurons small and increase the generalization of the MLP, but it may reduce the model precision.<<BR>> Typically in range (0, 0.1). ||
 || momentum || Controls the speed of training. A big momentum can accelerate the training speed, but it may<<BR>> also mislead the model update. Typically in range [0.5, 1). ||
 || squashing function || The activation function used by the MLP. Candidate squashing functions: ''sigmoid'', ''tanh''. ||
 || cost function || Evaluates the error made during training. Candidate cost functions: ''squared error'', ''cross entropy (logistic)''. ||
 || layer size array || An array specifying the number of neurons (excluding bias neurons) in each layer (including the input and output layers). ||
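 Continuing the sketch above, the remaining table entries could be set as follows (again assuming the Hama ML API; the model path is a hypothetical example):

{{{
// Cost function and model path; names are assumed from the Hama ML API.
ann.setCostFunction(FunctionFactory.createDoubleDoubleFunction("SquaredError"));
ann.setModelPath("/tmp/mlp.model"); // hypothetical location to store the model
}}}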
 The following sample code shows how to train a model.
{{{
SmallLayeredNeuralNetwork ann = new SmallLayeredNeuralNetwork();
// ... topology and parameter setup as sketched above (elided in the original diff) ...
Map<String, String> trainingParameters = new HashMap<String, String>();
trainingParameters.put("training.batch.size", "300"); // the number of training instances read per update
ann.train(new Path(trainingDataPath), trainingParameters);
}}}

 The parameters related to training are listed as follows (all of these parameters are optional):
 ||<rowbgcolor="#DDDDDD"> Parameter || Description ||
 || training.max.iterations || The maximum number of iterations (a.k.a. epochs) for training. ||
 || training.batch.size || As mini-batch update is leveraged for training, this parameter specifies how many training instances are used in one batch. ||
 || convergence.check.interval || If this parameter is set, the model is checked whenever the iteration count is a multiple of this value. If the convergence condition is satisfied, training terminates immediately. ||
 || tasks || The number of concurrent tasks. ||
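 For example, a parameter map covering all four optional settings might look like the following (the values are illustrative only):

{{{
// Illustrative values for the optional training parameters listed above.
Map<String, String> trainingParameters = new HashMap<String, String>();
trainingParameters.put("training.max.iterations", "10000");   // stop after at most 10000 epochs
trainingParameters.put("training.batch.size", "300");         // instances per mini-batch update
trainingParameters.put("convergence.check.interval", "1000"); // check convergence every 1000 iterations
trainingParameters.put("tasks", "3");                         // number of concurrent tasks
}}}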
=== Two-class learning problem ===
To be added...
