Select the *Quick* tab of the *SANN - Custom Neural Network* dialog box
to access the options described here. For information on the options
that are common to all tabs (located at the top and on the lower-right
side of the dialog box), see *SANN - Custom Neural Network*.

**Network type.** Use the options in this
group box to specify the type of network (multilayer perceptron or radial
basis function).

**Multilayer perceptron (MLP).** Select the *Multilayer perceptron (MLP)*
option button to generate multilayer perceptron networks. The multilayer
perceptron is the most common form of network. It requires iterative
training, but the resulting networks are quite compact, execute quickly
once trained, and in most problems yield better results than the other
network types.

**Radial basis function (RBF).** Select the *Radial basis function (RBF)*
option button to generate radial basis function networks. Radial basis
function networks tend to be slower and larger than multilayer perceptrons
and often have inferior performance, but they can be trained faster than
MLPs on large data sets when linear output activation functions are used.
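
To make the structural difference concrete, here is a minimal NumPy sketch
(an illustration, not the SANN implementation) contrasting the hidden-layer
computations of the two network types: an MLP hidden neuron applies a
sigmoid-type function to a weighted sum of the inputs, while an RBF hidden
neuron responds to the distance between the input and a stored center.

```python
import numpy as np

def mlp_hidden(x, W, b):
    """MLP hidden layer: tanh of a weighted sum of the inputs."""
    return np.tanh(W @ x + b)

def rbf_hidden(x, centers, widths):
    """RBF hidden layer: Gaussian response to the distance from each center."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

x = np.array([0.5, -1.0])                             # one input case, 2 features
W = np.array([[0.3, -0.8], [1.0, 0.2], [-0.5, 0.4]])  # 3 hidden neurons (illustrative)
b = np.zeros(3)
centers = np.array([[0.0, 0.0], [1.0, -1.0], [-1.0, 1.0]])
widths = np.ones(3)

print(mlp_hidden(x, W, b))             # values in (-1, +1)
print(rbf_hidden(x, centers, widths))  # values in (0, 1], peaking at a center
```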

**Error function.** Specify the error function
to be used in training a network.

**Sum of squares.** Select the *Sum of squares* option button to generate
networks using the sum of squares error function. Note that this is the
only error function available for regression-type analyses.
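
As an illustration (some texts include a factor of 1/2, which does not
change the location of the minimum), the sum of squares error is the sum
over cases and outputs of the squared difference between target and output:

```python
import numpy as np

def sum_of_squares_error(targets, outputs):
    """Sum-of-squares error: E = sum of (target - output)^2."""
    return np.sum((targets - outputs) ** 2)

targets = np.array([1.0, 0.0, 2.5])
outputs = np.array([0.8, 0.1, 2.0])
print(sum_of_squares_error(targets, outputs))  # 0.04 + 0.01 + 0.25 = 0.30
```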

**Cross entropy.** Select
the *Cross entropy* option button
to generate networks using cross entropy error functions. This error function
assumes that the data is drawn from the exponential family of distributions
(see Bishop 1995 for more details) and supports a direct probabilistic
interpretation of the network outputs. Note that this error function is
only available for classification problems; the option is unavailable
for regression-type analyses. When the *Cross entropy* error function is
selected, the *Output units* setting (in the *Activation functions* group
box) is always set to *Softmax*.
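
A minimal sketch of the cross entropy error for one-of-N coded targets,
assuming the network outputs are already softmax probabilities (the small
`eps` term is only there to avoid taking the log of zero):

```python
import numpy as np

def cross_entropy_error(targets, probs, eps=1e-12):
    """Cross-entropy error for one-of-N coded targets:
    E = -sum over cases and classes of target * log(prob)."""
    return -np.sum(targets * np.log(probs + eps))

# One case, three classes; the true class is the second one.
targets = np.array([0.0, 1.0, 0.0])
probs = np.array([0.2, 0.7, 0.1])            # softmax outputs sum to 1
print(cross_entropy_error(targets, probs))   # -log(0.7), about 0.357
```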

**Activation functions.** Use the options in this group box to select
activation functions for the hidden and output neurons. The choice of
activation function, i.e., the precise mathematical function each neuron
applies, is crucial in building a neural network model since it directly
affects model performance. Generally, it is recommended that you choose
the tanh and identity functions for the hidden and output neurons,
respectively, of multilayer perceptron networks (the default settings)
when the *Sum of squares* error function is used. For radial basis
function networks, the *Hidden units* are automatically set to *Gaussian*,
and the *Output units* are set to either *Identity* (when the *Sum of
squares* error function is used) or *Softmax* (when the *Cross entropy*
error function is used).

**Hidden units.** Use the *Hidden units* drop-down list to select the
activation function for the hidden layer neurons. For multilayer
perceptron networks, the choices are the identity, hyperbolic tangent
(tanh; recommended), logistic sigmoid, exponential, and sine activation
functions. For radial basis function networks, a Gaussian activation
function is always used for hidden neurons.

*Identity.* Uses the
identity function.
With this function, the activation level is passed on directly as the
output.

*Tanh.* Uses the hyperbolic tangent function (recommended). The hyperbolic
tangent function (tanh) is a symmetric S-shaped (sigmoid) function whose
output lies in the range (-1, +1). It often performs better than the
logistic sigmoid function because of its symmetry.

*Logistic.* Uses the
logistic
sigmoid function. This is an S-shaped (sigmoid) curve, with output
in the range (0, 1).

*Exponential.* Uses
the exponential
activation function.

*Sine.* Uses the standard
sine activation
function.

*Gaussian.*
Uses a Gaussian (or Normal) distribution. This is the only choice available
for RBF neural networks.
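
For reference, the sketch below collects plausible NumPy forms of these
hidden-unit functions; the exact parameterizations used by SANN (for
example, the width of the Gaussian) are assumptions here, not documented
values.

```python
import numpy as np

# Plausible forms of the hidden-unit activation functions (assumptions,
# not the documented SANN parameterizations).
activations = {
    "identity":    lambda a: a,                         # passes the level through
    "tanh":        np.tanh,                             # symmetric, range (-1, +1)
    "logistic":    lambda a: 1.0 / (1.0 + np.exp(-a)),  # S-shaped, range (0, 1)
    "exponential": np.exp,
    "sine":        np.sin,
    "gaussian":    lambda a: np.exp(-a ** 2),           # RBF hidden neurons
}

a = np.linspace(-2.0, 2.0, 5)
for name, f in activations.items():
    print(f"{name:12s}", np.round(f(a), 3))
```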

**Output units.** Use the *Output units* drop-down list to select the
activation function for the output neurons. For multilayer perceptron
networks, the choices are the identity (recommended), hyperbolic tangent
(tanh), logistic sigmoid, exponential, sine, and softmax activation
functions. For radial basis function (RBF) networks, the choice of
*Output units* depends on the selected *Error function*. For RBF networks
with the *Sum of squares* error function, an *Identity* activation
function is used; for RBF networks with the *Cross entropy* error
function, the *Softmax* activation function is always used.

*Identity.* Uses the
identity function
(recommended). With this function, the activation level is passed on directly
as the output.

*Tanh.* Uses the hyperbolic tangent function. The hyperbolic tangent
function (tanh) is a symmetric S-shaped (sigmoid) function whose output
lies in the range (-1, +1). It often performs better than the logistic
sigmoid function because of its symmetry.

*Logistic.* Uses the
logistic
sigmoid function. This is an S-shaped (sigmoid) curve, with output
in the range (0, 1).

*Exp.* Uses the negative
exponential activation function.

*Sine.* Uses the standard
sine activation
function.

*Softmax.*
Uses a specialized activation function for one-of-N encoded classification
networks. It performs a normalized exponential (i.e., the outputs add
up to 1). In combination with the cross entropy error function, it allows
multilayer
perceptron networks to be modified for class probability estimation
(Bishop, 1995; Bridle, 1990).
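
A minimal sketch of the normalized exponential; subtracting the maximum
activation before exponentiating is a standard numerical-stability trick
and an assumption here, not a statement about the SANN implementation:

```python
import numpy as np

def softmax(a):
    """Normalized exponential: outputs are positive and sum to 1."""
    e = np.exp(a - np.max(a))   # subtract the max for numerical stability
    return e / np.sum(e)

activations = np.array([2.0, 1.0, 0.1])  # output-layer activation levels
p = softmax(activations)
print(p, p.sum())  # roughly [0.659 0.242 0.099], sums to 1.0
```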

**Networks to train.** Use this option to specify how many networks the
*Custom Neural Network* (*CNN*) should train. The larger the number of
networks trained, the more detailed the search carried out by the *CNN*.
It is recommended that you set this value as large as your hardware speed
and resources allow. Although you can create only one type of network at
a time, training more than one network gives you multiple candidate
solutions of that type. Furthermore, using the Results dialog, you can
combine the predictions of these networks to create ensembles. Predictions
drawn from an ensemble of networks generally yield better results than the
predictions of the individual networks (see Bishop 1995).
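
As an illustration of the ensemble idea (with made-up prediction values,
not SANN output), averaging the member networks' predictions per case is
the simplest way to combine them:

```python
import numpy as np

# Hypothetical predictions from three trained networks on the same four cases.
net_predictions = np.array([
    [0.9, 0.2, 0.6, 0.4],
    [0.8, 0.1, 0.7, 0.5],
    [1.0, 0.3, 0.5, 0.3],
])

# A simple ensemble: average the member networks' predictions per case.
ensemble = net_predictions.mean(axis=0)
print(ensemble)  # [0.9 0.2 0.6 0.4]
```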

**No. of neurons.** Specify the number of neurons in the hidden layer of
the network. The more neurons the hidden layer contains, the more complex
(flexible) the network becomes.
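
To see why more hidden neurons mean a more flexible model, consider the
number of free parameters in a three-layer MLP; the one-bias-per-neuron
convention below is a common assumption, not a documented SANN detail:

```python
def mlp_weight_count(n_inputs, n_hidden, n_outputs):
    """Weights in a three-layer MLP, including one bias per neuron."""
    return (n_inputs + 1) * n_hidden + (n_hidden + 1) * n_outputs

# Doubling the hidden layer roughly doubles the number of free parameters.
print(mlp_weight_count(10, 5, 1))   # 61
print(mlp_weight_count(10, 10, 1))  # 121
```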