GEP Book


C. FERREIRA

In A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., Applied Soft Computing Technologies: The Challenge of Complexity, pages 517-536, Springer-Verlag, 2006.


Designing Neural Networks Using Gene Expression Programming

Genes with multiple domains for designing NNs
 
The total induction of neural networks (NNs) using GEP requires a further modification of the structural organization developed to manipulate numerical constants (Ferreira 2001, 2003). The network architecture is encoded in the familiar structure of head and tail. The head contains both special functions, which activate the units, and terminals, which represent the input units; the tail, of course, contains only terminals. Besides the head and the tail, these genes (neural-net genes or NN-genes) contain two additional domains, Dw and Dt, encoding, respectively, the weights and the thresholds. Structurally, Dw comes after the tail and has a length dw equal to the head length h multiplied by the maximum arity n, whereas Dt follows Dw and has a length dt equal to h. Both domains are composed of symbols representing the weights or thresholds of the neural network.
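As a concrete illustration of this layout, the sketch below builds a random NN-gene with the domain lengths just described (the helper name and the ten-symbol weight/threshold alphabet are our own choices, not from the original; a minimal sketch assuming arity-2 activation functions):

```python
import random

def make_nn_gene(h, n_max=2):
    """Build a random NN-gene as a string: head + tail + Dw + Dt.

    h      -- head length
    n_max  -- maximum arity of the activation functions
    Domain lengths: tail t = h*(n_max - 1) + 1 (the standard GEP rule),
    dw = h * n_max, dt = h.
    """
    t = h * (n_max - 1) + 1
    dw = h * n_max
    dt = h
    head = "".join(random.choice("Dab") for _ in range(h))  # functions and terminals
    tail = "".join(random.choice("ab") for _ in range(t))   # terminals only
    Dw = "".join(random.choice("0123456789") for _ in range(dw))
    Dt = "".join(random.choice("0123456789") for _ in range(dt))
    return head + tail + Dw + Dt

gene = make_nn_gene(h=3)
# With h = 3 and n_max = 2: 3 + 4 + 6 + 3 = 16 symbols in total.
```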

For each NN-gene, the weights and thresholds are created at the beginning of each run, and their circulation is guaranteed by the usual genetic operators of mutation, transposition, and recombination. In addition, a special mutation operator was created that allows the permanent introduction of variation into the set of weights and thresholds.
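Such a direct weight-mutation operator might look like the sketch below (our own illustration; the text does not specify the mutation rate or the distribution from which new values are drawn):

```python
import random

def mutate_weights(weights, rate=0.1, span=2.0):
    """Replace each weight, independently with probability `rate`,
    by a fresh value drawn uniformly from [-span, span].
    Both `rate` and `span` are assumed parameters for illustration."""
    return [random.uniform(-span, span) if random.random() < rate else w
            for w in weights]
```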

It is worth emphasizing that basic genetic operators such as mutation and transposition are not affected by Dw and Dt, as long as the boundaries of each region are maintained and the alphabets of the different domains are not mixed up.

Consider a conventionally represented neural network with two input units (i1 and i2), two hidden units (h1 and h2), and one output unit (o1); for simplicity, the thresholds are all set to 1 and are omitted:

It can also be represented as a tree:

where a and b represent, respectively, the inputs i1 and i2 to the network, and "D" represents a function with connectivity two. This function multiplies the value of each argument by its respective weight and adds all the incoming activation to determine the forwarded output. This output (0 or 1) depends on the threshold, which, for simplicity, was set to 1.
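The behaviour of such a unit can be sketched as follows (assuming, as in the text, a hard threshold that outputs 1 when the weighted sum reaches the threshold, and 0 otherwise):

```python
def unit_output(inputs, weights, threshold=1.0):
    """Threshold unit: weighted sum of the inputs, then a hard threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0
```

For example, with both weights equal to 1 and a threshold of 1, this unit computes logical OR of two binary inputs.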

We could linearize the above NN-tree as follows:

0123456789012
DDDabab654321

which consists of an NN-gene with the familiar head and tail domains, plus the additional domain Dw encoding the weights. The value of each weight is kept in an array and retrieved as needed; for simplicity, each numeral in Dw indicates the position of the corresponding weight in that array.
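Splitting such a gene into its domains is straightforward; the sketch below (helper name ours) recovers the head, the tail, and the sequence of weight indices encoded in Dw:

```python
def decode_nn_gene(gene, h, n_max=2):
    """Split an NN-gene string into head, tail, and the weight indices in Dw.

    Uses the domain lengths from the text: t = h*(n_max - 1) + 1, dw = h*n_max.
    """
    t = h * (n_max - 1) + 1
    dw = h * n_max
    head = gene[:h]
    tail = gene[h:h + t]
    Dw = gene[h + t:h + t + dw]
    return head, tail, [int(c) for c in Dw]

head, tail, widx = decode_nn_gene("DDDabab654321", h=3)
# head = "DDD", tail = "abab", weight indices = [6, 5, 4, 3, 2, 1]
```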

Let us now analyze a simple neural network encoding a well-known function, the exclusive-or. Consider, for instance, the chromosome below, with h = 3 and containing a domain encoding the weights:

0123456789012
DDDabab393257

Its translation gives:

For the set of weights:

W = {-1.978, 0.514, -0.465, 1.22, -1.686, -1.797, 0.197, 1.606, 0, 1.753}

the neural network above gives:
[Equation (2.1), shown as an image in the original: the network's output for each of the four input pairs, reproducing the exclusive-or truth table.]    (2.1)

which is a perfect solution to the exclusive-or problem.
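The result in Eq. (2.1) can be checked with a short evaluator. This is a sketch under two assumptions made explicit here: the tree is built breadth-first from the head/tail string (as usual in GEP), the numerals in Dw are consumed in that same breadth-first order to index the weight array, and every unit uses a fixed threshold of 1 (output 1 when the weighted sum reaches the threshold):

```python
def eval_gene(expr, Dw, W, inputs):
    """Evaluate an NN-gene (arity-2 'D' units, terminals 'a'/'b').

    expr   -- head + tail string, e.g. "DDDabab"
    Dw     -- weight-domain string; numerals index into W
    W      -- array of weight values
    inputs -- dict mapping terminal symbols to 0/1 input values
    Assumption: weights are attached to connections in the breadth-first
    order in which the expression tree is built.
    """
    widx = [int(c) for c in Dw]
    children, weights = {}, {}
    next_pos, wcur = 1, 0
    queue = [0]
    i = 0
    while i < len(queue):                       # breadth-first tree construction
        p = queue[i]; i += 1
        if expr[p] == 'D':
            l, r = next_pos, next_pos + 1
            next_pos += 2
            children[p] = (l, r)
            weights[p] = (W[widx[wcur]], W[widx[wcur + 1]])
            wcur += 2
            queue.extend([l, r])

    def value(p):
        sym = expr[p]
        if sym in inputs:                       # terminal: a network input
            return inputs[sym]
        wl, wr = weights[p]
        l, r = children[p]
        s = wl * value(l) + wr * value(r)
        return 1 if s >= 1.0 else 0             # threshold fixed at 1

    return value(0)

W = [-1.978, 0.514, -0.465, 1.22, -1.686, -1.797, 0.197, 1.606, 0, 1.753]
for a in (0, 1):
    for b in (0, 1):
        out = eval_gene("DDDabab", "393257", W, {'a': a, 'b': b})
        # out equals a XOR b for every input pair
```

Under these assumptions, the chromosome "DDDabab393257" with the weight set W indeed reproduces the exclusive-or truth table.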
