
C. FERREIRA

In A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., Applied Soft Computing Technologies: The Challenge of Complexity, pages 517-536, Springer-Verlag, 2006.


Designing Neural Networks Using Gene Expression Programming

Direct mutation of weights and thresholds
 
In the previous sub-sections it was shown that all genetic operators contribute, directly or indirectly, to moving the weights and thresholds around. In fact, this constant shuffling of weights and thresholds is more than sufficient to allow the efficient evolution of GEP-nets, as long as an appropriate number of weights and thresholds is randomly created at the beginning of each run. Nevertheless, special mutation operators that replace the value of a particular weight or threshold with another can also be easily implemented (see Figure 3).
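As a rough illustration of this initial setup (not taken from the original text), the short Python sketch below randomly creates a weight array and a threshold array for one gene. The array size of 10 matches the example below, while the sampling interval [-2, 2] and all names are assumptions chosen for illustration only.

import random

def init_gene_arrays(num_values=10, low=-2.0, high=2.0):
    # Randomly create the weight and threshold arrays for one gene.
    # The interval [low, high] is an illustrative assumption, not part
    # of the original method, which only requires random real values.
    weights = [round(random.uniform(low, high), 3) for _ in range(num_values)]
    thresholds = [round(random.uniform(low, high), 3) for _ in range(num_values)]
    return weights, thresholds

# One pair of arrays per gene, created at the beginning of a run:
genes = [init_gene_arrays() for _ in range(3)]  # e.g. a 3-gene chromosome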

This operator randomly selects particular targets in the arrays where the weights or thresholds are kept and, for each target, randomly generates a new real-valued number. Consider, for instance, the array:

Wi,j = {-0.433, -1.823, 1.255, 0.028, -1.755, -0.036, -0.128, -1.163, 1.806, 0.083}

encoding the weights of gene j on chromosome i. Suppose a mutation occurred at position 7, replacing the weight -1.163 at that position with -0.494, giving:

Wi,j = {-0.433, -1.823, 1.255, 0.028, -1.755, -0.036, -0.128, -0.494, 1.806, 0.083}
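A minimal Python sketch of such a direct mutation operator is given below, assuming the weights (or thresholds) of a gene are kept in a plain list. The function name, the number of mutations per array, and the sampling interval are assumptions for illustration; the original method only requires that a new random real value replace the old one.

import random

def mutate_array(values, num_mutations=1, low=-2.0, high=2.0):
    # Direct mutation: pick random positions in the weight (or threshold)
    # array and replace each with a newly generated real-valued number.
    mutated = list(values)
    for _ in range(num_mutations):
        pos = random.randrange(len(mutated))
        mutated[pos] = round(random.uniform(low, high), 3)
    return mutated

# The worked example above corresponds to a mutation at position 7,
# where the new randomly generated value happened to be -0.494:
W_ij = [-0.433, -1.823, 1.255, 0.028, -1.755, -0.036, -0.128, -1.163, 1.806, 0.083]
W_ij_after = list(W_ij)
W_ij_after[7] = -0.494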

The consequences of this kind of mutation are very diverse: it might be neutral in effect (for instance, when the gene itself is neutral or when the weight/threshold has no expression in the sub-neural net), or it might have manifold effects (for instance, when the modified weight/threshold happens to be used more than once in the expression of the sub-NN, as shown in Figure 3).


Figure 3. Illustration of direct mutation of weights. a) The mother and daughter chromosomes with their respective weight arrays. In this case, the weights at positions 0 and 2 were mutated; note that the mother and daughter chromosomes themselves are identical and only the weight arrays differ. b) The mother and daughter neural nets encoded in the chromosomes. Note that the point mutation at position 2 (-0.17) has manifold effects, as this weight appears four times in the neural network. Note also that the mutation at position 0 is an example of a neutral mutation, as it has no expression in the neural net (indeed, mutations at positions 4, 6, and 9 would also be neutral).

Interestingly, this kind of mutation seems to be of very limited importance, and better results are obtained when this operator is switched off. Indeed, the direct mutation of numerical constants in function finding problems produces identical results (Ferreira 2003). Therefore, we can conclude that a well-dimensioned initial diversity of constants, be they the numerical constants of a mathematical expression or the weights/thresholds of a neural net, is more than sufficient to allow their evolutionary tuning. In all the problems presented in this work, a set of 10 weights W = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} was used.
