Learning that is limited to the modification of some parameters has a limited scope: the capability to modify the system structure is also needed to widen the range of what is learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., the number of hidden layers and units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as is usually done, be determined by trial and error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of the current connectionist literature on this line of thought is given. The reader is referred to (Alpaydın, 1991) for the author's own contribution to the field.
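To make the idea of structure-modifying learning concrete, the following is a minimal sketch, not taken from the paper or the surveyed algorithms: a single-hidden-layer network trained by gradient descent that adds a hidden unit whenever the error stops improving. The class name `GrowingNet`, the growth threshold, and the unit cap are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingNet:
    """Single-hidden-layer net whose structure grows during learning."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1.T + self.b1)
        return sigmoid(self.h @ self.W2.T + self.b2)

    def train_step(self, X, T, lr=0.5):
        # one batch gradient-descent step on mean squared error
        Y = self.forward(X)
        err = Y - T
        d2 = err * Y * (1 - Y)                      # output-layer deltas
        d1 = (d2 @ self.W2) * self.h * (1 - self.h) # hidden-layer deltas
        self.W2 -= lr * d2.T @ self.h
        self.b2 -= lr * d2.sum(0)
        self.W1 -= lr * d1.T @ X
        self.b1 -= lr * d1.sum(0)
        return float((err ** 2).mean())

    def add_unit(self):
        # structural modification: append one hidden unit with small
        # random incoming and outgoing weights
        self.W1 = np.vstack([self.W1,
                             rng.normal(0, 0.5, (1, self.W1.shape[1]))])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.hstack([self.W2,
                             rng.normal(0, 0.5, (self.W2.shape[0], 1))])

# XOR: not solvable with one hidden unit, so growth is forced
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

net = GrowingNet(2, 1, 1)   # start deliberately too small
prev = np.inf
for epoch in range(20000):
    mse = net.train_step(X, T)
    if epoch % 500 == 499:
        # grow when progress stalls, up to an arbitrary cap
        if prev - mse < 1e-4 and net.W1.shape[0] < 8:
            net.add_unit()
        prev = mse
```

This is the "constructive" half of the design space the paper surveys; pruning algorithms work in the opposite direction, starting large and removing units or links whose contribution is negligible.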