There are two aspects of randomness in evolutionary computing that are frequently misunderstood. The first is the assumption that the effects of random mutations are themselves random. No, that is not a typo: the effects of random mutation are usually not random but are coordinated into nonrandom distributions, determined by how genes map to their measured behavior (i.e., their phenotypes).
A good analogy to explain this concept is the bean machine. As explained in Wikipedia, the bean machine
was invented to demonstrate the law of error and the normal distribution. The machine consists of a vertical board with interleaved rows of pins. Balls dropped from the top bounce in random directions when they hit the pins. Notwithstanding their random horizontal motion during descent, the balls settle at the bottom of the machine in an approximately normal distribution.
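The bean machine is easy to simulate. The sketch below (my own illustration, not from any particular source; the numbers of rows and balls are arbitrary) shows how purely random left/right bounces accumulate into an approximately normal distribution of bin counts:

```python
# Minimal bean machine (Galton board) simulation: each ball makes a
# random left/right choice at every row of pins, and its final bin is
# the sum of those choices.
import random
from collections import Counter

def drop_ball(rows: int) -> int:
    """Return the bin a single ball lands in (0..rows)."""
    return sum(random.randint(0, 1) for _ in range(rows))

def bean_machine(balls: int, rows: int) -> Counter:
    """Drop many balls and count how many land in each bin."""
    return Counter(drop_ball(rows) for _ in range(balls))

if __name__ == "__main__":
    bins = bean_machine(balls=10_000, rows=10)
    # Print a crude histogram; the middle bins dominate.
    for b in sorted(bins):
        print(f"bin {b:2d}: {'#' * (bins[b] // 50)}")
```

Each individual bounce is random, yet the aggregate outcome is a tightly structured distribution, which is exactly the point of the analogy: random mutations, filtered through a genotype-to-phenotype mapping, produce nonrandom distributions of effects.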
The second misunderstood aspect of randomness has to do with the way it is measured in populations. Some researchers measure the amount of diversity in a population by summing the variance of genetic (or allelic) values across all locations on the genomes. Depending on the evolutionary landscape, this measure can overstate the search potential of a population. A population can be effectively converged (i.e., all of the genomes can have the same fitness and all can be searching the representation space in the same way) even when the variance between their gene values is high.
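A small example makes the point concrete. The sketch below is my own illustration (not a measure from any particular paper) and assumes a hypothetical redundant genotype-to-phenotype mapping in which only the sum of the genes is expressed. The genomes differ at every locus, so summed per-locus variance is large, yet every genome has the same phenotype and hence the same fitness:

```python
# Illustration: high allelic variance does not imply effective diversity.
import statistics

def phenotype(genome):
    # Hypothetical redundant mapping: only the sum of genes is expressed.
    return sum(genome)

# Four genomes that differ at every locus but share one phenotype.
population = [
    [0, 10, 2, 8],
    [10, 0, 8, 2],
    [5, 5, 5, 5],
    [2, 8, 0, 10],
]

# Summed per-locus variance looks large...
allelic_diversity = sum(
    statistics.pvariance(genome[i] for genome in population)
    for i in range(len(population[0]))
)

# ...but the population maps to a single phenotype: it is converged.
phenotypes = {phenotype(g) for g in population}
print(f"summed allelic variance: {allelic_diversity}")
print(f"distinct phenotypes: {len(phenotypes)}")
```

Under this mapping, mutation moves the population around a neutral set of genotypes without changing what is actually being searched, which is why the variance-based measure overstates search potential.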
I have a forthcoming paper (accepted by IEEE Transactions on Evolutionary Computation) which, among other things, looks at preferred directions of motion due to random mutation as well as randomness in evolutionary populations. I will blog on this further when it is published.