This repository contains the code examples and content of the lectures. I will be uploading rendered versions as PDF files to StudIP, and include rendered Markdown versions in this repository. If you want to run the notebooks yourself, you will need to install Jupyter. If you need help setting up Jupyter, here is a tutorial on how to install Jupyter Notebook on your machine.
The first chapter covers the coding examples from the first two weeks, on basic random search and local search algorithms. Markdown Export
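As a minimal illustration of the local search idea covered in this chapter, here is a hill climber on a toy one-dimensional objective (the objective function and parameters are illustrative, not the lecture's examples):

```python
import random

def fitness(x):
    # Toy objective (illustrative only): maximise -(x - 3)^2, optimum at x = 3
    return -(x - 3) ** 2

def hill_climb(start, steps=100):
    """Simple local search: move to a random neighbour whenever it is
    at least as good as the current solution."""
    current = start
    for _ in range(steps):
        neighbour = current + random.choice([-1, 1])
        if fitness(neighbour) >= fitness(current):
            current = neighbour
    return current

random.seed(0)
best = hill_climb(start=-20)
```

Because worse neighbours are always rejected, the fitness of the returned solution can never be below that of the starting point; random search, in contrast, would sample points independently and keep the best seen so far.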
This chapter covers basic evolution strategies and genetic algorithms. Markdown Export
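A generational genetic algorithm can be sketched in a few lines; this toy version maximises OneMax on bitstrings (all parameter values are illustrative, not the lecture's):

```python
import random

def onemax(bits):
    # OneMax: fitness is simply the number of 1-bits
    return sum(bits)

def tournament(pop, k=2):
    # Parent selection: best of k randomly chosen individuals
    return max(random.sample(pop, k), key=onemax)

def crossover(p1, p2):
    # One-point crossover
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(bits):
    # Flip each bit with probability 1/n
    rate = 1 / len(bits)
    return [1 - b if random.random() < rate else b for b in bits]

def ga(n=20, pop_size=30, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=onemax)

random.seed(1)
best = ga()
```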
This chapter looks into the various search operators of a genetic algorithm: Survivor selection, parent selection, crossover, mutation, and the population itself. Markdown Export
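As one concrete example of a survivor selection operator, elitist (mu + lambda) selection keeps the best individuals from parents and offspring combined (function name and signature are my own, a sketch rather than the lecture's code):

```python
def mu_plus_lambda(parents, offspring, mu, fitness):
    """(mu + lambda) survivor selection: the mu fittest individuals
    from the union of parents and offspring survive (maximisation)."""
    return sorted(parents + offspring, key=fitness, reverse=True)[:mu]
```

Because parents compete with their offspring, this scheme is elitist: the best fitness in the population can never decrease between generations.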
This chapter covers the basics of Pareto optimality, NSGA-II, and comparison of multi-objective search algorithms. Markdown Export
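The core notion of Pareto optimality can be captured directly in code; this sketch assumes minimisation of all objectives (a standard convention, though the helper functions are my own):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # The non-dominated set: points no other point dominates
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

NSGA-II repeatedly applies this idea, sorting the population into successive non-dominated fronts.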
This chapter covers several alternative multi-objective search algorithms: A random baseline, PAES, SPEA2, TwoArchives, and SMS-EMOA. Markdown Export
This chapter looks at how the problem of test input generation can be cast as a search problem, and how to automatically instrument programs for fitness computation. Markdown Export
This chapter continues with whole test suite generation, and then moves on to many-objective optimisation for test generation.
This chapter introduces classic genetic programming for scenarios assuming type closure, and applies this to symbolic regression and spectrum-based fault localisation. It also looks at grammatical evolution and automated program repair.
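Classic GP evolves programs represented as expression trees; a minimal evaluator for such trees, as used in symbolic regression, might look like this (the tuple-based tree encoding is an assumption for illustration, not necessarily the lecture's representation):

```python
import operator

# Function set: binary operators satisfying type closure (number -> number)
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, x):
    """Evaluate an expression tree: a tuple (op, left, right),
    the terminal 'x', or a numeric constant."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))
```

For symbolic regression, the fitness of a tree would be its error against a set of sample points; closure guarantees that crossover and mutation on subtrees always yield evaluable programs.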
This chapter introduces the field of Neuroevolution, in which evolutionary algorithms are used to optimise artificial neural networks. We start with the definition of neural networks and the pole balancing problem, a popular reinforcement learning task, which we solve using two different Neuroevolution algorithms: Symbiotic Adaptive Neuroevolution (SANE) evolves a population of hidden neurons, while Cooperative Synapse Neuroevolution (CoSyNE) evolves a population of connection weights.
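The simplest genotype in Neuroevolution is a flat vector of connection weights for a fixed network topology; the following sketch shows such a direct encoding for a one-hidden-layer network (SANE and CoSyNE use more elaborate cooperative encodings, so this is only the basic idea):

```python
import math

def forward(weights, inputs, hidden=2):
    """Feed-forward pass of a one-hidden-layer tanh network whose
    flat weight vector is the genotype an evolutionary algorithm
    would mutate and recombine."""
    n_in = len(inputs)
    h = []
    for j in range(hidden):
        s = sum(weights[j * n_in + i] * inputs[i] for i in range(n_in))
        h.append(math.tanh(s))
    offset = hidden * n_in
    out = sum(weights[offset + j] * h[j] for j in range(hidden))
    return math.tanh(out)
```

For pole balancing, such a network would map the cart and pole state to a control action, and the fitness of a weight vector would be how long the pole stays balanced.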
This chapter considers how to choose values for the many parameters that we have introduced in our evolutionary algorithms, how to optimise these values, and how to adapt them to new problems.
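One well-known example of adapting a parameter during the search is Rechenberg's 1/5 success rule for the step size of an evolution strategy (shown here purely as an illustration of parameter control):

```python
def one_fifth_rule(sigma, success_rate, factor=0.85):
    """Rechenberg's 1/5 success rule: enlarge the mutation step size
    sigma when more than one fifth of recent mutations improved the
    parent, shrink it when fewer did."""
    if success_rate > 1 / 5:
        return sigma / factor
    if success_rate < 1 / 5:
        return sigma * factor
    return sigma
```

The intuition is that many successes suggest the search is taking overly cautious steps, while few successes suggest it is overshooting.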
This chapter considers several advanced variants of the evolutionary algorithms we have discussed in previous chapters: Memetic algorithms combine global and local search; island model GAs divide the population of a GA into independent subpopulations; estimation of distribution algorithms try to explicitly optimise the probability distribution that is otherwise implicitly represented by the population; differential evolution uses novel search operators unlike the ones we have used in standard GAs; hyper-heuristics try to combine different heuristics to adapt to the problem at hand.
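Differential evolution's characteristic operator builds a donor vector from the scaled difference of population members; a sketch of the classic DE/rand/1 mutation (parameter value illustrative):

```python
import random

def de_mutant(pop, f=0.8):
    """DE/rand/1 mutation: pick three random individuals a, b, c and
    form the donor vector a + f * (b - c), where f scales the
    difference vector."""
    a, b, c = random.sample(pop, 3)
    return [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
```

Because the perturbation comes from differences between current individuals, its magnitude automatically shrinks as the population converges, which is what distinguishes this operator from the fixed mutation schemes of standard GAs.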
This chapter briefly introduces two swarm optimisation techniques: Ant colony optimisation tries to imitate the stigmergic communication of ants for the purpose of optimisation. Particle swarm optimisation simulates the swarm behaviour of birds or fish.
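The heart of particle swarm optimisation is a single velocity update per particle, combining inertia with attraction towards the particle's own best position and the swarm's best (coefficient values are common defaults, used here for illustration):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update: new velocity = inertia + cognitive pull towards
    the particle's personal best + social pull towards the swarm's
    global best; the position then moves by the new velocity."""
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)
             + c2 * random.random() * (gi - xi)
             for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A particle that already sits at both its personal best and the global best with zero velocity stays put, so the swarm settles once it agrees on an optimum.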