Graphs offer a powerful yet intuitive way to encode information. Given a problem of interest, domain experts often have no difficulty identifying the key elements (nodes) and their corresponding relations (edges). Several methods that operate directly on graph representations have been developed in the Machine Learning literature, ranging from artificial neural networks for structured data to graph kernels, from inductive logic programming to probabilistic graphical models. These approaches have been used to tackle supervised and unsupervised problems, but only recently has there been interest in addressing generative problems, that is, problems where the output is a collection of newly constructed graphs that exhibit some desired property inferred directly from given examples.
The applications of graph learning techniques are far-reaching, since they are a way to achieve “design by example”. In the biosciences, one can formulate the problem of drug design as a graph learning task given a sample of toxic and non-toxic molecular graphs. In the domain of computer games, one could design various assets (e.g. characters or game levels) using a measure of user satisfaction and automatically generate an endless stream of ever more challenging environments.
In this lecture we will present some recent work that combines ideas from graph grammars and graph kernels with Bayesian optimization to address the interesting problem of learning to optimize graphs.
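To make the combination of graph kernels and kernel-guided search more concrete, here is a minimal sketch in a toy setting. All names and design choices below are illustrative assumptions, not the lecture's actual method: graphs are reduced to node-label dictionaries, the kernel is a simple vertex-label histogram kernel, and a kernel-weighted mean with a novelty bonus stands in for a full Gaussian-process surrogate and acquisition function.

```python
# Hypothetical sketch: kernel-guided selection of the next graph to evaluate.
# Graphs are represented as dicts mapping node id -> node label; edges are
# omitted to keep the kernel trivial. Everything here is a stand-in.

def node_label_histogram(graph):
    """Count how often each node label occurs in the graph."""
    hist = {}
    for label in graph.values():
        hist[label] = hist.get(label, 0) + 1
    return hist

def graph_kernel(g1, g2):
    """Vertex-label histogram kernel: dot product of label-count vectors."""
    h1, h2 = node_label_histogram(g1), node_label_histogram(g2)
    return sum(count * h2.get(label, 0) for label, count in h1.items())

def surrogate_mean(candidate, observed):
    """Kernel-weighted mean of observed scores (a crude stand-in for a
    Gaussian-process posterior mean)."""
    weights = [(graph_kernel(candidate, g), y) for g, y in observed]
    total = sum(w for w, _ in weights)
    if total == 0:
        return 0.0
    return sum(w * y for w, y in weights) / total

def novelty(candidate, observed):
    """Exploration bonus: graphs dissimilar to everything seen score higher."""
    sims = [graph_kernel(candidate, g) for g, _ in observed]
    return 1.0 / (1.0 + max(sims))

def propose(candidates, observed, beta=0.5):
    """Pick the candidate maximizing predicted score plus exploration bonus,
    mimicking an acquisition function in Bayesian optimization."""
    return max(candidates,
               key=lambda g: surrogate_mean(g, observed) + beta * novelty(g, observed))

# Toy usage: two evaluated graphs and two candidates.
observed = [({0: "C", 1: "C"}, 2.0), ({0: "O"}, 0.0)]
candidates = [{0: "C", 1: "C", 2: "C"}, {0: "N"}]
best = propose(candidates, observed)
```

In a realistic pipeline the candidate pool would be produced by a generative component such as a graph grammar, the kernel would account for graph structure (e.g. a Weisfeiler-Lehman kernel), and the surrogate would provide calibrated uncertainty; this sketch only illustrates how a kernel lets standard surrogate-based optimization operate directly on graphs.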