Tutorials

Provisional schedule for Sunday, September 14

 8:00   Registration

10:30   Room 4:    Tutorial 1: Theory of Evolutionary Computation
                   Anne Auger, Benjamin Doerr [slides]
        Room 3:    Tutorial 2: Low or No Cost Distributed Evolutionary Computation
                   JJ Merelo
        Room 1/2:  Tutorial 3: Cartesian Genetic Programming
                   Julian F. Miller [slides]

12:30   Lunch

14:00   Room 4:    Tutorial 4: Multimodal Optimization
                   Mike Preuss [slides]
        Room 3:    Tutorial 5: Theory of Parallel Evolutionary Algorithms
                   Dirk Sudholt [slides]
        Room 1/2:  Tutorial 6: Automatic Design of Algorithms via Hyper-heuristic Genetic Programming
                   John R. Woodward, Jerry Swan, Michael Epitropakis [slides 1, slides 2]

16:00   Coffee Break

16:30   Room 4:    Tutorial 7: Evolutionary Bilevel Optimization
                   Ankur Sinha, Pekka Malo, Kalyanmoy Deb [slides]
        Room 3:    Tutorial 8: Parallel Experiences in Solving Complex Problems
                   Enrique Alba
        Room 1/2:  Tutorial 9: Algorithm and Experiment Design with HeuristicLab - An Open Source Optimization Environment for Research and Education
                   Stefan Wagner, Gabriel Kronberger [slides]

Theory of Evolutionary Computation

Anne Auger, INRIA, France
Benjamin Doerr, Ecole Polytechnique de Paris, France

Abstract

Theory has always accompanied the development of evolutionary methods. It aims at uncovering and explaining the working principles of these methods at a deep level, guiding the design of new algorithms, and rigorously proving what has been observed empirically. In this introductory tutorial, we target researchers who have little or no experience with theoretical work. We will (i) explain the aims of theoretical research in evolutionary computation and give easy-to-understand examples of its successes, (ii) teach the audience how to read a theoretical result and benefit from it, and (iii) present some very elementary theoretical methods that are useful not only for writing theory papers, but also for planning experimental work and anticipating its outcome.
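
As a taste of the kind of elementary result the tutorial refers to, the classic statement that the (1+1) EA optimizes OneMax in expected time Θ(n log n) can be checked empirically in a few lines. The following Python sketch is an illustration only, not part of the tutorial materials:

```python
import math
import random

def one_max(x):
    """Fitness: the number of one-bits in the string."""
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """(1+1) EA with standard bit mutation (rate 1/n) on OneMax.
    Returns the number of fitness evaluations until the optimum is reached."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evaluations = one_max(x), 1
    while fx < n:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]
        fy, evaluations = one_max(y), evaluations + 1
        if fy >= fx:                      # elitist: accept if not worse
            x, fx = y, fy
    return evaluations

if __name__ == "__main__":
    for n in (50, 100, 200):
        runs = [one_plus_one_ea(n, seed) for seed in range(20)]
        # Theory predicts an expected runtime close to e * n * ln(n).
        print(n, sum(runs) / len(runs), round(math.e * n * math.log(n)))
```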

Short Biographies

Anne Auger is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM SIGEVO executive committee and of the editorial board of Evolutionary Computation. She organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and served as track chair for the theory and ES tracks in 2011 and 2013. Together with Benjamin Doerr, she is an editor of the book "Theory of Randomized Search Heuristics".
Benjamin Doerr is a full professor at the Ecole Polytechnique de Paris. He is also a senior researcher at the Max Planck Institute for Informatics (Germany) and an adjunct professor at Saarland University. He received his diploma (1998), PhD (2000) and habilitation (2005) in mathematics from Kiel University. His research area is the theory both of problem-specific algorithms and of randomized search heuristics such as evolutionary algorithms. Major contributions to the latter include runtime analyses of evolutionary algorithms and ant colony optimizers, the further development of the drift analysis method (in particular multiplicative and adaptive drift), and several of the current best bounds in the young area of black-box complexity. Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO; he served as its co-chair from 2007 to 2009 and serves again in 2014. He is a member of the editorial boards of "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science" and "Information Processing Letters". Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics".


Low or No Cost Distributed Evolutionary Computation

JJ Merelo, University of Granada, Spain

Abstract

Having a grid, a cluster, or money to pay for the cloud is great, but the science you need to do and the performance it requires are not always in sync with what your friendly funding agency provides. However, nowadays there are many resources attached to the Internet that you can tap into for free or that are offered to you voluntarily. In this tutorial we will talk about which resources can be used for performing mid- to large-scale distributed evolutionary computation experiments, which languages and storage tools are available for doing so, and how you should adapt your algorithm to leverage those resources. It will include an introduction to using cloud computing resources and adapting them to the needs of evolutionary algorithms, as well as an invitation to open science and an account of how all of us can profit from it.
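
As a flavour of what such low-cost setups often look like in practice, here is a minimal, self-contained Python sketch of a pool-based evolutionary algorithm. It is an illustration only: an in-memory dictionary stands in for whatever free shared storage (a key-value store, a spreadsheet, a message queue) volunteer clients would actually reach over the network.

```python
import random

rng = random.Random(1)
N = 30                                   # bit-string length (OneMax stand-in)
fitness = lambda ind: sum(ind)

# A plain dict plays the role of the shared pool; in a real low-cost setup this
# would be a free key-value store, a spreadsheet, or any storage that volunteer
# clients can reach over the Internet.
pool = {}

def put(ind):
    pool[tuple(ind)] = fitness(ind)

def worker_step():
    """One asynchronous client step: pick two individuals from the pool,
    recombine and mutate them, evaluate the offspring and push it back."""
    a, b = (list(k) for k in rng.sample(list(pool), 2))
    cut = rng.randrange(1, N)
    child = a[:cut] + b[cut:]
    child[rng.randrange(N)] ^= 1         # flip one bit
    put(child)
    if len(pool) > 100:                  # keep the pool bounded
        del pool[min(pool, key=pool.get)]

for _ in range(20):                      # seed the pool
    put([rng.randint(0, 1) for _ in range(N)])
for _ in range(2000):                    # any number of volunteer clients could run this loop
    worker_step()
print("best fitness in the pool:", max(pool.values()))
```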

Short Biography

JJ Merelo is a professor at the University of Granada, where he obtained his degree and PhD in Physics. He has been publishing in the PPSN conference since 1996 and attending since 2000; he was also a local organizer of the 2002 edition. His main interests lie in obtaining funds for his research and keeping his associates employed, and in doing so in the areas of evolutionary algorithms and complex systems, while using open source software and actively supporting it. He does not believe in long bios either, but you can find his Google Scholar profile and GitHub profile.


Cartesian Genetic Programming

Julian F. Miller, University of York, UK

Abstract

Cartesian Genetic Programming (CGP) is a well-known, popular and efficient form of Genetic Programming. It is a highly cited technique that was developed by Julian Miller in 1999 and 2000 from earlier joint work of Julian Miller and Peter Thomson in 1997. In its classic form, it uses a very simple integer-based genetic representation of a program in the form of a directed graph. Graphs are very useful program representations and can be applied to many domains (e.g. electronic circuits, neural networks). In a number of studies, CGP has been shown to be efficient in comparison with other GP techniques. It is also very simple to program. Since its introduction, the classic form of CGP has been developed and made more efficient in various ways, notably by including automatically defined functions (modular CGP) and self-modification operators (self-modifying CGP). SMCGP was developed by Julian Miller, Simon Harding and Wolfgang Banzhaf. It uses functions that cause the evolved programs to change themselves as a function of time. Using this technique it is possible to find general solutions to classes of problems and mathematical algorithms (e.g. arbitrary parity, n-bit binary addition, sequences that provably compute pi and e to arbitrary precision, and so on). This tutorial will cover the basic technique, advanced developments and applications to a variety of problem domains. The first edited book on CGP was published by Springer in September 2011. CGP has its own dedicated website.
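
To make the integer-based representation concrete, the following Python sketch (a simplified illustration, not Miller's reference implementation) decodes and evaluates a classic CGP genome whose nodes use two-input arithmetic functions:

```python
import operator

# Function set for the nodes (all arity two for simplicity)
FUNCS = [operator.add, operator.sub, operator.mul,
         lambda a, b: a / b if b != 0 else 1.0]   # protected division

N_INPUTS, N_NODES = 2, 5

def evaluate(genome, inputs):
    """Decode a classic CGP genome and evaluate it on one input vector.

    The genome holds 3 integers per node (two connection genes and a function
    gene) followed by one output gene.  Connection genes may address program
    inputs or any earlier node, so the graph is always feed-forward."""
    values = list(inputs)                        # addresses 0 .. N_INPUTS-1
    for i in range(N_NODES):
        c1, c2, f = genome[3 * i: 3 * i + 3]
        values.append(FUNCS[f](values[c1], values[c2]))
    return values[genome[-1]]                    # output gene selects one address

# Example genome: every node may only reference earlier addresses.
genome = [0, 1, 2,    # node 2: x0 * x1
          0, 2, 0,    # node 3: x0 + node 2
          1, 3, 1,    # node 4: x1 - node 3
          2, 2, 0,    # node 5: node 2 + node 2
          4, 5, 2,    # node 6: node 4 * node 5
          6]          # output: node 6
print(evaluate(genome, [3.0, 4.0]))
```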

Short Biography

Julian Miller has a BSc in Physics (Lond), a PhD in Nonlinear Mathematics (City) and a PGCLTHE (Bham) in Teaching. He is a Reader in the Department of Electronics at the University of York. He has chaired or co-chaired fifteen international workshops, conferences and conference tracks in Genetic Programming (GP) and Evolvable Hardware. He is a former associate editor of IEEE Transactions on Evolutionary Computation and is currently an associate editor of Genetic Programming and Evolvable Machines and Natural Computing. He is on the editorial boards of the journals Evolutionary Computation, International Journal of Unconventional Computing and Journal of Natural Computing Research. He has publications in genetic programming, evolutionary computation, quantum computing, artificial life, evolvable hardware, computational development, and nonlinear mathematics. He is a highly cited author with over 4,500 citations and over 210 publications in related areas. He has given nine tutorials on genetic programming and evolvable hardware at leading conferences in evolutionary computation. He received the prestigious EvoStar award in 2011 for outstanding contribution to the field of evolutionary computation. He is the inventor of the highly cited genetic programming method known as Cartesian Genetic Programming and edited the first book on the subject in 2011.


Multimodal Optimization

Mike Preuss, University of Münster, Germany

Abstract

Multimodal optimization is currently becoming established as a research direction that collects approaches from various domains of evolutionary computation striving to deliver multiple very good solutions at once. We start by discussing why this is actually useful and provide some real-world examples. From there, we set up several scenarios and list currently employed and potentially available performance measures. This part also invites audience interaction: it is currently very much an open question what the actual targets of multimodal optimization should be and how the algorithms should be compared experimentally. As there has been little work on theory in the area (not runtime complexity, but rather the limits of different mechanisms), we present a high-level modelling approach that provides some insight into how niching can actually improve optimization methods if it fulfils certain conditions. While the algorithmic ideas for multimodal optimization (such as niching) originally stem from biology and have been introduced into evolutionary algorithms from the 1970s on, we only now see the consolidation of the field. The vast number of available approaches is being sorted into collections, and taxonomies are starting to emerge. We present our version of a taxonomy, also taking older but surprisingly modern global optimization approaches into account. We highlight some individual mechanisms, such as clustering, multiobjectivization and archives, that can be used as additions to existing algorithms or as building blocks of new ones. We also discuss recent relevant competitions and their results, point to available software and outline possible future developments in this area.
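
As one concrete example of such a mechanism, the sketch below (an illustration under simplifying assumptions, not code from the tutorial) applies the classic clearing procedure to a small population on a one-dimensional multimodal function, so that selection pressure is spread over several peaks rather than concentrated on the single best one:

```python
import math
import random

def f(x):
    """A 1-D test function with five peaks of equal height in [0, 1]."""
    return math.sin(5 * math.pi * x) ** 2

def clearing(pop, radius=0.1, capacity=1):
    """Clearing: process individuals in fitness order; the best `capacity`
    individuals of each niche (ball of the given radius) keep their fitness,
    everyone else in that niche is 'cleared' (fitness set to zero)."""
    pop = sorted(pop, key=lambda ind: ind[1], reverse=True)
    winners, out = [], []
    for x, fit in pop:
        occupied = sum(1 for w in winners if abs(w - x) <= radius)
        if occupied < capacity:
            winners.append(x)
            out.append((x, fit))
        else:
            out.append((x, 0.0))
    return out

rng = random.Random(0)
pop = [(x, f(x)) for x in (rng.random() for _ in range(50))]
for x, fit in sorted(clearing(pop)):
    if fit > 0:
        print(f"niche winner near x = {x:.2f}, f = {fit:.2f}")
```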

Short Biography

Mike Preuss is a Research Associate at ERCIS, the European Research Center for Information Systems, at the University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at the Computer Science Department of TU Dortmund, Germany, where he received his Diploma degree in 1998 and his PhD in 2013. His research interests focus on evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization, and on the experimental methodology for (non-deterministic) optimization algorithms. He is currently working on the adaptability and applicability of computational intelligence techniques for various engineering domains and computer games, pushing forward modern approaches to experimental analysis such as Exploratory Landscape Analysis (ELA) and innovative uses of surrogate models. He was involved in founding the EvoGames track at Evo* and the Digital Entertainment Technologies and Arts (DETA) track at GECCO. Within the games field, he is mainly interested in AI for real-time strategy (RTS) games and procedural content generation (PCG).


Theory of Parallel Evolutionary Algorithms

Dirk Sudholt, University of Sheffield, UK

Abstract

Evolutionary algorithms (EAs) have given rise to many parallel variants, fuelled by the rapidly increasing number of CPU cores and the ready availability of computation power through GPUs and cloud computing. A very popular approach is to parallelize evolution in island models, or coarse-grained EAs, by evolving different populations on different processors. These populations run independently most of the time, but they periodically communicate genetic information to coordinate search. Many applications have shown that island models can speed up computation time significantly, and that parallel populations can further increase solution diversity. However, there is little understanding of when and why island models perform well, and what impact fundamental parameters have on performance. This tutorial will give an overview of recent theoretical results on the runtime of parallel evolutionary algorithms. These results give insight into the fundamental working principles of parallel EAs, assess the impact of parameters and design choices on performance, and contribute to the design of more effective parallel EAs.
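
To fix ideas, here is a minimal Python sketch (an illustration only, simulated sequentially rather than run on parallel hardware) of an island model in which several (1+1) EAs evolve independently and periodically migrate individuals along a ring topology:

```python
import random

def one_max(x):
    return sum(x)

def mutate(x, rng):
    n = len(x)
    return [bit ^ (rng.random() < 1.0 / n) for bit in x]

def island_model(n=100, islands=4, migration_interval=50, seed=0):
    """Ring-topology island model of (1+1) EAs, simulated sequentially.
    Every `migration_interval` generations each island sends a copy of its
    current individual to its successor on the ring; an immigrant replaces
    the resident only if it is strictly better."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(islands)]
    generation = 0
    while max(one_max(x) for x in pop) < n:
        generation += 1
        for i in range(islands):
            y = mutate(pop[i], rng)
            if one_max(y) >= one_max(pop[i]):
                pop[i] = y
        if generation % migration_interval == 0:
            immigrants = [list(pop[(i - 1) % islands]) for i in range(islands)]
            for i in range(islands):
                if one_max(immigrants[i]) > one_max(pop[i]):
                    pop[i] = immigrants[i]
    return generation

print("generations until the OneMax optimum:", island_model())
```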

Short Biography

Dirk Sudholt is a Lecturer at the University of Sheffield, UK. He obtained his Diplom (Master's) degree in 2004 and PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp and at the University of Birmingham, UK, working with Prof. Xin Yao. Dirk's research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms, including hybrid and parallel variants, and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He has published more than 50 refereed papers in international conferences and journals and has won 6 best paper awards at GECCO and PPSN. He is an editorial board member of the Evolutionary Computation journal and a member of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.


Automatic Design of Algorithms via Hyper-heuristic Genetic Programming

John R. Woodward, University of Stirling, UK
Jerry Swan, University of Stirling, UK
Michael Epitropakis, University of Stirling, UK

Abstract

How can we automatically design algorithms for a given problem domain? The aim of this tutorial is to demonstrate how we can use genetic programming to improve human-written programs. The resulting algorithms are therefore part man-made, part machine-made. While there are often many algorithms suitable for a specific task (e.g. the Lin-Kernighan heuristic for the travelling salesman problem), there is often an over-arching structure which defines their functionality. There are commonalities between these algorithms (which define their purpose) and differences (which give rise to different performance). The invariant parts of a family of algorithms can be extracted by examining existing algorithms, and variations of the algorithm can be generated using genetic programming, resulting in novel behaviour but with a predefined purpose. We therefore have a method of mass-producing tailor-made algorithms for specific purposes. This is perhaps best illustrated by the following example: typically a travelling salesman algorithm is developed by hand and, when executed, returns a solution to a specific instance of the problem (i.e. an ordered list of cities). What we advocate is a method that, in this example, automatically generates travelling salesman algorithms. An additional yet centrally important advantage of this approach is that the resulting algorithm is “unique” and bespoke to the specific set of problem instances used to train it. Continuing the travelling salesman example, two logistics companies will have two different probability distributions of customers and therefore require two different algorithms if they are to achieve better performance than a standard off-the-shelf travelling salesman algorithm. This method has been applied to a rapidly increasing number of domains, including data mining/machine learning, combinatorial problems such as bin packing (online and offline), travelling salesman problems, Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, layout of wind farms, and components of metaheuristics themselves. A step-by-step guide will be given, taking the novice through the distinct stages of the process of automatic design, and a number of examples will be given to illustrate and reinforce the method in practice.
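
To make the template-plus-variation idea concrete, the following toy sketch (an illustration only; plain random search stands in for genetic programming to keep it short) generates scoring heuristics for online bin packing inside a fixed, hand-written packing loop and compares the best generated rule with the classic best-fit rule:

```python
import random

rng = random.Random(42)
CAP = 100

# Invariant part (written by hand): the online packing template is fixed.
def pack(items, score):
    """Pack items in arrival order; each item goes into the feasible open bin
    that maximises score(fill, item), or into a new bin. Returns bins used."""
    bins = []
    for item in items:
        feasible = [i for i, fill in enumerate(bins) if fill + item <= CAP]
        if feasible:
            best = max(feasible, key=lambda i: score(bins[i], item))
            bins[best] += item
        else:
            bins.append(item)
    return len(bins)

# Variable part (machine-made): a tiny expression language over {fill, item}.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b, '/': lambda a, b: a / b if b else 1.0}

def random_expr(depth=3):
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(['fill', 'item', rng.uniform(-1.0, 1.0)])
    return (rng.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, fill, item):
    if expr == 'fill':
        return fill
    if expr == 'item':
        return item
    if isinstance(expr, tuple):
        op, a, b = expr
        return OPS[op](evaluate(a, fill, item), evaluate(b, fill, item))
    return expr                      # a numeric constant

# Hyper-heuristic search: random search over expressions stands in for GP.
train = [[rng.randint(10, 40) for _ in range(50)] for _ in range(5)]
def cost(expr):
    return sum(pack(inst, lambda fill, item: evaluate(expr, fill, item))
               for inst in train)

best = min((random_expr() for _ in range(200)), key=cost)
print("bins used by generated heuristic:", cost(best))
print("bins used by hand-written best fit:", cost(('+', 'fill', 'item')))
```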

Short Biographies

John Woodward is a lecturer at the University of Stirling, within the CHORDS group, and is employed on the DAASE project; for the previous four years he was a lecturer at the University of Nottingham (China). He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, and Artificial Intelligence/Machine Learning, particularly Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, including both theoretical and empirical contributions, and has given over 50 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.
Before entering academia, Jerry Swan spent 20 years in industry as a systems architect and software company owner. He obtained his PhD in Computational Group Theory from the University of Nottingham in 2006. His research interests include software engineering, computer algebra, formal methods and their application to meta- and hyper-heuristics, symbolic computation and machine learning. He has published in international journals and conferences and serves as a reviewer for numerous journals and program committees. Jerry's research has been presented worldwide, and since 2011 he has been a presenter and co-organizer of various GECCO workshops concerned with the automation of the heuristic design process.
Michael G. Epitropakis received his B.S., M.S., and Ph.D. degrees from the Department of Mathematics, University of Patras, Greece. He is currently a post-doctoral research assistant in the Computational-Heuristics, Operations Research and Decision-Support (CHORDS) research group at the Department of Computing Science and Mathematics, University of Stirling, Scotland. His current research interests include computational intelligence, evolutionary computation, swarm intelligence, automatic design of algorithms, machine learning and search-based software engineering. He has published more than 25 journal and conference papers. He serves as a reviewer for numerous journals and conferences. He is a member of the IEEE Computational Intelligence Society and ACM SIGEVO.


Evolutionary Bilevel Optimization

Ankur Sinha, Aalto University School of Business, Helsinki
Pekka Malo, Aalto University School of Business, Helsinki
Kalyanmoy Deb, Michigan State University, East Lansing, MI

Abstract

Many practical optimization problems are better posed as bilevel optimization problems, in which there are two levels of optimization tasks. A solution at the upper level is feasible only if the corresponding lower level variable vector is optimal for the lower level optimization problem. Consider, for example, an inverted pendulum problem in which the motion of the platform is the upper level optimization task of performing the balancing in a time-optimal manner, while, for a given motion of the platform, whether the pendulum can be balanced at all becomes a lower level optimization problem of maximizing the stability margin. Such nested optimization problems are commonly found in transportation, engineering design, game playing and business models. They are also known as Stackelberg games in the operations research community. These problems are too complex to be solved using classical optimization methods simply due to the "nestedness" of one optimization task in another. Evolutionary algorithms (EAs) provide amenable ways to solve such problems due to their flexibility and their ability to handle constrained search spaces efficiently. Clearly, EAs have an edge in solving such difficult yet practically important problems. In the recent past, there has been a surge in research activity towards solving bilevel optimization problems. In this tutorial, we will introduce principles of bilevel optimization for single and multiple objectives, and discuss the difficulties in solving such problems in general. With a brief survey of the existing literature, we will present a few viable evolutionary algorithms for both single- and multi-objective bilevel optimization. Our recent studies on bilevel test problems and some application studies will be discussed. Finally, a number of immediate and future research ideas on bilevel optimization will also be highlighted.
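
As a minimal illustration of the nested structure (a toy problem chosen for this page and solved with simple (1+1)-ES loops, not the algorithms covered in the tutorial), the upper-level search below may only evaluate a candidate x after the lower-level search has returned the follower's optimal response y*(x):

```python
import random

rng = random.Random(0)

def lower_objective(x, y):
    return (y - x) ** 2                   # follower's cost for a given leader decision x

def upper_objective(x, y):
    return x ** 2 + (y - 1) ** 2          # leader's cost, depends on both levels

def solve_lower(x, iters=200):
    """Inner (1+1)-ES: for a fixed upper-level x, approximate the follower's
    optimal response (analytically y*(x) = x, which lets us sanity-check it)."""
    y = rng.uniform(-5, 5)
    fy, step = lower_objective(x, y), 1.0
    for _ in range(iters):
        cand = y + step * rng.gauss(0, 1)
        fc = lower_objective(x, cand)
        if fc <= fy:
            y, fy, step = cand, fc, step * 1.2
        else:
            step *= 0.9
    return y

def solve_bilevel(iters=100):
    """Outer (1+1)-ES over x; each candidate is evaluated only after the nested
    lower-level search has produced its response y*(x)."""
    x = rng.uniform(-5, 5)
    fx, step = upper_objective(x, solve_lower(x)), 1.0
    for _ in range(iters):
        cand = x + step * rng.gauss(0, 1)
        fc = upper_objective(cand, solve_lower(cand))
        if fc <= fx:
            x, fx, step = cand, fc, step * 1.2
        else:
            step *= 0.9
    return x, fx

x, fx = solve_bilevel()
print(f"upper-level solution x = {x:.3f} (exact optimum at x = 0.5), F = {fx:.3f}")
```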

Short Biographies

Ankur Sinha is a researcher at the Department of Information and Service Economy, Aalto University School of Business (formerly the Helsinki School of Economics), Helsinki, Finland. He completed his Ph.D. at Aalto University School of Business, where his dissertation was adjudged the best thesis of the year 2011. He holds a Bachelor's degree in Mechanical Engineering from the Indian Institute of Technology Kanpur, India. His research interests are in the areas of bilevel optimization, multi-objective evolutionary algorithms and multi-criteria decision making. He was a co-chair of the track on Evolutionary Bilevel Optimization at CEC 2012 and CEC 2013, and he offered a tutorial on Evolutionary Bilevel Optimization at GECCO 2013. Furthermore, he has chaired sessions on other topics and has been active as a program committee member in evolutionary computation conferences.
Pekka Malo is an Assistant Professor of Statistics in the Department of Information and Service Economy at Aalto University, Finland. He holds a PhD degree in quantitative methods from Helsinki School of Economics and M.Sc. degree in mathematics from Helsinki University. Before joining the department he worked as a business simulation developer at Cesim and head of research at Fosta Consulting. His research interests include evolutionary computation, machine learning, semantic information retrieval, artificial intelligence, and their applications to economics and finance.
Kalyanmoy Deb is the Koenig Endowed Chair Professor at Michigan State University, Michigan, USA. He is the recipient of the prestigious TWAS Prize in Engineering Science, the Infosys Prize in Engineering and Computer Science, and the Shanti Swarup Bhatnagar Prize in Engineering Sciences for the year 2005. He has also received the ‘Thomson Citation Laureate Award’ from Thomson Scientific for having the highest number of citations in Computer Science in India during the past ten years. He is a fellow of the IEEE, the Indian National Academy of Engineering (INAE), the Indian National Academy of Sciences, and the International Society for Genetic and Evolutionary Computation (ISGEC). He received the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in 2003. His main research interests are in computational optimization, modeling and design, and evolutionary algorithms. He has written two textbooks on optimization and more than 350 international journal and conference research papers. He pioneered and is a leader in the field of evolutionary multi-objective optimization. He is an associate editor or editorial board member of a number of major international journals.


Parallel Experiences in Solving Complex Problems

Enrique Alba, University of Malaga, Spain

Abstract

This talk introduces the basic concepts of two fields of research: parallelism and metaheuristics. We will review the main concepts, tools, metrics, open issues, and application domains related to parallel models of search, optimization, and learning techniques. Algorithms that search in a decentralized manner and are later parallelized will be shown to solve complex problems at previously unseen levels of efficiency and efficacy. Facts, methodology, and general open issues will be presented in this talk.
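
As a very small example of the most basic parallel model touched on above, master-slave fitness evaluation, the following Python sketch (a generic illustration, not code from the talk) parallelizes only the evaluation step of a simple generational EA:

```python
import random
from multiprocessing import Pool

def fitness(x):
    """Objective to minimise (sphere function); imagine it being expensive."""
    return sum(v * v for v in x)

def evolve(pop_size=40, dim=20, generations=50, workers=4, seed=0):
    """Generational EA whose only parallel part is fitness evaluation:
    the master does selection and variation, a worker pool evaluates."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    with Pool(workers) as pool:
        for _ in range(generations):
            fits = pool.map(fitness, pop)              # evaluated in parallel
            ranked = [x for _, x in sorted(zip(fits, pop))]
            parents = ranked[:pop_size // 2]           # truncation selection
            pop = [[g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
                   for _ in range(pop_size)]
    return min(map(fitness, pop))

if __name__ == "__main__":
    print("best fitness found:", evolve())
```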

Short Biography

Prof. Enrique Alba received his degree in engineering (1992) and his PhD in Computer Science (1999) from the University of Málaga (Spain). He works as a Full Professor at this university, teaching data communications at the graduate level and evolutionary algorithms at the master's level. Prof. Alba leads a team of researchers in the field of complex optimization, with applications in bioinformatics, software engineering, telecoms, smart cities, and others. In addition to organizing international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, …), Prof. Alba has offered dozens of doctoral courses and multiple seminars at more than 20 international institutions, and has directed several research projects (6 with national funds, 5 in Europe and numerous bilateral actions). He has also directed 7 projects for innovation and technology transfer to industry (OPTIMI, Tartessos, ACERINOX, ARELANCE, TUO) and at present also works as an invited professor at INRIA, the University of Luxembourg, and the University of Ostrava. He is an editor for several international journals and book series of Springer-Verlag and Wiley, and often reviews articles for more than 30 impact journals. He has published more than 70 articles in journals indexed by Thomson ISI, 17 articles in other journals, 40 papers in LNCS, and more than 200 refereed conference papers. In addition, Prof. Alba has published 11 books and 39 book chapters, and has received 6 awards for his professional activities. Prof. Alba's h-index is 37, with more than 6,500 citations to his work.


Algorithm and Experiment Design with HeuristicLab - An Open Source Optimization Environment for Research and Education

Stefan Wagner, University of Applied Sciences Upper Austria, Austria
Gabriel Kronberger, University of Applied Sciences Upper Austria, Austria

Abstract

HeuristicLab is an open-source system for heuristic optimization that features many metaheuristic optimization algorithms (e.g., genetic algorithms, genetic programming, evolution strategies, tabu search, simulated annealing) as well as many optimization problems (e.g., traveling salesman, regression, classification, vehicle routing, knapsack, job shop scheduling, simulation-based optimization). It is based on C# and the Microsoft .NET Framework and is used as a development platform for several research and industry projects as well as for teaching metaheuristics in university courses. This tutorial demonstrates how to apply HeuristicLab in research and education for creating, executing and analyzing metaheuristic optimization algorithms. It includes many interactive live demonstrations showing how to parameterize and execute evolutionary algorithms to solve combinatorial optimization problems as well as data analysis problems. The participants will see how to assemble different algorithms and parameter settings into large-scale optimization experiments with HeuristicLab's graphical user interface and how to execute such experiments on multi-core or cluster systems. Furthermore, the experiment results will be compared using HeuristicLab's interactive charts for visual and statistical analysis. To complete the tutorial, it will be sketched briefly how HeuristicLab can be extended with further optimization problems and how custom optimization algorithms can be modeled using the graphical algorithm designer.

Short Biographies

Stefan Wagner received his PhD in technical sciences in 2009 from the Johannes Kepler University Linz, Austria. From 2005 to 2009 he worked as an associate professor for software project engineering and since 2009 as a full professor for complex software systems at the University of Applied Sciences Upper Austria, Campus Hagenberg. He is one of the founders of the research group Heuristic and Evolutionary Algorithms Laboratory (HEAL) and is the project manager and head developer of the HeuristicLab optimization environment.
Gabriel Kronberger received his PhD in engineering sciences in 2010 from JKU Linz, Austria; his research interests include genetic programming, machine learning, and data mining and knowledge discovery. Since 2011 he has been a professor for data engineering and business intelligence at the University of Applied Sciences Upper Austria, Campus Hagenberg.
