
GenOpt

Generalization-based contest in global optimization

To see the results of the previous edition, please refer to the GENOPT 2016 page.

If you are interested in the GENOPT competition, please send us your name and email. You will be added to the mailing list and receive a password.
Also use this form to send us any short message related to GenOpt.

Your email will be used only for communications related to GenOpt.


Special session at the LION11 conference (Nizhny Novgorod, Russia, June 19-21, 2017).

Organizers:

  • Roberto Battiti, Head of LIONlab for "Machine Learning and Intelligent Optimization", University of Trento (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia);
  • Yaroslav Sergeyev, Head of Numerical Calculus Laboratory, DIMES, University of Calabria (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia);
  • Mauro Brunato, LIONlab, University of Trento (Italy);
  • Dmitri Kvasov, DIMES, University of Calabria (Italy) and "Lobachevsky" University of Nizhny Novgorod (Russia).
GENOPT website maintainer:
  • Andrea Mariello, LIONlab, University of Trento (Italy).

While comparing results on benchmark functions is a widely used practice for demonstrating the competitiveness of global optimization algorithms, fixed benchmarks can lead to a negative data-mining process: the motivated researcher can "torture" the algorithmic choices and parameters until the resulting algorithm "confesses" positive results for the specific benchmark.

To avoid this negative data-mining effect, the GENOPT contest is based on randomized function generators with fixed statistical characteristics but individual variation among the generated instances.

The generators are available to the participants to test offline and online tuning schemes, but the final competition is based on random seeds communicated in the last phase.

A dashboard reflects the current ranking of the participants, who are encouraged to exchange preliminary results and opinions.

The final "generalization" ranking will be confirmed in the last competition phase.

The GENOPT manifesto

The document detailing the motivations and rules of the GENOPT challenge (aka the GENOPT Manifesto, version Dec 31, 2016) is available for download.

Schedule

  • March 31 at 23:59:59 GMT: the public phase ends; existing competitors will have one week to make a final submission.
    Update 1: existing competitors can retrieve the seed for the final submission from the leaderboard page after login.
    Update 2: detailed instructions for the final submission have been sent by email to the contestants.
  • April 7 at 23:59:59 GMT: the competition ends; winners of the different categories are determined and asked to submit a paper describing their approach and detailed results (papers are reviewed under the normal LION rules, but with a submission deadline of April 21).
  • May 19: decisions about paper acceptance are communicated to authors.
  • LION11 conference (June 19-21, 2017): reviewed and accepted papers are presented and competition winners are publicly recognized.
  • After LION: a special issue of a good-quality journal dedicated to the results obtained in the winning and reviewed papers.

Participating and submitting

Benchmark function library

Functions to be optimized are made available as binary libraries with wrappers for various languages and platforms. Usage examples are provided in the zip file and below.

Each run will create a report file, which you can then submit to the GENOPT website for ranking on the leaderboard.

The library is written in C. Other languages can link the libraries directly (e.g., Fortran) or access them through wrappers (Java, MATLAB). The available combinations of language and platform are shown in Table 1.

Table 1. Language/platform matrix

Language      | Windows (native, 32/64-bit) | Windows (MinGW, 32/64-bit) | Windows (Cygwin, 32/64-bit) | Linux (32/64-bit) | Mac OS X (32/64-bit, Intel only)
C/C++         | Yes                         | Yes                        | Yes                         | Yes               | Yes
Fortran       | Yes (G95, Lahey)            | Yes (GNU, G95)             | Yes (GNU, G95)              | Yes               | Yes (GNU, G95)
Java          | Yes                         | No                         | No                          | Yes               | Yes (64-bit only)
MATLAB/Octave | Yes                         | Untested                   | Yes                         | Yes               | Yes

If you would like libraries for another platform, or a wrapper for another language, please send a message via the form above. Volunteers are particularly welcome! We are also considering suggestions for additional benchmark functions. Ideally, benchmarks should be designed with controllable parameters to answer specific scientific questions, e.g., about the relationship between problem structure and optimal (possibly self-tuned) algorithms, or about scalability to large dimensionality.
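
As a rough illustration of the calling pattern from C, here is a minimal sketch. Only genopt_init and its two integer arguments are documented on this page; the header name genopt.h and the genopt_get_dimension / genopt_evaluate calls are hypothetical placeholders, so please refer to the usage examples shipped in the zip file for the actual interface.

    /* Minimal sketch of one call to the GenOpt benchmark library from C.
     * Only genopt_init(function_type, seed) is named on this page;
     * "genopt.h", genopt_get_dimension() and genopt_evaluate() are
     * hypothetical placeholders for the actual names in the zip file. */
    #include <stdio.h>
    #include "genopt.h"                       /* hypothetical header name */

    int main(void)
    {
        int function_type = 0;                /* between 0 and 17 inclusive */
        int seed = 1;                         /* between 1 and 100 inclusive */

        genopt_init(function_type, seed);     /* select the function instance */

        int n = genopt_get_dimension();       /* hypothetical: problem dimension */
        double x[n];
        for (int i = 0; i < n; i++)
            x[i] = 0.0;                       /* a (trivial) trial point */

        double f = genopt_evaluate(x);        /* hypothetical: one evaluation */
        printf("f(x) = %g\n", f);
        return 0;
    }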

Download

The current version is genopt-20170217.zip.
MD5 sum: e669e29546476ff4b8ec14d4c77967b9

Documentation

All documentation is also included in the zip file.

Submitting your Results

  • Please make sure that your code is linked with the latest version of the GenOpt libraries provided above.
  • The initialization function (genopt_init in the C and Fortran code, Genopt.init in Java, genopt in MATLAB) takes two integer arguments:
    - a function type index, which in the submission must vary between 0 and 17 inclusive, and
    - an integer seed, to be varied between 1 and 100 inclusive.
  • Run your optimization algorithm for every function type from 0 to 17 inclusive and for every seed from 1 to 100 inclusive, for 1,000,000 evaluations per run (see the sketch at the end of this page). You can set the 1,000,000-evaluation limit by calling the appropriate function in the GenOpt library, or have your algorithm stop shortly after the limit is reached.
  • Every run will generate a report file, for a total of 18x100=1800 files. Compress all report files into a single ZIP file.
  • Log in (if you don't have your credentials, please send us a message via the contact form above on this page) and upload your ZIP file.
    You can choose any name for your submission (your registration name and email are not public).
    We suggest a default name of the form participant-algorithm-number, so that you can submit different runs for different algorithms.
  • When the upload is complete, the Leaderboard Page will open with your new submission highlighted.
Submissions are ranked by combining different evaluation criteria. For more details, including the description of the functions being optimized and the ranking methodology, please refer to the GENOPT Manifesto.
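
To make the procedure above concrete, the following C sketch shows the outer driver loop for a full submission. Only genopt_init, the 0-17 function-type range, the 1-100 seed range and the 1,000,000-evaluation budget come from the instructions above; genopt_set_max_evaluations and run_my_optimizer are hypothetical placeholders for the library's budget-setting call and for your own algorithm.

    /* Sketch of the full submission loop: 18 function types x 100 seeds,
     * each run limited to 1,000,000 evaluations and producing one report
     * file (18x100=1800 files in total, to be zipped and uploaded).
     * genopt_set_max_evaluations() and run_my_optimizer() are hypothetical
     * placeholders; only genopt_init() is named on this page. */
    #include "genopt.h"                        /* hypothetical header name */

    #define MAX_EVALUATIONS 1000000L

    void run_my_optimizer(long budget);        /* your algorithm (placeholder) */

    int main(void)
    {
        for (int function_type = 0; function_type <= 17; function_type++) {
            for (int seed = 1; seed <= 100; seed++) {
                genopt_init(function_type, seed);
                genopt_set_max_evaluations(MAX_EVALUATIONS);  /* hypothetical call */
                run_my_optimizer(MAX_EVALUATIONS);
                /* the library writes the report file for this run */
            }
        }
        return 0;
    }

After all runs complete, compress the 1800 report files into a single ZIP archive and upload it as described above.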