Workshop co-located with ECML-PKDD 2012, September 24-28, Bristol, UK.
This workshop is dedicated to the proposition that insight often begins with unexpected results. Successful methods do not simply fall from the sky: they are discovered based on clues gathered by trying several ideas, learning from surprising results, and building an understanding of what works, what does not, and why.
Unexpected results chart the boundaries of our knowledge: they identify errors, reveal false assumptions, and force us to dig deeper. When a system works we focus on its input/output behavior; only when a problem occurs do we examine the underlying mechanisms to understand what went wrong.
Unfortunately, this process is rarely mentioned in the machine learning and data mining discourse, meaning that this insight is essentially lost. Ironically, while we have long understood that learning from only positive results is substantially harder than learning from both positives and negatives, there exists a publication bias that favours (incremental) successes over novel discoveries of why some ideas, while intuitive and plausible, do not work.
Good science consists of carefully designed experiments, systematic procedures, and honest evaluations. It is not mandatory for the results to be positive, only that they provide a deeper understanding of the field. In a scientific area where empirical methods dominate, it is a given that people try many ideas and obtain surprising results in the experimental stage. This may be due to a lack of rigor, but often there are deeper, unexpected, and intriguing reasons. We can learn a lot if we analyze scientifically why an intuitive and plausible experiment did not work as expected.
Just as every cloud has a silver lining, these unexpected results define the actual boundaries of our field: they highlight what we do not yet understand, and often point to interesting and open problems that ought to be explored.
The workshop proceedings are now available.
Silver 2012 is a full-day workshop on September 24th. It will take place at the Wills Memorial Building, room 3.33.
09.30  Opening keynote: 'Unexpected Results in Monte-Carlo Tree Search' by Olivier Teytaud (Université Paris-Sud)
10.30  Coffee break
11.00  'Generation of an Empirical Theory for Positive-Versus-Negative Ensemble Classification' by Patricia Lutu (University of Pretoria)
11.30  'How to make the most of result evaluation?' by Ana Costa e Silva (University of Porto)
12.00  Lunch (on your own)
13.30  Keynote: 'On the search for and appreciation of unexpected results in data mining research (or: Science - we might be doing it wrong)' by Albrecht Zimmermann (University of Leuven)
14.30  'Adventures in Feature Selection on an Industrial Dataset...and Ensuing General Discoveries' by George Forman (Hewlett-Packard Labs)
15.00  Panel discussion
16.00  Coffee break
16.30  ECML PKDD Opening Session
With this workshop, we want to give a voice to those unexpected results that deserve wider dissemination. Given the topic, we will especially focus on fostering interaction. The plenary program will feature several invited talks on lessons learned from unexpected results, a selection of the best contributed (full) papers, and a poster session which will also be open to cases of unexpected results that cannot yet be fully explained. At the end of the workshop, we will hold a panel discussion to discuss the value of unexpected results, their place in the machine learning and data mining literature, and how our research community can ensure these results receive wider attention.
All accepted papers, including those presented as a poster, will be considered as part of a journal special issue on the topic of learning from unexpected results.
Submissions are possible as either a full paper or an extended abstract. Full papers should present original studies that fall into one of the following categories:
The submission system can be found here.
The best papers and runners-up will be selected for presentation in the plenary session. The remaining accepted submissions will be presented in the form of short announcement talks and a poster session. In the journal special issue, the papers selected for plenary presentation will be marked.
Full papers may be at most 16 pages long and extended abstracts at most 4 pages, following the Springer LNAI formatting guidelines. Papers in other formats (but not exceeding the page limits) may be accepted during initial submission, but must be reformatted before the camera-ready deadline. Submissions must be made online via the EasyChair submission interface and can be updated at will before the submission deadline. The only accepted format for submitted papers is PDF. Each submission will be reviewed by at least two members of the program committee, and all accepted submissions will be considered for the special journal issue.
The main selection criterion will be whether it appears worthwhile to record the unexpected results for the community: that is, whether the lessons learned will be beneficial to a broader audience, or whether further research into these results may advance the field. Full papers should describe well-documented experiments, with results that should be surprising to many, and an (at least sketched-out) explanation that turns the unexpected outcome into a piece of useful machine learning and data mining knowledge.