Author Identification
2013

Authorship attribution is an important problem in many areas, including information retrieval and computational linguistics, as well as in applied areas such as law and journalism, where knowing the author of a document (such as a ransom note) may save lives. The most common framework for testing candidate algorithms is a text classification problem: given known sample documents from a small, finite set of candidate authors, which of them, if any, wrote a questioned document of unknown authorship? It has been argued, however, that this may be an unreasonably easy task. A more demanding problem is author verification: given a set of documents by a single author and a questioned document, determine whether the questioned document was written by that particular author or not. This setting more accurately reflects the experience of professional forensic linguists, who are often called upon to answer exactly this kind of question.

A note to forensic linguists: In order to bridge the gap between linguistics and computer science, we strongly encourage submissions from researchers in both fields. We understand that research groups with expertise in linguistics use manual or semi-automated methods and are therefore not able to submit their software. To enable their participation, we will provide them with the opportunity to analyze the test corpus after the deadline for software submission (mid-April). Their results will be ranked in a separate list with respect to the performance of the software submissions, and they will be entitled to describe their approach in a paper.

Task
Given a small set (no more than 10, possibly as few as one) of "known" documents by a single person and a "questioned" document, the task is to determine whether the questioned document was written by the same person who wrote the known document set.
Training Corpus

To develop your software, we provide you with a training corpus comprising several problem instances; each instance consists of a set of known documents by a single person and exactly one questioned document. The problems cover English, Greek, and Spanish, with a varying number of known documents (1-10 per problem). All documents within a single problem instance are in the same language, and best efforts have been made to ensure that within-problem documents are matched for genre, register, theme, and date of writing. Document lengths vary from a few hundred to a few thousand words.

Learn more » Download corpus
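
For orientation only, the sketch below (in Python) shows one toy way a verification decision could be made for a single problem: it compares character 3-gram frequency profiles of the known documents and the questioned document via cosine similarity and thresholds the result. The file names (known*.txt, unknown.txt) and the 0.3 threshold are assumptions made for illustration; this is neither the official baseline nor any participant's method.

# Toy verification sketch (illustration only, not an official baseline or any
# participant's method). Assumed, hypothetical file layout inside one problem
# folder: known*.txt for the known documents and unknown.txt for the
# questioned document; the 0.3 threshold is likewise an arbitrary assumption.
import glob
import math
import os
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(max(len(text) - n + 1, 0)))

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def verify(problem_dir, threshold=0.3):
    """Return (answer, score) for a single problem folder."""
    known = Counter()
    for path in glob.glob(os.path.join(problem_dir, "known*.txt")):
        with open(path, encoding="utf-8") as f:
            known += char_ngrams(f.read())
    with open(os.path.join(problem_dir, "unknown.txt"), encoding="utf-8") as f:
        questioned = char_ngrams(f.read())
    score = cosine(known, questioned)
    return ("Y" if score >= threshold else "N"), score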

Output

Your software must take as input the absolute path to a set of problems. For each problem there is a separate sub-folder within that path containing the set of known documents and the single unknown document of that problem. The software has to output a single text file, answers.txt, with the produced answers for the whole set of evaluation problems. Each line of this file corresponds to one problem instance: it starts with the ID of the problem, followed by a binary (Y)ES/(N)O answer to the question "Is the unknown document written by the author of the known documents?". If you do not want to provide an answer for some problem, either replace the answer character with "-" or simply omit the line for that problem. For example, an answers.txt file may look like this:

EN01 Y
EN02 N
EN03 -
EN04 Y
EN07 Y
...

Optionally, you may also provide a score, a real number in the interval [0,1], where 0 corresponds to NO and 1 to YES. The score should be rounded to two decimal digits and will allow a more detailed evaluation of your approach. In this case, the scores have to be placed next to the binary answers. It is possible to provide scores even for problems for which you are not able to provide a binary answer. For example, an answers.txt file with scores may look like this:

EN01 Y 0.90
EN02 N 0.25
EN03 - 0.53
EN04 Y 0.86
EN07 Y 0.74
...

Use a single whitespace character to separate problem ID, binary answer, and score. The naming of the output file is up to you; we recommend using the name of your participant group and run.
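
As a concrete illustration of this format, the following minimal sketch (in Python, purely as an example) writes such a file from a list of (problem ID, answer, score) tuples; the output file name and the variable names are hypothetical.

# Minimal sketch: write results in the required answers.txt format.
# Each result is (problem_id, answer, score); answer is "Y", "N", or "-",
# and score is a float in [0, 1] or None if no score is given.
results = [
    ("EN01", "Y", 0.90),
    ("EN02", "N", 0.25),
    ("EN03", "-", 0.53),
    ("EN04", "Y", None),
]

with open("mygroup-run1.txt", "w", encoding="utf-8") as out:
    for problem_id, answer, score in results:
        if score is None:
            out.write(f"{problem_id} {answer}\n")
        else:
            out.write(f"{problem_id} {answer} {score:.2f}\n")  # two decimal digits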

Performance Measures

Performance of the binary classification will be measured as follows:

  • Recall = #correct_answers / #problems
  • Precision = #correct_answers / #answers

Participants are ranked by combining these measures via the F1 score.

In addition, participants may also provide a score, a real number in the interval [0,1], where 0 corresponds to NO and 1 to YES. A separate ranking will be compiled for those participants who submit such scores, based on ROC-AUC. For the calculation of ROC curves, any missing answers are counted as wrong answers.
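
For self-evaluation once the ground truth is released, these measures can be computed in a few lines. The sketch below is only an illustration: it assumes the ground truth and the submitted answers are available as dictionaries keyed by problem ID, and it treats "-" as a non-answer; the example values are invented.

# Sketch: recall, precision, and F1 as defined above.
# truth: {problem_id: "Y" or "N"}; answers: {problem_id: "Y", "N", or "-"}.
def evaluate(truth, answers):
    given = {pid: a for pid, a in answers.items() if a in ("Y", "N")}
    correct = sum(1 for pid, a in given.items() if truth.get(pid) == a)
    recall = correct / len(truth)
    precision = correct / len(given) if given else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1

# Hypothetical example (ground truth values are invented):
truth = {"EN01": "Y", "EN02": "N", "EN03": "N", "EN04": "N"}
answers = {"EN01": "Y", "EN02": "N", "EN03": "-", "EN04": "Y"}
print(evaluate(truth, answers))  # approximately (0.50, 0.67, 0.57)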

Test Corpus

Once you have finished tuning your approach to achieve satisfactory performance on the training corpus, you should run your software on the test corpus.

During the competition, the test corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the test corpus will be made available, including the ground truth data. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Download corpus 1 Download corpus 2

Submission

We ask you to prepare your software so that it can be executed via a command line call. You can choose freely among the available programming languages and among the operating systems Microsoft Windows 7 and Ubuntu 12.04. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via SSH and via remote desktop. Please test your software using one of the unit test scripts below. Download the script, fill in the required fields, and start it using the sh command. If the script runs without errors and produces the correct output, you can submit your software by sending your unit test script via e-mail to pan@webis.de. For more information, see the PAN 2013 User Guide below.
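
A minimal command-line skeleton matching this setup might look as follows; the flag names and the verify() helper (the hypothetical function from the training-corpus sketch above) are illustrative, and any interface that can be driven from the command line is acceptable.

# Sketch of a command-line entry point (flag names are illustrative).
import argparse
import os

def main():
    parser = argparse.ArgumentParser(description="PAN 2013 author identification run")
    parser.add_argument("-i", dest="input_dir", required=True,
                        help="absolute path to the set of problems")
    parser.add_argument("-o", dest="output_file", default="answers.txt",
                        help="path of the answers file to write")
    args = parser.parse_args()

    with open(args.output_file, "w", encoding="utf-8") as out:
        for problem_id in sorted(os.listdir(args.input_dir)):
            problem_dir = os.path.join(args.input_dir, problem_id)
            if not os.path.isdir(problem_dir):
                continue  # skip stray files at the top level
            answer, score = verify(problem_dir)  # hypothetical helper from the earlier sketch
            out.write(f"{problem_id} {answer} {score:.2f}\n")

if __name__ == "__main__":
    main()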

PAN User Guide » Unit-Test Windows » Unit-Test Ubuntu »

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Results

The following table lists the performances achieved by the participating teams:

Authorship attribution performance
F1      Participant
0.753   Shachar Seidman (Bar Ilan University, Israel)
0.718   Oren Halvani, Martin Steinebach, and Ralf Zimmermann (Fraunhofer Institute for Secure Information Technology SIT, Germany)
0.671   Robert Layton, Paul Watters, and Richard Dazeley (University of Ballarat, Australia)
0.671   Timo Petmanson (University of Tartu, Estonia)
0.659   Magdalena Jankowska, Vlado Kešelj, and Evangelos Milios (Dalhousie University, Canada)
0.659   Darnes Vilariño, David Pinto, Helena Gómez, Saúl León, and Esteban Castillo (Benemérita Universidad Autónoma de Puebla, Mexico)
0.655   Victoria Bobicev (Technical University of Moldova, Moldova)
0.647   Vanessa Wei Feng and Graeme Hirst (University of Toronto, Canada)
0.612   Paola Ledesma°, Gibran Fuentes*, Gabriela Jasso*, Angel Toledo*, and Ivan Meza* (°Escuela Nacional de Antropología e Historia (ENAH) and *Universidad Nacional Autónoma de México (UNAM), Mexico)
0.606   M.R. Ghaeini (Amirkabir University of Technology, Iran)
0.600   Michiel van Dam (Delft University of Technology, The Netherlands)
0.600   Erwan Moreau and Carl Vogel (Trinity College Dublin, Ireland)
0.576   Arun Jayapal and Binayak Goswami (Nuance Communications, India)
0.553   Cristian Grozea° and Marius Popescu* (°Fraunhofer FIRST, Germany, and *University of Bucharest, Romania)
0.541   Anna Vartapetiance and Lee Gillam (University of Surrey, UK)
0.529   Roman Kern (Know-Center GmbH, Austria)
0.500   Baseline
0.417   Cor J. Veenman° and Zhenshi Li* (°Netherlands Forensic Institute and *Delft University of Technology, The Netherlands)
0.331   Sorin Fratila (University Politehnica of Bucharest, Romania)

A more detailed analysis of the detection performances can be found in the overview paper accompanying this task.

Learn more »

Related Work

We refer you to:

Task Chair

Efstathios Stamatatos

University of the Aegean

Task Committee

Patrick Juola

Duquesne University

Shlomo Argamon

Illinois Institute of Technology

Moshe Koppel

Bar-Ilan University