Plagiarism Detection
2013

This task is divided into source retrieval and text alignment. You can choose to solve one or both of them.

Source Retrieval

Task
Given a suspicious document and a web search API, your task is to retrieve all plagiarized sources while minimizing retrieval costs.
Training Corpus

To develop your software, we provide you with a training corpus that consists of suspicious documents. Each suspicious document is about a specific topic and may contain passages plagiarized from web pages on that topic found in the ClueWeb09 corpus.

Learn more » Download corpus

API

If you are not in possession of the ClueWeb09 corpus, we also provide access to two search engines which index the ClueWeb, namely the Lemur Indri search engine and the ChatNoir search engine. To programmatically access these two search engines, we provide a unified search API.

Learn more »

Note: To better separate the source retrieval task from the text alignment task, the API provides a text alignment oracle feature. For each document you request to download from the ClueWeb, the text alignment oracle discloses whether this document is a source of plagiarism for the suspicious document in question. In addition, the plagiarized text is returned. This way, participation in the source retrieval task does not require the development of a text alignment solution. However, you are free to use your own text alignment if you want to.
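For orientation, below is a minimal sketch of fetching a ClueWeb document through the ChatNoir wrapper, using the download URL scheme shown in the example log further down. Any authentication the API requires (e.g., an access token) is omitted here; consult the API documentation linked above for the details.

# Minimal sketch: fetch a ClueWeb document via the ChatNoir wrapper.
# The URL scheme follows the download URLs shown in the example log below.
# Authentication details (e.g., an access token) are omitted; see the API docs.
import requests

CHATNOIR_CLUEWEB = "http://webis15.medien.uni-weimar.de/chatnoir/clueweb"

def download_clueweb_document(doc_id):
    """Return the text of the ClueWeb document with the given id."""
    response = requests.get(CHATNOIR_CLUEWEB, params={"id": str(doc_id)})
    response.raise_for_status()
    return response.text

# Usage: download_clueweb_document(110212744)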

Baseline

For your convenience, we provide a baseline program written in Python.

Download program

The program loops through the suspicious documents in a given directory and outputs a search interaction log. The log is valid with respect to the output format described below. You may use the source code for getting started with your own approach.

Output

For each suspicious document suspicious-documentXYZ.txt found in the evaluation corpora, your plagiarism detector shall output an interaction log suspicious-documentXYZ.log which logs meta information about your retrieval process:

Timestamp   [Query|Download_URL]
1358326592  barack obama family tree
1358326597  http://webis15.medien.uni-weimar.de/chatnoir/clueweb?id=110212744
1358326598  http://webis15.medien.uni-weimar.de/chatnoir/clueweb?id=10221241
1358326599  http://webis15.medien.uni-weimar.de/chatnoir/clueweb?id=100003305377
1358326605  barack obama genealogy
1358326610  http://webis15.medien.uni-weimar.de/chatnoir/clueweb?id=82208332
...

For example, the above file specifies that at 1358326592 (Unix timestamp) the query barack obama family tree was sent, and that afterwards three of the retrieved documents were selected for download before the next query was sent.
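A minimal sketch of writing such a log from Python is given below. The InteractionLog class is purely illustrative (it is not part of the provided baseline), and the whitespace separator between timestamp and query or URL follows the example above.

# Illustrative helper for writing a search interaction log; not part of the
# provided baseline. Each line holds a Unix timestamp followed by either a
# query string or a ClueWeb download URL, as in the example above.
import time

class InteractionLog:
    def __init__(self, path):
        self.file = open(path, "w")

    def log_query(self, query):
        self.file.write("%d %s\n" % (int(time.time()), query))

    def log_download(self, clueweb_url):
        self.file.write("%d %s\n" % (int(time.time()), clueweb_url))

    def close(self):
        self.file.close()

# Usage:
# log = InteractionLog("suspicious-documentXYZ.log")
# log.log_query("barack obama family tree")
# log.log_download("http://webis15.medien.uni-weimar.de/chatnoir/clueweb?id=110212744")
# log.close()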

Performance Measures

Performance will be measured based on the following five scores, each averaged over the suspicious documents:

  1. Number of queries submitted.
  2. Number of web pages downloaded.
  3. Precision and recall of the downloaded web pages with respect to the actual sources of a suspicious document.
  4. Number of queries until the first actual source is found.
  5. Number of downloads until the first actual source is downloaded.

Measures 1-3 capture the overall behavior of a system and measures 4-5 assess the time to first result. The quality of identifying reused passages between documents is not taken into account here, but note that retrieving duplicates of a source document is considered a true positive, whereas retrieving more than one duplicate of a source document does not improve performance.

Learn more »
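To make measure 3 and the duplicate handling described above concrete, here is a rough sketch of how download precision and recall could be computed for a single suspicious document. The duplicate_of mapping and the exact counting rules are assumptions for illustration; the official evaluation code remains authoritative.

def download_precision_recall(downloaded_ids, true_source_ids, duplicate_of=None):
    """Rough, illustrative sketch of measure 3 for one suspicious document.

    downloaded_ids:  ClueWeb ids the system chose to download
    true_source_ids: ids of the actual sources (ground truth)
    duplicate_of:    assumed mapping from a duplicate id to its canonical source id;
                     a downloaded duplicate counts as a true positive, but several
                     duplicates of the same source improve recall only once.
    """
    duplicate_of = duplicate_of or {}
    true_sources = set(true_source_ids)

    def canonical(doc_id):
        return duplicate_of.get(doc_id, doc_id)

    hits = [d for d in downloaded_ids if canonical(d) in true_sources]
    retrieved_sources = {canonical(d) for d in hits}
    precision = len(hits) / float(len(downloaded_ids)) if downloaded_ids else 0.0
    recall = len(retrieved_sources) / float(len(true_sources)) if true_sources else 0.0
    return precision, recall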

Test Corpus

Once you have finished tuning your approach to achieve satisfactory performance on the training corpus, you should run your software on the test corpus.

During the competition, the test corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the test corpus becomes available, including ground truth data. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Download corpus 1 Download corpus 2

Submission

We ask you to prepare your software so that it can be executed via a command line call. You can choose freely among the available programming languages and among the operating systems Microsoft Windows 7 and Ubuntu 12.04. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via SSH and via remote desktop. Please test your software using one of the unit test scripts below. Download the script, fill in the required fields, and start it using the sh command. If the script runs without errors and produces the correct output, you can submit your software by sending your unit test script via e-mail to pan@webis.de. For more information, see the user guide below.

PAN User Guide » Unit-Test Windows » Unit-Test Ubuntu »

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Results

The following table lists the performances achieved by the participating teams:

Source Retrieval Performance
Queries and Downloads give the workload until the first actual source is detected; Precision and Recall refer to the downloaded sources.

Queries   Downloads   Precision   Recall   Team
 16.85       15.28       0.12      0.44    V. Elizalde (Private, Argentina)
 18.80       21.70       0.02      0.10    L. Gillam (University of Surrey, UK)
  8.92        1.47       0.63      0.38    O. Haggag and S. El-Beltagy (Nile University, Egypt)
  2.45      285.66       0.01      0.65    L. Kong°*, H. Qi°, C. Du*, M. Wang*, Z. Han° (°Heilongjiang Institute of Technology and *Harbin Engineering University, China)
  7.74        1.72       0.50      0.33    T. Lee°, J. Chae°, K. Park*, and S. Jung° (°Korea University and *Soonchunhyang University, Republic of Korea)
  2.16        5.61       0.15      0.10    A. Nourian (Iran University of Science and Technology, Iran)
  2.44       74.79       0.04      0.23    Š. Suchomel, J. Kasprzak, and M. Brandejs (Masaryk University, Czech Republic)
184.00        5.07       0.11      0.35    O. Veselý, T. Foltýnek, and J. Rybička (Mendel University in Brno, Czech Republic)
 17.59        2.45       0.55      0.50    K. Williams, H. Chen, S.R. Choudhury, and C.L. Giles (Pennsylvania State University, USA)

A more detailed analysis of the retrieval performances can be found in the overview paper accompanying this task.

Learn more »

Related Work

This is the second time source retrieval has been run as part of the plagiarism detection task. An overview of the results of PAN'12 can be found in its overview paper, as well as in its proceedings.

Text Alignment

Task
Given a pair of documents, your task is to identify all contiguous maximal-length passages of reused text between them.
Training Corpus

To develop your software, we provide you with a training corpus that consists of pairs of documents, one of which may contain passages of text reused from the other. The reused text is subject to various kinds of (automatic) obfuscation to hide the fact that it has been reused.

Learn more » Download corpus

Baseline

For your convenience, we provide a baseline program written in Python.

Download program

The program loops through the document pairs of a corpus and records the detection results in XML files. The XML files are valid with respect to the output format described below. You may use the source code for getting started with your own approach.
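Beyond the provided baseline, a deliberately simple way to find aligned passages is to seed matches with exact character n-grams shared by both documents and to merge nearby seeds. The sketch below illustrates this idea; the parameters n, step, and max_gap are arbitrary choices for illustration and are not taken from the baseline.

def detect_passages(susp_text, src_text, n=30, step=10, max_gap=200):
    """Illustrative seed-and-merge alignment (not the provided baseline)."""
    # Index character n-grams of the source document by their first occurrence.
    src_index = {}
    for j in range(0, max(len(src_text) - n, 0) + 1, step):
        src_index.setdefault(src_text[j:j + n], j)
    # Collect seed matches as (offset in suspicious document, offset in source).
    seeds = []
    for i in range(0, max(len(susp_text) - n, 0) + 1, step):
        j = src_index.get(susp_text[i:i + n])
        if j is not None:
            seeds.append((i, j))
    # Merge seeds that are close on the suspicious side into passages; each
    # passage is (this_offset, this_length, source_offset, source_length).
    passages = []
    for i, j in seeds:
        if passages and i - (passages[-1][0] + passages[-1][1]) <= max_gap:
            this_off, this_len, src_off, src_len = passages[-1]
            new_src_off = min(src_off, j)
            new_src_end = max(src_off + src_len, j + n)
            passages[-1] = (this_off, i + n - this_off, new_src_off, new_src_end - new_src_off)
        else:
            passages.append((i, n, j, n))
    return passages

The resulting tuples can be written out in the XML format described in the Output section below.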

Output

The evaluation corpora contain a file named pairs, which lists all pairs of suspicious documents and source documents to be compared. For each pair suspicious-documentXYZ.txt and source-documentABC.txt, your plagiarism detector shall output an XML file suspicious-documentXYZ-source-documentABC.xml which contains meta information about the plagiarism cases detected within:

<document reference="suspicious-documentXYZ.txt">
<feature
  name="detected-plagiarism"
  this_offset="5"
  this_length="1000"
  source_reference="source-documentABC.txt"
  source_offset="100"
  source_length="1000"
/>
<feature ... />
...
</document>

For example, the above file would specify an aligned passage of text between suspicious-documentXYZ.txt and source-documentABC.txt, and that it is of length 1000 characters, starting at character offset 5 in the suspicious document and at character offset 100 in the source document.
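A minimal sketch of producing such a file with Python's standard library is given below; the write_detections helper and its argument names are illustrative, not part of the provided baseline.

# Minimal sketch: serialize detections into the XML format shown above,
# using only the Python standard library.
import xml.etree.ElementTree as ET

def write_detections(susp_name, src_name, detections, out_path):
    """detections: iterable of (this_offset, this_length, source_offset, source_length)."""
    root = ET.Element("document", reference=susp_name)
    for this_off, this_len, src_off, src_len in detections:
        ET.SubElement(root, "feature",
                      name="detected-plagiarism",
                      this_offset=str(this_off),
                      this_length=str(this_len),
                      source_reference=src_name,
                      source_offset=str(src_off),
                      source_length=str(src_len))
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

# Usage:
# write_detections("suspicious-documentXYZ.txt", "source-documentABC.txt",
#                  [(5, 1000, 100, 1000)],
#                  "suspicious-documentXYZ-source-documentABC.xml")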

Performance Measures

Performance will be measured using macro-averaged precision and recall, granularity, and the plagdet score, which is a combination of the first three measures. For your convenience, we provide a reference implementation of the measures written in Python.

Learn more » Download measures
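For reference, plagdet combines the three measures by dividing the harmonic mean (F1) of macro-averaged precision and recall by the logarithm of the granularity. The sketch below assumes the standard PAN definition; the provided reference implementation remains authoritative.

import math

def plagdet(precision, recall, granularity):
    """Combine precision, recall, and granularity into the plagdet score,
    assuming the standard PAN definition F1 / log2(1 + granularity)."""
    if precision + recall == 0:
        return 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return f1 / math.log(1 + granularity, 2)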

If you apply the performance measures program to the results produced by the baseline program on the corpus pan13-text-alignment-training-corpus-2013-01-21, you should get the following scores:

  • Plagdet Score 0.431573152404
  • Recall 0.35448277069
  • Precision 0.923521328132
  • Granularity 1.27693761815

In addition, the runtime of each submitted software is measured, and we will also introduce precision and recall computed at the level of plagiarism cases instead of the character level.

Test Corpus

Once you have finished tuning your approach to achieve satisfactory performance on the training corpus, you should run your software on the test corpus.

During the competition, the test corpus will not be released publicly. Instead, we ask you to submit your software for evaluation at our site as described below.

After the competition, the test corpus becomes available, including ground truth data. This way, you have everything you need to evaluate your approach on your own while remaining comparable to those who took part in the competition.

Download corpus 1 Download corpus 2

Submission

We ask you to prepare your software so that it can be executed via a command line call. You can choose freely among the available programming languages and among the operating systems Microsoft Windows 7 and Ubuntu 12.04. We will ask you to deploy your software onto a virtual machine that will be made accessible to you after registration. You will be able to reach the virtual machine via SSH and via remote desktop. Please test your software using one of the unit test scripts below. Download the script, fill in the required fields, and start it using the sh command. If the script runs without errors and produces the correct output, you can submit your software by sending your unit test script via e-mail to pan@webis.de. For more information, see the user guide below.

PAN User Guide » Unit-Test Windows » Unit-Test Ubuntu »

Note: By submitting your software you retain full copyrights. You agree to grant us usage rights only for the purpose of the PAN competition. We agree not to share your software with a third party or use it for other purposes than the PAN competition.

Results

The following table lists the performances achieved by the participating teams:

Text Alignment Performance
Plagdet   Team
0.82220   D.A. Rodríguez Torrejón and J.M. Martín Ramos (Universidad de Huelva, Spain)
0.81896   L. Kong°*, H. Qi°, C. Du*, M. Wang*, Z. Han° (°Heilongjiang Institute of Technology and *Harbin Engineering University, China)
0.74482   Š. Suchomel, J. Kasprzak, and M. Brandejs (Masaryk University, Czech Republic)
0.69913   M. Saremi (Semnan University, Iran)
0.69551   P. Shrestha and T. Solorio (University of Alabama at Birmingham, USA)
0.61523   Y. Palkovskii and A. Belov (Zhytomyr State University, Ukraine)
0.57716   A. Nourian (Iran University of Science and Technology, Iran)
0.42191   Baseline
0.40059   L. Gillam (University of Surrey, UK)
0.27081   A. Jayapal and B. Goswamir (Nuance Communications, India)

A more detailed analysis of the detection performances can be found in the overview paper accompanying this task.

Learn more »

Related Work

This task has been run since PAN'09; see the proceedings and overview papers of the previous editions.

Task Chair

Martin Potthast

Bauhaus-Universität Weimar

Task Committee

Tim Gollub

Bauhaus-Universität Weimar

Matthias Hagen

Bauhaus-Universität Weimar

Benno Stein

Bauhaus-Universität Weimar

Paolo Rosso

Universitat Politècnica de València

Parth Gupta

Universitat Politècnica de València

Efstathios Stamatatos

University of the Aegean