turku_paraphrase_corpus
This is a Finnish paraphrase corpus which consists of pairs of text passages, where a typical passage is about a sentence long. It can be used to either identify or generate paraphrases.
You can load the dataset via:
import datasets
data = datasets.load_dataset('GEM/turku_paraphrase_corpus')
The data loader can be found here.
website
paper
authors
Jenna Kanerva, Filip Ginter, Li-Hsin Chang, Iiro Rastas, Valtteri Skantsi, Jemina Kilpeläinen, Hanna-Mari Kupari, Aurora Piirto, Jenna Saarni, Maija Sevón, Otto Tarkka (TurkuNLP / University of Turku)
Quick-Use
Contact Name
If known, provide the name of at least one person the reader can contact for questions about the dataset.
Jenna Kanerva, Filip Ginter
Multilingual?
Is the dataset multilingual?
no
Covered Languages
What languages/dialects are covered in the dataset?
Finnish
License
What is the license of the dataset?
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
Communicative Goal
Provide a short description of the communicative goal of a model trained for this task on this dataset.
The corpus provides naturally occurring Finnish paraphrases striving for low lexical overlap, thus supporting many different downstream applications requiring language understanding.
Additional Annotations?
Does the dataset have additional annotations for each instance?
expert created
Contains PII?
Does the source language data likely contain Personal Identifying Information about the data creators or subjects?
likely
Dataset Overview
- Where to find the Data and its Documentation
- Languages and Intended Use
- Credit
- Dataset Structure
Where to find the Data and its Documentation
Webpage
What is the webpage for the dataset (if it exists)?
Download
What is the link to where the original dataset is hosted?
Paper
What is the link to the paper describing the dataset (open access preferred)?
BibTex
Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of Google Scholar-created BibTex.
@inproceedings{kanerva-etal-2021-finnish,
title = {Finnish Paraphrase Corpus},
author = {Kanerva, Jenna and Ginter, Filip and Chang, Li-Hsin and Rastas, Iiro and Skantsi, Valtteri and Kilpel{\"a}inen, Jemina and Kupari, Hanna-Mari and Saarni, Jenna and Sev{\'o}n, Maija and Tarkka, Otto},
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa'21)},
year = {2021},
publisher = {Link{\"o}ping University Electronic Press, Sweden},
url = {https://aclanthology.org/2021.nodalida-main.29},
pages = {288--298}
}
Contact Name
If known, provide the name of at least one person the reader can contact for questions about the dataset.
Jenna Kanerva, Filip Ginter
Contact Email
If known, provide the email of at least one person the reader can contact for questions about the dataset.
Has a Leaderboard?
Does the dataset have an active leaderboard?
no
Languages and Intended Use
Multilingual?
Is the dataset multilingual?
no
Covered Dialects
What dialects are covered? Are there multiple dialects per language?
written standard language, spoken language
Covered Languages
What languages/dialects are covered in the dataset?
Finnish
License
What is the license of the dataset?
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
Intended Use
What is the intended use of the dataset?
Paraphrase classification, paraphrase generation
Primary Task
What primary task does the dataset support?
Paraphrasing
Communicative Goal
Provide a short description of the communicative goal of a model trained for this task on this dataset.
The corpus provides naturally occurring Finnish paraphrases striving for low lexical overlap, thus supporting many different downstream applications requiring language understanding.
Credit
Curation Organization Type(s)
In what kind of organization did the dataset curation happen?
academic
Curation Organization(s)
Name the organization(s).
University of Turku
Dataset Creators
Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s).
Jenna Kanerva, Filip Ginter, Li-Hsin Chang, Iiro Rastas, Valtteri Skantsi, Jemina Kilpeläinen, Hanna-Mari Kupari, Aurora Piirto, Jenna Saarni, Maija Sevón, Otto Tarkka (TurkuNLP / University of Turku)
Funding
Who funded the data creation?
The Turku paraphrase corpus project was funded by the Academy of Finland, as well as the European Language Grid project through its open call for pilot projects. The European Language Grid project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 825627 (ELG).
Who added the Dataset to GEM?
Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM.
Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)
Dataset Structure
Data Fields
List and describe the fields present in the dataset.
The dataset consists of pairs of text passages, where a typical passage is about a sentence long; however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label indicating the paraphrase type (string), and additional metadata.
The dataset includes three different modes: plain, classification, and generation. The plain mode loads the original data without any additional preprocessing or transformations, while the classification mode directly builds the data into a form suitable for training a paraphrase classifier, where each example is doubled with both directions (text1, text2, label) --> (text2, text1, label), taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the generation mode, the examples are preprocessed to be directly suitable for the paraphrase generation task. Here, paraphrases not suitable for generation are discarded (negative and highly context-dependent paraphrases), and directional paraphrases are provided only so that generation goes from the more detailed passage to the more general one in order to prevent model hallucination (i.e. the model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).
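As an illustration of the doubling described above for the classification mode, the following sketch (not the loader's actual code; the helper name is illustrative) emits a single pair in both directions and flips any directionality flag in its label:
def double_example(example):
    # Emit the pair in both directions, flipping '<'/'>' flags in the label.
    flip = {'<': '>', '>': '<'}
    flipped_label = ''.join(flip.get(ch, ch) for ch in example['label'])
    yield {'text1': example['text1'], 'text2': example['text2'], 'label': example['label']}
    yield {'text1': example['text2'], 'text2': example['text1'], 'label': flipped_label}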
Each pair in the plain and classification modes includes the following fields:
- gem_id: Identifier of the paraphrase pair (string)
- goeswith: Identifier of the document from which the paraphrase was extracted; can be not available in case the source of the paraphrase is not document-structured data (string)
- fold: 0-99, the data split into 100 parts respecting document boundaries; you can use this e.g. to implement cross-validation safely, as all paraphrases from one document are in one fold (int; see the usage sketch after these field lists)
- text1: First paraphrase passage (string)
- text2: Second paraphrase passage (string)
- label: Manually annotated label (string)
- binary_label: Label turned into binary with the values positive (paraphrase) and negative (not-paraphrase) (string)
- is_rewrite: Indicator of whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)
Each pair in the generation mode includes the same fields, except that text1 and text2 are renamed to input and output in order to indicate the generation direction. Thus the fields are:
- gem_id: Identifier of the paraphrase pair (string)
- goeswith: Identifier of the document from which the paraphrase was extracted; can be not available in case the source of the paraphrase is not document-structured data (string)
- fold: 0-99, the data split into 100 parts respecting document boundaries; you can use this e.g. to implement cross-validation safely, as all paraphrases from one document are in one fold (int)
- input: The input paraphrase passage for generation (string)
- output: The output paraphrase passage for generation (string)
- label: Manually annotated label (string)
- binary_label: Label turned into binary with the values positive (paraphrase) and negative (not-paraphrase) (string)
- is_rewrite: Indicator of whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)
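The snippet below sketches how one might load a specific mode and use the fold field for a document-safe held-out split. It assumes the mode names above are exposed as configuration names of the GEM loader; treat the configuration string as an assumption rather than a guarantee.
import datasets

# Load one of the modes; 'plain' is assumed here to be a valid configuration name.
data = datasets.load_dataset('GEM/turku_paraphrase_corpus', 'plain')

# The fold field (0-99) respects document boundaries, so filtering on it keeps
# all paraphrases from one document on the same side of the split.
heldout = data['train'].filter(lambda ex: ex['fold'] < 10)
train = data['train'].filter(lambda ex: ex['fold'] >= 10)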
Example Instance
Provide a JSON formatted example of a typical instance in the dataset.
{
  "gem_id": "gem-turku_paraphrase_corpus-train-15",
  "goeswith": "episode-02243",
  "fold": 0,
  "text1": "Mitä merkitystä sillä on?",
  "text2": "Mitä väliä sillä edes on?",
  "label": "4",
  "binary_label": "positive",
  "is_rewrite": false
}
Data Splits
Describe and name the splits in the dataset if there are more than one.
The corpus includes 3 splits: train, validation, and test.
Splitting Criteria
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
The data is split randomly into the three sections with the restriction that all paraphrases from the same document (movie, TV episode, news article, student translation, or exam question) are in the same section. All splits are manually annotated.
Dataset in GEM
- Rationale for Inclusion in GEM
- GEM-Specific Curation
- Getting Started with the Task
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
What does this dataset contribute toward better generation evaluation and why is it part of GEM?
This dataset provides a large amount of high quality (manually collected and verified) paraphrases for Finnish.
Similar Datasets
Do other datasets for the high level task exist?
yes
Unique Language Coverage
Does this dataset cover other languages than other datasets for the same task?
no
Ability that the Dataset measures
What aspect of model ability can be measured with this dataset?
natural language understanding, language variation
GEM-Specific Curation
Modified for GEM?
Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data?
yes
GEM Modifications
What changes have been made to the original dataset?
data points modified
Modification Details
For each of these changes, describe them in more detail and provide the intended purpose of the modification.
The data structure is slightly simplified, and the release provides ready-made transformations into two tasks (paraphrase classification and generation), where some data instances are doubled with the direction flipped, and some are discarded as not being suitable for generation (e.g. negatives).
Additional Splits?
Does GEM provide additional splits to the dataset?
no
Getting Started with the Task
Previous Results
Measured Model Abilities
What aspect of model ability can be measured with this dataset?
natural language understanding, language variation
Previous results available?
Are previous results available?
yes
Other Evaluation Approaches
What evaluation approaches have others used?
F-score in paraphrase classification
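As a minimal sketch of that evaluation approach, assuming a classifier that predicts the binary_label values (the label lists below are purely illustrative):
from sklearn.metrics import f1_score

# Illustrative gold and predicted binary labels for a handful of pairs.
gold = ['positive', 'negative', 'positive', 'positive']
predicted = ['positive', 'negative', 'negative', 'positive']
print(f1_score(gold, predicted, pos_label='positive'))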
Dataset Curation
- Original Curation
- Language Data
- Structured Annotations
- Consent
- Private Identifying Information (PII)
- Maintenance
Original Curation
Original Curation Rationale
Original curation rationale
The dataset is fully manually annotated. The dataset strives for interesting paraphrases with low lexical overlap, so the annotation is twofold. First, the paraphrases are manually extracted from two related documents, where the annotators are instructed to extract only interesting paraphrases. In the second phase, all extracted paraphrases are manually labeled according to the annotation scheme.
The annotation scheme is:
- 4: paraphrase in all reasonably possible contexts
- 3: paraphrase in the given document contexts, but not in general
- 2: related but not a paraphrase
During annotation, labels 1 (unrelated) and x (skip, e.g. wrong language) were also used; however, the insignificant number of examples annotated with these labels was discarded from the released corpus.
The following flags are annotated for label 4 paraphrases:
- < : txt1 is more general than txt2; txt2 is more specific than txt1 (directional paraphrase where txt2 can be replaced with txt1 in all contexts, but not in the other direction)
- > : txt2 is more general than txt1; txt1 is more specific than txt2 (directional paraphrase where txt1 can be replaced with txt2 in all contexts, but not in the other direction)
- i : minor traceable difference (differing in terms of grammatical number or case, 'this' vs. 'that', etc.)
- s : style or strength difference (e.g. equivalent meaning, but one of the statements substantially more colloquial than the other)
For paraphrases where the annotated label was something other than label 4 without any flags, the annotators had the option to rewrite the text passages so that the rewritten pair formed a label 4 (universal) paraphrase. This was used for cases where a simple edit would turn e.g. a contextual or directional paraphrase into a universal one. For the rewritten examples, both the original and the rewritten pairs are available with the corresponding labels annotated.
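To make the relation between the annotated label and the released binary_label concrete, here is a small sketch under the assumption that base labels 3 and 4 (with any flags appended after them) count as paraphrases and label 2 as not a paraphrase; this mirrors the scheme described above but is not the official conversion code.
def to_binary(label):
    # Strip flags such as '<', '>', 'i', 's', assumed to be appended after the base label.
    base = label[0]
    return 'positive' if base in ('3', '4') else 'negative'

print(to_binary('4>'))  # positive
print(to_binary('2'))   # negative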
Communicative Goal
What was the communicative goal?
Representing text passages with identical meaning but different surface realization.
Sourced from Different Sources
Is the dataset aggregated from different data sources?
yes
Source Details
List the sources (one per line)
- movie and TV series subtitles (82%)
- news articles (9%)
- discussion forum messages (8%)
- university translation exercises (1%)
- university course essays and exams (<1%)
Language Data
How was Language Data Obtained?
How was the language data obtained?
Found, Other
Where was it found?
If found, where from?
Multiple websites, Offline media collection, Other
Language Producers
What further information do we have on the language producers?
The movie and TV series subtitles are extracted from the OPUS OpenSubtitles2018 collection, which is based on data from OpenSubtitles. The news articles are collected from two Finnish news sites, YLE and HS, during the years 2017-2020. Discussion forum messages are obtained from the Finnish Suomi24 discussion forum released for academic use (http://urn.fi/urn:nbn:fi:lb-2020021801). University translation exercises, essays, and exams were collected during the project.
Data Validation
Was the text validated by a different worker or a data curator?
validated by data curator
Was Data Filtered?
Were text instances selected or filtered?
not filtered
Structured Annotations
Additional Annotations?
Does the dataset have additional annotations for each instance?
expert created
Number of Raters
What is the number of raters?
2<n<10
Rater Qualifications
Describe the qualifications required of an annotator.
Members of the TurkuNLP research group, all native speakers of Finnish; each annotator has a strong background in language studies, with an academic degree or ongoing studies in a field related to languages or linguistics.
Raters per Training Example
How many annotators saw each training example?
1
Raters per Test Example
How many annotators saw each test example?
1
Annotation Service?
Was an annotation service used?
no
Annotation Values
Purpose and values for each annotation
- Manual extraction of interesting paraphrases from two related documents.
- Manual labeling of each extracted paraphrase based on the given annotation scheme, e.g. distinguishing contextual and universal paraphrases, marking style or strength differences, etc.
Any Quality Control?
Quality control measures?
validated by another rater
Quality Control Details
Describe the quality control measures that were taken.
Partial double annotation: double annotation batches are assigned regularly in order to monitor annotation consistency. In double annotation, one annotator first extracts the candidate paraphrases, and these candidates are assigned to two different annotators, who do the label annotation independently of each other. Afterwards, the label annotations are merged, and conflicting labels are resolved together with the whole annotation team.
Consent
Any Consent Policy?
Was there a consent policy involved when gathering the data?
yes
Consent Policy Details
What was the consent policy?
The corpus is mostly based on public/open data. For other data sources (student material), the licensing was agreed with the data providers during the collection.
Private Identifying Information (PII)
Contains PII?
Does the source language data likely contain Personal Identifying Information about the data creators or subjects?
likely
Categories of PII
What categories of PII are present or suspected in the data?
generic PII
Any PII Identification?
Did the curators use any automatic/manual method to identify PII in the dataset?
no identification
Maintenance
Any Maintenance Plan?
Does the original dataset have a maintenance plan?
no
Broader Social Context
- Previous Work on the Social Impact of the Dataset
- Impact on Under-Served Communities
- Discussion of Biases
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems?
no
Impact on Under-Served Communities
Addresses needs of underserved Communities?
Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved, for example, because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models).
no
Discussion of Biases
Any Documented Social Biases?
Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group.
no
Considerations for Using the Data
- PII Risks and Liability
- Licenses
- Known Technical Limitations
PII Risks and Liability
Potential PII Risk
Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset.
None
Licenses
Copyright Restrictions on the Dataset
Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset?
open license - commercial use allowed
Copyright Restrictions on the Language Data
Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data?
open license - commercial use allowed