askIT Dataset

A collection of dialogues extracted with RDET (Reddit Dataset Extraction Tool) from subreddits related to Information Technology (IT). It comprises 61,842,638 tokens across 179,358 dialogues.

Resource Type: Corpus
Media Type: Text
Language: English
TakeLab Vectors

This resource includes the distributional semantic vectors used for the replication of the TakeLab system (https://github.com/nlx-group/arct-rep-rev). The TakeLab system is an automatic classifier for the Argument Reasoning Comprehension Task (https://www.aclweb.org/anthology/S18-1121/). The ...
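A minimal sketch of loading such vectors in Python, assuming they are distributed in the plain-text word2vec format; the filename is hypothetical and the actual format should be verified against the repository linked above.

    from gensim.models import KeyedVectors

    # Hypothetical filename; the real file and its format come from the
    # repository linked above.
    vectors = KeyedVectors.load_word2vec_format("takelab_vectors.txt", binary=False)

    # Nearest neighbours in the vector space for an example word.
    print(vectors.most_similar("argument", topn=5))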

Resource Type: Lexical / Conceptual
Media Type: Text
Language: English
A Repository of State of the Art and Competitive Baseline Summaries for DUC 2004

In the period since 2004, many novel, sophisticated approaches to generic multi-document summarization have been developed. Intuitive, simple approaches have also been shown to perform unexpectedly well on the task. Yet it is practically impossible to compare the existing approaches directly, bec...

Resource Type: Corpus
Media Type: Text
Language: English
Dundee GCG-Bank

Dundee GCG-Bank contains hand-corrected deep syntactic annotations for the Dundee eye-tracking corpus (Kennedy et al., 2003). The annotations are designed to support psycholinguistic investigation into the structural determinants of sentence processing effort. Dundee GCG-Bank is distributed as a ...

Resource Type: Corpus
Media Type: Text
Language: English
Datasets for classification experiments IS-pros

Datasets in ARFF format (for the Weka machine learning software) are made available to reproduce the validation experiments presented in the paper.
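A minimal sketch of reading one of the ARFF files outside Weka, e.g. in Python via scipy; the filename below is hypothetical.

    from scipy.io import arff
    import pandas as pd

    # Hypothetical filename; substitute one of the distributed datasets.
    data, meta = arff.loadarff("is_pros_validation.arff")
    df = pd.DataFrame(data)

    # scipy represents nominal ARFF attributes as byte strings; decode
    # them to regular strings for convenience.
    for col in df.select_dtypes([object]).columns:
        df[col] = df[col].str.decode("utf-8")

    print(meta)       # attribute names and types from the ARFF header
    print(df.head())  # first few instances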

Resource Type: Corpus
Media Type: Text
Language: English
GREC

GREC is a semantically annotated corpus of 240 MEDLINE abstracts (167 on the subject of the E. coli species and 73 on the subject of the human species), intended for training IE systems and/or resources used to extract events from biomedical literature.

Resource Type: Corpus
Media Type: Text
Language: English
A Tweet Dataset Annotated in Four Emotion Dimensions

A corpus of 2,019 tweets annotated along each of four emotion dimensions: Valence, Dominance, Arousal and Surprise. Two annotation schemes are used: a 5-point ordinal scale (using SAM manikins for Valence, Arousal and Dominance) and pair-wise comparisons with an "about the same" option (here 2,01...

Resource Type: Corpus
Media Type: Text
Language: English
HIMERA Corpus

The HIMERA annotated corpus contains a set of published historical medical documents that have been manually annotated with semantic information that is relevant to the study of medical history and public health. Specifically, annotations correspond to seven different entity types and two differe...

Resource Type: Corpus
Media Type: Text
Language: English
A Terminological Inventory for Biodiversity

In order to construct the inventory, we first compiled a species name dictionary by combining all of the names available in the Catalogue of Life (CoL), the Encyclopedia of Life (EoL) and the Global Biodiversity Information Facility (GBIF). The terms contained in this dictionary were then located within ...
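A minimal sketch of the kind of dictionary-based term location the description refers to, using exact longest-match lookup; the species names and the matching strategy are illustrative assumptions, not the actual pipeline.

    import re

    # Toy species name dictionary; the real one merges CoL, EoL and GBIF.
    species = {"Homo sapiens", "Escherichia coli", "Quercus robur"}

    # Alternation ordered longest-first so longer names win overlapping matches.
    pattern = re.compile(
        "|".join(re.escape(name) for name in sorted(species, key=len, reverse=True))
    )

    text = "Samples of Escherichia coli were compared with Homo sapiens tissue."
    for match in pattern.finditer(text):
        print(match.start(), match.end(), match.group())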

Resource Type: Lexical / Conceptual
Media Type: Text
Language: English
Model weights for a study of commonsense reasoning

This resource contains model weights for five Transformer-based models: RoBERTa, GPT-2, T5, BART and COMET(BART). These models were implemented using the HuggingFace Transformers library and fine-tuned on the following four commonsense reasoning tasks: Argument Reasoning Comprehension Task (ARCT), AI2 Reasoning Challen...
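A minimal sketch of loading one of the released checkpoints with the HuggingFace Transformers library; the local path and the sequence-classification head are assumptions, not details confirmed by the resource description.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Hypothetical local directory holding one set of the released weights.
    path = "./roberta-arct"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForSequenceClassification.from_pretrained(path)
    model.eval()

    # Score a single example; the input text is illustrative only.
    inputs = tokenizer("We should ban plastic bags because they pollute.",
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.argmax(dim=-1))  # predicted label index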

Resource Type: Language Description
Media Type: Text
Language: English