U-Compare Type system

The resource consists of a hierarchically structured system of data types, intended to be suitable for describing the input and output annotation types of a wide range of natural language processing applications that operate within the UIMA Framework. It is being developed in conjunc...

Resource Type: Language Description
Media Type: Text
Language: English
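
To make the idea concrete, here is a minimal Python sketch of a hierarchical annotation type system in the spirit described above; the type names and features are illustrative inventions, not the actual U-Compare types.

    from dataclasses import dataclass

    @dataclass
    class Annotation:
        """Illustrative base type: a span over the document text."""
        begin: int
        end: int

    @dataclass
    class Token(Annotation):
        """A syntactic annotation subtype; inherits its span from Annotation."""
        pos: str = ""

    @dataclass
    class NamedEntity(Annotation):
        """A semantic annotation subtype, a sibling of Token in the hierarchy."""
        category: str = ""

    # Components that agree on the shared hierarchy can be chained freely:
    # a consumer declared over Annotation accepts any subtype.
    annotations: list[Annotation] = [
        Token(0, 4, pos="NNP"),
        NamedEntity(0, 4, category="ORG"),
    ]
    for a in annotations:
        print(type(a).__name__, a.begin, a.end)
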
PsychAnaphora - Types of anaphora produced in a sentence completion task

This set of materials pertains to a study on the production of explicit pronouns, null pronouns, and repeated-NP anaphors in European Portuguese. A spreadsheet containing data from 73 participants (young adults), namely, count data for instances of the different types of anaphor that occurred in...

Resource Type: Language Description
Media Type: Text
Language: Portuguese
PsychAnaphora - Reading times in a self-paced reading task

This set of materials resulted from a study on the processing of explicit pronouns in European Portuguese. A spreadsheet containing data from 75 participants (young adults), namely, per-word reading times and accuracy data on comprehension questions, is provided. Complementary materials (Read Fir...

Resource Type: Language Description
Media Type: Text
Language: Portuguese
PsychAnaphora - Event-related brain potentials from young and older adults

This set of materials pertains to a study on the processing of explicit pronouns in European Portuguese. Forty spreadsheets containing event-related potentials, encoded as voltage variations across 64 electrodes over 1.5 s in two-millisecond steps, are provided, 20 of which pertain to younger ...

Resource Type: Language Description
Media Type: Text
Language: Portuguese
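
For orientation when loading these files: 1.5 s sampled in two-millisecond steps yields 750 time points per electrode, so each spreadsheet corresponds to a 750 x 64 matrix. A hypothetical NumPy sketch follows; the file name and row/column layout are assumptions, not taken from the resource.

    import numpy as np

    STEP_S = 0.002        # two-millisecond sampling step
    EPOCH_S = 1.5         # epoch length from the description
    N_ELECTRODES = 64

    n_samples = round(EPOCH_S / STEP_S)
    print(n_samples)      # 750 time points per electrode

    # Hypothetical CSV layout: one row per time point, one column per electrode.
    erp = np.loadtxt("participant_01.csv", delimiter=",")
    assert erp.shape == (n_samples, N_ELECTRODES)
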
Portuguese RoBERTa language model

A HuggingFace (PyTorch) pre-trained RoBERTa model for Portuguese, with 6 layers and 12 attention heads, totaling 68M parameters. Pre-training was done on 10 million Portuguese sentences and 10 million English sentences from the OSCAR corpus. Please cite: Santos, Rodrigo, João Rodrigues, Antóni...

Resource Type: Language Description
Media Type: Text
Languages: English, Portuguese
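
A minimal sketch of loading a model like this with the HuggingFace transformers library; the model identifier below is a placeholder, not the resource's actual repository id.

    from transformers import AutoTokenizer, AutoModelForMaskedLM

    MODEL_ID = "path/to/portuguese-roberta"   # placeholder; substitute the real artifact

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)

    # RoBERTa-style masked-language-model query.
    inputs = tokenizer("A capital de Portugal é <mask>.", return_tensors="pt")
    logits = model(**inputs).logits           # vocabulary scores per input position
    print(logits.shape)
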
Model weights for a study of commonsense reasoning

This resource contains model weights for five Transformer-based models: RoBERTa, GPT-2, T5, BART, and COMET (BART). These models were implemented using HuggingFace and fine-tuned on the following four commonsense reasoning tasks: Argument Reasoning Comprehension Task (ARCT), AI2 Reasoning Challen...

Resource Type: Language Description
Media Type: Text
Language: English
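
Fine-tuned HuggingFace checkpoints of this kind are normally restored with from_pretrained; in the sketch below the local path, task head, and example inputs are illustrative assumptions, not taken from the resource.

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    CHECKPOINT = "./roberta-arct"             # illustrative path to one checkpoint

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)

    # ARCT-style scoring: a claim paired with a candidate warrant (invented text).
    inputs = tokenizer("We should ban X.", "Because X causes harm.", return_tensors="pt")
    logits = model(**inputs).logits           # one score per answer option
    print(logits.argmax(dim=-1))
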
Estonian resource grammar for Grammatical Framework

A GF resource grammar for Estonian, implementing the language-neutral API of the GF Resource Grammar Library as well as a morphological synthesizer.

Resource Type: Language Description
Media Type: Text
Language: Estonian
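
Compiled GF grammars ship as .pgf files and can be used from Python through the pgf runtime bindings; in this sketch the file name and concrete-syntax name are assumptions.

    import pgf

    grammar = pgf.readPGF("Estonian.pgf")     # assumed name of the compiled grammar
    est = grammar.languages["GrammarEst"]     # concrete-syntax name assumed

    # Parse an Estonian sentence into an abstract tree, then linearize it back.
    prob, tree = next(est.parse("see on lause"))
    print(tree)
    print(est.linearize(tree))
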
BERTimbau - Portuguese BERT-Large language model

This resource contains a pre-trained BERT language model for Portuguese. A BERT-Large cased variant was trained on BrWaC (Brazilian Web as Corpus), a large Portuguese corpus, for 1,000,000 steps, using whole-word masking. The model is available as artifacts for TensorFlow an...

Resource Type: Language Description
Media Type: Text
Language: Portuguese
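
A sketch of querying a BERT model like this through the transformers fill-mask pipeline; the Hub id shown is the one commonly published for BERTimbau Large, so verify it against the resource's actual artifacts before relying on it.

    from transformers import pipeline

    # Commonly published Hub id for BERTimbau Large; verify before relying on it.
    unmasker = pipeline("fill-mask", model="neuralmind/bert-large-portuguese-cased")

    for pred in unmasker("Lisboa é a [MASK] de Portugal."):
        print(pred["token_str"], round(pred["score"], 3))
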