Web service created by exporting a UIMA-based workflow from the U-Compare text mining system. Functionality: Performs discourse parsing on plain text. Also identifies sentences, tokens, parts of speech, lemmas, clauses, and coreference chains. Tools in workflow: UAIC-POSTagger, UAIC-NPChunker, UAI...
Web service created by exporting a UIMA-based workflow from the U-Compare text mining system. Functionality: Identifies and categorises syntactic chunks in plain text. Tools in workflow: Freeling shallow parser web service (service provided by the PANACEA project). NOTE: The licence provided cove...
This dataset has been created within the framework of the European Language Resource Coordination (ELRC) Connecting Europe Facility - Automated Translation (CEF.AT) action. For further information on the project: http://lr-coordination.eu. Polish-English parallel corpus from the website of the C...
Dicionário de Gentílicos e Topónimos is a list of pairs of toponyms and demonyms. The toponyms and demonyms included have a morphologically compositional relation with each other. The list contains around 1500 such pairs and additionally provides information on the toponym referent (upper unit...
Web service created by exporting a UIMA-based workflow from the U-Compare text mining system. Functionality: Carries out syntactic parsing on plain text. Tools in workflow: Cafetiere Sentence Splitter (University of Manchester), OpenNLP Tokenizer (Apache), STEPP Tagger (University of Manchester), ...
Bulgarian-English Wikipedia WSD/NED corpus is composed of articles from the Bulgarian version of Wikipedia and their English counterparts.
Tweet corpus
Tweets annotated with geographic coordinates
The Portuguese Parliamentary Corpus is part of the Multilingual ParlaMint Corpus, a set of comparable corpora containing transcriptions of parliamentary debates of 29 European countries and autonomous regions. The Portuguese corpus (ParlaMint-PT) comprises transcripts of sessions in the time pe...
This is a set of 11,361 biographies of Portuguese people. The compilation of the data involved collecting the biographies from Wikipedia and converting the data. Several filters were applied to remove entries that were mostly empty or contained non-applicable content. Format: JSON (conversion from HTML) ...
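The filtering step described above could be sketched as follows. This is a minimal illustration only: the field names (`"name"`, `"text"`) and the length threshold are assumptions for the sketch, not the actual schema or filter criteria used to build the dataset.

```python
def filter_biographies(entries, min_chars=50):
    """Keep only entries whose body text is non-empty and reasonably long.

    `entries` is a list of dicts with a hypothetical "text" field holding
    the biography body; entries with missing, empty, or very short text
    (stubs, conversion artifacts) are dropped.
    """
    kept = []
    for entry in entries:
        text = (entry.get("text") or "").strip()
        if len(text) >= min_chars:
            kept.append(entry)
    return kept

# Toy usage: one substantial entry, one empty, one stub.
sample = [
    {"name": "A", "text": "Born in Lisbon, this person had a long and well documented life."},
    {"name": "B", "text": ""},       # mostly empty -> dropped
    {"name": "C", "text": "stub"},   # too short -> dropped
]
print(len(filter_biographies(sample)))  # prints 1
```

A character-count threshold is only one plausible proxy for "mostly empty"; the actual pipeline may have used different heuristics.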