File processing
Input format: Input files must be plain text (.txt) files with UTF-8 encoding containing Portuguese text. Input files and folders may also be compressed in the .zip format.
Privacy: The input file you upload and the respective output files will be automatically deleted from our computer after being processed and the result downloaded by you. No copies of your files will be retained after your use of this service.
Email address validation
Your input file is large, so processing it may take some time.
To receive by email a URL from which to download your processed file, please copy the code displayed below into the "Subject:" field of an email message (leaving the message body empty) and send it to request@portulanclarin.net
To proceed, please send an email to request@portulanclarin.net with the following code in the "Subject" field:
Communication with the server could not be established. Please try again later.
We are sorry but an unexpected error has occurred. Please try again later.
The code has expired. Please click the button below to get a new code.
For enhanced security, a new code has to be validated. Please click the button below to get a new code.
Privacy: After we reply to you with the URL for download, your email address is automatically deleted from our records.
Designing your own experiment with a Jupyter Notebook
A Jupyter notebook (hereafter just notebook, for short) is a type of document that contains executable code interspersed with visualizations of code execution results and narrative text.
Below we provide an example notebook which you may use as a starting point for designing your own experiments using language resources offered by PORTULAN CLARIN.
Pre-requisites
To execute this notebook, you need an access key, which you can obtain by clicking the button below. A key is valid for 31 days. It allows you to submit a total of 10 million characters by means of requests of no more than 2,500 characters each, and to enter up to 100,000 requests at a rate of no more than 200 requests per hour.
For other usage regimes, you should contact the helpdesk.
The input data sent to any PORTULAN CLARIN web service and the respective output will be automatically deleted from our computers after being processed. However, when running a notebook on an external service, such as the ones suggested below, you should take their data privacy policies into consideration.
Running the notebook
You have three options to run the notebook presented below:
- Run on Binder — The Binder Project is funded by a 501(c)(3) non-profit organization and is described in detail in the following paper: Jupyter et al., "Binder 2.0 - Reproducible, Interactive, Sharable Environments for Science at Scale", Proceedings of the 17th Python in Science Conference, 2018, doi:10.25080/Majora-4af1f417-011.
- Run on Google Colab — Google Colaboratory is a free-to-use product from Google Research.
- Download the notebook from our public Github repository and run it on your computer.
This is a more advanced option, which requires you to install Python 3 and Jupyter on your computer. For anyone without prior experience setting up a Python development environment, we strongly recommend one of the two options above.
This is only a preview of the notebook. To run it, please choose one of the following options:
Using LX-Parser to parse sentences and displaying constituency trees
This is an example notebook that illustrates how you can use the LX-Parser web service to parse sentences.
Before you run this example, replace access_key_goes_here below with your web service access key:
LXPARSER_WS_API_KEY = 'access_key_goes_here'
LXPARSER_WS_API_URL = 'https://portulanclarin.net/workbench/lx-parser/api/'
Importing required Python modules
The next cell will take care of installing the requests, nltk and svgling packages, if not already installed, and make them available for use in this notebook.
try:
    import requests
except ImportError:
    !pip3 install requests
    import requests

try:
    import nltk.tree
except ImportError:
    !pip3 install nltk
    import nltk.tree

try:
    import svgling
except ImportError:
    !pip3 install svgling
    import svgling

import IPython.display
Wrapping the complexities of the JSON-RPC API in a simple, easy-to-use function
The WSException class defined below will be used later to identify errors reported by the web service.
class WSException(Exception):
    'Webservice Exception'

    def __init__(self, errordata):
        "errordata is a dict returned by the webservice with details about the error"
        super().__init__(self)
        assert isinstance(errordata, dict)
        self.message = errordata["message"]
        # see https://json-rpc.readthedocs.io/en/latest/exceptions.html for more info
        # about JSON-RPC error codes
        if -32099 <= errordata["code"] <= -32000:  # Server Error
            if errordata["data"]["type"] == "WebServiceException":
                self.message += f": {errordata['data']['message']}"
            else:
                self.message += f": {errordata['data']!r}"

    def __str__(self):
        return self.message
The next function invokes the LX-Parser web service through its public JSON-RPC API.
def parse(text, format):
    '''
    Arguments
        text: a string with a maximum of 2500 characters, Portuguese text, with
              the input to be processed
        format: either 'parentheses', 'table' or 'JSON'

    Returns a string or JSON object with the output according to specification in
    https://portulanclarin.net/workbench/lx-parser/

    Raises a WSException if an error occurs.
    '''
    request_data = {
        'method': 'parse',
        'jsonrpc': '2.0',
        'id': 0,
        'params': {
            'text': text,
            'format': format,
            'key': LXPARSER_WS_API_KEY,
        },
    }
    request = requests.post(LXPARSER_WS_API_URL, json=request_data)
    response_data = request.json()
    if "error" in response_data:
        raise WSException(response_data["error"])
    else:
        return response_data["result"]
Let us test the function we just defined:
text = '''Esta frase serve para testar o funcionamento do parser de constituência. Esta outra
frase faz o mesmo.'''
# the parentheses format (aka bracketed format) is a popular format for representing
# constituency trees
result = parse(text, format="parentheses")
print(result)
(ROOT (S (S (NP (DEM Esta) (N frase)) (VP (V' (V serve) (PP (P para) (NP (N testar)))) (NP (ART o) (N' (N funcionamento) (PP (P de_) (NP (ART o) (N' (N parser) (PP (P de) (NP (N constituência)))))))))) (PNT .))) (ROOT (S (S (NP (DEM Esta) (N' (A outra) (N frase))) (VP (V faz) (NP (ART o) (N' (A mesmo))))) (PNT .)))
Let us use the svgling package to display the constituency trees:
for sentence in result.splitlines(keepends=False):
    tree = nltk.tree.Tree.fromstring(sentence)
    IPython.display.display(svgling.draw_tree(tree))
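Besides drawing the trees, you may want to inspect or count the pre-terminal tags in the parser output. The following is a minimal sketch of our own that extracts them from the bracketed format with a regular expression; it assumes that tags and tokens never contain parentheses, and the `bracketed` string is an abbreviated example rather than actual parser output.

```python
import re
from collections import Counter

# In the bracketed format, pre-terminals look like "(TAG token)";
# this pattern captures the tag of each such innermost pair.
PRETERMINAL = re.compile(r"\((\S+) [^()]+\)")

# abbreviated example of a parse in the bracketed format
bracketed = "(ROOT (S (NP (DEM Esta) (N frase)) (VP (V serve)) (PNT .)))"

tags = PRETERMINAL.findall(bracketed)
print(tags)           # ['DEM', 'N', 'V', 'PNT']
print(Counter(tags))  # tag frequencies for the sentence
```

The same loop over `result.splitlines()` used for drawing can feed each sentence's bracketed string into this counter instead.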
Getting the status of a web service access key
def get_key_status():
    '''Returns a string with the detailed status of the webservice access key'''
    request_data = {
        'method': 'key_status',
        'jsonrpc': '2.0',
        'id': 0,
        'params': {
            'key': LXPARSER_WS_API_KEY,
        },
    }
    request = requests.post(LXPARSER_WS_API_URL, json=request_data)
    response_data = request.json()
    if "error" in response_data:
        raise WSException(response_data["error"])
    else:
        return response_data["result"]
get_key_status()
{'requests_remaining': 99999978, 'chars_remaining': 999998761, 'expiry': '2030-01-10T00:00+00:00'}
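Given a status dict like the one shown above, a small helper can check whether a planned batch of requests still fits within the key's remaining quota. This is a sketch based only on the `requests_remaining` and `chars_remaining` fields returned by `get_key_status()`; the helper name is our own.

```python
def quota_allows(status, num_requests, num_chars):
    """Return True if a planned batch of num_requests requests totalling
    num_chars characters fits the quota remaining on the access key."""
    return (status["requests_remaining"] >= num_requests
            and status["chars_remaining"] >= num_chars)

# example status dict, as returned by get_key_status()
status = {
    'requests_remaining': 99999978,
    'chars_remaining': 999998761,
    'expiry': '2030-01-10T00:00+00:00',
}
print(quota_allows(status, num_requests=100, num_chars=250000))  # True
```

Calling such a check before a long batch run avoids failing halfway through when the quota is exhausted.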
Instructions to use this web service
The web service for this application is available at https://portulanclarin.net/workbench/lx-parser/api/.
Below you find an example of how to use this web service with Python 3.
This example resorts to the requests package. To install this package, run this command in the command line: pip3 install requests.
To use this web service, you need an access key, which you can obtain by clicking the button below. A key is valid for 31 days. It allows you to submit a total of 10 million characters by means of requests of no more than 2,500 characters each, and to enter up to 100,000 requests at a rate of no more than 200 requests per hour.
For other usage regimes, you should contact the helpdesk.
The input data and the respective output will be automatically deleted from our computer after being processed. No copies will be retained after your use of this service.
import json
import requests # to install this library, enter in your command line:
# pip3 install requests
# This is a simple example to illustrate how you can use the LX-Parser web service
# Requires: key is a string with your access key
# Requires: text is a string, UTF-8, with a maximum 2500 characters, Portuguese text, with
# the input to be processed
# Requires: format is a string, indicating the output format, which can be either
# 'parentheses', 'table' or 'JSON'
# Ensures: output according to specification in https://portulanclarin.net/workbench/lx-parser/
# Ensures: dict with number of requests and characters input so far with the access key, and
# its date of expiry
key = 'access_key_goes_here' # before you run this example, replace access_key_goes_here by
# your access key
format = 'parentheses' # other possible values are 'table' and 'JSON'
# this string can be replaced by your input
text = '''A Praça Luís de Camões será embelezada.
Já passámos a fase das grandes produções com muitos violinos e orquestras.'''
# To read input text from a file, uncomment this block
#inputFile = open("myInputFileName", "r", encoding="utf-8") # replace myInputFileName by
# the name of your file
#text = inputFile.read()
#inputFile.close()
# Processing:
url = "https://portulanclarin.net/workbench/lx-parser/api/"
request_data = {
    'method': 'parse',
    'jsonrpc': '2.0',
    'id': 0,
    'params': {
        'text': text,
        'format': format,
        'key': key,
    },
}
request = requests.post(url, json=request_data)
response_data = request.json()
if "error" in response_data:
    print("Error:", response_data["error"])
else:
    print("Result:")
    print(response_data["result"])
# To write output in a file, uncomment this block
#outputFile = open("myOutputFileName","w", encoding="utf-8") # replace myOutputFileName by
# the name of your file
#output = response_data["result"]
#outputFile.write(output)
#outputFile.close()
# Getting access key status:
request_data = {
    'method': 'key_status',
    'jsonrpc': '2.0',
    'id': 0,
    'params': {
        'key': key,
    },
}
request = requests.post(url, json=request_data)
response_data = request.json()
if "error" in response_data:
    print("Error:", response_data["error"])
else:
    print("Key status:")
    print(json.dumps(response_data["result"], indent=4))
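Because each request is capped at 2,500 characters, a longer input has to be split across several requests. The helper below is a naive sketch of our own that packs whole lines into chunks no larger than the limit; it does not handle a single line longer than max_chars, and for real text you may prefer splitting on sentence boundaries instead.

```python
def chunk_lines(text, max_chars=2500):
    """Pack whole lines into chunks of at most max_chars characters,
    so that each chunk fits in a single web service request."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

long_text = "Primeira frase.\n" * 400  # 6400 characters in total
chunks = chunk_lines(long_text)
print(len(chunks), max(len(c) for c in chunks))  # 3 2496
```

Each chunk can then be sent as the `text` parameter of a separate request, keeping the 200-requests-per-hour rate limit in mind.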
Access key for the web service
This is your access key for this web service.
The following access key for this web service is already associated with .
This key is valid until and can be used to process requests or characters.
An email message has been sent to your address with the information above.
Email address validation
To receive by email your access key for this webservice, please copy the code displayed below into the field "Subject" of an email message (with the message body empty) and send it to request@portulanclarin.net
To proceed, please send an email to request@portulanclarin.net with the following code in the "Subject" field:
Communication with the server could not be established. Please try again later.
We are sorry but an unexpected error has occurred. Please try again later.
The code has expired. Please click the button below to get a new code.
For enhanced security, a new code has to be validated. Please click the button below to get a new code.
Privacy: When your access key expires, your email address is automatically deleted from our records.
Tag | Category |
---|---|
A | Adjective |
AP | Adjective Phrase |
ADV | Adverb |
ADVP | Adverb Phrase |
C | Complementizer |
CL | Clitics |
CP | Complementizer Phrase |
CARD | Cardinal |
CONJ | Conjunction |
CONJP | Conjunction Phrase |
D | Determiner |
DEM | Demonstrative |
N | Noun |
NP | Noun Phrase |
O | Ordinals |
P | Preposition |
PP | Preposition Phrase |
PPA | Past Participles/Adjectives |
POSS | Possessive |
PRS | Personals |
QNT | Predeterminer |
REL | Relatives |
S | Sentence |
V | Verb |
VP | Verb Phrase |
LX-Parser's documentation
LX-Parser
LX-Parser is a freely available on-line service for constituency parsing of Portuguese sentences. This service was developed and is maintained at the University of Lisbon by the NLX-Natural Language and Speech Group of the Department of Informatics.
LX-Parser performs a syntactic analysis of Portuguese sentences in terms of their constituency structure.
Supporting parser
LX-Parser is supported by the Stanford Parser, a statistical parser developed at Stanford University that is trained over a previously annotated corpus. A total of 22,118 sentences from CINTIL-Treebank were used for training. This treebank is being developed and maintained at the University of Lisbon by the NLX-Natural Language and Speech Group of the Department of Informatics.
The parser uses probabilistic grammars. Under the Parseval metric, it achieves an f-score of 89% (obtained through 10-fold cross-validation).
Annotation guidelines
The syntactic analyses produced by LX-Parser are similar to the analyses found in the treebank on which LX-Parser was trained. This treebank was designed along the principles described in the following handbook:
- Branco, António, João Silva, Francisco Costa and Sérgio Castro, 2011, CINTIL TreeBank Handbook: Design options for the representation of syntactic constituency. Department of Informatics, University of Lisbon, Technical Reports series, no. di-fcul-tp-11-02.
Authorship
LX-Parser was developed by Patricia Gonçalves and João Silva, managed by António Branco, at the NLX-Natural Language and Speech Group, partly in the scope of the SemanticShare Project, funded by FCT-Fundação para a Ciência e Tecnologia.
Publications
Irrespective of the most recent version of this tool you may use, when mentioning it, please cite this reference:
- Silva, João, António Branco, Sérgio Castro and Ruben Reis, 2010, "Out-of-the-Box Robust Parsing of Portuguese". In Proceedings of the 9th International Conference on the Computational Processing of Portuguese (PROPOR2010), Lecture Notes in Artificial Intelligence, 6001, Berlin, Springer, pp.75–85.
Contact us
Contact us using the following email address: 'nlx' concatenated with 'at' concatenated with 'di.fc.ul.pt'.
Acknowledgments
This work was partly supported by FCT-Fundação para a Ciência e a Tecnologia under grant FCT/PTDC/PLP/81157/2006 for the SemanticShare project. The system uses the PHPSyntaxTree visualizer and the Stanford Parser.
Release
LX-Parser is made available as a standalone parser that you can download and run locally on your computer.
License
LX-Parser is distributed under an MIT license.
Required download
- The parser model file, cintil.ser.gz.
- Stanford Parser (requires Java 5 or later). Note that the model was created with version 1.6.5 of the parser. More recent versions of the software seem to be unable to load the model.
- LX-Tokenizer to tokenize input prior to parsing.
Instructions
Example command line:
java -Xmx500m -cp /path/to/stanford-parser.jar \
  edu.stanford.nlp.parser.lexparser.LexicalizedParser \
  -tokenized -sentences newline -outputFormat oneline \
  -uwModel edu.stanford.nlp.parser.lexparser.BaseUnknownWordModel \
  cintil.ser.gz input.txt
A quick explanation of the options:
- For some more complex sentences, the default heap size used by Java might not be enough. We increase the maximum heap size to 500 megabytes with the -Xmx500m option.
- The path to the Stanford Parser JAR file is provided with the -cp option.
- The name of the Java class we wish to run is LexicalizedParser.
- The input to the parser must already be tokenized (see LX-Tokenizer for details on tokenization decisions). We indicate this through the -tokenized option.
- Each sentence in the input is separated by a newline. We indicate this through the -sentences newline option.
- The output format is one parse per line. NB: the parser always adds a ROOT node. You can remove it in a post-processing step.
- A class (BaseUnknownWordModel, part of the Stanford Parser package) that implements a baseline word model is used to handle unknown words. It is chosen by the -uwModel option.
- The final two arguments are the model file and the input file.
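As noted above, the parser always wraps each parse in a ROOT node, which can be removed in a post-processing step. A minimal sketch of our own for that step, assuming the oneline output format:

```python
def strip_root(parse):
    """Remove the outer "(ROOT ...)" wrapper from one parse in the
    parser's oneline output format; other input is returned unchanged."""
    parse = parse.strip()
    prefix = "(ROOT "
    if parse.startswith(prefix) and parse.endswith(")"):
        return parse[len(prefix):-1].strip()
    return parse

# one parse per line in the oneline format, so apply it line by line
print(strip_root("(ROOT (S (NP (N Exemplo)) (PNT .)))"))
# (S (NP (N Exemplo)) (PNT .))
```

Applying this per line of the parser output yields trees rooted directly at S.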
Tagset
Tag | Category |
---|---|
A | Adjective |
AP | Adjective Phrase |
ADV | Adverb |
ADVP | Adverb Phrase |
C | Complementizer |
CL | Clitics |
CP | Complementizer Phrase |
CARD | Cardinal |
CONJ | Conjunction |
CONJP | Conjunction Phrase |
D | Determiner |
DEM | Demonstrative |
N | Noun |
NP | Noun Phrase |
O | Ordinals |
P | Preposition |
PP | Preposition Phrase |
PPA | Past Participles/Adjectives |
POSS | Possessive |
PRS | Personals |
QNT | Predeterminer |
REL | Relatives |
S | Sentence |
V | Verb |
VP | Verb Phrase |
Why LX-Parser?
LX, because LX is the shorthand Lisboners often use to refer to their hometown, Lisbon.
License
No fee, attribution, all rights reserved, no redistribution, non-commercial, no warranty, no liability, no endorsement, temporary, non-exclusive, share-alike.
The complete text of this license is here.