Starting the CoreNLP server is painful for everyone: it is big, relatively slow, and its usage is a bit clunky.
The alternative options are spaCy and nltk.
First experiments show that nltk's Named Entity Recognition is not very accurate and its sentence splitter is worse than CoreNLP's.
The next choice is spaCy, which shows promising results in simple experiments. Before we implement it, we have to check the following (see the sketch after this list):
Are the sentence splitter and tokenizer better than CoreNLP's?
Do the licenses of spaCy and its models allow us to deploy them?
Is the NER better than CoreNLP's?
Can we achieve higher throughput?
Is it parallelizable? CoreNLP doesn't handle more than 2-4 concurrent requests well.
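A minimal sketch of how we could exercise spaCy's sentence splitter, NER, and batched/parallel processing for these checks (assumptions: the `en_core_web_sm` model is installed, and the sample texts and batch sizes below are placeholders):

```python
import spacy

# Assumes `python -m spacy download en_core_web_sm` has been run.
nlp = spacy.load("en_core_web_sm")

# Placeholder texts; in a real evaluation these would come from our corpus.
texts = [
    "Barack Obama was born in Hawaii. He was elected president in 2008.",
    "Apple is looking at buying a U.K. startup for $1 billion.",
]

# nlp.pipe streams documents in batches; n_process > 1 spreads work across
# CPU cores, which is one way to probe the throughput/parallelism questions.
for doc in nlp.pipe(texts, batch_size=64, n_process=2):
    # Sentence splitting
    for sent in doc.sents:
        print("SENT:", sent.text)
    # Named Entity Recognition
    for ent in doc.ents:
        print("ENT:", ent.text, ent.label_)
```

Running this over a sample of our documents and comparing the sentence boundaries and entities against CoreNLP's output (plus timing the loop) would answer most of the questions above.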