Hacker News
rpedela on March 25, 2020 | on: Stanza: A Python natural language processing toolk...
You can use spaCy for English tokenization, or you can use Stanza's neural model. The neural model will generally do better, especially at sentence segmentation, but it will be slower.
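A minimal sketch of the fast, rule-based path the comment contrasts with Stanza's neural pipeline, assuming spaCy v3 is installed (`pip install spacy`). No trained model is needed: `spacy.blank("en")` provides the rule-based English tokenizer, and the built-in "sentencizer" pipe adds punctuation-based sentence segmentation. The example text is illustrative.

```python
import spacy

# Rule-based English tokenizer only; no neural model is loaded or downloaded.
nlp = spacy.blank("en")

# Punctuation-based sentence boundary detection (no statistical parsing).
nlp.add_pipe("sentencizer")

doc = nlp("This is a sentence. This is another one.")

tokens = [t.text for t in doc]          # word-level tokens, punctuation split off
sentences = [s.text for s in doc.sents]  # two sentences, split at the period

print(tokens)
print(sentences)
```

Because the sentencizer only looks at punctuation, it can mis-split around abbreviations like "Dr." in harder text, which is exactly where a neural segmenter such as Stanza's tends to do better, at the cost of speed.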