Convert natural language text into tokens. The package includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank tokens, and tokens defined by regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers share a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization in 'UTF-8'.
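As a quick illustration of that consistent interface, here is a minimal sketch (the sample text is invented and the printed output is abbreviated): each tokenizer takes a character vector of documents and returns a list with one element of tokens per document, and the counting and chunking helpers follow the same convention.

```r
library(tokenizers)

txt <- "The quick brown fox jumps over the lazy dog. It barked."

# Word tokenizer: lowercases and strips punctuation by default,
# returning a list with one character vector per input document.
tokenize_words(txt)
#> [[1]]
#> [1] "the" "quick" "brown" "fox" "jumps" "over" "the" "lazy" "dog" "it" "barked"

# Sentence tokenizer uses the same calling convention.
tokenize_sentences(txt)

# Shingled n-grams of length 2 through 3 over the same text.
tokenize_ngrams(txt, n = 3, n_min = 2)

# Counting helpers return one count per input document.
count_words(txt)
count_sentences(txt)

# Split a longer text into chunks of (up to) five words each.
chunk_text(txt, chunk_size = 5)
```

Because every function returns the same list-of-tokens structure, downstream code can swap one tokenizer for another without changing how it consumes the output.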
| Version: | 0.2.1 |
| Depends: | R (≥ 3.1.3) |
| Imports: | stringi (≥ 1.0.1), Rcpp (≥ 0.12.3), SnowballC (≥ 0.5.1) |
| LinkingTo: | Rcpp |
| Suggests: | covr, knitr, rmarkdown, stopwords (≥ 0.9.0), testthat |
| Published: | 2018-03-29 |
| Author: | Lincoln Mullen |
| Maintainer: | Lincoln Mullen <lincoln at lincolnmullen.com> |
| BugReports: | https://github.com/ropensci/tokenizers/issues |
| License: | MIT + file LICENSE |
| URL: | https://lincolnmullen.com/software/tokenizers/ |
| NeedsCompilation: | yes |
| Citation: | tokenizers citation info |
| Materials: | README NEWS |
| In views: | NaturalLanguageProcessing |
| CRAN checks: | tokenizers results |
| Reference manual: | tokenizers.pdf |
| Vignettes: | Introduction to the tokenizers Package, The Text Interchange Formats and the tokenizers Package |
| Package source: | tokenizers_0.2.1.tar.gz |
| Windows binaries: | r-devel: tokenizers_0.2.1.zip, r-release: tokenizers_0.2.1.zip, r-oldrel: tokenizers_0.2.1.zip |
| macOS binaries: | r-release: tokenizers_0.2.1.tgz, r-oldrel: tokenizers_0.2.1.tgz |
| Old sources: | tokenizers archive |
| Reverse imports: | covfefe, healthforum, pdfsearch, proustr, rslp, textfeatures, textrecipes, tidypmc, tidytext, wactor |
| Reverse suggests: | cwbtools, quanteda |
Please use the canonical form https://CRAN.R-project.org/package=tokenizers to link to this page.