---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-",
out.width = "100%"
)
```
# tok
<!-- badges: start -->
[](https://github.com/mlverse/tok/actions)
[](https://CRAN.R-project.org/package=tok)
[](https://cran.r-project.org/package=tok)
<!-- badges: end -->
tok provides bindings to the [🤗tokenizers](https://huggingface.co/docs/tokenizers/v0.13.3/en/index) library. It uses the same Rust libraries that power the Python implementation.
We don't yet provide the full tokenizers API. Please open an issue if there's
a feature you are missing.
## Installation
You can install tok from CRAN using:
``` r
install.packages("tok")
```
Installing tok from source requires a working Rust toolchain. We recommend using [rustup](https://rustup.rs/).
On Windows, you'll also have to add the `i686-pc-windows-gnu` and `x86_64-pc-windows-gnu` targets:
```
rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu
```
Once Rust is working, you can install this package via:
``` r
remotes::install_github("dfalbel/tok")
```
## Features
We still don't have complete support for the 🤗tokenizers API. Please open an issue
if you need a feature that is currently not implemented.
## Loading tokenizers
`tok` can be used to load and use tokenizers that have been previously serialized.
For example, HuggingFace model weights are usually accompanied by a 'tokenizer.json'
file that can be loaded with this library.
To load a pre-trained tokenizer from a json file, use:
```{r}
path <- testthat::test_path("assets/tokenizer.json")
tok <- tok::tokenizer$from_file(path)
```
Use the `encode` method to tokenize sentences and the `decode` method to transform ids back into text.
```{r}
enc <- tok$encode("hello world")
tok$decode(enc$ids)
```
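The object returned by `encode` carries more than the ids. As a sketch, assuming the binding mirrors the upstream Rust `Encoding` struct (the `tokens` field name is an assumption here; `ids` is used above), you can inspect the token strings alongside the numeric ids:

```{r}
enc <- tok$encode("hello world")
# `ids` are the vocabulary indices; `tokens` (assumed field) the string pieces
enc$ids
enc$tokens
```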
## Using pre-trained tokenizers
You can also load any tokenizer available in HuggingFace hub by using the `from_pretrained`
static method. For example, let's load the GPT2 tokenizer with:
```{r}
tok <- tok::tokenizer$from_pretrained("gpt2")
enc <- tok$encode("hello world")
tok$decode(enc$ids)
```