step_tokenmerge creates a specification of a recipe step that will take multiple tokenlists and combine them into one tokenlist.

step_tokenmerge(
  recipe,
  ...,
  role = "predictor",
  trained = FALSE,
  columns = NULL,
  prefix = "tokenmerge",
  skip = FALSE,
  id = rand_id("tokenmerge")
)

# S3 method for step_tokenmerge
tidy(x, ...)

Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.

...

One or more selector functions to choose variables. For step_tokenmerge, this indicates the tokenlist variables to be merged into a single tokenlist. See recipes::selections() for more details. For the tidy method, these are not currently used.

role

For model terms created by this step, what analysis role should they be assigned? By default, the function assumes that the new columns created from the original variables will be used as predictors in a model.

trained

A logical to indicate if the step has been trained by recipes::prep.recipe().

columns

A list of tibble results that define the encoding. This is NULL until the step is trained by recipes::prep.recipe().

prefix

A prefix for the generated column names; defaults to "tokenmerge". See the sketch after this argument list for an illustration.

skip

A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.

id

A character string that is unique to this step, used to identify it.

x

A step_tokenmerge object.
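
A minimal sketch of how the selectors, prefix, and role arguments fit together. The toy data frame and its columns text_a and text_b are made up for illustration; only step_tokenize(), step_tokenmerge(), and the standard recipes verbs are assumed:

library(recipes)
library(textrecipes)

# Hypothetical toy data with two free-text columns
dat <- data.frame(
  text_a = c("hello there", "general kenobi"),
  text_b = c("you are", "a bold one")
)

rec <- recipe(~ ., data = dat) %>%
  step_tokenize(text_a, text_b) %>%
  # Merge the two tokenlists; the merged column takes its name from `prefix`
  step_tokenmerge(text_a, text_b, prefix = "merged")

prep(rec) %>% juice()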

Value

An updated version of recipe with the new step added to the sequence of existing steps (if any).

See also

step_tokenize() to turn a character variable into a tokenlist.

Other tokenlist to tokenlist steps: step_lemma(), step_ngram(), step_pos_filter(), step_stem(), step_stopwords(), step_tokenfilter()

Examples

library(recipes)
library(modeldata)
data(okc_text)

okc_rec <- recipe(~ ., data = okc_text) %>%
  step_tokenize(essay0, essay1) %>%
  step_tokenmerge(essay0, essay1)

okc_obj <- okc_rec %>%
  prep()

juice(okc_obj)
#> # A tibble: 750 x 9
#>    essay2   essay3  essay4  essay5  essay6  essay7  essay8  essay9  tokenmerge
#>    <fct>    <fct>   <fct>   <fct>   <fct>   <fct>   <fct>   <fct>   <tknlist>
#>  1 "writin… "that … "music… "frien… "roman… "usual… "i hav… "you'r… [240 tokens]
#>  2 "pickin… "i loo… "non-f… "deser… "every… "makin… "this … "you'r… [28 tokens]
#>  3 "procra… "my sm… "sushi… "my ce… "movin… "at ho… "on ma… "you t… [385 tokens]
#>  4 "i've b… "my fa… "novel… "famil… "how t… "havin… "is th… "you'd… [104 tokens]
#>  5 "being … "that … "books… "- fri… "the u… "proba… "uhh..… "you s… [87 tokens]
#>  6 "i'm re… "the w… "books… "guita… "a lit… "hangi… "i'm p… "if yo… [352 tokens]
#>  7 "well, … "eithe… "i don… "1) my… "the e… "out w… "i own… "you a… [374 tokens]
#>  8 "welll.… "dimpl… "book-… "1-lau… "sex<b… "depen… "i lik… "you a… [228 tokens]
#>  9 "eating… "my gl… "donni… "sushi… "love." "drink… "hm..." "."     [5 tokens]
#> 10 "living… "long … "films… "yuzu,… "menus… "often… "but i… "you a… [610 tokens]
#> # … with 740 more rows
tidy(okc_rec, number = 1)
#> # A tibble: 2 x 3
#>   terms  value id
#>   <chr>  <chr> <chr>
#> 1 essay0 <NA>  tokenize_2qTCT
#> 2 essay1 <NA>  tokenize_2qTCT
tidy(okc_obj, number = 1)
#> # A tibble: 2 x 3
#>   terms  value id
#>   <quos> <chr> <chr>
#> 1 essay0 words tokenize_2qTCT
#> 2 essay1 words tokenize_2qTCT
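
The prepped recipe can also be applied to fresh data with bake(); a minimal sketch, reusing a slice of okc_text as a stand-in for new data:

bake(okc_obj, new_data = head(okc_text))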