step_ngram() creates a specification of a recipe step that will convert a tokenlist into a list of n-grams of tokens.
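For intuition, an n-gram is formed by joining each run of n consecutive tokens with the delimiter. A minimal base-R sketch of this idea (a simplified illustration only, not the package's internal implementation):

```r
# Form 3-grams from a token vector by pasting consecutive
# tokens together with "_" (the default delim).
tokens <- c("i", "am", "not", "very", "happy")
n <- 3
ngrams <- vapply(
  seq_len(length(tokens) - n + 1),
  function(i) paste(tokens[i:(i + n - 1)], collapse = "_"),
  character(1)
)
ngrams
#> [1] "i_am_not"       "am_not_very"    "not_very_happy"
```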

Usage

step_ngram(
  recipe,
  ...,
  role = NA,
  trained = FALSE,
  columns = NULL,
  num_tokens = 3L,
  min_num_tokens = 3L,
  delim = "_",
  skip = FALSE,
  id = rand_id("ngram")
)

# S3 method for step_ngram
tidy(x, ...)



Arguments

recipe

A recipe object. The step will be added to the sequence of operations for this recipe.


...

One or more selector functions to choose variables. For step_ngram, this indicates the variables to be encoded into a tokenlist. See recipes::selections() for more details. For the tidy method, these are not currently used.


role

Not used by this step since no new variables are created.


trained

A logical to indicate if the quantities for preprocessing have been estimated by recipes::prep.recipe().


columns

A list of tibble results that define the encoding. This is NULL until the step is trained by recipes::prep.recipe().


num_tokens

The number of tokens in the n-gram. This must be an integer greater than or equal to 1. Defaults to 3.


min_num_tokens

The minimum number of tokens in the n-gram. This must be an integer greater than or equal to 1 and less than or equal to num_tokens. Defaults to 3.


delim

The separator between words in an n-gram. Defaults to "_".


skip

A logical. Should the step be skipped when the recipe is baked by recipes::bake.recipe()? While all operations are baked when recipes::prep.recipe() is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using skip = TRUE as it may affect the computations for subsequent operations.


id

A character string that is unique to this step to identify it.


x

A step_ngram object.


Value

An updated version of recipe with the new step added to the sequence of existing steps (if any).


Details

The use of this step will leave the ordering of the tokens meaningless. If min_num_tokens < num_tokens, the n-grams are ordered in increasing fashion with respect to the number of tokens in the n-gram: if min_num_tokens = 1 and num_tokens = 3, the output contains all the 1-grams, followed by all the 2-grams, followed by all the 3-grams.
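The ordering described above can be sketched in base R (a simplified illustration of the documented behavior, not the package's internal implementation):

```r
# With min_num_tokens = 1 and num_tokens = 3, the output is all
# 1-grams, then all 2-grams, then all 3-grams, joined by delim.
tokens <- c("a", "b", "c", "d")
delim <- "_"
out <- unlist(lapply(1:3, function(n) {
  vapply(
    seq_len(length(tokens) - n + 1),
    function(i) paste(tokens[i:(i + n - 1)], collapse = delim),
    character(1)
  )
}))
out
#> [1] "a"     "b"     "c"     "d"     "a_b"   "b_c"   "c_d"   "a_b_c" "b_c_d"
```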

See also

step_tokenize() to turn characters into a tokenlist.

Other tokenlist to tokenlist steps: step_lemma(), step_pos_filter(), step_stem(), step_stopwords(), step_tokenfilter(), step_tokenmerge()


Examples

library(recipes)
library(modeldata)
data(okc_text)

okc_rec <- recipe(~., data = okc_text) %>%
  step_tokenize(essay0) %>%
  step_ngram(essay0)

okc_obj <- okc_rec %>%
  prep()

juice(okc_obj, essay0) %>%
  slice(1:2)
#> # A tibble: 2 x 1
#>   essay0
#>   <tknlist>
#> 1 [182 tokens]
#> 2 [22 tokens]

juice(okc_obj) %>%
  slice(2) %>%
  pull(essay0)
#> <textrecipes_tokenlist[1]>
#> [1] [22 tokens]
#> # Unique Tokens: 22

tidy(okc_rec, number = 2)
#> # A tibble: 1 x 3
#>   terms  value id
#>   <chr>  <chr> <chr>
#> 1 essay0 <NA>  ngram_cooT1

tidy(okc_obj, number = 2)
#> # A tibble: 1 x 2
#>   terms  id
#>   <quos> <chr>
#> 1 essay0 ngram_cooT1