
The tokenizer affects the required memory as well as query time and the flexibility of partial matches. Try to choose the uppermost of these tokenizers (the least memory-intensive one) that fits your needs:

| Option | Description | Example | Memory Factor (n = length of word) |
|:---|:---|:---|:---|
| "strict" | index whole words | foobar | * 1 |
| "forward" | incrementally index words in forward direction | **fo**obar, **foob**ar | * n |
| "reverse" | incrementally index words in both directions | foob**ar**, **fo**obar | * 2n - 1 |
| "full" | index every possible combination | f**oob**ar, fo**oba**r | * n * (n - 1) |
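
A minimal sketch of selecting one of the built-in presets; the index name, id and sample text are placeholders, and `"forward"` is just one possible choice:

```js
// Hypothetical example: the "forward" preset indexes prefixes of each word.
var index = new FlexSearch({
  tokenize: "forward"
});

index.add(1, "foobar");

// With "forward" tokenization, a prefix query like "foob" finds the word:
var results = index.search("foob"); // -> [1]
```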

Add custom tokenizer

TIP

A tokenizer splits words/terms into components or partials.

Define a private custom tokenizer during creation/initialization:

```js
var index = new FlexSearch({
  tokenize: function (str) {
    // Split on whitespace, hyphens and slashes:
    return str.split(/[\s\-\/]/g);
  },
});
```

The tokenizer function receives a string as a parameter and has to return an array of strings, each representing a word or term. In some languages every character is a term and terms are not separated by whitespace.
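
For such languages, a custom tokenizer can treat every character as a term. A minimal sketch, assuming CJK input; the regex, the `encode: false` setting and the sample strings are illustrative, not the only possible choice:

```js
var cjkIndex = new FlexSearch({
  encode: false,
  tokenize: function (str) {
    // Drop plain ASCII characters, then treat every remaining character as a term.
    return str.replace(/[\x00-\x7F]/g, "").split("");
  },
});

cjkIndex.add(0, "一个单词");
var results = cjkIndex.search("单词"); // -> [0]
```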