It's all about the tokens: understanding tokenization

Michele Riva

Algorithms

6 min read

May 21, 2023

At the heart of search engines lies an important component called the tokenizer.

In this blog post, we will delve into the world of tokenizers, their purpose, and how they work in the context of full-text search engines. We’ll also explore some examples using JavaScript to help solidify your understanding.

What is a Tokenizer?

A tokenizer is a component of a full-text search engine that processes raw text and breaks it into individual tokens.

Tokens are the smallest units of text that the search engine will analyze and index. In most cases a token represents a single word, but it can also represent a phrase, a number, or another kind of text. For example, the sentence "Search engines index text" might be tokenized into the tokens ["search", "engines", "index", "text"].

The process of converting raw text into tokens is called tokenization.

Tokenizers are crucial for building efficient and accurate search indexes. They help search engines:

  1. Analyze and process text data more efficiently.

  2. Index the text data in a structured format for faster retrieval.

  3. Improve search accuracy by allowing for more advanced search techniques, such as stemming and lemmatization (see the sketch after this list).
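To make the third point concrete, here is a minimal sketch of what a stemming pass over tokens could look like. The suffix rules below are illustrative assumptions only; real engines rely on full stemming algorithms such as the Porter stemmer.

// A naive, illustrative stemmer: it strips a few common English suffixes
// so that different surface forms of a word collapse to the same token.
// Real engines typically use a complete algorithm such as the Porter stemmer.
function naiveStem(token) {
  return token.toLowerCase().replace(/(ing|ed|es)$/, "");
}

const tokens = ["Searching", "searched", "searches", "search"];
console.log(tokens.map(naiveStem));
// Output: ["search", "search", "search", "search"]

Because all four forms normalize to the same stem, a query for any one of them can match documents containing the others.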

Types of Tokenizers

There are several types of tokenizers, each with its own strengths and weaknesses.

In this section, we will explore three common tokenizers: Whitespace Tokenizer, Pattern Tokenizer, and Standard Tokenizer.

Whitespace Tokenizer

The Whitespace Tokenizer is the simplest type of tokenizer. It splits the input text into tokens based on whitespace characters, such as spaces, tabs, and newlines.

// Split the input on runs of whitespace characters (spaces, tabs, newlines).
function whitespaceTokenizer(text) {
  return text.split(/\s+/);
}

const sampleText = "Full-text search engines are powerful tools.";

console.log(whitespaceTokenizer(sampleText));
// Output: ["Full-text", "search", "engines", "are", "powerful", "tools."]

Pros of the Whitespace Tokenizer:

  1. Simplicity: The Whitespace Tokenizer is simple and easy to implement, as it only requires splitting the input text based on whitespace characters (spaces, tabs, newlines).

  2. Speed: Due to its simplicity, the Whitespace Tokenizer is generally faster than more complex tokenizers, making it suitable for applications where processing speed is important.

  3. Language agnosticism: It can be applied to a wide range of languages, as most written languages use whitespace characters to separate words.

Cons of the Whitespace Tokenizer:

  1. Limited accuracy: The Whitespace Tokenizer doesn’t take into account punctuation, special characters, or compound words, which may result in less accurate search results or indexing. This is particularly problematic when dealing with languages that have complex rules for word formation or use characters other than whitespace to separate words.

  2. Inconsistency: Tokens generated by the Whitespace Tokenizer may include punctuation marks, numbers, or special characters, resulting in an inconsistent representation of the text.

  3. Lack of normalization: The Whitespace Tokenizer does not perform any normalization or preprocessing on the tokens, such as converting them to lowercase, removing diacritics, or applying stemming/lemmatization. This may lead to reduced search accuracy and performance.

In summary, while the Whitespace Tokenizer is simple and fast, its limitations in handling punctuation, special characters, and text normalization may result in less accurate search results and indexing. More sophisticated tokenizers, such as the Standard Tokenizer, can provide better accuracy and consistency by taking into account various linguistic rules and preprocessing techniques.
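To see the inconsistency and lack of normalization in practice, compare the tokens the whitespaceTokenizer above produces for two strings a user would expect to match. This is only an illustration of the limitation, not a benchmark of any specific engine.

// Case and punctuation are preserved, so an exact-match index built on
// these tokens would not connect the two variants of the same words.
console.log(whitespaceTokenizer("Powerful tools!"));
// Output: ["Powerful", "tools!"]

console.log(whitespaceTokenizer("powerful tools"));
// Output: ["powerful", "tools"]

// "Powerful" !== "powerful" and "tools!" !== "tools"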

Pattern Tokenizer

The Pattern Tokenizer uses a regular expression pattern to split the input text into tokens. This allows for more fine-grained control over the tokenization process and can handle more complex cases, such as splitting on punctuation marks or special characters.

function patternTokenizer(text, pattern) {
  return text
    .split(pattern)
    .filter((token) => token.length > 0); // Drop empty strings left by leading/trailing delimiters
}

const sampleText = "Full-text search engines are powerful tools.";
const pattern = /[\s,.!?]+/;

console.log(patternTokenizer(sampleText, pattern));
// Output: ["Full-text", "search", "engines", "are", "powerful", "tools"]

Pros of the Pattern Tokenizer:

  1. Flexibility: The Pattern Tokenizer allows for more fine-grained control over the tokenization process by using regular expression patterns to split the input text. This makes it more adaptable to various use cases and text formats.

  2. Better handling of special characters and punctuation: By using custom patterns, the Pattern Tokenizer can handle complex cases, such as splitting on punctuation marks, special characters, or other delimiters, which can improve the accuracy and consistency of tokenization.

  3. Language adaptability: With the right pattern, the Pattern Tokenizer can be customized to handle specific language characteristics or word separation rules, making it more versatile across different languages.

Cons of the Pattern Tokenizer:

  1. Complexity: Compared to the Whitespace Tokenizer, the Pattern Tokenizer requires more knowledge of regular expressions and can be more challenging to implement and maintain.

  2. Performance: The use of regular expressions may lead to slower processing times compared to simpler tokenizers, especially when dealing with large amounts of text or complex patterns.

  3. Lack of normalization: Like the Whitespace Tokenizer, the Pattern Tokenizer does not perform any normalization or preprocessing on the tokens by default, such as converting them to lowercase, removing diacritics, or applying stemming/lemmatization. This may lead to reduced search accuracy and performance if not addressed separately.

In summary, the Pattern Tokenizer offers more flexibility and control over the tokenization process compared to the Whitespace Tokenizer, allowing for better handling of special characters, punctuation, and language-specific rules. However, it may be more complex to implement and maintain, and it does not address token normalization or preprocessing issues by default.
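As an illustration of the flexibility and language-adaptability points above, the same patternTokenizer can be driven by a Unicode-aware pattern that splits on any run of characters that are neither letters nor digits. The pattern below is an assumption made for the sake of example, not a pattern shipped by any particular engine.

// \p{L} matches any Unicode letter and \p{N} any Unicode digit, so the
// negated class splits on everything else while keeping accented words intact.
const unicodePattern = /[^\p{L}\p{N}]+/u;

console.log(patternTokenizer("Café crème à Paris!", unicodePattern));
// Output: ["Café", "crème", "à", "Paris"]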

Standard Tokenizer

The Standard Tokenizer is a more sophisticated tokenizer that takes into account various linguistic rules, such as handling punctuation, special characters, and compound words. It usually combines multiple tokenization strategies to provide a more accurate and meaningful representation of the text.

function standardTokenizer(text) {
  return text
    .replace(/[\.,!?\n]+/g, " ")            // Replace punctuation marks and newlines with spaces
    .split(/\s+/)                           // Split on whitespace characters
    .filter((token) => token.length > 0)    // Drop empty strings left by leading/trailing spaces
    .map((token) => token.toLowerCase());   // Convert all tokens to lowercase
}

const sampleText = "Full-text search engines are powerful tools.";

console.log(standardTokenizer(sampleText));
// Output: ["full-text", "search", "engines", "are", "powerful", "tools"]

Pros of the Standard Tokenizer:

  1. Linguistic rules: The Standard Tokenizer takes into account various linguistic rules, such as handling punctuation, special characters, and compound words, providing a more accurate and meaningful representation of the text.

  2. Normalization: By default, the Standard Tokenizer performs some normalization on the tokens, such as converting them to lowercase or removing diacritics. This helps improve search accuracy and performance.

  3. Consistency: The Standard Tokenizer generates more consistent tokens by removing unwanted characters and applying uniform normalization rules, which can lead to better search results and indexing.

  4. Language adaptability: The Standard Tokenizer can be customized to handle specific language characteristics or word separation rules, making it versatile across different languages.

Cons of the Standard Tokenizer:

  1. Complexity: Compared to the Whitespace and Pattern Tokenizers, the Standard Tokenizer is more complex to implement and maintain, as it combines multiple tokenization strategies and preprocessing techniques.

  2. Performance: Due to its sophistication, the Standard Tokenizer may be slower than simpler tokenizers, especially when processing large amounts of text. However, this trade-off is often justified by improved search accuracy and performance.

  3. Customization challenges: While the Standard Tokenizer is more adaptable to different languages, customizing it to handle specific language rules or nuances may require deeper linguistic knowledge and expertise.

In summary, the Standard Tokenizer offers a more sophisticated approach to tokenization, taking into account various linguistic rules and providing better accuracy, consistency, and normalization. However, it can be more complex to implement and maintain, and it may have slower processing times compared to simpler tokenizers. Despite these drawbacks, the improved search accuracy and performance often make the Standard Tokenizer a preferred choice for full-text search engines.
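One of the normalization steps mentioned above, diacritic removal, can be layered on top of standardTokenizer using JavaScript's built-in Unicode normalization. This is a hedged sketch of the technique, not Orama's actual implementation.

// Decompose accented characters (NFD) and strip the combining marks,
// so that "café" and "cafe" become the same token.
function removeDiacritics(token) {
  return token.normalize("NFD").replace(/[\u0300-\u036f]/g, "");
}

console.log(standardTokenizer("Café crème, s'il vous plaît!").map(removeDiacritics));
// Output: ["cafe", "creme", "s'il", "vous", "plait"]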

Tokenization in Orama

Orama uses the Standard Tokenizer to tokenize text for indexing and searching.

The Standard Tokenizer is a good choice for Orama because it provides a more accurate and consistent representation of the text, which is important for search accuracy and performance. It also allows for customization to handle specific language characteristics or word separation rules, making it versatile across different languages.

To customize the tokenizer used by Orama, provide an object that has at least the following properties:

  • tokenize: A function that accepts a raw string and an optional language, and returns a list of tokens.

  • language (string): The language supported by the tokenizer.

  • normalizationCache (Map): A Map that can be used to cache token normalization results.

In other words, a tokenizer must satisfy the following interface:

interface Tokenizer {
  language: string
  normalizationCache: Map<string, string>
  tokenize: (raw: string, language?: string) => string[] | Promise<string[]>
}
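As an example, here is a hedged sketch of a custom tokenizer that satisfies this interface and uses its normalizationCache to avoid re-normalizing tokens it has already seen. The caching and normalization logic below is purely illustrative; it is not how Orama's built-in tokenizer works internally.

const simpleCachedTokenizer = {
  language: 'english',
  normalizationCache: new Map(),
  tokenize(raw) {
    return raw
      .toLowerCase()
      .split(/[^a-z0-9]+/)
      .filter((token) => token.length > 0)
      .map((token) => {
        // Reuse the cached normalized form when this token has been seen before
        if (this.normalizationCache.has(token)) {
          return this.normalizationCache.get(token)
        }
        const normalized = token.normalize('NFKC')
        this.normalizationCache.set(token, normalized)
        return normalized
      })
  }
}

An object like this can be passed as components.tokenizer in the create call shown below.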

For instance, with the following configuration, only the first character of each string will be indexed, and only the first character of a search term will be matched:

import { create } from '@orama/orama'
 
const movieDB = await create({
  schema: {
    title: 'string',
    director: 'string',
  },
  components: {
    tokenizer: {
      language: 'english',
      normalizationCache: new Map(),
      tokenize(raw) {
        return [raw[0]] // Return a one-element token list, as the Tokenizer interface expects
      }
    }
  }
})

Orama's default tokenizer is exported via @orama/orama/components and can be customized as follows:

import { create } from '@orama/orama'
import { tokenizer as defaultTokenizer } from '@orama/orama/components'
 
const movieDB = await create({
  schema: {
    title: 'string',
    director: 'string',
  },
  components: {
    tokenizer: await defaultTokenizer.createTokenizer({ language: 'english', stemming: false })
  }
})

Optionally, you can pass the customization options directly, without using createTokenizer:

import { create } from '@orama/orama'
 
const movieDB = await create({
  schema: {
    title: 'string',
    director: 'string',
  },
  components: {
    tokenizer: {
      language: 'english',
    }
  }
})
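As a quick, hedged usage sketch with made-up documents, the database created above can then be populated and queried with Orama's insert and search functions; the configured tokenizer is applied both when indexing documents and when parsing the search term.

import { insert, search } from '@orama/orama'

await insert(movieDB, { title: 'The Prestige', director: 'Christopher Nolan' })
await insert(movieDB, { title: 'Inception', director: 'Christopher Nolan' })

const results = await search(movieDB, { term: 'prestige' })
console.log(results.hits.map((hit) => hit.document.title))
// Expected output: ["The Prestige"]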

Read more in the official docs.

Conclusion

In conclusion, tokenization is a fundamental aspect of full-text search engines, providing the basis for efficient text analysis and indexing. As we’ve explored, there are various tokenizers available, each with its own strengths and weaknesses. The Whitespace Tokenizer is simple and fast but lacks accuracy; the Pattern Tokenizer is flexible but requires more knowledge of regular expressions; and the Standard Tokenizer is more sophisticated, offering better accuracy and consistency, but at the cost of increased complexity and potentially slower processing times.

When choosing a tokenizer for your search engine, it is crucial to consider the specific requirements and characteristics of the data and languages you are working with, as well as the performance and accuracy trade-offs you are willing to make. In the case of Orama, the Standard Tokenizer is used due to its improved search accuracy, performance, and adaptability to different languages.

By understanding the different types of tokenizers and their implications, you can make more informed decisions about the tokenization strategies to employ in your search engine, ultimately improving the effectiveness and efficiency of your text-based searches.
