NGramTokenizer Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Tokenizes the input into n-grams of the given size(s). This tokenizer is implemented using Apache Lucene.
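As an illustration of the behavior (a minimal Python sketch of n-gram tokenization in general, not the Lucene implementation the service uses), a tokenizer configured with sizes MinGram..MaxGram emits every contiguous character substring whose length falls in that range:

```python
def ngram_tokenize(text: str, min_gram: int = 1, max_gram: int = 2) -> list[str]:
    """Emit all contiguous substrings of text whose length is in [min_gram, max_gram].

    Illustrative sketch only; the actual tokenizer is implemented by
    Apache Lucene and may order or filter tokens differently.
    """
    if not 1 <= min_gram <= max_gram:
        raise ValueError("min_gram must be >= 1 and <= max_gram")
    grams = []
    for size in range(min_gram, max_gram + 1):
        for start in range(len(text) - size + 1):
            grams.append(text[start:start + size])
    return grams

print(ngram_tokenize("quick"))
# all 1-grams, then all 2-grams: q, u, i, c, k, qu, ui, ic, ck
```

The defaults here mirror the documented defaults (MinGram = 1, MaxGram = 2).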
C#:
public class NGramTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer, System.ClientModel.Primitives.IJsonModel<Azure.Search.Documents.Indexes.Models.NGramTokenizer>, System.ClientModel.Primitives.IPersistableModel<Azure.Search.Documents.Indexes.Models.NGramTokenizer>

F#:
type NGramTokenizer = class
    inherit LexicalTokenizer
    interface IJsonModel<NGramTokenizer>
    interface IPersistableModel<NGramTokenizer>

VB:
Public Class NGramTokenizer
Inherits LexicalTokenizer
Implements IJsonModel(Of NGramTokenizer), IPersistableModel(Of NGramTokenizer)
- Inheritance: Object → LexicalTokenizer → NGramTokenizer
- Implements: IJsonModel<NGramTokenizer>, IPersistableModel<NGramTokenizer>
Constructors
| Constructor | Description |
| --- | --- |
| NGramTokenizer(String) | Initializes a new instance of NGramTokenizer. |
Properties
| Property | Description |
| --- | --- |
| MaxGram | The maximum n-gram length. Default is 2. Maximum is 300. |
| MinGram | The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of MaxGram. |
| Name | The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters. (Inherited from LexicalTokenizer) |
| TokenChars | Character classes to keep in the tokens. |
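To show how TokenChars interacts with the gram sizes (a hedged Python sketch based on the property description above, not the Lucene source): characters outside the kept classes are assumed to act as token boundaries, and each resulting token is then n-grammed independently:

```python
import re

def ngram_with_token_chars(text: str, min_gram: int = 1, max_gram: int = 2,
                           keep: str = "[a-zA-Z]") -> list[str]:
    """Sketch of TokenChars behavior: characters not matching `keep` are
    treated as separators (an assumption drawn from the description
    "Character classes to keep in the tokens"); each surviving token is
    then expanded into n-grams."""
    tokens = re.findall(f"{keep}+", text)
    grams = []
    for token in tokens:
        for size in range(min_gram, max_gram + 1):
            for start in range(len(token) - size + 1):
                grams.append(token[start:start + size])
    return grams

print(ngram_with_token_chars("ab1 cd"))
# "1" and " " separate tokens "ab" and "cd": a, b, ab, c, d, cd
```

With letters as the only kept class, the digit and the space both split the input, so no gram ever spans a boundary.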
Methods
| Method | Description |
| --- | --- |
| JsonModelWriteCore(Utf8JsonWriter, ModelReaderWriterOptions) | Writes the model to the provided Utf8JsonWriter. |