ClassicTokenizer Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene.
C#
public class ClassicTokenizer : Azure.Search.Documents.Indexes.Models.LexicalTokenizer, System.ClientModel.Primitives.IJsonModel<Azure.Search.Documents.Indexes.Models.ClassicTokenizer>, System.ClientModel.Primitives.IPersistableModel<Azure.Search.Documents.Indexes.Models.ClassicTokenizer>
F#
type ClassicTokenizer = class
    inherit LexicalTokenizer
    interface IJsonModel<ClassicTokenizer>
    interface IPersistableModel<ClassicTokenizer>
VB
Public Class ClassicTokenizer
Inherits LexicalTokenizer
Implements IJsonModel(Of ClassicTokenizer), IPersistableModel(Of ClassicTokenizer)
- Inheritance: Object → LexicalTokenizer → ClassicTokenizer
- Implements: IJsonModel<ClassicTokenizer>, IPersistableModel<ClassicTokenizer>
Constructors
ClassicTokenizer(String): Initializes a new instance of ClassicTokenizer.
Properties
MaxTokenLength: The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.
Name: The name of the tokenizer. It must only contain letters, digits, spaces, dashes, or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters. (Inherited from LexicalTokenizer)
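The constructor and properties above can be illustrated with a short sketch. This assumes the Azure.Search.Documents package; the tokenizer and index names here are illustrative, not part of the API.

```csharp
using Azure.Search.Documents.Indexes.Models;

// Create a classic tokenizer. The name ("my-classic-tokenizer" is an
// example) must start and end with alphanumeric characters and may
// contain letters, digits, spaces, dashes, or underscores (128 chars max).
var tokenizer = new ClassicTokenizer("my-classic-tokenizer")
{
    // Tokens longer than this are split; 255 is the default, 300 the maximum.
    MaxTokenLength = 255
};

// Register the tokenizer on an index definition so a custom analyzer
// can reference it by name.
var index = new SearchIndex("hotels-sample-index");
index.Tokenizers.Add(tokenizer);
```

A custom analyzer in the same index definition would then refer to the tokenizer by its name rather than by object reference.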
Methods
JsonModelWriteCore(Utf8JsonWriter, ModelReaderWriterOptions)