The documentation says that it will "support configuring custom analyzers, including analyzers that are not included with Lucene itself." However, it doesn't explain how to configure a custom tokenizer. Is there an example of how to do that?
If I have my own tokenizer interface in Python:

```python
def get_tokens(text):
    ...
    return tokens
```
How can I configure it to be used by the full-text indexing?
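For concreteness, here is a minimal sketch of the kind of tokenizer I mean (the implementation and token rules here are just my own illustration, not anything from the library):

```python
import re

def get_tokens(text):
    # A simple lowercasing word tokenizer: split on runs of
    # word characters and normalize each token to lowercase.
    return [tok.lower() for tok in re.findall(r"\w+", text)]

tokens = get_tokens("Hello, Full-Text World!")
print(tokens)  # ['hello', 'full', 'text', 'world']
```

Ideally I'd like to register a callable like this so the indexer uses it instead of (or alongside) the built-in analyzers.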