Elasticsearch Analyzer first 4 and last 4 characters
For Elasticsearch, I want to specify a search analyzer where the first 4 characters and the last 4 characters are tokenized.
For example: supercalifragilisticexpialidocious => ["supe", "ious"]
I tried an ngram tokenizer, as shown below:
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 4,
          "max_gram": 4
        }
      }
    }
  }
}
I am testing the analyzer as follows:
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "supercalifragilisticexpialidocious."
}
I then get back "supe", a whole bunch of n-grams I don't want, and "ous.". My question is: how can I get only the first and the last tokens from the ngram tokenizer specified above?
{
  "tokens": [
    {
      "token": "supe",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "uper",
      "start_offset": 1,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    ...
    {
      "token": "ciou",
      "start_offset": 29,
      "end_offset": 33,
      "type": "word",
      "position": 29
    },
    {
      "token": "ious",
      "start_offset": 30,
      "end_offset": 34,
      "type": "word",
      "position": 30
    },
    {
      "token": "ous.",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 31
    }
  ]
}
One way to achieve this is to leverage the pattern_capture token filter and capture the first 4 and last 4 characters.
First, define the index like this:
PUT my_index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "type": "custom",
            "tokenizer": "keyword",
            "filter": [
              "lowercase",
              "first_last_four"
            ]
          }
        },
        "filter": {
          "first_last_four": {
            "type": "pattern_capture",
            "preserve_original": false,
            "patterns": [
              """(\w{4}).*(\w{4})"""
            ]
          }
        }
      }
    }
  }
}
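As a side note, if you want the analyzer applied to a field rather than only exercised through _analyze, one way (a minimal sketch, assuming a hypothetical text field named my_field and a typeless mapping API, neither of which is part of the original question) is to reference it in the index mapping:
PUT my_index/_mapping
{
  "properties": {
    "my_field": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}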
Then, you can test the new custom analyzer:
POST my_index/_analyze
{
  "text": "supercalifragilisticexpialidocious",
  "analyzer": "my_analyzer"
}
and see that the tokens you expect are there:
{
  "tokens" : [
    {
      "token" : "supe",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "ious",
      "start_offset" : 0,
      "end_offset" : 34,
      "type" : "word",
      "position" : 0
    }
  ]
}
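For completeness, here is a hypothetical search sketch (it assumes the my_field mapping suggested above and is not part of the original answer): since a match query runs the field's analyzer on the query string by default, the full word is reduced to the same two tokens, "supe" and "ious", before the terms are looked up.
GET my_index/_search
{
  "query": {
    "match": {
      "my_field": "supercalifragilisticexpialidocious"
    }
  }
}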