Elasticsearch N-gram not returning expected result
I'm trying to work out the scoring in this trivial example. I expect the document containing brenda eaton to come back first, but instead I get brenda fassie as the top result.
PUT ngram
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  },
  "mappings": {
    "tweet": {
      "properties": {
        "text": {
          "type": "text",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}
PUT ngram/tweet/1
{
  "text": "searched the blue sky during the summer"
}
PUT ngram/tweet/2
{
  "text": "sdssded the trans hex during the sssss"
}
PUT ngram/tweet/3
{
  "text": "searched the brenda eaton during the summer"
}
PUT ngram/tweet/4
{
  "text": "sdssded the brenda fassie during the sssss"
}
GET ngram/_search
{
  "query": {
    "match": {
      "text": {
        "query": "brenda eaton",
        "max_expansions": 10
      }
    }
  }
}
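As a quick sanity check on what the query is actually matched against, you can run the analyzer by hand with the _analyze API (assuming the ngram index above has been created):

GET ngram/_analyze
{
  "analyzer": "my_analyzer",
  "text": "brenda eaton"
}

With min_gram and max_gram both set to 3, this should return the trigrams bre, ren, end, nda from "brenda" and eat, ato, ton from "eaton". Documents 3 and 4 both match the "brenda" trigrams, so the ranking comes down to scoring rather than matching.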
In the early stages of populating an index, the relevance of documents can depend heavily on how they are distributed across shards. Try creating the index with one primary shard and one replica, and you will get the result you expect.
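As a minimal sketch of that suggestion, the index can be recreated with a single primary shard by adding number_of_shards to the settings (delete the old index first; the analysis and mappings sections are unchanged from the question):

DELETE ngram

PUT ngram
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 3,
          "token_chars": ["letter", "digit"]
        }
      }
    }
  },
  "mappings": {
    "tweet": {
      "properties": {
        "text": {
          "type": "text",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}

With a single primary shard, all four documents contribute to the same term statistics, so the inverse document frequencies used for scoring are computed over the whole data set instead of per shard.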
You can find a good explanation of this phenomenon in the following chapter of the Elasticsearch guide: Relevance is broken!
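If you would rather keep the default shard count, an alternative worth knowing about is the dfs_query_then_fetch search type, which first gathers term statistics from all shards and then scores with the combined statistics, smoothing out exactly this small-index skew (the query body is the same as in the question):

GET ngram/_search?search_type=dfs_query_then_fetch
{
  "query": {
    "match": {
      "text": {
        "query": "brenda eaton",
        "max_expansions": 10
      }
    }
  }
}

Note that dfs_query_then_fetch adds a round trip per search, so it is mainly useful for testing on small data sets; on a well-populated index the per-shard statistics even out on their own.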