ElasticSearch Query Ignoring Additional Search Terms

I'm using ElasticSearch to build a free-form "search-as-you-type" feature over several structured fields. The main fields people search on are first_name, last_name, and city.

The problem: the two searches David Salazar and David Salazar Denver return the same results; "Denver" appears to be ignored.

I assume the problem is with my query, but I still can't figure out how to change it to get what I'm looking for.

Here is the query:

GET index_name/_search
{
  "query": {
    "multi_match": {
      "fields": [
        "first_name","middle_name", "last_name", "city", "county", "street"],
      "query": "David Salazar Denver",
      "type": "cross_fields",
      "use_dis_max": false
    }
  },
  "size": 10
}

Below are the relevant parts of the index settings and field mappings:

{
  "index": {
    "aliases": {},
    "mappings": {
      "type": {
        "properties": {
          "city": {
            "type": "keyword"
          },
          "county": {
            "type": "keyword"
          },
          "first_name": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          },
          "last_name": {
            "type": "text",
            "analyzer": "autocomplete",
            "search_analyzer": "standard"
          },
          "middle_name": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          },
          "street": {
            "type": "text",
            "analyzer": "autocomplete",
            "search_analyzer": "standard"
          }
        }
      }
    },
    "settings": {
      "index": {
        [...]
        "analysis": {
          "filter": {
            "synonym": {
              "type": "synonym",
              "synonyms": [Long list of nicknames]
            },
            "autocomplete_filter": {
              "type": "edge_ngram",
              "min_gram": "2",
              "max_gram": "15"
            }
          },
          "analyzer": {
            "synonym_autocomplete": {
              "filter": [
                "standard", "lowercase", "synonym", "autocomplete_filter"],
              "type": "custom",
              "tokenizer": "standard"
            },
            "autocomplete": {
              "filter": ["standard","lowercase","autocomplete_filter"],
              "type": "custom",
              "tokenizer": "standard"
            }
          }
        },
        [...]
      }
    }
  }
}

Take a look at the cross_fields query documentation. There is an operator parameter which, if not specified, defaults to OR. That means your current query is searching your field list ["first_name", "middle_name", "last_name", "city", "county", "street"] for any of the terms in "David Salazar Denver". In other words, a document is returned as long as just one word from the search query is found in any of your fields.
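To require every term to match, you can add the operator parameter to the same query. A minimal sketch against the index from the question (only the operator line is new):

GET index_name/_search
{
  "query": {
    "multi_match": {
      "fields": [
        "first_name", "middle_name", "last_name", "city", "county", "street"],
      "query": "David Salazar Denver",
      "type": "cross_fields",
      "operator": "and",
      "use_dis_max": false
    }
  },
  "size": 10
}

Note, however, that cross_fields groups fields by analyzer and applies the operator within each group, so the analyzer mismatch described in the answer below can still cause extra terms to be effectively ignored.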

Val is right: the main problem is that cross_fields only works across fields that use the same analyzer.
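One way to see that grouping is the validate API with explain=true, which returns the rewritten query; fields sharing an analyzer are blended together, while a field analyzed differently (such as the keyword city field) ends up in its own group. A sketch, reusing the query from the question:

GET index_name/_validate/query?explain=true
{
  "query": {
    "multi_match": {
      "fields": [
        "first_name", "middle_name", "last_name", "city", "county", "street"],
      "query": "David Salazar Denver",
      "type": "cross_fields"
    }
  }
}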

So I created a new index with the definition below and then used the reindex API to copy the data into the new index (a sketch of that call follows the mapping):

{
  "index": {
    "aliases": {},
    "mappings": {
      "type": {
        "properties": {
          "city": {
            "type": "keyword"
          },
          "county": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          },
          "first_name": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          },
          "last_name": {
            "type": "text",
            "analyzer": "autocomplete",
            "search_analyzer": "standard"
          },
          "middle_name": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          },
          "street": {
            "type": "text",
            "analyzer": "synonym_autocomplete",
            "search_analyzer": "standard"
          }
        }
      }
    },
    "settings": {
      "index": {
        [...]
        "analysis": {
          "filter": {
            "synonym": {
              "type": "synonym",
              "synonyms": [Long list of nicknames]
            },
            "autocomplete_filter": {
              "type": "edge_ngram",
              "min_gram": "2",
              "max_gram": "15"
            }
          },
          "analyzer": {
            "synonym_autocomplete": {
              "filter": [
                "standard", "lowercase", "synonym", "autocomplete_filter"],
              "type": "custom",
              "tokenizer": "standard"
            },
            "autocomplete": {
              "filter": ["standard","lowercase","autocomplete_filter"],
              "type": "custom",
              "tokenizer": "standard"
            }
          }
        },
        [...]
      }
    }
  }
}
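For completeness, the reindex step is a single call to the _reindex API. A minimal sketch, where old_index and new_index stand in for the actual index names:

POST _reindex
{
  "source": {
    "index": "old_index"
  },
  "dest": {
    "index": "new_index"
  }
}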