Elasticsearch02

一. Analysis and Analyzers

Analysis is the process of converting full text into a stream of tokens, also called tokenization; it is a concept, not a component. Analysis is carried out by an analyzer: you can use one of Elasticsearch's built-in analyzers or define custom ones. The terms are converted when a document is written, and the same analyzer must also be applied to the query string at search time.

An analyzer is made up of three parts. For example, given the text

Hello a World, the world is beautiful

1. Character Filter: strips markup such as HTML tags from the text.
2. Tokenizer: splits the text into tokens according to rules; English is typically split on whitespace.
3. Token Filter: removes stop words (a, an, the, is, ...) and lowercases the tokens.
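The three stages can be combined into a custom analyzer declared in an index's settings. A minimal sketch (the index name analysis_demo and analyzer name my_analyzer are made up for illustration):

PUT analysis_demo
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase", "stop"]
        }
      }
    }
  }
}

GET analysis_demo/_analyze
{
  "analyzer": "my_analyzer",
  "text": "Hello a <b>World</b>, the world is beautiful"
}

The html_strip character filter removes the <b> tags, the standard tokenizer splits on word boundaries, and the lowercase and stop token filters finish the job.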

1.1 Built-in analyzers

Analyzer             Behavior
Standard Analyzer    Default; splits on word boundaries, lowercases
Simple Analyzer      Splits on non-letter characters (symbols are dropped), lowercases
Stop Analyzer        Lowercases and removes stop words (the, a, this, ...)
Whitespace Analyzer  Splits on whitespace, does not lowercase
Keyword Analyzer     No tokenization; the whole input becomes one token
Pattern Analyzer     Splits by regular expression, default \W+ (non-word characters)

1.2 Built-in analyzer examples

A. Standard Analyzer

GET _analyze
{
  "analyzer": "standard",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

B. Simple Analyzer

GET _analyze
{
  "analyzer": "simple",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

C. Stop Analyzer

GET _analyze
{
  "analyzer": "stop",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

D. Whitespace Analyzer

GET _analyze
{
  "analyzer": "whitespace",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

E. Keyword Analyzer

GET _analyze
{
  "analyzer": "keyword",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

F. Pattern Analyzer

GET _analyze
{
  "analyzer": "pattern",
  "text": "2 Running quick brown-foxes leap over lazy dog in the summer evening"
}

1.3 Chinese tokenization

Chinese word segmentation is a major challenge for every search engine. A Chinese sentence has to be split into individual words, and the same sentence can be read differently depending on context, as in the following example:

这个苹果,不大好吃 / 这个苹果,不大,好吃 ("this apple is not very tasty" vs. "this apple is small, and tasty")

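The built-in analyzers are of little help here: the standard analyzer simply splits Chinese text into single characters, which is why a dedicated Chinese analyzer is needed.

GET _analyze
{
  "analyzer": "standard",
  "text": "这个苹果,不大好吃"
}
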
1.3.1 The IK analyzer

The IK analyzer supports custom dictionaries and hot-reloading of the dictionary. Project page: https://github.com/medcl/elasticsearch-analysis-ik

elasticsearch-plugin.bat install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.3.0/elasticsearch-analysis-ik-6.3.0.zip

Installation steps:

  1. Download the zip package from https://github.com/medcl/elasticsearch-analysis-ik/releases
  2. Create a directory named analysis-ik under Elasticsearch's plugins directory and unzip the package into it
  3. From a command prompt, change to Elasticsearch's bin directory and run elasticsearch-plugin.bat list to confirm the plugin is listed, then restart elasticsearch.bat

The IK plugin provides the following analyzers:

  • ik_smart
  • ik_max_word

GET _analyze
{
  "analyzer": "ik_smart",
  "text": "特朗普5日接受采访时表示取消佛罗里达州的议程,他可能在白宫接受共和党总统候选人提名并发表演讲。"
}

GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "特朗普5日接受采访时表示取消佛罗里达州的议程,他可能在白宫接受共和党总统候选人提名并发表演讲。"
}

1.3.2 HanLP

Installation steps:

  1. Download the zip package from https://pan.baidu.com/s/1K4aSgHBpbfF3ET6p0YWgpg (password: vmvl)
  2. Create a directory named analysis-hanlp under Elasticsearch's plugins directory and unzip elasticsearch-analysis-hanlp-7.4.2.zip into it
  3. Download the dictionary from https://github.com/hankcs/HanLP/releases
  4. Delete the data directory under analysis-hanlp, then unzip the dictionary data-for-1.7.5.zip into the analysis-hanlp directory
  5. Copy the two files hanlp.properties and hanlp-remote.xml from the config folder unzipped in step 2 into an analysis-hanlp folder under Elasticsearch's config directory (the analysis-hanlp folder has to be created manually)
  6. Copy the six files shipped in the plugin's hanlp folder into $ES_HOME\plugins\analysis-hanlp\data\dictionary\custom
  7. From a command prompt, change to Elasticsearch's bin directory and run elasticsearch-plugin.bat list to confirm the plugin is listed, then restart elasticsearch.bat

HanLP provides the following analyzers:

  • hanlp, default tokenization
  • hanlp_standard, standard tokenization
  • hanlp_index, index tokenization
  • hanlp_nlp, NLP tokenization
  • hanlp_n_short, N-shortest-path tokenization
  • hanlp_dijkstra, shortest-path (Dijkstra) tokenization
  • hanlp_speed, high-speed dictionary tokenization

GET _analyze
{
  "analyzer": "hanlp",
  "text": "特朗普5日接受采访时表示取消佛罗里达州的议程,他可能在白宫接受共和党总统候选人提名并发表演讲。"
}

1.3.3 The pinyin analyzer

Installation steps:

  1. Download the zip package from https://github.com/medcl/elasticsearch-analysis-pinyin/releases
  2. Create a directory named analysis-pinyin under Elasticsearch's plugins directory and unzip elasticsearch-analysis-pinyin-7.4.2.zip into it
  3. From a command prompt, change to Elasticsearch's bin directory and run elasticsearch-plugin.bat list to confirm the plugin is listed, then restart elasticsearch.bat

1.4 Chinese tokenization demo

ik_smart

GET _analyze
{
  "analyzer": "ik_smart",
  "text": ["剑桥分析公司多位高管对卧底记者说,他们确保了唐纳德·特朗普在总统大选中获胜"]
}

hanlp

GET _analyze
{
  "analyzer": "hanlp",
  "text": ["剑桥分析公司多位高管对卧底记者说,他们确保了唐纳德·特朗普在总统大选中获胜"]
}

hanlp_standard

GET _analyze
{
  "analyzer": "hanlp_standard",
  "text": ["剑桥分析公司多位高管对卧底记者说,他们确保了唐纳德·特朗普在总统大选中获胜"]
}

hanlp_speed

GET _analyze
{
  "analyzer": "hanlp_speed",
  "text": ["剑桥分析公司多位高管对卧底记者说,他们确保了唐纳德·特朗普在总统大选中获胜"]
}

1.5 Using analyzers in practice

With so many analyzers listed above, how are they actually used?

1.5.1 Setting the mapping

To use an analyzer, first specify which analyzer should be applied to which field, as shown below:

# delete the index first
DELETE user

# specify a custom analyzer for one field
PUT user
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "hanlp_index"
      }
    }
  }
}
1.5.2 Inserting data
POST user/_bulk
{"index":{}}
{"content":"如不能登录,请在百端登录百度首页,点击【登录遇到问题】,进行找回密码操作"}
{"index":{}}
{"content":"网盘客户端访问隐藏空间需要输入密码方可进入。"}
{"index":{}}
{"content":"剑桥的网盘不好用"}
1.5.3 Querying
GET user/_search
{
  "query": {
    "match": {
      "content": "密码"
    }
  }
}

1.6 The pinyin analyzer

Queries sometimes need to be made by pinyin. The pinyin plugin was introduced in the Chinese-tokenization section; how is it used in practice?

1.6.1 Setting the settings
PUT /medcl 
{
    "settings" : {
        "analysis" : {
            "analyzer" : {
                "pinyin_analyzer" : {
                    "tokenizer" : "my_pinyin"
                 }
            },
            "tokenizer" : {
                "my_pinyin" : {
                    "type" : "pinyin",
                    "keep_separate_first_letter" : false,
                    "keep_full_pinyin" : true,
                    "keep_original" : true,
                    "limit_first_letter_length" : 16,
                    "lowercase" : true,
                    "remove_duplicated_term" : true
                }
            }
        }
    }
}

As shown above, we defined an analyzer named pinyin_analyzer on top of the existing pinyin tokenizer. The available parameters are documented at https://github.com/medcl/elasticsearch-analysis-pinyin
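The new analyzer can be checked directly with the _analyze API before any data is indexed (the sample text is arbitrary):

GET medcl/_analyze
{
  "analyzer": "pinyin_analyzer",
  "text": "刘德华"
}

With the settings above it should emit the full pinyin of each character (liu, de, hua), the joined first letters (ldh), and, because keep_original is true, the original text itself.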

1.6.2 Setting the mapping
PUT medcl/_mapping
{
  "properties": {
    "name": {
      "type": "keyword",
      "fields": {
        "pinyin": {
          "type": "text",
          "analyzer": "pinyin_analyzer",
          "boost": 10
        }
      }
    }
  }
}
1.6.3 Inserting data
POST medcl/_bulk
{"index":{}}
{"name": "马云"}
{"index":{}}
{"name": "马化腾"}
{"index":{}}
{"name": "李彦宏"}
{"index":{}}
{"name": "张朝阳"}
{"index":{}}
{"name": "刘强东"}
1.6.4 Querying
GET medcl/_search
{
  "query": {
    "match": {
      "name.pinyin": "zcy"
    }
  }
}

1.7 Mixed Chinese and pinyin search

1.7.1 Setting the settings
PUT product
{
  "settings": {
    "analysis": {
      "analyzer": {
        "hanlp_standard_pinyin":{
          "type": "custom",
          "tokenizer": "hanlp_standard",
          "filter": ["my_pinyin"]
        }
      },
      "filter": {
        "my_pinyin": {
          "type" : "pinyin",
          "keep_separate_first_letter" : false,
          "keep_full_pinyin" : true,
          "keep_original" : true,
          "limit_first_letter_length" : 16,
          "lowercase" : true,
          "remove_duplicated_term" : true
        }
      }
    }
  }
}
1.7.2 Setting the mapping
PUT product/_mapping
{
  "properties": {
    "content": {
      "type": "text",
      "analyzer": "hanlp_standard_pinyin"
    }
  }
}
1.7.3 Inserting data
POST product/_bulk
{"index":{}}
{"content":"如不能登录,请在百端登录百度首页,点击【登录遇到问题】,进行找回密码操作"}
{"index":{}}
{"content":"网盘客户端访问隐藏空间需要输入密码方可进入。"}
{"index":{}}
{"content":"剑桥的网盘不好用"}
1.7.4 Querying
GET product/_search
{
  "query": {
    "match": {
      "content": "wangpan"
    }
  },
  "highlight": {
    "pre_tags": "<em>",
    "post_tags": "</em>",
    "fields": {
      "content": {}
    }
  }
}
Parameter                                 Meaning
keep_first_letter                         true: join the pinyin first letters of all characters: 李小璐 -> lxl
keep_full_pinyin                          true: emit the full pinyin of each character: 李小璐 -> li, xiao, lu
keep_none_chinese                         true: keep non-Chinese text; for "java程序员", java appears as its own token
keep_separate_first_letter                true: emit each character's pinyin first letter as a separate token: 李小璐 -> l, x, l
keep_joined_full_pinyin                   true: join the full pinyin of all characters: 李小璐 -> lixiaolu
keep_none_chinese_in_joined_full_pinyin   true: join non-Chinese text together with the joined pinyin
none_chinese_pinyin_tokenize              true: split non-Chinese text into possible pinyin syllables: wvwoxvlu -> w, v, wo, x, v, lu
keep_original                             true: keep the original input
remove_duplicated_term                    true: remove duplicated terms
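The combined analyzer from 1.7.1 can be inspected the same way; the sample text below is arbitrary:

GET product/_analyze
{
  "analyzer": "hanlp_standard_pinyin",
  "text": "网盘不好用"
}

Each hanlp token should be accompanied by its pinyin forms, which is what makes the mixed Chinese/pinyin match in 1.7.4 work.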

二. Integrating Spring Boot with Elasticsearch

2.1 Adding the dependency

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>

2.2 Obtaining an ElasticsearchTemplate

package com.qf.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.config.AbstractElasticsearchConfiguration;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;

@Configuration
public class ElasticSearchConfig extends AbstractElasticsearchConfiguration {


    @Bean
    public RestHighLevelClient elasticsearchClient() {

        // approach shown in the Spring Data docs
//        final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
//            .connectedTo("localhost:9200")
//            .build();
//        return RestClients.create(clientConfiguration).rest();

        // approach shown in the Elasticsearch docs
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("localhost", 9200, "http")));

        return client;

    }

    @Bean
    public ElasticsearchRestTemplate elasticsearchRestTemplate() {
        return new ElasticsearchRestTemplate(elasticsearchClient());
    }
}

2.3 Index operations

package com.qf;

import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class SpringbootEs04ApplicationTests {

    @Test
    void contextLoads() {
    }
    
    @Autowired
    private RestHighLevelClient elasticsearchClient;

    // create an index
    @Test
    public void testCreateIndexRequest()throws Exception{
        CreateIndexRequest createIndexRequest = new CreateIndexRequest("my_index");

        CreateIndexResponse createIndexResponse =
                elasticsearchClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);

        System.out.println(createIndexResponse);
    }

    // check whether an index exists
    @Test
    public void testGetIndexRequest()throws Exception{
        GetIndexRequest getIndexRequest = new GetIndexRequest("my_index");

        boolean exists =
                elasticsearchClient.indices().exists(getIndexRequest, RequestOptions.DEFAULT);

        System.out.println(exists);
    }

    // delete an index
    @Test
    public void testDeleteIndexRequest()throws Exception{
        DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("my_index");

        AcknowledgedResponse acknowledgedResponse =
                elasticsearchClient.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);

        System.out.println(acknowledgedResponse.isAcknowledged());
    }

}

2.4 Document operations

Create the entity class:

package com.qf.pojo;

import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonIgnore;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.util.Date;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Person {

    @JsonIgnore // ignore this field when serializing to JSON
    private Integer id;

    private String name;
    private Integer age;

    @JsonFormat(pattern = "yyyy-MM-dd") // format this field
    private Date birthday;

}

Operate on documents:

package com.qf;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.qf.pojo.Person;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import java.io.IOException;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

@SpringBootTest
class SpringbootEs04ApplicationTests {

    @Test
    void contextLoads() {
    }

    //---------------------- document operations -------------------------

    @Autowired
    private RestHighLevelClient elasticsearchClient;

    ObjectMapper mapper = new ObjectMapper();
    String index = "person";

    // create an index
    @Test
    public void createIndex() throws IOException {
        //1. prepare the index settings
        Settings.Builder settings = Settings.builder()
                .put("number_of_shards", 3)
                .put("number_of_replicas", 1);

        //2. prepare the index mappings
        XContentBuilder mappings = JsonXContent.contentBuilder()
                .startObject()
                .startObject("properties")
                .startObject("name")
                .field("type","text")
                .endObject()
                .startObject("age")
                .field("type","integer")
                .endObject()
                .startObject("birthday")
                .field("type","date")
                .field("format","yyyy-MM-dd")
                .endObject()
                .endObject()
                .endObject();

        //3. wrap the settings and mappings in a request object
        CreateIndexRequest request = new CreateIndexRequest(index)
                .settings(settings)
                .mapping(mappings);

        //4. connect to ES via the client and create the index
        CreateIndexResponse createIndexResponse = elasticsearchClient.indices().create(request, RequestOptions.DEFAULT);

        //5. print the response
        System.out.println(createIndexResponse);
    }

    
    // add a document
    @Test
    public void createDoc() throws IOException {
        //1. prepare a JSON document
        Person person = new Person(1,"张三",23,new Date());
        String json = mapper.writeValueAsString(person);

        //2. prepare the request object (no id is specified here, so ES generates one)
        IndexRequest request = new IndexRequest(index);
        request.source(json, XContentType.JSON);

        //3. execute the index request with the client
        IndexResponse resp = elasticsearchClient.index(request, RequestOptions.DEFAULT);

        //4. print the response
        System.out.println(resp.toString());
    }

    
    // update a document
    @Test
    public void updateDoc() throws IOException {
        //1. build a Map with the fields to change
        Map<String,Object> doc = new HashMap<>();
        doc.put("name","李四");
        String docId = "4omnl3YBRfk9XpJKTqZR"; // look up the _id via GET person/_search

        //2. create the request object and attach the data
        UpdateRequest request = new UpdateRequest(index,docId);
        request.doc(doc);

        //3. execute with the client
        UpdateResponse updateResponse = elasticsearchClient.update(request, RequestOptions.DEFAULT);

        //4. print the response
        System.out.println(updateResponse);
    }


    // delete a document
    @Test
    public void deleteDoc() throws IOException {

        String docId = "4omnl3YBRfk9XpJKTqZR"; // look up the _id via GET person/_search

        //1. build the request object
        DeleteRequest request = new DeleteRequest(index,docId);

        //2. execute with the client
        DeleteResponse deleteResponse = elasticsearchClient.delete(request, RequestOptions.DEFAULT);

        //3. print the result
        System.out.println(deleteResponse);
    }

    
    // bulk insert
    @Test
    public void bulkCreateDoc() throws IOException {
        //1. prepare several JSON documents
        Person p1 = new Person(1,"张三",23,new Date());
        Person p2 = new Person(2,"李四",24,new Date());
        Person p3 = new Person(3,"王五",25,new Date());

        String json1 = mapper.writeValueAsString(p1);
        String json2 = mapper.writeValueAsString(p2);
        String json3 = mapper.writeValueAsString(p3);

        //2. create a bulk request and add the documents
        BulkRequest request = new BulkRequest();
        request.add(new IndexRequest(index).source(json1,XContentType.JSON));
        request.add(new IndexRequest(index).source(json2,XContentType.JSON));
        request.add(new IndexRequest(index).source(json3,XContentType.JSON));

        //3. execute with the client
        BulkResponse bulkResponse = elasticsearchClient.bulk(request, RequestOptions.DEFAULT);

        //4. print the result
        System.out.println(bulkResponse);
    }

    
    // bulk delete
    @Test
    public void bulkDeleteDoc() throws IOException {

        String docId_1 = "44m7l3YBRfk9XpJKdKZn"; // look up the _id via GET person/_search
        String docId_2 = "5Im7l3YBRfk9XpJKdKZn"; // look up the _id via GET person/_search
        String docId_3 = "5Ym7l3YBRfk9XpJKdKZn"; // look up the _id via GET person/_search

        //1. build the request object
        BulkRequest request = new BulkRequest();
        request.add(new DeleteRequest(index,docId_1));
        request.add(new DeleteRequest(index,docId_2));
        request.add(new DeleteRequest(index,docId_3));

        //2. execute with the client
        BulkResponse bulkResponse = elasticsearchClient.bulk(request, RequestOptions.DEFAULT);

        //3. print the result
        System.out.println(bulkResponse);
    }
}

2.5 Query operations

Create an index named sms-logs-index with the data structure specified below.

Create the entity class:

package com.qf.pojo;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.util.Date;

@Data
@NoArgsConstructor
@AllArgsConstructor
public class SmsLogs {
    
    private String id;          // unique ID
    private Date createDate;    // creation time
    private Date sendDate;      // send time
    private String longCode;    // sender long code
    private String mobile;      // recipient mobile number
    private String corpName;    // sender company name
    private String smsContent;  // SMS content
    private Integer state;      // delivery state: 0 success, 1 failure
    private Integer operatorId; // carrier id: 1 China Mobile, 2 China Unicom, 3 China Telecom
    private String province;    // province
    private String ipAddr;      // sending server IP address
    private Integer replyTotal; // delivery-report latency (seconds)
    private Integer fee;        // fee
}

Create the index and add test data:

package com.qf;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.qf.pojo.Person;
import com.qf.pojo.SmsLogs;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import java.io.IOException;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

@SpringBootTest
class SpringbootEs04ApplicationTests {

    @Test
    void contextLoads() {
    }

    //---------------------- query operations -------------------------

    @Autowired
    private RestHighLevelClient elasticsearchClient;

    ObjectMapper mapper = new ObjectMapper();
    String index = "sms-logs-index";

    // create the index
    @Test
    public void createSmsLogsIndex() throws IOException {

        //1. settings
        Settings.Builder settings = Settings.builder()
                .put("number_of_shards", 3)
                .put("number_of_replicas", 1);

        //2. mapping.
        XContentBuilder mapping = JsonXContent.contentBuilder()
                .startObject()
                .startObject("properties")
                .startObject("createDate")
                .field("type", "date")
                .endObject()
                .startObject("sendDate")
                .field("type", "date")
                .endObject()
                .startObject("longCode")
                .field("type", "keyword")
                .endObject()
                .startObject("mobile")
                .field("type", "keyword")
                .endObject()
                .startObject("corpName")
                .field("type", "keyword")
                .endObject()
                .startObject("smsContent")
                .field("type", "text")
                .field("analyzer", "ik_max_word")
                .endObject()
                .startObject("state")
                .field("type", "integer")
                .endObject()
                .startObject("operatorId")
                .field("type", "integer")
                .endObject()
                .startObject("province")
                .field("type", "keyword")
                .endObject()
                .startObject("ipAddr")
                .field("type", "ip")
                .endObject()
                .startObject("replyTotal")
                .field("type", "integer")
                .endObject()
                .startObject("fee")
                .field("type", "long")
                .endObject()
                .endObject()
                .endObject();

        //3. create the index
        CreateIndexRequest request = new CreateIndexRequest(index);
        request.settings(settings);
        request.mapping( mapping);
        elasticsearchClient.indices().create(request, RequestOptions.DEFAULT);
        System.out.println("OK!!");
    }

    // add test data
    @Test
    public void addTestData() throws IOException {
        BulkRequest request = new BulkRequest();

        SmsLogs smsLogs1 = new SmsLogs();
        smsLogs1.setId("1");
        smsLogs1.setMobile("13100000000");
        smsLogs1.setCorpName("盒马鲜生");
        smsLogs1.setCreateDate(new Date());
        smsLogs1.setSendDate(new Date());
        smsLogs1.setIpAddr("10.126.2.9");
        smsLogs1.setLongCode("10660000988");
        smsLogs1.setReplyTotal(15);
        smsLogs1.setState(0);
        smsLogs1.setSmsContent("【盒马】您尾号12345678的订单已开始配送,请在您指定的时间收货不要走开 哦~配送员:" + "刘三,电话:13800000000");
        smsLogs1.setProvince("北京");
        smsLogs1.setOperatorId(2);
        smsLogs1.setFee(5);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs1), XContentType.JSON));

        smsLogs1.setMobile("13100000001");
        smsLogs1.setProvince("上海");
        smsLogs1.setSmsContent("【盒马】您尾号7775678的订单已开始配送,请在您指定的时间收货不要走开 哦~配送员:" + "王五,电话:13800000001");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs1), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------

        SmsLogs smsLogs2 = new SmsLogs();
        smsLogs2.setId("2");
        smsLogs2.setMobile("18000000000");
        smsLogs2.setCorpName("滴滴打车");
        smsLogs2.setCreateDate(new Date());
        smsLogs2.setSendDate(new Date());
        smsLogs2.setIpAddr("10.126.2.8");
        smsLogs2.setLongCode("10660000988");
        smsLogs2.setReplyTotal(50);
        smsLogs2.setState(1);
        smsLogs2.setSmsContent("【滴滴单车平台】专属限时福利!青桔/小蓝月卡立享5折,特惠畅骑30天。" + "戳 https://xxxxxx退订TD");
        smsLogs2.setProvince("上海");
        smsLogs2.setOperatorId(3);
        smsLogs2.setFee(7);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs2), XContentType.JSON));

        smsLogs2.setMobile("18000000001");
        smsLogs2.setProvince("武汉");
        smsLogs2.setSmsContent("【滴滴单车平台】专属限时福利!青桔/小蓝月卡立享5折,特惠畅骑30天。" + "戳 https://xxxxxx退订TD");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs2), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------

        SmsLogs smsLogs3 = new SmsLogs();
        smsLogs3.setId("3");
        smsLogs3.setMobile("13900000000");
        smsLogs3.setCorpName("招商银行");
        smsLogs3.setCreateDate(new Date());
        smsLogs3.setSendDate(new Date());
        smsLogs3.setIpAddr("10.126.2.8");
        smsLogs3.setLongCode("10690000988");
        smsLogs3.setReplyTotal(50);
        smsLogs3.setState(0);
        smsLogs3.setSmsContent("【招商银行】尊贵的李四先生,恭喜您获得华为P30 Pro抽奖资格,还可领100 元打" + "车红包,仅限1天");
        smsLogs3.setProvince("上海");
        smsLogs3.setOperatorId(1);
        smsLogs3.setFee(8);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs3), XContentType.JSON));

        smsLogs3.setMobile("13990000001");
        smsLogs3.setProvince("武汉");
        smsLogs3.setSmsContent("【招商银行】尊贵的李四先生,恭喜您获得华为P30 Pro抽奖资格,还可领100 元打" + "车红包,仅限1天");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs3), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------

        SmsLogs smsLogs4 = new SmsLogs();
        smsLogs4.setId("4");
        smsLogs4.setMobile("13700000000");
        smsLogs4.setCorpName("中国平安保险有限公司");
        smsLogs4.setCreateDate(new Date());
        smsLogs4.setSendDate(new Date());
        smsLogs4.setIpAddr("10.126.2.8");
        smsLogs4.setLongCode("10690000998");
        smsLogs4.setReplyTotal(18);
        smsLogs4.setState(0);
        smsLogs4.setSmsContent("【中国平安】奋斗的时代,更需要健康的身体。中国平安为您提供多重健康保 障,在奋斗之路上为您保驾护航。退订请回复TD");
        smsLogs4.setProvince("武汉");
        smsLogs4.setOperatorId(1);
        smsLogs4.setFee(5);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs4), XContentType.JSON));

        smsLogs4.setMobile("13700000001");
        smsLogs4.setProvince("武汉");
        smsLogs4.setSmsContent("【中国平安】奋斗的时代,更需要健康的身体。中国平安为您提供多重健康保 障,在奋斗之路上为您保驾护航。退订请回复TD");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs4), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------

        SmsLogs smsLogs5 = new SmsLogs();
        smsLogs5.setId("5");
        smsLogs5.setMobile("13600000000");
        smsLogs5.setCorpName("中国移动");
        smsLogs5.setCreateDate(new Date());
        smsLogs5.setSendDate(new Date());
        smsLogs5.setIpAddr("10.126.2.8");
        smsLogs5.setLongCode("10650000998");
        smsLogs5.setReplyTotal(60);
        smsLogs5.setState(0);
        smsLogs5.setSmsContent("【北京移动】尊敬的客户137****0000,5月话费账单已送达您的139邮箱," + "点击查看账单详情 http://y.10086.cn/; " + " 回Q关闭通知,关注“中国移动139邮箱”微信随时查账单【中国移动 139邮箱】");
        smsLogs5.setProvince("武汉");
        smsLogs5.setOperatorId(1);
        smsLogs5.setFee(4);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs5), XContentType.JSON));

        smsLogs5.setMobile("13600000001");
        smsLogs5.setProvince("山西");
        smsLogs5.setSmsContent("【北京移动】尊敬的客户137****1234,8月话费账单已送达您的126邮箱,\" + \"点击查看账单详情 http://y.10086.cn/; \" + \" 回Q关闭通知,关注“中国移动126邮箱”微信随时查账单【中国移动 126邮箱】");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs5), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------


        SmsLogs smsLogs6 = new SmsLogs();
        smsLogs6.setId("6");
        smsLogs6.setMobile("13500000000");
        smsLogs6.setCorpName("途虎养车");
        smsLogs6.setCreateDate(new Date());
        smsLogs6.setSendDate(new Date());
        smsLogs6.setIpAddr("10.126.2.9");
        smsLogs6.setLongCode("10690000988");
        smsLogs6.setReplyTotal(10);
        smsLogs6.setState(0);
        smsLogs6.setSmsContent("【途虎养车】亲爱的张三先生/女士,您在途虎购买的货品(单号TH123456)已 到指定安装店多日," + "现需与您确认订单的安装情况,请点击链接按实际情况选择(此链接有效期为72H)。您也可以登录途 虎APP进入" + "“我的-待安装订单”进行预约安装。若您在服务过程中有任何疑问,请致电400-111-8868向途虎咨 询。");
        smsLogs6.setProvince("北京");
        smsLogs6.setOperatorId(1);
        smsLogs6.setFee(3);
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs6), XContentType.JSON));

        smsLogs6.setMobile("13500000001");
        smsLogs6.setProvince("上海");
        smsLogs6.setSmsContent("【途虎养车】亲爱的刘红先生/女士,您在途虎购买的货品(单号TH1234526)已 到指定安装店多日," + "现需与您确认订单的安装情况,请点击链接按实际情况选择(此链接有效期为72H)。您也可以登录途 虎APP进入" + "“我的-待安装订单”进行预约安装。若您在服务过程中有任何疑问,请致电400-111-8868向途虎咨 询。");
        request.add(new IndexRequest(index).source(mapper.writeValueAsString(smsLogs6), XContentType.JSON));
        // -------------------------------------------------------------------------------------------------------------------

        elasticsearchClient.bulk(request, RequestOptions.DEFAULT);
        System.out.println("OK!");
    }
}

2.6 term & terms queries

2.6.1 term queries

A term query is an exact match: the search keyword is not analyzed before the search; it is matched as-is against the terms in the index's term dictionary.

# term query; "from" is the row offset and "size" the page size, like SQL's LIMIT offset, count
POST sms-logs-index/_search
{
  "from": 0,
  "size": 5,
  "query": {
    "term": {
      "province": {
        "value": "北京"
      }
    }
  }
}

Java implementation:

package com.qf;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import java.io.IOException;
import java.util.Map;

@SpringBootTest
class SpringbootEs04ApplicationTests {

    @Test
    void contextLoads() {
    }

    //---------------------- query operations -------------------------

    @Autowired
    private RestHighLevelClient elasticsearchClient;

    String index = "sms-logs-index";

    // Java implementation
    @Test
    public void termQuery() throws IOException {
        //1. create the request object
        SearchRequest request = new SearchRequest(index);

        //2. specify the query conditions
        SearchSourceBuilder builder = new SearchSourceBuilder();
        builder.from(0);
        builder.size(5);
        builder.query(QueryBuilders.termQuery("province","北京"));

        request.source(builder);

        //3. execute the query
        SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

        //4. extract the _source of each hit and print it
        for (SearchHit hit : resp.getHits().getHits()) {
            Map<String, Object> result = hit.getSourceAsMap();
            System.out.println(result);
        }
    }
}

2.6.2 terms queries

terms works the same way as term: the query keywords are not analyzed; they are matched as-is against the term dictionary.

terms is used when one field should match any of several values.

term:  WHERE province = 北京

terms: WHERE province = 北京 OR province = ? OR province = ?

# terms query
POST sms-logs-index/_search
{
  "query": {
    "terms": {
      "province": [
        "北京",
        "山西"
      ]
    }
  }
}

Java implementation:

// Java implementation
@Test
public void termsQuery() throws IOException {
    //1. create the request
    SearchRequest request = new SearchRequest(index);

    //2. build the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.termsQuery("province","北京","山西"));

    request.source(builder);

    //3. execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. print the _source of each hit
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.6 match query

match is a high-level query: it adapts its behavior to the type of the field being queried.

  • If the field is a date or numeric type, the query string is converted to a date or number before matching.
  • If the field is not analyzable (keyword), match does not analyze the query keyword.
  • If the field is analyzable (text), match analyzes the query string and matches the resulting tokens against the term dictionary.

Under the hood, a match query is essentially multiple term queries whose results are combined into one.

2.6.1 match_all query

Returns all documents, with no query condition.

# match_all query
POST sms-logs-index/_search
{
  "query": {
    "match_all": {}
  }
}

Code implementation

// Java implementation
@Test
public void matchAllQuery() throws IOException {
    //1. Create the request
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.matchAllQuery());
    builder.size(20);// ES returns only 10 hits by default; set size to get more
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }

    System.out.println(resp.getHits().getHits().length);
}

2.6.2 match query

Filter on a single field:

# match query
POST sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": "收货安装"
    }
  }
}

Code implementation

@Test
public void matchQuery() throws IOException {
    //1. Create the request
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();

    builder.query(QueryBuilders.matchQuery("smsContent","收货安装"));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.6.3 Boolean match query

Matches content against a single field, combining the analyzed terms with and or or:

# Boolean match query
# content contains both 中国 and 健康
POST sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": {
        "query": "中国 健康",
        "operator": "and"      
      }
    }
  }
}


# Boolean match query
# content contains 健康 or 中国
POST sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": {
        "query": "中国 健康",
        "operator": "or"		
      }
    }
  }
}

Code implementation

// Java implementation
@Test
public void booleanMatchQuery() throws IOException {
    //1. Create the request
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    //---------------------------------------------------------------------- choose AND or OR
    builder.query(QueryBuilders.matchQuery("smsContent","中国 健康").operator(Operator.AND));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.6.4 multi_match query

Where match searches a single field, multi_match searches several fields with one query text.

# multi_match query
POST sms-logs-index/_search
{
  "query": {
    "multi_match": {
      "query": "北京",					
      "fields": ["province","smsContent"]
    }
  }
}

Code implementation

// Java implementation
@Test
public void multiMatchQuery() throws IOException {
    //1. Create the request
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.multiMatchQuery("北京","province","smsContent"));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7 Other queries

2.7.1 id query

Query everything:

# query all documents
GET sms-logs-index/_search

Query by id (i.e. by the value of _id):

# id query
GET sms-logs-index/_doc/i6dz-X0B3HR5Jl96fnFG

Code implementation

// Java implementation
@Test
public void findById() throws IOException {
    //1. Create a GetRequest
    GetRequest request = new GetRequest(index,"i6dz-X0B3HR5Jl96fnFG");

    //2. Execute the query
    GetResponse resp = elasticsearchClient.get(request, RequestOptions.DEFAULT);

    //3. Print the result
    System.out.println(resp.getSourceAsMap());
}

2.7.2 ids query

Query by multiple ids:

# ids query
POST sms-logs-index/_search
{
  "query": {
    "ids": {
      "values": ["kadz-X0B3HR5Jl96fnFG","jadz-X0B3HR5Jl96fnFG","jqdz-X0B3HR5Jl96fnFG"]
    }
  }
}

Code implementation

// Java implementation
@Test
public void findByIds() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.idsQuery().addIds("kadz-X0B3HR5Jl96fnFG","jadz-X0B3HR5Jl96fnFG","jqdz-X0B3HR5Jl96fnFG"));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.3 prefix query

A prefix query matches documents whose field value starts with the given keyword.

# prefix query
POST sms-logs-index/_search
{
  "query": {
    "prefix": {
      "corpName": {
        "value": "途虎"
      }
    }
  }
}

Code implementation

// Java prefix query
@Test
public void findByPrefix() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.prefixQuery("corpName","途虎"));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.4 fuzzy query

A fuzzy query tolerates typos: given approximately correct input, ES still returns approximately matching results.

# fuzzy query
# prefix_length: the number of leading characters that must match exactly
POST sms-logs-index/_search
{
  "query": {
    "fuzzy": {
      "corpName": {
        "value": "盒马先生",
        "prefix_length": 2			
      }
    }
  }
}

Code implementation

// Java fuzzy query
@Test
public void findByFuzzy() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.fuzzyQuery("corpName","盒马先生").prefixLength(2));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.5 wildcard query

A wildcard query works like MySQL's like: the query string may contain the wildcard * (any characters) and the placeholder ? (a single character).

# wildcard query
# * matches any characters, ? matches a single character
POST /sms-logs-index/_search
{
  "query": {
    "wildcard": {
      "corpName": {
        "value": "中国*"    
      }
    }
  }
}

Code implementation

// Java wildcard query
@Test
public void findByWildCard() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.wildcardQuery("corpName","中国*"));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.6 range query

A range query restricts a field (typically numeric or date) to a lower and/or upper bound.

# range query
# available bounds: gt:>      gte:>=     lt:<     lte:<=
POST /sms-logs-index/_search
{
  "query": {
    "range": {
      "fee": {
        "gt": 5,
        "lte": 10
      }
    }
  }
}

Code implementation

// Java range query
@Test
public void findByRange() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.rangeQuery("fee").lte(10).gt(5));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.7 regexp query

A regexp query matches field values against a regular expression you write.

Note: prefix, fuzzy, wildcard and regexp queries are relatively slow; avoid them when query performance matters.

# regexp query
POST /sms-logs-index/_search
{
  "query": {
    "regexp": {
      "mobile": "180[0-9]{8}"
    }
  }
}

Code implementation

// Java regexp query
@Test
public void findByRegexp() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.query(QueryBuilders.regexpQuery("mobile","180[0-9]{8}"));
    request.source(builder);

    //3. Execute
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.7.8 Deep pagination with Scroll

ES limits from + size pagination: from and size together must not exceed 10,000 (the index.max_result_window setting).

How they work:

  • How from + size retrieves data:
    • Step 1: analyze the user's query keywords.
    • Step 2: look the resulting terms up in the term dictionary to collect the matching document ids.
    • Step 3: fetch the matching documents from every shard. Time-consuming.
    • Step 4: sort the documents by score. Time-consuming.
    • Step 5: skip the number of documents given by from.
    • Step 6: return the rest.
  • How scroll + size retrieves data:
    • Step 1: analyze the user's query keywords.
    • Step 2: look the resulting terms up in the term dictionary to collect the matching document ids.
    • Step 3: store the document ids in a search context inside ES.
    • Step 4: fetch size documents from ES; the ids of fetched documents are removed from the context.
    • Step 5: for the next page, continue reading the remaining ids from the ES search context.
    • Step 6: repeat steps 4 and 5.

Because it pages over a stored snapshot of ids, scroll is not suitable for real-time queries.

# run a scroll query: return the first page and keep the document ids in an ES search context (in memory) with a lifetime of 1 minute
POST sms-logs-index/_search?scroll=1m
{
  "query": {
    "match_all": {}
  },
  "size": 2,
  "sort": [
    {
      "fee": {
        "order": "desc"
      }
    }
  ]
}

# fetch the next page with the scroll id
POST /_search/scroll
{
  "scroll_id": "<the scroll_id returned by the first request>",
  "scroll": "<how long to keep the scroll context alive>"
}

# delete the scroll context from ES
DELETE /_search/scroll/scroll_id

Code implementation

// Java scroll pagination
@Test
public void scrollQuery() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    //2. Keep the scroll context alive for 1 minute
    request.scroll(TimeValue.timeValueMinutes(1L));
    //3. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.size(4);
    builder.sort("fee", SortOrder.DESC);
    builder.query(QueryBuilders.matchAllQuery());

    request.source(builder);

    //4. Get the scrollId and _source from the response
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);
    String scrollId = resp.getScrollId();
    System.out.println(scrollId);

    System.out.println("---------- first page ---------");
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }

    while(true) {
        //5. Loop - create a SearchScrollRequest
        SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
        //6. Extend the scroll context lifetime
        scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
        //7. Execute the scroll request
        SearchResponse scrollResp = elasticsearchClient.scroll(scrollRequest, RequestOptions.DEFAULT);
        //8. If there are hits, print them
        SearchHit[] hits = scrollResp.getHits().getHits();
        if(hits != null && hits.length > 0) {
            System.out.println("---------- next page ---------");
            for (SearchHit hit : hits) {
                System.out.println(hit.getSourceAsMap());
            }
        }else{
            //9. No more hits - exit the loop
            System.out.println("---------- done ---------");
            break;
        }
    }

    //10. Create a ClearScrollRequest
    ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
    //11. Set the scrollId
    clearScrollRequest.addScrollId(scrollId);
    //12. Delete the scroll context
    ClearScrollResponse clearScrollResponse = elasticsearchClient.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);
    //13. Print the result
    System.out.println("scroll cleared: " + clearScrollResponse.isSucceeded());
}

2.7.9 delete-by-query

Deletes the documents matched by a term, match or other query.

Note: if you need to delete most of an index's documents, it is usually better to create a new index and copy only the documents you want to keep into it.

# delete-by-query
POST sms-logs-index/_delete_by_query
{
  "query": {
    "range": {
      "fee": {
        "lt": 9
      }
    }
  }
}

Code implementation

// Java implementation
@Test
public void deleteByQuery() throws IOException {
    //1. Create a DeleteByQueryRequest
    DeleteByQueryRequest request = new DeleteByQueryRequest(index);

    //2. Set the query condition - note this API differs from SearchRequest, which takes its query via a SearchSourceBuilder
    request.setQuery(QueryBuilders.rangeQuery("fee").lt(9));

    //3. Execute the delete
    BulkByScrollResponse resp = elasticsearchClient.deleteByQuery(request, RequestOptions.DEFAULT);

    //4. Print the response
    System.out.println(resp.toString());

}

2.8 Compound queries

A bool query combines multiple query conditions with boolean logic:

  • must: all conditions combined with must have to match (AND)
  • must_not: all conditions combined with must_not have to fail to match (NOT)
  • should: conditions combined with should are alternatives (OR)

# province is 武汉 or 北京
# carrier is not 联通 (operatorId != 2)
# smsContent contains both 中国 and 平安
# bool query
POST sms-logs-index/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "term": {
            "province": {
              "value": "北京"
            }
          }
        },
        {
          "term": {
            "province": {
              "value": "武汉"
            }
          }
        }
      ],
      "must_not": [
        {
          "term": {
            "operatorId": {
              "value": "2"
            }
          }
        }
      ],
      "must": [
        {
          "match": {
            "smsContent": "中国"
          }
        },
        {
          "match": {
            "smsContent": "平安"
          }
        }
      ]
    }
  }
}

Code implementation

// Java bool query
@Test
public void BoolQuery() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the query conditions
    SearchSourceBuilder builder = new SearchSourceBuilder();
    BoolQueryBuilder boolQuery = QueryBuilders.boolQuery();
    // province is 武汉 or 北京
    boolQuery.should(QueryBuilders.termQuery("province","武汉"));
    boolQuery.should(QueryBuilders.termQuery("province","北京"));
    // carrier is not 联通
    boolQuery.mustNot(QueryBuilders.termQuery("operatorId",2));
    // smsContent contains both 中国 and 平安
    boolQuery.must(QueryBuilders.matchQuery("smsContent","中国"));
    boolQuery.must(QueryBuilders.matchQuery("smsContent","平安"));

    builder.query(boolQuery);
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Print the results
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getSourceAsMap());
    }
}

2.9 Highlight query

A highlight query returns the user's search keywords wrapped in a special style, so the user can see why a result was retrieved.

The highlighted data is itself one of the document's fields; it is returned separately in the highlight section of the response.

ES provides a highlight property at the same level as query:

  • fragment_size: how many characters of the highlighted fragment to return.
  • pre_tags: the opening tag, for example <font color="red">
  • post_tags: the closing tag, for example </font>
  • fields: which fields to return in highlighted form

RESTful implementation

# highlight query
POST sms-logs-index/_search
{
  "query": {
    "match": {
      "smsContent": "盒马"
    }
  },
  "highlight": {
    "fields": {
      "smsContent": {}
    },
    "pre_tags": "<font color='red'>",
    "post_tags": "</font>",
    "fragment_size": 10
  }
}

Code implementation

// Java highlight query
@Test
public void highLightQuery() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);
    //2. Specify the query conditions (with highlighting)
    SearchSourceBuilder builder = new SearchSourceBuilder();
    //2.1 the query itself
    builder.query(QueryBuilders.matchQuery("smsContent","盒马"));
    //2.2 the highlighting
    HighlightBuilder highlightBuilder = new HighlightBuilder();
    highlightBuilder.field("smsContent",10)
        .preTags("<font color='red'>")
        .postTags("</font>");
    builder.highlighter(highlightBuilder);

    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Get and print the highlighted data
    for (SearchHit hit : resp.getHits().getHits()) {
        System.out.println(hit.getHighlightFields().get("smsContent"));
    }
}

2.10 Aggregation queries

ES aggregation queries resemble MySQL's aggregate queries, but are considerably more powerful: ES offers many different ways of computing statistics.

# RESTful syntax of an ES aggregation query
POST index/_search
{
    "aggs": {
        "agg_name": {
            "agg_type": {
                "property": "value"
            }
        }
    }
}

2.10.1 Cardinality (distinct count)

Cardinality first deduplicates the values of a specified field across the returned documents, then counts how many distinct values remain.

# distinct count query over province: 北京 上海 武汉 山西
POST sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "cardinality": {
        "field": "province"
      }
    }
  }
}

Code implementation

// Java cardinality query
@Test
public void cardinality() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the aggregation
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.cardinality("agg").field("province"));

    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Get the result
    Cardinality agg = resp.getAggregations().get("agg");
    long value = agg.getValue();
    System.out.println(value);
}

2.10.2 Range statistics

Counts how many documents fall into given ranges of a field, for example how many documents have a value in 0~100, in 100~200 and in 200~300.

Range statistics work on plain numeric values, on dates and on IP addresses:

range, date_range, ip_range

Numeric ranges

# numeric range statistics
# how many fee values fall into 0-5, 5-10 and 10 and above
# from is inclusive, to is exclusive
POST sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "range": {
        "field": "fee",
        "ranges": [
          {
            "to": 5
          },
          {
            "from": 5,     
            "to": 10
          },
          {
            "from": 10
          }
        ]
      }
    }
  }
}

Code implementation:

// Java numeric range statistics
@Test
public void range() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the aggregation
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.range("agg").field("fee")
                        .addUnboundedTo(5)
                        .addRange(5,10)
                        .addUnboundedFrom(10));

    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Get the results
    Range agg = resp.getAggregations().get("agg");
    for (Range.Bucket bucket : agg.getBuckets()) {
        String key = bucket.getKeyAsString();
        Object from = bucket.getFrom();
        Object to = bucket.getTo();
        long docCount = bucket.getDocCount();
        System.out.println(String.format("key:%s,from:%s,to:%s,docCount:%s",key,from,to,docCount));
    }
}

Date ranges

# date range statistics
POST sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "date_range": {
        "field": "createDate",
        "format": "yyyy", 
        "ranges": [
          {
            "to": 2020
          },
          {
            "from": 2020
          }
        ]
      }
    }
  }
}

The code is analogous to the numeric range statistics above.
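For reference, a minimal Java sketch of the date_range version, mirroring the numeric example. It assumes the same client setup (elasticsearchClient, index) and the 7.x high-level client's AggregationBuilders.dateRange; it has not been run against a live cluster:

```java
// Java date range statistics - a sketch mirroring the numeric example above
@Test
public void dateRange() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the date_range aggregation; bounds are strings in the given format
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.dateRange("agg").field("createDate")
                        .format("yyyy")
                        .addUnboundedTo("2020")
                        .addUnboundedFrom("2020"));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Read the buckets exactly as in the numeric example
    Range agg = resp.getAggregations().get("agg");
    for (Range.Bucket bucket : agg.getBuckets()) {
        System.out.println(bucket.getKeyAsString() + ":" + bucket.getDocCount());
    }
}
```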

IP ranges

# IP range statistics
POST sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "ip_range": {
        "field": "ipAddr",
        "ranges": [
          {
            "to": "10.126.2.9"
          },
          {
            "from": "10.126.2.9"
          }
        ]
      }
    }
  }
}

The code is analogous to the numeric range statistics above.
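Likewise, a minimal Java sketch of the ip_range version. It assumes the same client setup and the 7.x high-level client's AggregationBuilders.ipRange; it has not been run against a live cluster:

```java
// Java IP range statistics - a sketch mirroring the numeric example above
@Test
public void ipRange() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the ip_range aggregation; bounds are IP address strings
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.ipRange("agg").field("ipAddr")
                        .addUnboundedTo("10.126.2.9")
                        .addUnboundedFrom("10.126.2.9"));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Read the buckets exactly as in the numeric example
    Range agg = resp.getAggregations().get("agg");
    for (Range.Bucket bucket : agg.getBuckets()) {
        System.out.println(bucket.getKeyAsString() + ":" + bucket.getDocCount());
    }
}
```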

2.10.3 Extended stats

An extended_stats aggregation returns the maximum, minimum, average, sum of squares and other statistics of a field.

Use: extended_stats

# extended stats query
POST sms-logs-index/_search
{
  "aggs": {
    "agg": {
      "extended_stats": {
        "field": "fee"
      }
    }
  }
}

Code implementation

// Java extended stats query
@Test
public void extendedStats() throws IOException {
    //1. Create the SearchRequest
    SearchRequest request = new SearchRequest(index);

    //2. Specify the aggregation
    SearchSourceBuilder builder = new SearchSourceBuilder();
    builder.aggregation(AggregationBuilders.extendedStats("agg").field("fee"));
    request.source(builder);

    //3. Execute the query
    SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

    //4. Get the results
    ExtendedStats agg = resp.getAggregations().get("agg");
    double max = agg.getMax();
    double min = agg.getMin();
    System.out.println("max fee: " + max + ", min fee: " + min);
}

For the other aggregation types, see the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.x/index.html

2.11 Geo search

ES provides a geo_point data type for storing longitude/latitude coordinates.

Create an index with a geo_point field and add some test data:

# create an index with a name and a location field
PUT map
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  },
  "mappings": {
      "properties": {
        "name":{
          "type": "text"
        },
        "location": {
          "type": "geo_point"
        }
      }
  }
}

# add test data
PUT map/_doc/1
{
  "name": "海为科技园",
  "location": {
    "lon": 113.657903,
    "lat": 34.727474 
  }
}

PUT map/_doc/2
{
  "name": "郑航家属院",
  "location": {
    "lon": 113.653232,
    "lat": 34.728275 
  }
}

PUT map/_doc/3
{
  "name": "二七区政府",
  "location": {
    "lon": 113.646512,
    "lat": 34.73047 
  }
}


PUT map/_doc/4
{
  "name": "二七万达",
  "location": {
    "lon": 113.64892,
    "lat": 34.724329
  }
}

PUT map/_doc/5
{
  "name": "市第二人民医院地铁站",
  "location": {
    "lon": 113.650501,
    "lat": 34.726791
  }
}

2.11.1 ES geo search methods

Syntax Description
geo_distance straight-line distance search around a center point
geo_bounding_box two points define a rectangle; returns all documents inside it
geo_polygon several points define a polygon; returns all documents inside it

2.11.2 Geo search via RESTful

geo_distance

# geo_distance
# location: the center point (here 海为科技园)
# distance: the search radius (meters by default)
# distance_type: arc computes the distance on the sphere (the default)
POST map/_search
{
  "query": {
    "geo_distance": {
      "location": {				
        "lon": 113.657903,
        "lat": 34.727474
      },
      "distance": 1000,			 
      "distance_type": "arc"     
    }
  }
}

geo_bounding_box

# geo_bounding_box
# top_left: the top-left corner (二七区政府)
# bottom_right: the bottom-right corner (海为科技园)
POST map/_search
{
  "query": {
    "geo_bounding_box": {
      "location": {
        "top_left": {				
          "lon": 113.646512,
          "lat": 34.73047
        },
        "bottom_right": {			 
          "lon": 113.657903,
          "lat": 34.727474
        }
      }
    }
  }
}

geo_polygon

# geo_polygon
# points: several points defining a polygon (海为科技园, 二七区政府, 二七万达)
POST map/_search
{
  "query": {
    "geo_polygon": {
      "location": {
        "points": [					
          {
            "lon": 113.646512,
            "lat": 34.73047
          },
          {
            "lon": 113.657903,
            "lat": 34.727474
          },
          {
            "lon": 113.64892,
            "lat": 34.724329
          }
        ]
      }
    }
  }
}

Java implementation of geo_polygon

package com.qf;

import org.elasticsearch.action.search.*;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

@SpringBootTest
class SpringbootEs04ApplicationTests {

    @Test
    void contextLoads() {
    }

    //----------------------Query operations-------------------------

    @Autowired
    private RestHighLevelClient elasticsearchClient;
    
    String  index = "map";

    // Java geo_polygon query
    @Test
    public void geoPolygon() throws IOException {
        //1. Create the SearchRequest
        SearchRequest request = new SearchRequest(index);

        //2. Specify the query; note GeoPoint takes (lat, lon)
        SearchSourceBuilder builder = new SearchSourceBuilder();
        List<GeoPoint> points = new ArrayList<>();
        points.add(new GeoPoint(34.73047,113.646512));
        points.add(new GeoPoint(34.727474,113.657903));
        points.add(new GeoPoint(34.724329,113.64892));
        builder.query(QueryBuilders.geoPolygonQuery("location",points));

        request.source(builder);

        //3. Execute the query
        SearchResponse resp = elasticsearchClient.search(request, RequestOptions.DEFAULT);

        //4. Print the results
        for (SearchHit hit : resp.getHits().getHits()) {
            System.out.println(hit.getSourceAsMap());
        }
    }
}
posted @ 2022-07-20 23:10 qtyanan