I. Elasticsearch Introduction (Elasticsearch: the official distributed search and analytics engine | Elastic)

  Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.

  It can store, search, and analyze huge volumes of data quickly. Used in place of MySQL for search and analytics, it can query and analyze massive data sets in seconds.

  Official documentation: Elasticsearch Guide [7.5] | Elastic

 1. Basic Concepts

  1.1 Index

  As a verb: the equivalent of MySQL's INSERT.

  As a noun: the equivalent of a MySQL database.

  1.2 Type

  Multiple types can be defined within an index; data of the same type is stored together.

  A type is analogous to a table in MySQL.

  1.3 Document

  A document is one piece of data of a certain type (Type) stored under a certain index (Index).

  Documents are JSON; a document is analogous to a row in a MySQL table.

  

  ## Note: the concept of Type was deprecated in ES 6 and removed in later versions.
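The Index/Document analogy above can be sketched in a few lines: an index behaves like a tiny document store keyed by id, where each document is free-form JSON. This is an illustrative sketch only (the `TinyIndex` class is hypothetical); real ES also builds an inverted index to make search fast.

```python
# Illustrative only: a tiny in-memory stand-in for one ES index
# (id -> JSON document). Real ES also indexes every field for search.
class TinyIndex:
    def __init__(self, name):
        self.name = name      # index name, analogous to a MySQL database
        self.docs = {}        # _id -> _source, analogous to rows in a table

    def index(self, doc_id, source):
        # like PUT /customer/external/1 — create or overwrite a document
        created = doc_id not in self.docs
        self.docs[doc_id] = source
        return {"_id": doc_id, "result": "created" if created else "updated"}

    def get(self, doc_id):
        # like GET /customer/external/1
        return {"_id": doc_id, "found": doc_id in self.docs,
                "_source": self.docs.get(doc_id)}

customer = TinyIndex("customer")
print(customer.index("1", {"name": "John Doe"}))  # result: created
print(customer.get("1")["_source"])               # {'name': 'John Doe'}
```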

II. Installing Elasticsearch with Docker

  1. Pull the images

docker pull elasticsearch:7.4.2
docker pull kibana:7.4.2

# Mount the ES config directory onto the host, so the container's configuration
# can be changed by editing the files on the VM
mkdir -p /mydata/elasticsearch/config

mkdir -p /mydata/elasticsearch/data
# Write one setting: http.host: 0.0.0.0 lets any remote machine reach ES.
# Note the space after the colon.
echo "http.host: 0.0.0.0" >> /mydata/elasticsearch/config/elasticsearch.yml

  2. Create the instances

  1. Elasticsearch

# Run Elasticsearch.
# --name names the container elasticsearch; -p exposes two ports:
#   9200 for HTTP (REST API) requests, 9300 for inter-node transport in a cluster;
#   \ continues the command on the next line.
# -e "discovery.type=single-node" runs ES as a single node; without ES_JAVA_OPTS,
#   ES would grab all available memory on startup and freeze the VM.
# -v mounts the config file, data, and plugins directories; -d runs the
#   specified image in the background.
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms64m -Xmx128m" \
-v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2

 After installing elasticsearch, start it and run docker ps; you may find the es container is not listed. This is caused by the permissions on the ES config directory, so fix those permissions first:

chmod -R 777 /mydata/elasticsearch/

If it still fails to start, there is an error in the run command; inspect it with docker logs elasticsearch.

Also run docker ps -a to find the container ID, remove the old elasticsearch container, and then re-run the docker run command above.

 Once started, visit http://192.168.56.10:9200; output like the following means startup succeeded:

{
  "name" : "e514c560d500",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "Rq6lZhNVRhiVkjTljaCGig",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

   2. Install Kibana

# Kibana's UI is served on port 5601; Kibana forwards requests to ES on port 9200
docker run --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.56.10:9200 -p 5601:5601 -d kibana:7.4.2
# Replace http://192.168.56.10:9200 with your own IP and port

  Open the UI at http://192.168.56.10:5601

III. Basic Operations

  1. _cat

  GET /_cat/nodes — list nodes

  GET /_cat/health — cluster health

  GET /_cat/master — show the master node

  GET /_cat/indices — list indices

  2. Index a document (save)

  PUT /customer/external/1 — sending it repeatedly performs updates

  POST /customer/external/1 — sending it repeatedly performs updates

  POST /customer/external — generates an id automatically

http://192.168.56.10:9200/customer/external/1
{
    "name":"John Doe"
}

  Response:

{
    "_index": "customer",
    "_type": "external",
    "_id": "1",
    "_version": 1,
    "result": "created",
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    },
    "_seq_no": 0,        // concurrency-control field, incremented on every update; used for optimistic locking
    "_primary_term": 1   // similar; changes when the primary shard is reallocated, e.g. after a restart
}

  PUT /customer/external/1?if_seq_no=7&if_primary_term=1

  Optimistic locking through the _seq_no and _primary_term values: the write is applied only if both match the stored document. A failing request returns an error response, for example:

{
    "error": {
        "root_cause": [
            {
                "type": "illegal_argument_exception",
                "reason": "request [/customer/external/1] contains unrecognized parameters: [if_primary_term], [if_seq_no]"
            }
        ],
        "type": "illegal_argument_exception",
        "reason": "request [/customer/external/1] contains unrecognized parameters: [if_primary_term], [if_seq_no]"
    },
    "status": 400
}
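The compare-and-set pattern behind if_seq_no/if_primary_term can be sketched without a server: read the current sequence value, send the write conditioned on it, and retry from a fresh read if someone else got in first. This is an illustrative sketch of the client-side logic (the `Doc` class and `update_with_retry` helper are hypothetical), not the real ES API.

```python
# Minimal optimistic-locking simulation: the "server" accepts a write only
# when the caller's seq_no matches the stored one (like if_seq_no on ES).
class Doc:
    def __init__(self, source):
        self.source = source
        self.seq_no = 0

    def conditional_update(self, new_source, if_seq_no):
        if if_seq_no != self.seq_no:
            return False              # a 409 version conflict in real ES
        self.source = new_source
        self.seq_no += 1              # every successful write bumps _seq_no
        return True

def update_with_retry(doc, mutate, max_retries=3):
    """Read, mutate, write conditionally; re-read and retry on conflict."""
    for _ in range(max_retries):
        seen = doc.seq_no
        if doc.conditional_update(mutate(dict(doc.source)), if_seq_no=seen):
            return True
    return False

doc = Doc({"name": "John Doe", "visits": 0})
update_with_retry(doc, lambda s: {**s, "visits": s["visits"] + 1})
print(doc.source)   # {'name': 'John Doe', 'visits': 1}
```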

  3. Retrieve a document

  GET /customer/external/1

{
    "_index": "customer",
    "_type": "external",
    "_id": "1",          // the document id
    "_version": 1,       // version number
    "_seq_no": 0,        // concurrency-control field, incremented on every update; used for optimistic locking
    "_primary_term": 1,  // similar; changes when the primary shard is reallocated, e.g. after a restart
    "found": true,
    "_source": {         // the actual document
        "name": "John Doe"
    }
}

   4. Update a document

  POST /customer/external/1/_update — compares the new content with the stored document; if they are identical, no update is performed and the version number does not change.

{
    "_index": "customer",
    "_type": "external",
    "_id": "1",
    "_version": 2,
    "result": "noop",    // data identical, no operation executed
    "_shards": {
        "total": 0,
        "successful": 0,
        "failed": 0
    },
    "_seq_no": 1,
    "_primary_term": 1
}

  A request without _update always creates or overwrites the document.

  5. Delete a document

  DELETE /customer/external/1 — delete a document

  DELETE /customer — delete an index

  6. Bulk API

  Each operation in a bulk request is independent: if one of them fails, the rest still execute and nothing is rolled back.

  Test it in Kibana's Dev Tools.

Request:
POST /customer/external/_bulk
{"index":{"_id":"1"}}
{"name":"tang"}
{"index":{"_id":"2"}}
{"name":"yao"}
Response:
#! Deprecation: [types removal] Specifying types in bulk requests is deprecated.
{
  "took" : 501,
  "errors" : false,
  "items" : [
    {
      "index" : {
        "_index" : "customer",
        "_type" : "external",
        "_id" : "1",
        "_version" : 1,
        "result" : "created",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 3,
        "_primary_term" : 1,
        "status" : 201
      }
    },
    {
      "index" : {
        "_index" : "customer",
        "_type" : "external",
        "_id" : "2",
        "_version" : 1,
        "result" : "created",
        "_shards" : {
          "total" : 2,
          "successful" : 1,
          "failed" : 0
        },
        "_seq_no" : 4,
        "_primary_term" : 1,
        "status" : 201
      }
    }
  ]
}

   A more complex bulk request:

POST /_bulk
{"delete":{"_index":"website","_type":"blog","_id":"123"}}
{"create":{"_index":"website","_type":"blog","_id":"123"}}
{"title":"My first blog post"}
{"index":{"_index":"website","_type":"blog"}}
{"title":"My second blog post"}
{"update":{"_index":"website","_type":"blog","_id":"123"}}
{"doc":{"title":"My updated blog post"}}
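The bulk body above is newline-delimited JSON: an action line, optionally followed by a source line, one pair per operation, with a trailing newline. A small helper that builds such a body (the `build_bulk_body` function is hypothetical, not part of any ES client):

```python
import json

def build_bulk_body(operations):
    """Serialize (action, source) pairs into the NDJSON body _bulk expects.

    operations: list of (action_dict, source_dict_or_None);
    delete actions carry no source line.
    """
    lines = []
    for action, source in operations:
        lines.append(json.dumps(action))
        if source is not None:          # delete has no source line
            lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"      # the body must end with a newline

body = build_bulk_body([
    ({"index": {"_id": "1"}}, {"name": "tang"}),
    ({"index": {"_id": "2"}}, {"name": "yao"}),
    ({"delete": {"_index": "website", "_id": "123"}}, None),
])
print(body)
```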

   官方测试数据:elasticsearch/accounts.json at v7.4.2 · elastic/elasticsearch (github.com)

IV. Advanced Operations

  1. Search API

  1.1 Retrieve everything under bank

  GET bank/_search

  1.2 Search with request parameters

# q=* matches everything; sort=account_number:asc sorts ascending by that field
GET bank/_search?q=*&sort=account_number:asc

  1.3 Request body + URL, i.e. Query DSL

GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    {
      "balance":"desc"
    },
    {
      "account_number": "asc"
    }
  ]
}

  2. Query DSL

  Searching with a request body in addition to the URL.

  2.1 Basic syntax

GET /bank/_search
{
  "query": {
    "match_all": {}
  },
  "sort": [
    {
      "balance":"desc"
    },
    {
      "account_number": "asc"
    }
  ],
  "from": 0,
  "size": 5,
  "_source": ["balance","firstname"]
}
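from and size implement page-by-page retrieval: from skips that many hits and size caps how many are returned. The page arithmetic can be sketched as (the helper name is made up for illustration):

```python
def page_to_from_size(page, page_size):
    """Translate a 1-based page number into the from/size pair ES expects."""
    return {"from": (page - 1) * page_size, "size": page_size}

# Page 1 starts at hit 0; page 3 with 5 hits per page starts at hit 10.
print(page_to_from_size(1, 5))   # {'from': 0, 'size': 5}
print(page_to_from_size(3, 5))   # {'from': 10, 'size': 5}
```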

   2.2 match

# match performs full-text search: the query string is analyzed (tokenized),
# and results are scored and sorted by relevance
GET /bank/_search
{
  "query": {
    "match": {
      "account_number": "20"
    }
  }
}

GET /bank/_search
{
  "query": {
    "match": {
      "address": "mill lane"
    }
  }
}

  2.3 match_phrase

# phrase match: the terms must all appear, adjacent and in order
GET /bank/_search
{
    "query":{
        "match_phrase":{
            "address":"mill lane"
         }
     }
}
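The difference between match and match_phrase can be illustrated with plain string operations: match succeeds if any analyzed term occurs, while match_phrase requires the terms to appear as a contiguous sequence. A toy simulation under a drastically simplified analyzer (real ES scoring and analysis are far richer):

```python
def analyze(text):
    """Toy stand-in for the standard analyzer: lowercase, split on spaces."""
    return text.lower().split()

def match(field_value, query):
    """match: any query term appearing in the field counts as a hit."""
    terms = set(analyze(field_value))
    return any(t in terms for t in analyze(query))

def match_phrase(field_value, query):
    """match_phrase: the query terms must appear contiguously, in order."""
    tokens, phrase = analyze(field_value), analyze(query)
    return any(tokens[i:i + len(phrase)] == phrase
               for i in range(len(tokens) - len(phrase) + 1))

addr = "198 Mill Lane"
print(match(addr, "mill road"))         # True  - "mill" occurs
print(match_phrase(addr, "mill road"))  # False - "mill road" is not contiguous
print(match_phrase(addr, "mill lane"))  # True
```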

  2.4 multi_match — match across multiple fields

GET /bank/_search
{
  "query": {
    "multi_match": {
      "query": "mill",
      "fields": ["address","city"]
    }
  }
}

  2.5 bool — compound queries (must, must_not, should)

GET /bank/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "age": "40"
          }
        }
      ],
      "must_not": [
        {
          "match": {
            "state": "ID"
          }
        }
      ],
      "should": [
        {
          "match": {
            "lastname": "Ross"
          }
        }
      ]
    }
  }
}

  2.6 filter — filtering results (filter clauses do not affect the relevance score)

GET /bank/_search
{
  "query": {
    "bool": {
      "must": { "match_all": {} },
      "filter": {
        "range": {
          "balance": {
            "gte": 20000,
            "lte": 30000
          }
        }
      }
    }
  }
}

  2.7 term

  Like match, it matches a field against a value. Use match for full-text (text) fields and term for non-text fields.

# term — for non-text fields
GET bank/_search      
{
   "query":{
      "term":{
         "balance":"32838"
      }
   }
}
GET /_search
{
  "query": {
    "match_phrase": {
      "address": "789 Madison"
    }
  }
}
# exact match on the keyword sub-field
GET /_search
{
  "query": {
    "match": {
      "address.keyword": "789 Madison"
    }
  }
}
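Why the distinction matters: a text field is analyzed into lowercase terms at index time, while its .keyword sub-field stores the raw string untouched, so exact-value queries behave very differently on the two. A toy model of the two representations (the `Field` class and its toy analyzer are illustrative only):

```python
def analyze(text):
    """Toy analyzer: lowercase and split on spaces (stand-in for 'standard')."""
    return text.lower().split()

class Field:
    """One stored value, indexed both ways, like ES text + .keyword."""
    def __init__(self, value):
        self.terms = analyze(value)   # the analyzed "text" representation
        self.keyword = value          # the raw ".keyword" representation

    def term_on_text(self, query):
        # a term query is NOT analyzed: it must equal one indexed term exactly
        return query in self.terms

    def term_on_keyword(self, query):
        # on .keyword, only the full original string matches
        return query == self.keyword

addr = Field("789 Madison Street")
print(addr.term_on_text("madison"))                # True  - lowercased term exists
print(addr.term_on_text("Madison"))                # False - terms were lowercased
print(addr.term_on_keyword("789 Madison Street"))  # True
print(addr.term_on_keyword("Madison"))             # False
```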

  2.8 aggregations

GET bank/_search
{
  "query": {
    "match": {
      "address": "mill"
    }
  },
  "aggs": {
    "ageaggs": {
      "terms": {
        "field": "age",
        "size": 10 // if age had, say, 100 distinct values, return only the top 10 buckets
      }
    }
  }
}

  More complex examples:

# Aggregate by age, and compute the average balance within each age bucket
GET bank/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "ageAgg": {
      "terms": {
        "field": "age",
        "size": 100
      },
      "aggs": {
        "balanceAvg": {
          "avg": {
            "field": "balance"
          }
        }
      }
    }
  }
}
# Find the distribution of all ages, and within each age bucket the average
# balance for gender M and for gender F, plus the bucket's overall average balance

GET bank/_search
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "ageAgg": {
      "terms": {
        "field": "age",
        "size": 100
      },
      "aggs": {
        "gender": {
          "terms": {
            "field": "gender.keyword"
          },
          "aggs": {
            "genderBalance": {
              "avg": {
                "field": "balance"
              }
            }
          }
        },
        "ageBalance":{
         "avg": {
           "field": "balance"
         }
        }
        
      }
      
    }
  },
  "size": 0
}
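What the nested aggregation computes can be reproduced over a handful of sample documents: group by age, then within each group sub-group by gender and average the balances. A plain-Python sketch of the same roll-up (the sample docs are invented):

```python
from collections import defaultdict
from statistics import mean

docs = [
    {"age": 31, "gender": "M", "balance": 1000},
    {"age": 31, "gender": "F", "balance": 3000},
    {"age": 31, "gender": "M", "balance": 2000},
    {"age": 40, "gender": "F", "balance": 5000},
]

# Outer terms agg: bucket documents by age.
by_age = defaultdict(list)
for d in docs:
    by_age[d["age"]].append(d)

result = {}
for age, bucket in by_age.items():
    # Inner terms agg on gender, with an avg sub-aggregation on balance.
    by_gender = defaultdict(list)
    for d in bucket:
        by_gender[d["gender"]].append(d["balance"])
    result[age] = {
        "doc_count": len(bucket),
        "ageBalance": mean(d["balance"] for d in bucket),  # overall average
        "gender": {g: mean(vals) for g, vals in by_gender.items()},
    }

print(result[31])  # overall and per-gender averages for the age-31 bucket
```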

  3. Mapping

  3.1 Field types

  Reference: Field datatypes | Elasticsearch Guide [7.5] | Elastic

  3.2 Mapping

  Mapping defines how a document and the fields (properties) it contains are stored and indexed.

  View the mapping: GET bank/_mapping

  Each field's mapping has a type; type text means full-text search by default, so queries against it are analyzed (tokenized). To match the exact value of address, query address.keyword.

  3.3 Create mapping rules

PUT /my_index
{
  "mappings": {
    "properties": {
      "age":    { "type": "integer" },  
      "email":  { "type": "keyword"  }, 
      "name":   { "type": "text"  }  
    }
  }
}

GET my_index/_mapping

  3.4 Add a new field mapping

PUT /my_index/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false  // the field cannot be indexed/searched
    }
  }
}

   3.5 Update a mapping

  Existing field mappings cannot be updated. To change one, create a new index with the new mapping and migrate the data.

  3.6 Data migration

  Copy bank's field definitions, adjust them, and use them to create the new index and mapping rules:

The newbank mapping:
PUT /newbank
{
  "mappings": {
     "properties" : {
        "account_number" : {
          "type" : "long"
        },
        "address" : {
          "type" : "text"
        },
        "age" : {
          "type" : "integer"
        },
        "balance" : {
          "type" : "long"
        },
        "city" : {
          "type" : "keyword"
        },
        "email" : {
          "type" : "keyword"
        },
        "employer" : {
          "type" : "keyword"
          
        },
        "firstname" : {
          "type" : "text"
        },
        "gender" : {
          "type" : "keyword"
        },
        "lastname" : {
          "type" : "text",
          "fields" : {
            "keyword" : {
              "type" : "keyword",
              "ignore_above" : 256
            }
          }
        },
        "state" : {
          "type" : "keyword"
          
        }
     }
  }
}

  Migrate the old bank data into newbank:

POST _reindex
{
  "source": {
    "index": "bank",
    "type": "account"
  },
  "dest": {
    "index": "newbank"
  }
}

GET newbank/_search

  4. Analysis — Standard Analyzer | Elasticsearch Guide [7.4] | Elastic

  An analyzer splits a long passage of text into terms; matching on term relevance is what ultimately powers full-text search. The standard analyzer is used by default.

POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

  The default analyzers target English; Chinese text would be split into individual characters, so a Chinese analyzer must be installed separately.
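The standard analyzer's core behavior on English text — split on non-alphanumeric boundaries, lowercase, drop empty tokens — can be approximated in a couple of lines. This is a rough, ASCII-centric sketch; the real analyzer follows Unicode text-segmentation rules:

```python
import re

def standard_like(text):
    """Rough approximation of the standard analyzer: split on
    non-alphanumeric characters (keeping apostrophes) and lowercase."""
    return [t.lower() for t in re.split(r"[^\w']+", text) if t]

print(standard_like("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."))
# ['the', '2', 'quick', 'brown', 'foxes', 'jumped', 'over', 'the', 'lazy', "dog's", 'bone']
```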

  4.1 The ik analyzer

  GitHub: https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.4.2

# -it interactive mode; /bin/bash opens a shell inside the container
docker exec -it elasticsearch /bin/bash

   Copy the release link: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.4.2/elasticsearch-analysis-ik-7.4.2.zip

wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.4.2/elasticsearch-analysis-ik-7.4.2.zip

 If the wget command is not found, upload the archive with Xshell and Xftp instead.

  Edit the SSH config to allow password login for the vagrant account:

  vi /etc/ssh/sshd_config

  Check whether ssh is installed with which ssh; if it is missing, install it first.

  Installing ssh on CentOS 7 (look up the steps for your own version):

   yum install -y openssl openssh-server

  Once the install succeeds, connect to the VM from Xshell.

Connected successfully.

  Download elasticsearch-analysis-ik-7.4.2.zip: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.4.2/elasticsearch-analysis-ik-7.4.2.zip

  Unzip it, then upload it to /mydata/elasticsearch/plugins/ via Xftp.

  Make the ik folder readable, writable, and executable: chmod -R 777 ik/

  Test the ik analyzer:

docker exec -it elasticsearch /bin/bash    # enter the container
cd bin
elasticsearch-plugin list    # if the output includes ik, the install succeeded

docker restart elasticsearch    # restart ES

   Example:

# ik_smart
POST _analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}

{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "中国人",
      "start_offset" : 2,
      "end_offset" : 5,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}

  5. Configure Linux Networking & Enable root Password Login

cd /etc/sysconfig/network-scripts/ 
vi ifcfg-eth1
# add the following settings
GATEWAY=192.168.56.1
DNS1=114.114.114.114
DNS2=8.8.8.8
# restart the network service
service network restart

  

  

  Once connected, install some necessary tools:

  yum install -y wget

  yum install -y unzip

 6. Custom Extension Dictionary

  Point ik at a remote dictionary: the analyzer sends requests to it on its own, fetches the latest words, and uses them as new terms when tokenizing.

  1) Write a project of your own that handles this request and returns the new words, and have ik send its requests to that project.

  2) Install nginx and put the latest dictionary inside it; ik requests the dictionary from nginx (which is also just a web server), and the new words are merged with the built-in dictionary.
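Either option boils down to serving a plain-text word list, one word per line, over HTTP. A minimal stand-in for the nginx setup using Python's standard library (the path /es/fenci.txt and the two words are just the ones used later in this section; the handler itself is illustrative):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = "尚硅谷\n乔碧萝\n"   # one word per line, like fenci.txt

class DictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the word list for any path, the way nginx serves /es/fenci.txt
        body = WORDS.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):   # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), DictHandler)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/es/fenci.txt" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8").splitlines())   # ['尚硅谷', '乔碧萝']
```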

  First make sure the Linux VM has enough memory; shut down the VM to adjust its memory settings.

  

  Earlier we set a small JVM heap for es, capped at 128m; to raise it to 512m, the quickest way is to stop the old container and create a new one.

  No data is lost by doing this: when the container was first created, the es data was mapped to the host's data folder, so even after deleting the container the host folders remain. Create a new container bound to the same folders and it sees the same data.

docker stop elasticsearch
docker rm elasticsearch
# re-create ES
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms64m -Xmx512m" \
-v /mydata/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /mydata/elasticsearch/data:/usr/share/elasticsearch/data \
-v /mydata/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-d elasticsearch:7.4.2

   6.1 Install nginx

# create the nginx directory under /mydata
mkdir nginx

# if the image is not found locally, it is pulled automatically
docker run -p 80:80 --name nginx -d nginx:1.10

# copy nginx's configuration out of the container
docker container cp nginx:/etc/nginx .

docker stop nginx

docker rm nginx
# back in /mydata, rename the copied nginx folder to conf
mv nginx conf

mkdir nginx
# move conf into the new nginx directory; the config now lives under /mydata/nginx/conf
mv conf nginx/

Note: there is a space before each trailing \

docker run -p 80:80 --name nginx \
-v /mydata/nginx/html:/usr/share/nginx/html \
-v /mydata/nginx/logs:/var/log/nginx \
-v /mydata/nginx/conf:/etc/nginx \
-d nginx:1.10

Test nginx:

Visit the default port 80 to see the nginx welcome page.

# inside the nginx folder
cd /mydata/nginx/html
# any index.html in this folder is served by default
vi index.html
# under html, create an es folder holding the resources the ik analyzer will use
mkdir es
cd es
vi fenci.txt
# add the two words 尚硅谷 and 乔碧萝, then verify at:
# http://192.168.56.10/es/fenci.txt

  6.2 Custom dictionary

cd /mydata/elasticsearch/plugins/ik/config/
vi IKAnalyzer.cfg.xml
# set the remote dictionary address (the remote_ext_dict entry) to http://192.168.56.10/es/fenci.txt
# after the change, restart elasticsearch
docker restart elasticsearch

 

Tokenization of the new words now succeeds.

V. Elasticsearch-Rest-Client

  

    Elasticsearch Clients: https://www.elastic.co/guide/en/elasticsearch/client/index.html

    Java REST Client: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/index.html

    1. Create a new service dedicated to search

  

  

  Add the dependency:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.4.2</version>
</dependency>

  The versions turn out to be mismatched: the Spring Boot parent pom also manages the ES version (Spring Boot integrates Spring Data by default for operating ES).

  Pin the version in your own pom:

<elasticsearch.version>7.4.2</elasticsearch.version>

  Create the class config.GulimallElasticSearchConfig:

import org.springframework.context.annotation.Configuration;

@Configuration
public class GulimallElasticSearchConfig {
}

  Import the common dependency:

<dependency>
    <groupId>com.atguigu.gulimall</groupId>
    <artifactId>gulimall-common</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

  Configure application.properties:

spring.application.name=gulimall-search
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848

 Reference: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-getting-started-initialization.html

/**
 * 1. Import the dependency.
 * 2. Write a configuration that injects a RestHighLevelClient into the container.
 * 3. Follow the API docs: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-getting-started-initialization.html
 */
@Configuration
public class GulimallElasticSearchConfig {

    @Bean
    public RestHighLevelClient esRestClient() {

        RestClientBuilder builder = null;
        //final String hostname, final int port, final String scheme
        builder = RestClient.builder(new HttpHost("192.168.56.10", 9200, "http"));
        RestHighLevelClient client = new RestHighLevelClient(builder);

//        RestHighLevelClient client = new RestHighLevelClient(
//                // if there are multiple ES nodes, list each host, port, and scheme
//                RestClient.builder(
//                        new HttpHost("192.168.56.10", 9200, "http")));
        return client;
    }
}

   2. Write a test class

import org.elasticsearch.client.RestHighLevelClient;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class GulimallSearchApplicationTests {

    @Autowired
    private RestHighLevelClient client;

    @Test
    public void contextLoads() {
        System.out.println(client);
    }

}

   The common module brings in a DataSource by default, which requires datasource configuration (the MySQL driver, the mybatis-plus dependency, and so on). The search service does not touch the database, so exclude it.

  Modify the annotation on the main application class:

@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)

  The test now passes.

  3. Example tests

  Reference: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-getting-started-request-options.html

  3.1 Request options

  All requests can carry settings, e.g. when ES has security/access rules, via RequestOptions:

  

public static final RequestOptions COMMON_OPTIONS;

static {
    RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
//      builder.addHeader("Authorization", "Bearer " + TOKEN);
//      builder.setHttpAsyncResponseConsumerFactory(
//              new HttpAsyncResponseConsumerFactory
//                      .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024 * 1024));
    COMMON_OPTIONS = builder.build();
}

   3.2 Save/update example

/**
 * Test storing data in ES; save and update are a single operation.
 */
@Test
public void indexData() throws IOException {
    IndexRequest indexRequest = new IndexRequest("user");
    indexRequest.id("1");
    //indexRequest.source("userName", "ZhangSan", "age", 18, "gender", "男");
    User user = new User();
    user.setUserName("zhangsan");
    user.setAge(22);
    user.setGender("男");

    String jsonString = JSON.toJSONString(user);
    indexRequest.source(jsonString, XContentType.JSON);    // the content to save
    // execute the operation
    IndexResponse index = client.index(indexRequest, GulimallElasticSearchConfig.COMMON_OPTIONS);

    // response data
    System.out.println(index);
}

@Data
class User {
    private String userName;
    private String gender;
    private Integer age;
}

 

  3.3 Complex search example

  Reference: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.4/java-rest-high-search.html

@Test
public void searchData() throws IOException {
    // 1. create a search request
    SearchRequest searchRequest = new SearchRequest();
    // specify the index
    searchRequest.indices("bank");
    // specify the DSL (search conditions); SearchSourceBuilder wraps them
    SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
    // 1.1) build the search conditions
//        searchSourceBuilder.query();
//        searchSourceBuilder.from();
//        searchSourceBuilder.size();
//        searchSourceBuilder.aggregation();

    searchSourceBuilder.query(QueryBuilders.matchQuery("address", "mill"));
    // 1.2) aggregate on the distribution of age values
    TermsAggregationBuilder ageAgg = AggregationBuilders.terms("ageAgg").field("age").size(10);
    searchSourceBuilder.aggregation(ageAgg);

    // 1.3) compute the average balance
    AvgAggregationBuilder balanceAvg = AggregationBuilders.avg("balanceAvg").field("balance");
    searchSourceBuilder.aggregation(balanceAvg);

    System.out.println("search conditions: " + searchSourceBuilder.toString());

    SearchRequest source = searchRequest.source(searchSourceBuilder);
    // 2. execute the search
    SearchResponse searchResponse = client.search(source, GulimallElasticSearchConfig.COMMON_OPTIONS);

    // 3. analyze the results
    System.out.println(searchResponse.toString());

//        Map map = JSON.parseObject(searchResponse.toString(), Map.class);
    // 3.1) get all matching documents
    SearchHits hits = searchResponse.getHits();
    SearchHit[] searchHits = hits.getHits();
    for (SearchHit hit : searchHits) {
        // generate the Account class from the JSON beforehand
        String sourceAsString = hit.getSourceAsString();
        Account account = JSON.parseObject(sourceAsString, Account.class);
        System.out.println("account = " + account);
    }

    // 3.2) get the aggregation results
    Aggregations aggregations = searchResponse.getAggregations();
//        for (Aggregation aggregation : aggregations.asList()) {
//            System.out.println("name = " + aggregation.getName());
//        }
    System.out.println("aggregations = " + aggregations.toString());
    Terms ageAgg1 = aggregations.get("ageAgg");
    for (Terms.Bucket bucket : ageAgg1.getBuckets()) {
        String keyAsString = bucket.getKeyAsString();
        System.out.println("age = " + keyAsString + " ===> " + bucket.getDocCount());
    }
    Avg balanceAvg1 = aggregations.get("balanceAvg");
    System.out.println("average balance: " + balanceAvg1.getValue());
    System.out.println(aggregations);
}

@Data
static class Account {
    private int account_number;
    private int balance;
    private String firstname;
    private String lastname;
    private int age;
    private String gender;
    private String address;
    private String employer;
    private String email;
    private String city;
    private String state;
}

  

That completes the Elasticsearch introduction and setup. Next up is the implementation of the mall business:

谷粒商城高级篇-商城业务 - Slothhh - 博客园 (cnblogs.com)

 posted on 2022-06-08 23:39 by Slothhh