Installing an Elasticsearch 7.16.3 Cluster with Docker
1. Create the directories to be mounted, on the physical host
```shell
# elasticsearch.yml config directories
mkdir -p /usr/elasticsearch/es01/config
mkdir -p /usr/elasticsearch/es02/config
mkdir -p /usr/elasticsearch/es03/config
# data directories
mkdir -p /usr/elasticsearch/es01/data
mkdir -p /usr/elasticsearch/es02/data
mkdir -p /usr/elasticsearch/es03/data
# plugin directories
mkdir -p /usr/elasticsearch/es01/plugins
mkdir -p /usr/elasticsearch/es02/plugins
mkdir -p /usr/elasticsearch/es03/plugins
# grant full permissions on everything under the es directories
chmod -R 777 /usr/elasticsearch/*
# kibana.yml config directory
mkdir -p /usr/kibana/config
```
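The repetitive mkdir calls above can also be written as a loop. A minimal sketch; `BASE` defaults to a scratch path here so it can run anywhere, change it to /usr/elasticsearch to match the article:

```shell
# create config/data/plugins for each node under BASE
BASE="${BASE:-/tmp/elasticsearch}"   # use /usr/elasticsearch on the real host
for node in es01 es02 es03; do
  for sub in config data plugins; do
    mkdir -p "$BASE/$node/$sub"
  done
done
chmod -R 777 "$BASE"
ls "$BASE/es02"   # config  data  plugins
```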
2. Copy elasticsearch.yml into each of the three config directories, setting a different HTTP port in each copy (es01=9200, es02=9201, es03=9202)
```yaml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

network.host: 0.0.0.0             # sets both bind_host and publish_host
http.port: 9202                   # REST client port; differs per node (9200/9201/9202)
transport.tcp.port: 9300          # port for inter-node cluster communication
node.master: true                 # master-eligible role
node.data: true                   # data role
node.ingest: true                 # ingest role: pre-processes documents before indexing via pipelines, acting like a filter chain
node.max_local_storage_nodes: 1
http.cors.enabled: true           # CORS settings
http.cors.allow-origin: "*"       # CORS settings
```
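Since the three copies differ only in `http.port`, they can be generated from one template with sed. A sketch under assumptions: the template file name, the `OUT` scratch directory, and the three-line template below are illustrative, not from the article (on the real host, point the output at /usr/elasticsearch/es0N/config):

```shell
# generate per-node configs by rewriting http.port in a shared template
OUT="${OUT:-/tmp/es-config-demo}"          # assumed scratch path
mkdir -p "$OUT/es01" "$OUT/es02" "$OUT/es03"
cat > "$OUT/elasticsearch.yml.tmpl" <<'EOF'
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9300
EOF
for i in 1 2 3; do
  port=$((9199 + i))                       # es01=9200, es02=9201, es03=9202
  sed "s/^http.port: .*/http.port: $port/" "$OUT/elasticsearch.yml.tmpl" \
    > "$OUT/es0$i/elasticsearch.yml"
done
grep http.port "$OUT/es03/elasticsearch.yml"   # http.port: 9202
```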
3. Copy kibana.yml to /usr/kibana/config
```yaml
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names
# are both valid values. The default is 'localhost', which usually means remote machines
# will not be able to connect. To allow connections from remote users, set this parameter
# to a non-loopback address.
server.host: "0.0.0.0"
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "123456"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en (the default), Chinese - zh-CN.
i18n.locale: "zh-CN"
```
4. Write docker-compose.yml and copy it to /usr/elasticsearch
```yaml
version: '3'
services:
  es01:
    image: elasticsearch:7.16.3
    container_name: es01
    restart: always
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /usr/elasticsearch/es01/data:/usr/share/elasticsearch/data
      - /usr/elasticsearch/es01/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/elasticsearch/es01/plugins:/usr/share/elasticsearch/plugins
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: elasticsearch:7.16.3
    container_name: es02
    restart: always
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /usr/elasticsearch/es02/data:/usr/share/elasticsearch/data
      - /usr/elasticsearch/es02/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/elasticsearch/es02/plugins:/usr/share/elasticsearch/plugins
    ports:
      - 9201:9201
    networks:
      - elastic
  es03:
    image: elasticsearch:7.16.3
    container_name: es03
    restart: always
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /usr/elasticsearch/es03/data:/usr/share/elasticsearch/data
      - /usr/elasticsearch/es03/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/elasticsearch/es03/plugins:/usr/share/elasticsearch/plugins
    ports:
      - 9202:9202
    networks:
      - elastic
  kibana:
    image: kibana:7.16.3
    container_name: kibana
    restart: always
    depends_on:
      - es01
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
    volumes:
      - /usr/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - elastic
    ports:
      - 5601:5601
networks:
  elastic:
    driver: bridge
```
5. Adjust host kernel parameters
Edit the sysctl configuration:

```shell
vi /etc/sysctl.conf
```

Append this line at the end:

```
vm.max_map_count=262144
```

Apply the change:

```shell
sysctl -p
```
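The edit can also be done non-interactively and idempotently: append the line only if it is not already present. A sketch; `CONF` points at a scratch file here so it can run anywhere, whereas on a real host it would be /etc/sysctl.conf followed by `sysctl -p`:

```shell
# append vm.max_map_count only if no such setting exists yet
CONF="${CONF:-/tmp/sysctl-demo.conf}"   # use /etc/sysctl.conf on the real host
touch "$CONF"
grep -q '^vm.max_map_count' "$CONF" \
  || echo 'vm.max_map_count=262144' >> "$CONF"
cat "$CONF"
```

Running the snippet a second time leaves the file unchanged, so it is safe in provisioning scripts.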
6. Start the containers
```shell
# run from /usr/elasticsearch, where docker-compose.yml was placed
docker-compose up -d
```
7. Install plugins
1. Under the plugins subdirectory of es01, es02, and es03, create two folders: ik and pinyin.
2. Download the plugin builds that match your Elasticsearch version (7.16.3):
   - ik plugin: https://github.com/medcl/elasticsearch-analysis-ik/releases
   - pinyin plugin: https://github.com/medcl/elasticsearch-analysis-pinyin/releases
3. Unzip each archive and copy its contents into every ik and pinyin directory.
4. Restart the cluster.
8. Cluster status
Check cluster health: http://192.168.1.100:9200/_cluster/health?pretty
Show the master node: http://192.168.1.100:9200/_cat/master
List all nodes: http://192.168.1.100:9200/_cat/nodes
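The health endpoint is also easy to script, for example to gate a deployment on the cluster turning green. A sketch: `RESPONSE` stands in for the output of `curl -s http://192.168.1.100:9200/_cluster/health`, and the JSON below is a hand-written example, not captured from a real cluster:

```shell
# extract the "status" field from a _cluster/health response
RESPONSE='{"cluster_name":"es-docker-cluster","status":"green","number_of_nodes":3}'
status=$(echo "$RESPONSE" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"   # cluster status: green
```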
9. Access URLs
1. Elasticsearch

http://192.168.1.100:9200
http://192.168.1.100:9201
http://192.168.1.100:9202

2. Kibana

http://192.168.1.100:5601
Reference: https://zhuanlan.zhihu.com/p/439001624