Part 7: Elasticsearch connected to MySQL is not updating
1. This is the index I created:
"settings":{ "number_of_shards":3, "number_of_replicas":2 }, "mappings":{ "foods":{ "properties":{ "goodsname":{ "type":"keyword" }, "goodsprice":{ "type":"double" }, "goodsnum":{ "type":"integer" }, "goodspath":{ "type":"text" }, "shopname":{ "type":"completion", "analyzer":"simple", "search_analyzer":"simple", "preserve_separators":true }, "updtime":{ "type":"date" }, "goodsrole":{ "type":"integer" }, "createtime":{ "type":"date" }, "uid":{ "type":"integer" }, "goodstype":{ "type":"text" }, "shopid":{ "type":"integer" } } } } }
2. Postman was used to send the request that creates this index (an equivalent curl call is sketched below).
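If you would rather use the command line, the same request can be sent with curl. This is only a sketch: the host localhost:9200 and the index name foods are assumptions (neither is stated above), and only two fields are repeated here, so paste the full JSON body from above when you actually run it.

curl -XPUT 'http://localhost:9200/foods' -H 'Content-Type: application/json' -d '
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 2 },
  "mappings": {
    "foods": {
      "properties": {
        "goodsname":  { "type": "keyword" },
        "goodsprice": { "type": "double" }
      }
    }
  }
}'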
1. Versions
Elasticsearch:
https://www.elastic.co/products/elasticsearch
Version: 6.1
Logstash:
https://www.elastic.co/products/logstash
Version: 2.4.0
Download the required installation packages from the official sites above.
You also need the JDBC driver for the database; here mysql-connector-java-5.1.39.jar is used. Download it from the MySQL website.
For the Elasticsearch setup itself, refer to the earlier posts in this series; it is not covered here.
2. Installing and configuring Logstash
Logstash only needs to be downloaded and unpacked; the main work is writing the configuration file.
Install the logstash-input-jdbc plugin, which handles data synchronization from MySQL, Oracle, and other databases.
On Linux:
[zsz@VS-zsz logstash-2.4.0]$ bin/plugin install logstash-input-jdbc
The use of bin/plugin is deprecated and will be removed in a feature release. Please use bin/logstash-plugin.
Validating logstash-input-jdbc
Installing logstash-input-jdbc
Installation successful
On Windows, downloading and unpacking is enough.
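If the jdbc input turns out not to be bundled in the Windows package, it can presumably be installed the same way from the extracted folder, using the bin/logstash-plugin wrapper that the deprecation notice above points to (a sketch, not verified on this exact release):

cd logstash-2.4.0
bin\logstash-plugin install logstash-input-jdbc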
Configuration file (any file name works; here it is named logstash-mysql.conf):
On Linux, open it with: [zsz@VS-zsz conf]$ vi logstash-mysql.conf
On Windows, create a logstash-mysql.conf file in the bin folder.
Copy in the following:
input {
  jdbc {
    jdbc_driver_library => "/usr/local/logstash-2.4.0/mysql-connector-java-5.1.39.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://192.168.****:3306/******?characterEncoding=UTF-8&useSSL=false"
    jdbc_user => "*****"
    jdbc_password => "*********"
    statement => "SELECT * FROM news limit 0,1"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "* * * * *"
  }
}
filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "192.168.****"
    index => "myindex"
  }
}
3. Start Logstash: from the bin folder, run logstash -f /usr/local/logstash-2.4.0/conf/logstash-mysql.conf
On Linux:
[zsz@VS-zsz conf]$ /usr/local/logstash-2.4.0/bin/logstash -f /usr/local/logstash-2.4.0/conf/logstash-mysql.conf
On Windows, run from the bin folder: logstash -f logstash-mysql.conf (pointing -f at the config file you created there).
This process keeps running because schedule => "* * * * *" is set (run once per minute); to stop it, you have to kill the process.
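Note that with a fixed statement like SELECT * FROM news limit 0,1, every scheduled run pulls the same rows, which is one likely reason new MySQL data never shows up in the index. The jdbc input also supports incremental pulls through a tracking column. Below is a sketch of such an input block; the news table is assumed to have an auto-increment id column, the metadata path is an assumption, and the connection settings are the masked ones from above.

input {
  jdbc {
    jdbc_driver_library => "/usr/local/logstash-2.4.0/mysql-connector-java-5.1.39.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://192.168.****:3306/******?characterEncoding=UTF-8&useSSL=false"
    jdbc_user => "*****"
    jdbc_password => "*********"
    # only pull rows whose id is greater than the value recorded after the previous run
    statement => "SELECT * FROM news WHERE id > :sql_last_value"
    use_column_value => true
    tracking_column => "id"
    # file where the last seen id is stored between runs
    last_run_metadata_path => "/usr/local/logstash-2.4.0/.logstash_jdbc_last_run"
    schedule => "* * * * *"
  }
}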
4. Check whether the data was synced to Elasticsearch
[root@VS-zsz conf]# curl '192.168.31.79:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open test 5 1 0 0 1.5kb 795b
green open myindex 5 1 494 0 924.7kb 457.6kb
[root@VS-zsz conf]# curl '192.168.31.78:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open test 5 1 0 0 1.5kb 795b
green open myindex 5 1 494 0 925kb 457.8kb
[root@VS-zsz conf]# curl '192.168.31.79:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
green open test 5 1 0 0 1.5kb 795b
green open myindex 5 1 494 0 925kb 457.8kb
This shows the data was imported successfully, and with the scheduled job running, the size of the myindex index keeps growing.
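Beyond the index-level counts, you can also look at the documents themselves with a search request, for example against one of the hosts queried above:

curl '192.168.31.79:9200/myindex/_search?size=1&pretty'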
5. Common errors:
(1)Pipeline aborted due to error {:exception=>"LogStash::ConfigurationError", :backtrace=>..................stopping pipeline {:id=>"main"}
Cause: the logstash-mysql.conf file is misconfigured. For Logstash versions >= 2.x, the parameter of the elasticsearch output must be named hosts; writing it as host raises this error. hosts can presumably also take several addresses.
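A sketch of what a multi-node output would look like if hosts is indeed given a list (the addresses are the cluster nodes queried above):

output {
  elasticsearch {
    # hosts accepts a list, so several nodes of the cluster can be named
    hosts => ["192.168.31.78:9200", "192.168.31.79:9200"]
    index => "myindex"
  }
}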
6. The configuration file explained (any file name works; here it is named logstash-mysql.conf):
input {
  stdin { }
  jdbc {
    # database address, port, and database name
    jdbc_connection_string => "jdbc:mysql://localhost:3306/shen"
    # database user
    jdbc_user => "root"
    # database password
    jdbc_password => "rootroot"
    # path to the MySQL JDBC driver jar
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java-5.1.43-bin.jar"
    # driver class name
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    statement => "SELECT * FROM TABLE"
    # for complex queries, the SQL statement can be kept in a file instead:
    # statement_filepath => "filename.sql"
    # polling schedule, same syntax as Linux cron
    schedule => "* * * * *"
  }
}
output {
  stdout {
    codec => json_lines
  }
  elasticsearch {
    hosts => "localhost:9200"
    index => "contacts"
    document_type => "contact"
    document_id => "%{id}"
  }
}
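One detail worth calling out in this version: document_id => "%{id}" pins each MySQL row to a fixed Elasticsearch document id, so repeated scheduled runs overwrite the same documents instead of piling up duplicates. A quick way to look at one synced row (a sketch, assuming a row with id 1 exists):

curl 'localhost:9200/contacts/contact/1?pretty'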