Solr + Tomcat + ZooKeeper Deployment in Practice

I. Installing Solr

Environment: CentOS 7.3, Solr 6.6, ZooKeeper 3.4, Tomcat 8.5, JDK 1.8

For ZooKeeper deployment, see: http://www.cnblogs.com/Sunzz/p/8464284.html

1. Extract Tomcat and Solr under /opt/

[root@solr_1 ~]# tar -xf apache-tomcat-8.5.23.tar.gz -C /opt/
[root@solr_1 ~]# tar -xf solr-6.6.2.tgz    -C /opt/
[root@solr_1 ~]# cd  /opt
[root@solr_1 opt]# ln -sv apache-tomcat-8.5.23 tomcat
[root@solr_1 opt]# ln -sv solr-6.6.2 solr

2. Copy solr-6.6.2/server/solr-webapp/webapp into tomcat/webapps and rename it to solr

[root@solr_1 ~]# cp -r /opt/solr/server/solr-webapp/webapp/ /opt/tomcat/webapps/
[root@solr_1 ~]# mv /opt/tomcat/webapps/webapp  /opt/tomcat/webapps/solr

3. Copy the required jars into the Solr webapp's lib directory under Tomcat

① the jars under solr-6.6.2/server/lib/ext,

② the five jars under solr-6.6.2/server/lib whose names start with metrics (metrics-core-3.2.2.jar, metrics-ganglia-3.2.2.jar, metrics-graphite-3.2.2.jar, metrics-jetty9-3.2.2.jar, metrics-jvm-3.2.2.jar), and

③ solr-dataimporthandler-6.6.2.jar and solr-dataimporthandler-extras-6.6.2.jar under solr-6.6.2/dist/

into /opt/tomcat/webapps/solr/WEB-INF/lib:

[root@solr_1 ~]# cp  /opt/solr/server/lib/ext/*.jar /opt/solr/server/lib/metrics*.jar  /opt/solr/dist/solr-dataimporthandler-*.jar   /opt/tomcat/webapps/solr/WEB-INF/lib/

4. Copy the contents of solr-6.6.2/server/solr into a newly created solr-home directory

[root@solr_1 ~]# mkdir /opt/solr/solr-home
[root@solr_1 ~]#  cp -r /opt/solr/server/solr/* /opt/solr/solr-home/

5. Edit web.xml under /opt/tomcat/webapps/solr/WEB-INF

① Find the <env-entry> element, uncomment it, and set env-entry-value to the solr-home path.

Command:

[root@solr_1 ~]# vim /opt/tomcat/webapps/solr/WEB-INF/web.xml

After the change:

<env-entry>
   <env-entry-name>solr/home</env-entry-name>
   <env-entry-value>/opt/solr/solr-home</env-entry-value>
   <env-entry-type>java.lang.String</env-entry-type>
</env-entry>

② Remove the access restrictions, otherwise requests to Solr fail with an authorization error: comment out both security-constraint elements.

After the change:

<!--
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Disable TRACE</web-resource-name>
      <url-pattern>/</url-pattern>
      <http-method>TRACE</http-method>
    </web-resource-collection>
    <auth-constraint/>
  </security-constraint>
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Enable everything but TRACE</web-resource-name>
      <url-pattern>/</url-pattern>
      <http-method-omission>TRACE</http-method-omission>
    </web-resource-collection>
  </security-constraint>
-->

6. Create a classes folder under /opt/tomcat/webapps/solr/WEB-INF/

and copy solr-6.6.2/server/resources/log4j.properties into it.

Commands:

[root@solr_1 ~]# cd /opt/tomcat/webapps/solr/WEB-INF/
[root@solr_1 WEB-INF]# mkdir classes
[root@solr_1 WEB-INF]# cp -rf /opt/solr/server/resources/log4j.properties ./classes/

7. Create a collection1 folder under solr-home

Copy the conf folder from solr-6.6.2/server/solr/configsets/basic_configs into the new collection1 folder, and create a data folder inside collection1.

[root@solr_1 ~]# mkdir /opt/solr/solr-home/collection1
[root@solr_1 ~]# cp -r /opt/solr/server/solr/configsets/basic_configs/conf/ /opt/solr/solr-home/collection1/
[root@solr_1 ~]# mkdir  /opt/solr/solr-home/collection1/data

Create a file core.properties inside collection1 with the following content:

[root@solr_1 ~]# vim  /opt/solr/solr-home/collection1/core.properties
name=collection1
config=solrconfig.xml
schema=managed-schema
dataDir=data

8. Make Solr's port match Tomcat's

Edit /opt/solr/solr-home/solr.xml and change the hostPort entry to:

<int name="hostPort">${jetty.port:8080}</int>
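
A non-interactive way to make the same change, sketched under the assumption that solr.xml still contains the stock default value ${jetty.port:8983}:

[root@solr_1 ~]# sed -i 's/${jetty.port:8983}/${jetty.port:8080}/' /opt/solr/solr-home/solr.xml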

9. Start Tomcat

Then open http://192.168.29.110:8080/solr/index.html in a browser.
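
Starting Tomcat and checking the page from the command line might look like this (a sketch, using the /opt/tomcat symlink created in step 1):

[root@solr_1 ~]# /opt/tomcat/bin/startup.sh
[root@solr_1 ~]# tail -n 50 /opt/tomcat/logs/catalina.out     # check for startup errors
[root@solr_1 ~]# curl -I http://192.168.29.110:8080/solr/index.html     # expect HTTP/1.1 200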

II. Configuring the IKAnalyzer tokenizer

1. Extract ikanalyzer-solr6.5.zip

[root@solr_1 ~]# unzip ikanalyzer-solr6.5.zip
[root@solr_1 ~]# mv ikanalyzer-solr6.5 /opt/

Copy ext.dic, IKAnalyzer.cfg.xml and stopword.dic into /opt/tomcat/webapps/solr/WEB-INF/classes:

[root@solr_1 ~]# mkdir -p /opt/tomcat/webapps/solr/WEB-INF/classes
[root@solr_1 ~]# cp /opt/ikanalyzer-solr6.5/ikanalyzer-solr5/ext.dic /opt/ikanalyzer-solr6.5/ikanalyzer-solr5/IKAnalyzer.cfg.xml /opt/ikanalyzer-solr6.5/ikanalyzer-solr5/stopword.dic  /opt/tomcat/webapps/solr/WEB-INF/classes

Copy ik-analyzer-solr5-5.x.jar and solr-analyzer-ik-5.1.0.jar into /opt/tomcat/webapps/solr/WEB-INF/lib:

[root@solr_1 ~]# cp /opt/ikanalyzer-solr6.5/ikanalyzer-solr5/*.jar  /opt/tomcat/webapps/solr/WEB-INF/lib/

2. Open the managed-schema file under solr-home/collection1/conf

[root@solr_1 ~]# vim /opt/solr/solr-home/collection1/conf/managed-schema

Add the following before the closing </schema> tag:

<!-- IK tokenizer -->
<fieldType name="text_ik" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="false"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="true"/>
  </analyzer>
</fieldType>

3. Restart Tomcat

Open http://192.168.29.110:8080/solr/index.html and confirm the text_ik type is available.
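
The field type can also be checked without the UI through Solr's built-in field-analysis handler; a sketch (the sample text is arbitrary):

[root@solr_1 ~]# curl -G "http://192.168.29.110:8080/solr/collection1/analysis/field" --data-urlencode "analysis.fieldtype=text_ik" --data-urlencode "analysis.fieldvalue=中华人民共和国" --data-urlencode "wt=json"

If IK is loaded correctly, the response lists the Chinese tokens produced by the index-time analyzer.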

III. Pinyin configuration

1. Copy the required jar files

Copy pinyinTokenFilter-1.1.0-RELEASE.jar, pinyinAnalyzer4.3.1.jar and pinyin4j-2.5.0.jar into /opt/tomcat/webapps/solr/WEB-INF/lib:

[root@solr_1 ~]# cp  /opt/ikanalyzer-solr6.5/pinyin*  /opt/tomcat/webapps/solr/WEB-INF/lib/

2. Edit the managed-schema file under solr-home/collection1/conf

(after the change)

<!-- IK tokenizer with pinyin -->
<fieldType name="text_ik" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="false"/>
    <filter class="top.pinyin.index.solr.PinyinTokenFilterFactory" pinyin="true" isFirstChar="true" minTermLenght="2"/>
    <filter class="com.shentong.search.analyzers.PinyinNGramTokenFilterFactory" minGram="2" maxGram="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="true"/>
  </analyzer>
</fieldType>

(The two <filter> lines in the index analyzer are the additions; the rest is the IK configuration from the previous section.)

Restart Tomcat and test.

IV. Synonyms

1. Modify the text_ik configuration in managed-schema

<fieldType name="text_ik" class="solr.TextField">
   <analyzer type="index">
      <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="false"/>
      <filter class="top.pinyin.index.solr.PinyinTokenFilterFactory" pinyin="true" isFirstChar="true" minTermLenght="2"/>
      <filter class="com.shentong.search.analyzers.PinyinNGramTokenFilterFactory" minGram="2" maxGram="20"/>
      <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
   <analyzer type="query">
      <tokenizer class="org.apache.lucene.analysis.ik.IKTokenizerFactory" useSmart="true"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
</fieldType>

2. Add synonyms to solr-home/collection1/conf/synonyms.txt

hell,二是

诛仙,诛仙2,梦幻诛仙
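
For the edited synonyms.txt to take effect, the core has to be reloaded (or Tomcat restarted); a sketch using the CoreAdmin API, assuming the single-core setup from part I:

[root@solr_1 ~]# curl "http://192.168.29.110:8080/solr/admin/cores?action=RELOAD&core=collection1"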

V. Suggestions (autocomplete)

1. Add the suggestion field

First we need a field to build suggestions from. Assume here that suggestions are based on the name field; the configuration (in managed-schema) is as follows:

<field name="name" type="text_ik" multiValued="false" indexed="true" stored="true"/>
<field name="suggestion" type="text_suggest" indexed="true" stored="true" multiValued="true" /> 
<copyField source="name" dest="suggestion"/>

The suggestion field is the field the suggester reads from. Here suggestion is given the custom type text_suggest, whose purpose and configuration are described below, and copyField copies name into suggestion. Why not suggest directly from name instead of creating a dedicated field, and even a dedicated field type? As the configuration shows, name is tokenized with IKAnalyzer; if we suggested directly from name, the suggestions would be the tokenized fragments. For example, for a record we would like to suggest as "先吃水果然后吃雪糕" (eat fruit first, then ice cream), the suggestion returned would be just "先吃" (eat first).

2. Configure the suggestion field type

Next, create a dedicated field type to work with the suggest module. Here it is named text_suggest; the configuration (in managed-schema) is:

<fieldType name="text_suggest" class="solr.TextField">
   <analyzer type="index">
     <tokenizer class="solr.KeywordTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="solr.KeywordTokenizerFactory"/>
     <filter class="solr.LowerCaseFilterFactory"/>
   </analyzer>
</fieldType>

Because we want suggestions over the whole field value, KeywordTokenizerFactory is used as the tokenizer, and LowerCaseFilterFactory makes matching case-insensitive. Replace the analyzer with whatever suits your needs.

3. Configure the suggest component

With the schema in place, we now configure the suggest module.

First add the suggest search component. Edit solrconfig.xml and add the following:

<searchComponent name="suggest" class="solr.SuggestComponent">
        <lst name="suggester">
            <str name="name">suggest</str>
            <str name="lookupImpl">AnalyzingLookupFactory</str>
            <str name="dictionaryImpl">DocumentDictionaryFactory</str>
            <str name="field">suggestion</str>
            <str name="suggestAnalyzerFieldType">text_suggest</str>
            <str name="buildOnStartup">false</str>
        </lst>
</searchComponent>

Notes on this configuration:

  name is the name of this suggester;
  lookupImpl is the lookup implementation (the default is JaspellLookupFactory);
  dictionaryImpl is the dictionary implementation;
  field is the field to suggest from;
  suggestAnalyzerFieldType sets the field type whose analyzer is used for suggestions (required);
  buildOnStartup controls whether the suggester index is built at startup.

See https://cwiki.apache.org/confluence/display/solr/Suggester for full configuration details.

4. Configure the requestHandler

Next, configure a requestHandler for the suggest module. Edit solrconfig.xml and add the following:

<requestHandler name="/suggest" class="org.apache.solr.handler.component.SearchHandler">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">suggest</str>
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>

Parameters used in this configuration: suggest simply enables the component and must be true;

suggest.dictionary is the dictionary used by the suggest operation and should match the name given in the suggester configuration above;

suggest.count is the number of suggestions to return, 10 here.

Full configuration options are in the Solr reference guide: https://lucene.apache.org/solr/guide/6_6/suggester.html

5. Build the suggester index

The suggest module is now fully configured. If buildOnStartup was set to false in the suggester configuration, the suggester index has to be built manually once, with a request of the form:

http://192.168.29.110:8080/solr/collection1/suggest?suggest=true&suggest.dictionary=suggest&wt=json&suggest.q=Ath&suggest.build=true

6. Test
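
A hedged example of a test request against the /suggest handler configured above (the prefix value is arbitrary sample input):

[root@solr_1 ~]# curl -G "http://192.168.29.110:8080/solr/collection1/suggest" --data-urlencode "suggest=true" --data-urlencode "suggest.dictionary=suggest" --data-urlencode "suggest.q=诛仙" --data-urlencode "wt=json"

The response contains a suggest section listing documents whose suggestion field starts with the given prefix.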

VI. Integrating ZooKeeper

This walkthrough uses ZooKeeper 3.4.10.

1. Upload the configuration files from solr-home to the ZooKeeper ensemble

Upload them with the zkcli.sh client shipped with Solr:

[root@solr_1 ~]# cd /opt/solr/server/scripts/cloud-scripts/
[root@solr_1 cloud-scripts]# ./zkcli.sh -zkhost 192.168.29.110:2181,192.168.29.120:2181,192.168.29.130:2181 -cmd upconfig -confdir /opt/solr/solr-home/collection1/conf/ -confname myconf

Check that the configuration files were uploaded successfully:

[root@bogon bin]# bash /usr/local/zookeeper/zoo1/zookeeper-3.4.10/bin/zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[configs, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /configs
[myconf]
[zk: localhost:2181(CONNECTED) 2] ls /configs/myconf
[admin-extra.menu-top.html, currency.xml, protwords.txt, mapping-FoldToASCII.txt, _schema_analysis_synonyms_english.json, _rest_managed.json, solrconfig.xml, _schema_analysis_stopwords_english.json, stopwords.txt, lang, spellings.txt, mapping-ISOLatin1Accent.txt, admin-extra.html, xslt, synonyms.txt, scripts.conf, update-script.js, velocity, elevate.xml, admin-extra.menu-bottom.html, clustering, schema.xml]

2. On every Solr node, edit catalina.sh under Tomcat's bin directory

Add a -DzkHost entry specifying the ZooKeeper server addresses:

JAVA_OPTS="$JAVA_OPTS $JSSE_OPTS"

# Register custom URL handlers
# Do this here so custom URL handles (specifically 'war:...') can be used in the security policy
JAVA_OPTS="$JAVA_OPTS -Djava.protocol.handler.pkgs=org.apache.catalina.webresources"
JAVA_OPTS="$JAVA_OPTS -DzkHost=192.168.29.110:2181,192.168.29.120:2181,192.168.29.130:2181"

(The last JAVA_OPTS line, the one defining -DzkHost, is the addition.)

3. Restart Tomcat

 

4. Use the Collections management page

Add a collection through the admin UI (Collections > Add Collection).

Field descriptions:

  config set: the configuration set to use (the one uploaded to ZooKeeper, e.g. myconf)
  numShards: the number of shards
  replicationFactor: the number of nodes serving each shard (must not exceed the total number of nodes)
  Show advanced: show the advanced settings
  maxShardsPerNode: the maximum number of shards per node

(Steps 5 and 6 are optional.)

5. Create a collection with two shards, each with a leader and one replica.

Open in a browser:

http://192.168.29.110:8080/solr/admin/collections?action=CREATE&name=collection2&numShards=2&replicationFactor=2

Parts of the URL to adjust:

ip: the server IP

name: the collection name

numShards: how many shards the collection has

replicationFactor: the number of nodes serving each shard (must not exceed the total number of nodes)

6. Delete collection1.

http://192.168.29.110:8080/solr/admin/collections?action=DELETE&name=collection1

Parts of the URL to adjust:

ip: the server IP

name: the collection name

VII. Using the Solr cluster

1. Use SolrJ to work with the clustered index

Add the Solr client dependency to pom.xml:

<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>6.6.0</version>
</dependency>

Code:

package com.demo.util.solr;
import java.io.IOException; 
import java.util.ArrayList; 
import java.util.Collection;
import org.apache.solr.client.solrj.SolrClient; 
import org.apache.solr.client.solrj.SolrQuery; 
import org.apache.solr.client.solrj.SolrServerException; 
import org.apache.solr.client.solrj.impl.CloudSolrClient; 
import org.apache.solr.client.solrj.response.QueryResponse; 
import org.apache.solr.common.SolrDocument; 
import org.apache.solr.common.SolrDocumentList; 
import org.apache.solr.common.SolrInputDocument; 

// SolrCloud index add/delete/query example

public class SolrCloudTest {
    private static CloudSolrClient cloudSolrClient;
    private static synchronized CloudSolrClient getCloudSolrClient(final String zkHost) { 
        if (cloudSolrClient == null) { 
            try { 
                cloudSolrClient = new CloudSolrClient(zkHost); 
            } catch (Exception e) { 
                e.printStackTrace(); 
            } 
        } 
        return cloudSolrClient; 
    }
    private static void addIndex(SolrClient solrClient) { 
        try { 
            SolrInputDocument doc1 = new SolrInputDocument(); 
            doc1.addField("id", "421245251215121452521251"); 
            doc1.addField("name", "张三"); 
            doc1.addField("age", 30); 
            doc1.addField("desc", "张三是个农民,勤劳致富,奔小康"); 
 
            SolrInputDocument doc2 = new SolrInputDocument(); 
            doc2.addField("id", "4224558524254245848524243"); 
            doc2.addField("name", "李四"); 
            doc2.addField("age", 45); 
            doc2.addField("desc", "李四是个企业家,白手起家,致富一方"); 

            SolrInputDocument doc3 = new SolrInputDocument(); 
            doc3.addField("id", "2224558524254245848524299"); 
            doc3.addField("name", "王五"); 
            doc3.addField("age", 60); 
            doc3.addField("desc", "王五好吃懒做,溜须拍马,跟着李四,也过着小康的日子"); 

            Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>(); 
            docs.add(doc1); 
            docs.add(doc2); 
            docs.add(doc3); 
            solrClient.add(docs); 
            solrClient.commit(); 
        } catch (SolrServerException e) { 
            System.out.println("Add docs Exception !!!"); 
            e.printStackTrace(); 
        } catch (IOException e) { 
            e.printStackTrace(); 
        } catch (Exception e) { 
            System.out.println("Unknowned Exception!!!!!"); 
            e.printStackTrace(); 
        } 
    }

    public static void search(SolrClient solrClient, String queryString) {
        SolrQuery query = new SolrQuery();
        query.setQuery(queryString);
        try { 
            QueryResponse response = solrClient.query(query); 
            SolrDocumentList docs = response.getResults();
            System.out.println("文档个数:" + docs.getNumFound()); 
            System.out.println("查询时间:" + response.getQTime()); 
 
            for (SolrDocument doc : docs) { 
                String id = (String) doc.getFieldValue("id"); 
                String name = (String) doc.getFieldValue("name"); 
                Integer age = (Integer) doc.getFieldValue("age"); 
                String desc = (String) doc.getFieldValue("desc"); 
                System.out.println("id: " + id); 
                System.out.println("name: " + name); 
                System.out.println("age: " + age); 
                System.out.println("desc: " + desc); 
                System.out.println(); 
            } 
        } catch (SolrServerException e) { 
            e.printStackTrace(); 
        } catch (Exception e) { 
            System.out.println("Unknowned Exception!!!!"); 
            e.printStackTrace(); 
        } 
    }

    public static void deleteAllIndex(SolrClient solrClient) { 
        try { 
            solrClient.deleteByQuery("*:*");// delete everything! 
            solrClient.commit(); 
        } catch (SolrServerException e) { 
            e.printStackTrace(); 
        } catch (IOException e) { 
            e.printStackTrace(); 
        } catch (Exception e) { 
            System.out.println("Unknowned Exception !!!!"); 
            e.printStackTrace(); 
        } 
    }

    public static void main(String[] args) throws IOException { 
         final String zkHost = "192.168.29.110:2181,192.168.29.120:2181,192.168.29.130:2181";
         final String defaultCollection = "collection1";   
         final int zkClientTimeout = 20000;   
         final int zkConnectTimeout = 1000;

         CloudSolrClient cloudSolrClient = getCloudSolrClient(zkHost);          
        System.out.println("The CloudSolrClient instance has been created!");
        cloudSolrClient.setDefaultCollection(defaultCollection);   
        cloudSolrClient.setZkClientTimeout(zkClientTimeout);   
        cloudSolrClient.setZkConnectTimeout(zkConnectTimeout);                    
        cloudSolrClient.connect();   

        System.out.println("The cloud Server has been connected !!!!");  

        // add documents to the index
        SolrCloudTest.addIndex(cloudSolrClient);

        // query
        SolrCloudTest.search(cloudSolrClient, "name:李四");

        // delete everything, then query again to confirm
        SolrCloudTest.deleteAllIndex(cloudSolrClient);
        SolrCloudTest.search(cloudSolrClient, "name:李四");
        cloudSolrClient.close();
    }
}

2. In the Solr admin page, add the index fields "name", "age" and "desc" to collection1.

Use the IK field type text_ik added earlier for name and desc,

and int for age.
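
Because the collection uses a managed schema, the same fields can also be added from the command line through the Schema API; a sketch (text_ik and int are the field types configured earlier):

[root@solr_1 ~]# curl -X POST -H 'Content-Type: application/json' -d '{"add-field":{"name":"name","type":"text_ik","indexed":true,"stored":true}}' http://192.168.29.110:8080/solr/collection1/schema
[root@solr_1 ~]# curl -X POST -H 'Content-Type: application/json' -d '{"add-field":{"name":"age","type":"int","indexed":true,"stored":true}}' http://192.168.29.110:8080/solr/collection1/schema
[root@solr_1 ~]# curl -X POST -H 'Content-Type: application/json' -d '{"add-field":{"name":"desc","type":"text_ik","indexed":true,"stored":true}}' http://192.168.29.110:8080/solr/collection1/schema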

 

VIII. Indexing data from a database

Database host and credentials:

    mysql: 192.168.29.100:3306
    user: root
    password: 123456

1. Copy the required jar files

Copy Solr's own solr-dataimporthandler-6.6.2.jar and solr-dataimporthandler-extras-6.6.2.jar, plus mysql-connector-java-5.1.44.jar, into the Solr webapp's lib directory under Tomcat.

2. Edit solrconfig.xml

Find "<requestHandler name="/select" class="solr.SearchHandler">" and add the following configuration above it:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>

3. Create data-config.xml in the same directory as solrconfig.xml

Full configuration:

<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
  <dataSource name="source1" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://192.168.29.100:3306/test1" user="root" password="123456"/>
  <document name="salesDoc">
    <entity pk="id" dataSource="source1" name="user"
      query="select id,name,sex,age,insertTime from user"
      deltaQuery="select id,name,sex,age,insertTime from user where insertTime >'${dih.last_index_time}'">
      <field name="id" column="id"/>
      <field name="name" column="name"/>
      <field name="sex" column="sex"/>
      <field name="age" column="age"/>
      <field name="insertTime" column="insertTime"/>
    </entity>
  </document>
</dataConfig>

Configuration notes:

  dataSource: defines the data source
  document: the basic unit of information in Solr; a set of data describing one thing
  entity: corresponds to a database table
  pk: the table's primary key
  dataSource (on the entity): which data source to use
  name: the table name
  query: the SQL used for a full import
  deltaQuery: the SQL used for an incremental import
  ${dih.last_index_time}: the time of the last import
  field: a table column

4. Create dataimport.properties in the same directory as solrconfig.xml

Contents of dataimport.properties:

#Mon Nov 06 13:03:53 CST 2017

last_index_time=2017-11-06 13\:03\:50

user.last_index_time=2017-11-06 13\:03\:50

user.last_index_time records the last import time for the user table specifically (this per-table form is recommended, because multiple tables can then be updated independently).

5. Edit managed-schema

    <field name="id" type="int" indexed="true" stored="true" required="true" multiValued="false" />
    <field name="name" type="text_ik" indexed="true" stored="true"/>
    <field name="sex" type="int" indexed="true" stored="true"/>
    <field name="age" type="int" indexed="true" stored="true"/>
    <field name="insertTime" type="int" indexed="true" stored="true"/>

6. Upload the configuration

If the Solr configuration has already been uploaded to ZooKeeper, repeat step 1 of "Integrating ZooKeeper" to upload the changed files. (You can also use the "Updating the Solr configuration in ZooKeeper" command at the end of this post.)

7. Restart Tomcat and run a data import

The Dataimport options (also usable directly as request parameters, as sketched after this list):

  full-import: rebuild the full index
  delta-import: incremental import
  clean: delete the existing index first
  commit: commit after the import
  entity: the entity (table) defined in data-config.xml to import
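
For example, a full import of the user entity can be triggered from the command line like this (a sketch; user is the entity name from data-config.xml):

[root@solr_1 ~]# curl "http://192.168.29.110:8080/solr/collection1/dataimport?command=full-import&clean=true&commit=true&entity=user"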

8. Verify that it worked
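
One quick check, sketched here, is to query the collection and compare numFound with the number of rows in the user table:

[root@solr_1 ~]# curl "http://192.168.29.110:8080/solr/collection1/select?q=*:*&wt=json&rows=5"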

IX. Scheduled incremental index updates

1. Copy solr-dataimportscheduler-1.1.jar into the Solr webapp's lib directory under Tomcat

2. Edit WEB-INF/web.xml of the Solr webapp under Tomcat

Add the following before the servlet elements:

<listener>
  <listener-class>org.apache.solr.handler.dataimport.scheduler.ApplicationListener</listener-class>
</listener>

3. Create a conf folder under solr-home

Inside it, create a new dataimport.properties.

dataimport.properties configuration:

[root@solr_1 ~]# vim /opt/solr/solr-home/conf/dataimport.properties
#################################################
#                                               #
#       dataimport scheduler properties         #
#                                               #
#################################################
#  to sync or not to sync
#  1 - active; anything else - inactive
syncEnabled=1
#  which cores to schedule
#  in a multi-core environment you can decide which cores you want syncronized
#  leave empty or comment it out if using single-core deployment
syncCores=collection1
#  solr server name or IP address
#  [defaults to localhost if empty]
server=localhost
#  solr server port
#  [defaults to 80 if empty]
port=8080
#  application name/context
#  [defaults to current ServletContextListener's context (app) name]
webapp=solr
#  URL params [mandatory]
#  remainder of URL
#  delta import
params=/dataimport?command=delta-import&clean=false&commit=true
#  schedule interval
#  number of minutes between two runs
#  [defaults to 30 if empty]
interval=1
#  interval, in minutes, between full index rebuilds; default 7200 (i.e. five days);
#  empty, 0, or commented out means the index is never rebuilt
reBuildIndexInterval=7200
#  parameters for rebuilding the index
reBuildIndexParams=/dataimport?command=full-import&clean=true&commit=true
#  start time for the rebuild schedule; the first actual run = reBuildIndexBeginTime + reBuildIndexInterval*60*1000
#  two formats are accepted: 2012-04-11 03:10:00 or 03:10:00; with the latter the date defaults to the service start date
reBuildIndexBeginTime=03:10:00

4. Restart Tomcat and verify

Insert a row into MySQL, wait about a minute, then check in the Solr admin page whether the new record has been indexed.
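
A sketch of such a test, assuming the user table has the columns referenced in data-config.xml (adjust to the real table definition):

[root@solr_1 ~]# mysql -h 192.168.29.100 -P 3306 -u root -p123456 test1 -e "INSERT INTO user (name, sex, age, insertTime) VALUES ('delta-test', 1, 20, NOW());"

Within about a minute the scheduler's delta-import should pick up the new row and it will appear in query results.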

Updating the Solr configuration in ZooKeeper

After changing schema.xml or other configuration files there is no need to log into ZooKeeper and delete the old files; uploading again simply overwrites them. Upload directly with:

[root@solr_1 ~]# cd /opt/solr/server/scripts/cloud-scripts/
[root@solr_1 cloud-scripts]# ./zkcli.sh -zkhost 192.168.29.110:2181,192.168.29.120:2181,192.168.29.130:2181 -cmd upconfig -confdir /opt/solr/solr-home/collection1/conf/ -confname myconf

This is the command to use whenever the configuration changes after it has already been uploaded to ZooKeeper.
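
After the upload, the running collection still has to pick up the new configuration; restarting Tomcat works, or the collection can be reloaded through the Collections API, sketched here:

[root@solr_1 ~]# curl "http://192.168.29.110:8080/solr/admin/collections?action=RELOAD&name=collection1"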
