Integrating IKAnalyzer with Solr 5.3.1
Solr 5.3.1 does not ship with a Chinese tokenizer, and the segmentation produced by mmseg4j was not satisfactory, so I went with IK instead. Following http://www.superwu.cn/2015/05/08/2134/, I dropped a jar found through Google into the Solr directory, and it immediately threw the exception below.
严重: Servlet.service() for servlet [default] in context with path [/solr] threw exception [Filter execution threw an exception] with root cause
java.lang.AbstractMethodError
    at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:179)
    at org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeValue(AnalysisRequestHandlerBase.java:91)
    at org.apache.solr.handler.FieldAnalysisRequestHandler.analyzeValues(FieldAnalysisRequestHandler.java:221)
    at org.apache.solr.handler.FieldAnalysisRequestHandler.handleAnalysisRequest(FieldAnalysisRequestHandler.java:182)
    at org.apache.solr.handler.FieldAnalysisRequestHandler.doAnalysis(FieldAnalysisRequestHandler.java:102)
    at org.apache.solr.handler.AnalysisRequestHandlerBase.handleRequestBody(AnalysisRequestHandlerBase.java:63)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
    at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.doRun(AprEndpoint.java:2522)
    at org.apache.tomcat.util.net.AprEndpoint$SocketProcessor.run(AprEndpoint.java:2511)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
At first I assumed it was a configuration problem, but no amount of reconfiguring helped. After reading the source I found that in the Lucene version bundled with Solr 5.3.1, the Analyzer class dropped the second parameter of createComponents, so modifying the IK source is unavoidable. The changes are described at http://iamyida.iteye.com/blog/2193513, or you can simply grab the already-modified source and rebuild the jar.
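For reference, the signature change that causes this looks roughly as follows (paraphrased from the Lucene 4.x and 5.x Analyzer sources; the comments are mine):

// Lucene 4.x: the framework passed the Reader in directly
protected abstract TokenStreamComponents createComponents(String fieldName, Reader reader);

// Lucene 5.x (what Solr 5.3.1 ships with): the Reader parameter is gone;
// input now reaches the Tokenizer later, via Tokenizer.setReader()
protected abstract TokenStreamComponents createComponents(String fieldName);

An IK jar compiled against the 4.x signature no longer overrides the abstract 5.x method, which is why the failure only shows up at runtime as an AbstractMethodError rather than at compile or load time.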
The main change is in IKAnalyzer.java:
/**
 * Override Analyzer.createComponents to build the tokenization components
 */
@Override
protected TokenStreamComponents createComponents(String text) {
    Reader reader = new BufferedReader(new StringReader(text));
    Tokenizer _IKTokenizer = new IKTokenizer(reader, this.useSmart());
    return new TokenStreamComponents(_IKTokenizer);
}
Add the following constructor to IKTokenizer.java:
public IKTokenizer(AttributeFactory factory, boolean useSmart) {
    super(factory);
    offsetAtt = addAttribute(OffsetAttribute.class);
    termAtt = addAttribute(CharTermAttribute.class);
    typeAtt = addAttribute(TypeAttribute.class);
    // input is the Reader field inherited from Tokenizer; Lucene supplies it via setReader()
    _IKImplement = new IKSegmenter(input, useSmart);
}
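Note that createComponents() above still calls a constructor that takes a Reader. That constructor is not shown in this post; based on the referenced article it presumably looks something like the sketch below (my reconstruction, not the actual file). Since the Lucene 5.x Tokenizer no longer accepts a Reader in its constructors, the Reader argument is effectively ignored and the real input arrives later through setReader():

public IKTokenizer(Reader in, boolean useSmart) {
    // 'in' is kept only so the old 4.x-style call site compiles;
    // the actual input is the inherited 'input' field set by the framework
    offsetAtt = addAttribute(OffsetAttribute.class);
    termAtt = addAttribute(CharTermAttribute.class);
    typeAtt = addAttribute(TypeAttribute.class);
    _IKImplement = new IKSegmenter(input, useSmart);
}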
Everything else is just small scattered changes; see the modified source files for details.
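Once the modified jar is rebuilt, it can be sanity-checked outside Solr with a few lines of plain Lucene code. This is only a minimal sketch (the class name and sample sentence are mine, and it assumes the modified analyzer keeps the usual IKAnalyzer(boolean useSmart) constructor); run it with the rebuilt IK-Analyzer-5.3.1.jar and the Lucene 5.3.1 jars on the classpath:

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.wltea.analyzer.lucene.IKAnalyzer;

public class IKSmokeTest {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new IKAnalyzer(true);   // true = smart segmentation mode
        try (TokenStream ts = analyzer.tokenStream("content", "这是一个中文分词的测试")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());   // print each segmented term
            }
            ts.end();
        }
        analyzer.close();
    }
}

If the rebuild is correct, this prints the segmented terms instead of throwing the AbstractMethodError shown earlier.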
Create a new project (IK-Analyzer-extra in the attachments) and add a factory class, IKTokenizerFactory, to make the integration easier to extend and maintain.
package org.wltea.analyzer.util;

import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;
import org.wltea.analyzer.lucene.IKTokenizer;

public class IKTokenizerFactory extends TokenizerFactory {

    private boolean useSmart;

    public IKTokenizerFactory(Map<String, String> args) {
        super(args);
        useSmart = getBoolean(args, "useSmart", false);
    }

    @Override
    public Tokenizer create(AttributeFactory attributeFactory) {
        Tokenizer tokenizer = new IKTokenizer(attributeFactory, useSmart);
        return tokenizer;
    }
}
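The factory can also be exercised directly, the same way Solr constructs it from the schema: pass the attributes as a map and ask it for a Tokenizer. A small illustrative sketch (the demo class name is mine):

import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.util.AttributeFactory;
import org.wltea.analyzer.util.IKTokenizerFactory;

public class IKTokenizerFactoryDemo {
    public static void main(String[] args) {
        Map<String, String> factoryArgs = new HashMap<>();
        factoryArgs.put("useSmart", "true");   // same attribute as in schema.xml below
        IKTokenizerFactory factory = new IKTokenizerFactory(factoryArgs);
        Tokenizer tokenizer = factory.create(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY);
        System.out.println("created: " + tokenizer.getClass().getName());
    }
}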
Finally, add the following configuration to schema.xml:
<fieldType name="text_ik" class="solr.TextField">
    <!-- analyzer used at index time -->
    <analyzer type="index">
        <tokenizer class="org.wltea.analyzer.util.IKTokenizerFactory" useSmart="true"/>
    </analyzer>
    <!-- analyzer used at query time -->
    <analyzer type="query">
        <tokenizer class="org.wltea.analyzer.util.IKTokenizerFactory" useSmart="false"/>
    </analyzer>
</fieldType>
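Any field that should be segmented with IK can then reference this type, for example (the field name content is just a placeholder; use whatever your schema actually defines):

<field name="content" type="text_ik" indexed="true" stored="true"/>

Remember to reload the core or restart Solr after editing schema.xml so the new field type takes effect.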
Finally, copy IK-Analyzer-5.3.1.jar and IK-Analyzer-extra-5.3.1.jar into the lib directory of the Solr web application and you are done.
By the way, note that the IK source code has moved to http://git.oschina.net/wltea/IK-Analyzer-2012FF/.
Project files:
http://pan.baidu.com/s/1skv1jCp
http://pan.baidu.com/s/1c1o0gI8
References:
http://iamyida.iteye.com/blog/2220474
http://iamyida.iteye.com/blog/2193513