A few notes on using Stanford CoreNLP

1. Maven dependencies for Stanford CoreNLP (JDK 1.8 is required); a minimal usage sketch follows the dependency block.

<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.6.0</version>
</dependency>
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.6.0</version>
    <classifier>models</classifier>
</dependency>
<dependency>
    <groupId>edu.stanford.nlp</groupId>
    <artifactId>stanford-corenlp</artifactId>
    <version>3.6.0</version>
    <classifier>models-chinese</classifier>
</dependency>
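With these three artifacts on the classpath, a Chinese pipeline can be built from the StanfordCoreNLP-chinese.properties file bundled in the models-chinese jar. The following is only a minimal sketch of that setup; the class name and sample sentence are mine.

import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.io.IOException;
import java.util.Properties;

public class ChinesePipelineDemo {
    public static void main(String[] args) throws IOException {
        // Load the default Chinese configuration shipped in the models-chinese jar.
        Properties props = new Properties();
        props.load(IOUtils.readerFromString("StanfordCoreNLP-chinese.properties"));

        // Build the pipeline and annotate one sample sentence.
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation doc = new Annotation("斯坦福大学的自然语言处理工具包支持中文处理。");
        pipeline.annotate(doc);

        // Print the annotations in a human-readable form.
        pipeline.prettyPrint(doc, System.out);
    }
}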

2. Stanford CoreNLP itself supports a lot: word segmentation, sentence splitting, POS tagging, named entity recognition, syntactic parsing, and more. Enabling everything, however, makes performance very poor; in our actual usage, for example, annotators such as ner, parse, mention, and coref can simply be left out until they are really needed.

annotators = segment, ssplit, pos, lemma, ner, parse, mention, coref

The reason is that the parsing-related stages involve complex analysis with very high time complexity; a trimmed configuration is sketched below.
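A rough sketch of the comparison, assuming the same 3.6.0 Chinese models as above: two pipelines are built from the bundled defaults, one with the full annotator list and one with ner, parse, mention, and coref dropped, and a single annotate call is timed for each. Exact numbers depend on hardware and on the text, so treat this only as a way to see the gap, not as a benchmark.

import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.io.IOException;
import java.util.Properties;

public class AnnotatorCostDemo {

    // Build a pipeline from the bundled Chinese defaults with the given annotator list,
    // then time how long a single annotate() call takes on the text.
    private static long annotateMillis(String annotators, String text) throws IOException {
        Properties props = new Properties();
        props.load(IOUtils.readerFromString("StanfordCoreNLP-chinese.properties"));
        props.setProperty("annotators", annotators);
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation doc = new Annotation(text);
        long start = System.currentTimeMillis();
        pipeline.annotate(doc);
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws IOException {
        String text = "斯坦福大学的自然语言处理工具包支持分词、分句、词性标注、命名实体识别和句法分析，"
                + "但句法分析和指代消解在长句上非常耗时。";

        long full = annotateMillis("segment, ssplit, pos, lemma, ner, parse, mention, coref", text);
        long trimmed = annotateMillis("segment, ssplit, pos, lemma", text);

        System.out.println("full pipeline:    " + full + " ms");
        System.out.println("trimmed pipeline: " + trimmed + " ms");
    }
}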

Link:

https://stackoverflow.com/questions/29543274/stanford-nlp-annotate-text-is-very-slow

Is the text a single long sentence? The runtime of the parser is O(n^3) with respect to the length of the sentence, which gets quite slow on sentences longer than ~40 words. If you remove the "parse, dcoref, regexner" annotators, does it speed up? And, does it then slow down again if you re-add "parse"?
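If the parse annotator cannot be dropped entirely, another mitigation suggested by the O(n^3) behaviour described above is to cap the sentence length the parser will attempt. The sketch below sets parse.maxlen to 40 so that longer sentences are not fully parsed; treat the property name and its exact behaviour in 3.6.0 as an assumption to verify against the documentation of the version you use.

import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.io.IOException;
import java.util.Properties;

public class CappedParseDemo {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.load(IOUtils.readerFromString("StanfordCoreNLP-chinese.properties"));

        // Assumed knob: skip full parsing for sentences longer than ~40 tokens,
        // since parser runtime grows roughly as O(n^3) in sentence length.
        props.setProperty("parse.maxlen", "40");

        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        // ... use the pipeline as usual ...
    }
}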

3. The library's Properties-based configuration is quite convenient to use, and the pattern is worth borrowing in your own development; a sketch of it follows.
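What is worth borrowing is the pattern itself: ship sensible defaults as a .properties resource inside the jar and let callers override individual keys programmatically, much as CoreNLP does with its bundled properties files. Below is a minimal sketch of that pattern for your own component; the class and resource names are hypothetical.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class MyComponentConfig {
    // Hypothetical resource bundled in your own jar, mirroring how CoreNLP
    // ships StanfordCoreNLP-chinese.properties inside the models-chinese jar.
    private static final String DEFAULTS = "my-component-defaults.properties";

    // Load the bundled defaults, then let caller-supplied keys override them.
    public static Properties load(Properties overrides) throws IOException {
        Properties merged = new Properties();
        try (InputStream in = MyComponentConfig.class.getClassLoader()
                .getResourceAsStream(DEFAULTS)) {
            if (in != null) {
                merged.load(in);
            }
        }
        merged.putAll(overrides);
        return merged;
    }
}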

posted @ 2017-08-29 09:28  杉枫