【ElasticSearch】Exceptions and Errors

================================================================

1. Disk full makes Elasticsearch read-only

2. index.highlight.max_analyzed_offset

3. CentOS 8 replaced docker with podman

4. Maximum shard count reached: this cluster currently has [1000]/[1000] maximum shards open

5. Entity content too long: org.apache.http.ContentTooLongException: entity content is too long [111214807] for the configured buffer limit [104857600]

6. Document contains at least one immense term in field="content" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped. Please correct the analyzer to not produce such terms.

================================================================

1. Disk full makes Elasticsearch read-only

Even after expanding the disk, the following error was still reported:

blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]

Set all indices back to writable (see https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html):

curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
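Background on why this happens: Elasticsearch sets the read-only-allow-delete block once disk usage crosses the flood-stage watermark (default 95%), and on older versions the block is not released automatically even after space is freed, so it must be cleared explicitly as shown with curl. A minimal sketch of the threshold check, with illustrative numbers:

```python
# Default cluster.routing.allocation.disk.watermark.flood_stage is 95%.
FLOOD_STAGE = 0.95

def crosses_flood_stage(used_bytes: int, total_bytes: int) -> bool:
    """True when disk usage is at or above the flood-stage watermark."""
    return used_bytes / total_bytes >= FLOOD_STAGE

# A 100 GB disk with 96 GB used trips the read-only block:
print(crosses_flood_stage(96, 100))  # True
# After doubling the disk the watermark is no longer exceeded,
# but the block itself still has to be removed via the settings API:
print(crosses_flood_stage(96, 200))  # False
```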

 

2、index.highlight.max_analyzed_offset

PUT assembly-service-2021.02.03/_settings
{
    "index" : {
        "highlight.max_analyzed_offset" : 6000000
    }
}
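This error appears when a highlighted field is longer than index.highlight.max_analyzed_offset (1,000,000 characters by default in ES 7.x). A small sketch of the check the highlighter effectively performs, using the raised limit from the settings call above:

```python
# Default index.highlight.max_analyzed_offset in ES 7.x is 1,000,000 chars.
DEFAULT_MAX_ANALYZED_OFFSET = 1_000_000
RAISED_LIMIT = 6_000_000  # the value set on the index above

def can_highlight(text: str, max_analyzed_offset: int) -> bool:
    """True if the field is short enough for the highlighter to analyze."""
    return len(text) <= max_analyzed_offset

huge_doc = "x" * 2_500_000  # e.g. a large log file indexed into one field
print(can_highlight(huge_doc, DEFAULT_MAX_ANALYZED_OFFSET))  # False
print(can_highlight(huge_doc, RAISED_LIMIT))                 # True
```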

 

4. Maximum shard count reached: this cluster currently has [1000]/[1000] maximum shards open

Error:

{
    "error": {
        "root_cause": [{
            "type": "validation_exception",
            "reason": "Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"
        }],
        "type": "validation_exception",
        "reason": "Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"
    },
    "status": 400
}

Increase the maximum shard count

There are three ways to change it.

Kibana

PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "max_shards_per_node":10000
    }
  }
}

Terminal

curl -XPUT http://localhost:9200/_cluster/settings -H 'Content-Type:application/json' -d '
{
  "persistent": {
    "cluster": {
      "max_shards_per_node":10000
    }
  }
}'

Edit the config file (takes effect after a restart)

# vim elasticsearch.yml
cluster.max_shards_per_node: 10000
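For context: the cluster-wide cap is max_shards_per_node multiplied by the number of data nodes, and creating one index with default settings (1 primary + 1 replica) adds 2 shards — the "[2] total shards" in the error. A quick budget check with illustrative numbers:

```python
def shard_budget(max_shards_per_node: int, data_nodes: int) -> int:
    """Cluster-wide cap on open shards."""
    return max_shards_per_node * data_nodes

def would_exceed(open_shards: int, new_shards: int, cap: int) -> bool:
    """True if creating new_shards would push the cluster past the cap."""
    return open_shards + new_shards > cap

# Single data node with the default limit of 1000:
print(would_exceed(1000, 2, shard_budget(1000, 1)))   # True  -> the 400 error
# After raising max_shards_per_node to 10000:
print(would_exceed(1000, 2, shard_budget(10000, 1)))  # False -> request succeeds
```

Raising the limit buys headroom, but a high shard count per node carries memory overhead; shrinking or deleting stale indices is usually the longer-term fix.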

 

5. Entity content too long: org.apache.http.ContentTooLongException: entity content is too long [111214807] for the configured buffer limit [104857600]

Reference: https://blog.csdn.net/qq_34412985/article/details/122122330?spm=1001.2014.3001.5501

// Customize request options: raise the response buffer limit to 500 MB
RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
builder.setHttpAsyncResponseConsumerFactory(
        new HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory(500 * 1024 * 1024));
RequestOptions requestOptions = builder.build();

SearchRequest searchRequest = new SearchRequest(index);
restHighLevelClient.search(searchRequest, requestOptions);

SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
restHighLevelClient.scroll(scrollRequest, requestOptions);
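The numbers in the message line up: the client's default response buffer is 100 MiB, and the failing response was about 106 MiB. Checking the arithmetic:

```python
DEFAULT_BUFFER_LIMIT = 100 * 1024 * 1024  # 104857600 bytes, the default 100 MiB
RAISED_LIMIT = 500 * 1024 * 1024          # the 500 MB used in the Java snippet
RESPONSE_SIZE = 111_214_807               # entity size from the exception

print(DEFAULT_BUFFER_LIMIT)                  # 104857600, matches the message
print(RESPONSE_SIZE > DEFAULT_BUFFER_LIMIT)  # True: the response overflows
print(RESPONSE_SIZE <= RAISED_LIMIT)         # True: fits after raising the limit
```

Raising the buffer works, but fetching smaller pages (scroll with a smaller size, or fewer returned fields) avoids holding 100+ MB responses in heap at all.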

 

6、Document contains at least one immense term in field="content" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.

Lucene limits a single indexed term to 32766 bytes of UTF-8; longer keyword values throw this exception. Set ignore_above so oversized values are stored but not indexed:

PUT /content/_mapping
{
  "properties": {
      "author": {
        "type": "keyword",
        "ignore_above":32766
      }
    }
}
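One caveat worth checking: the Lucene limit is 32766 *bytes*, while ignore_above counts *characters*. For multibyte text (CJK characters are 3 bytes each in UTF-8) a value can pass ignore_above: 32766 and still exceed the byte limit, so a lower threshold (around 32766 / 3 = 10922) is the safe bound. A sketch of the byte-length check:

```python
LUCENE_MAX_TERM_BYTES = 32766  # hard limit on a single indexed term, in bytes

def term_fits(value: str) -> bool:
    """True if the value's UTF-8 encoding fits in a single indexed term."""
    return len(value.encode("utf-8")) <= LUCENE_MAX_TERM_BYTES

print(term_fits("a" * 32766))   # True:  ASCII, 1 byte/char, exactly at the limit
print(term_fits("汉" * 32766))  # False: CJK, 3 bytes/char, far over the limit
print(term_fits("汉" * 10922))  # True:  10922 * 3 = 32766 bytes, fits
```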

 

posted @ 2020-05-19 21:21  翠微