Importing and exporting Elasticsearch data with elasticsearch-dump
elasticsearch-dump can export an index's data, mapping, settings, and more from Elasticsearch as JSON files, or import such files into another index, which makes it quite flexible. This post briefly walks through elasticsearch-dump operations in a Docker environment.
Pulling the Docker image
docker pull taskrabbit/elasticsearch-dump
Exporting data
First, create a directory on the host to hold the data files, e.g. /tmp/data.
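A minimal sketch on the Docker host:
mkdir -p /tmp/data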
- Export the documents in an index
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/index_data.json --type=data
When the command finishes, index_data.json will be created under /tmp/data/.
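You can spot-check the export; elasticdump writes one JSON document per line, in the {"_index", "_type", "_id", "_source"} shape shown under --sourceOnly in the option reference below:
head -n 1 /tmp/data/index_data.json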
- Export an index's mapping
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/index_mapping.json --type=mapping
When the command finishes, index_mapping.json will be created under /tmp/data/.
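The same pattern works for the other --type values listed in the option reference below (settings, analyzer, alias, template). For example, to export the index settings:
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/index_settings.json --type=settings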
Importing data
- Import index data
Place index_data.json under /tmp/data/.
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --output=http://es_address:9200/my_index --input=/tmp/index_data.json --type=data
- Import an index mapping
Place index_mapping.json under /tmp/data/.
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --output=http://es_address:9200/my_index --input=/tmp/index_mapping.json --type=mapping
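When rebuilding an index from scratch, it is usually safer to import the mapping first and then the data, so documents are indexed with the intended field types. If both clusters can reach each other, elasticsearch-dump can also copy an index directly without an intermediate file; a sketch, with old_es and new_es as placeholder addresses:
docker run --rm -ti taskrabbit/elasticsearch-dump --input=http://old_es:9200/my_index --output=http://new_es:9200/my_index --type=mapping
docker run --rm -ti taskrabbit/elasticsearch-dump --input=http://old_es:9200/my_index --output=http://new_es:9200/my_index --type=data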
Options
For more usage details, see https://github.com/elasticsearch-dump/elasticsearch-dump
elasticdump: Import and export tools for elasticsearch
version: %%version%%
Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]
--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--overwrite
Overwrite output file if it exists
(default: false)
--limit
How many objects to move in batch per operation
limit is approximate for file streams
(default: 100)
--size
How many objects to retrieve
(default: -1 -> no limit)
--concurrency
The maximum number of requests that can be made concurrently to a specified transport.
(default: 1)
--concurrencyInterval
The length of time in milliseconds in which up to <intervalCap> requests can be made
before the interval request count resets. Must be finite.
(default: 5000)
--intervalCap
The maximum number of transport requests that can be made within a given <concurrencyInterval>.
(default: 5)
--carryoverConcurrencyCount
If true, any incomplete requests from a <concurrencyInterval> will be carried over to
the next interval, effectively reducing the number of new requests that can be created
in that next interval. If false, up to <intervalCap> requests can be created in the
next interval regardless of the number of incomplete requests from the previous interval.
(default: true)
--throttleInterval
Delay in milliseconds between getting data from an inputTransport and sending it to an
outputTransport.
(default: 1)
--debug
Display the elasticsearch commands being used
(default: false)
--quiet
Suppress all messages except for errors
(default: false)
--type
What are we exporting?
(default: data, options: [settings, analyzer, data, mapping, alias, template])
--delete
Delete documents one-by-one from the input as they are
moved. Will not delete the source index
(default: false)
--searchBody
Perform a partial extract based on search results
when ES is the input, default values are
if ES > 5
`'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`
else
`'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`
--headers
Add custom headers to Elasticsearch requests (helpful when
your Elasticsearch instance sits behind a proxy)
(default: '{"User-Agent": "elasticdump"}')
--params
Add custom parameters to the Elasticsearch request URI, e.g. when you
want to use Elasticsearch preference
(default: null)
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
How long the nodes will keep the requested search context alive.
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we make?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. Mostly used when you don't
care too much about losing some data when importing
but would rather have speed.
--offset
Integer containing the number of rows you wish to skip
ahead from the input transport. When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc. This allows you
to start the dump again from the last known line written
(as logged by the `offset` in the output). Please be
advised that since no sorting is specified when the
dump is initially created, there's no real way to
guarantee that the skipped rows have already been
written/parsed. This is more of an option for when
you want to get as much data as possible into the index
without concern for losing some rows in the process,
similar to the `timeout` option.
(default: 0)
--noRefresh
Disable input index refresh.
Positive:
1. Greatly increased indexing speed
2. Much lower hardware requirements
Negative:
1. Recently added data may not be indexed
Recommended for big-data indexing, where speed and system
health are a higher priority than recently added data.
--inputTransport
Provide a custom js file to use as the input transport
--outputTransport
Provide a custom js file to use as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--transform
A javascript expression, which will be called to modify documents
before writing them to the destination. The global variable 'doc'
is available.
Example script for computing a new field 'f2' as doubled
value of field 'f1':
doc._source["f2"] = doc._source.f1 * 2;
May be used multiple times.
Additionally, transform may be performed by a module. See [Module Transform](#module-transform) below.
--awsChain
Use [standard](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/) location and ordering for resolving credentials including environment variables, config files, EC2 and ECS metadata locations
_Recommended option for use with AWS_
--awsAccessKeyId
--awsSecretAccessKey
When using Amazon Elasticsearch Service protected by
AWS Identity and Access Management (IAM), provide
your Access Key ID and Secret Access Key.
--sessionToken can also be optionally provided if using temporary credentials
--awsIniFileProfile
Alternative to --awsAccessKeyId and --awsSecretAccessKey,
loads credentials from a specified profile in aws ini file.
For greater flexibility, consider using --awsChain
and setting AWS_PROFILE and AWS_CONFIG_FILE
environment variables to override defaults if needed
--awsIniFileName
Override the default aws ini file name when using --awsIniFileProfile
Filename is relative to ~/.aws/
(default: config)
--support-big-int
Support big integer numbers
--retryAttempts
Integer indicating the number of times a request should be automatically re-attempted before failing
when a connection fails with one of the following errors: `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,
`ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`
(default: 0)
--retryDelay
Integer indicating the back-off/break period between retry attempts (milliseconds)
(default: 5000)
--parseExtraFields
Comma-separated list of meta-fields to be parsed
--fileSize
supports file splitting. This value must be a string supported by the **bytes** module.
The following abbreviations must be used to signify size in terms of units
b for bytes
kb for kilobytes
mb for megabytes
gb for gigabytes
tb for terabytes
e.g. 10mb / 1gb / 1tb
Partitioning helps to alleviate overflow/out of memory exceptions by efficiently segmenting files
into smaller chunks that can then be merged if need be.
--fsCompress
gzip data before outputting to file
--s3AccessKeyId
AWS access key ID
--s3SecretAccessKey
AWS secret access key
--s3Region
AWS region
--s3Endpoint
AWS endpoint can be used for AWS compatible backends such as
OpenStack Swift and OpenStack Ceph
--s3SSLEnabled
Use SSL to connect to AWS [default true]
--s3ForcePathStyle
Force path style URLs for S3 objects [default false]
--s3Compress
gzip data before sending to s3
--retryDelayBase
The base number of milliseconds to use in the exponential backoff for operation retries. (s3)
--customBackoff
Activate custom customBackoff function. (s3)
--tlsAuth
Enable TLS X509 client authentication
--cert, --input-cert, --output-cert
Client certificate file. Use --cert if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--key, --input-key, --output-key
Private key file. Use --key if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--pass, --input-pass, --output-pass
Pass phrase for the private key. Use --pass if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--ca, --input-ca, --output-ca
CA certificate. Use --ca if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--inputSocksProxy, --outputSocksProxy
Socks5 host address
--inputSocksPort, --outputSocksPort
Socks5 host port
--help
This page
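As a worked example combining several of the options above, the following sketch exports only the documents matching a query, 500 at a time (the status field and its value are hypothetical):
docker run --rm -ti -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/active_docs.json --type=data --limit=500 --searchBody='{"query":{"term":{"status":"active"}}}'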
Possible issues
- Connection to ES fails at runtime
This is usually an addressing problem: use the host machine's IP or the ES container's internal IP. If the ES nodes sit on a user-defined Docker network, the elasticsearch-dump container must join the same network as ES, as in the sketch below.
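For example, if Elasticsearch runs in a container attached to a user-defined network (here called es_net, reachable by the container name elasticsearch; both names are assumptions), attach the dump container to that same network:
docker run --rm -ti --network=es_net -v /tmp/data:/tmp taskrabbit/elasticsearch-dump --input=http://elasticsearch:9200/my_index --output=/tmp/index_data.json --type=data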