How to move Elasticsearch data from one server to another

How do I move Elasticsearch data from one server to another?

I have a server running Elasticsearch 1.1.1 on one local node with multiple indices. I would like to copy that data over to server B, which is running Elasticsearch 1.3.4.

Procedure so far:

  1. Shut down ES on both servers
  2. scp all the data to the correct data directory on the new server (the data appears to live at /var/lib/elasticsearch/ on my Debian boxes)
  3. Change permissions and ownership to elasticsearch:elasticsearch
  4. Start up the new ES server
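In shell terms the steps were roughly as follows (the host name serverB and the ssh user are placeholders, and I assume the user can write to /var/lib/; run as root otherwise):

 # 1. stop ES (run on both servers)
 sudo service elasticsearch stop
 # 2. copy the data directory to the new server
 scp -r /var/lib/elasticsearch user@serverB:/var/lib/
 # 3. fix ownership on the new server
 ssh user@serverB "sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch"
 # 4. start the new ES server
 ssh user@serverB "sudo service elasticsearch start"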

When I look at the cluster with the ES head plugin, no indices appear.

It seems that the data is not loaded. Am I missing something?

The selected answer makes it sound slightly more complex than it is; the following is all you need (install npm first on your system):

 npm install -g elasticdump
 elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
 elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data

You can skip the first elasticdump command for subsequent copies if the mappings remain constant.

I have just done a migration from AWS to Qbox.io with the above without any problems.

More details at:

https://www.npmjs.com/package/elasticdump

Its help page (as of Feb 2016) is included here for completeness:

 elasticdump: Import and export tools for elasticsearch

 Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

 --input               Source location (required)
 --input-index         Source index and type (default: all, example: index/type)
 --output              Destination location (required)
 --output-index        Destination index and type (default: all, example: index/type)
 --limit               How many objects to move in bulk per operation.
                       Limit is approximate for file streams (default: 100)
 --debug               Display the elasticsearch commands being used (default: false)
 --type                What are we exporting? (default: data, options: [data, mapping])
 --delete              Delete documents one-by-one from the input as they are moved.
                       Will not delete the source index (default: false)
 --searchBody          Preform a partial extract based on search results
                       (when ES is the input, default: '{"query": { "match_all": {} } }')
 --sourceOnly          Output only the json contained within the document _source
                       Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                       sourceOnly: {SOURCE} (default: false)
 --all                 Load/store documents from ALL indexes (default: false)
 --bulk                Leverage elasticsearch Bulk API when writing documents (default: false)
 --ignore-errors       Will continue the read/write loop on write error (default: false)
 --scrollTime          Time the nodes will hold the requested search in order. (default: 10m)
 --maxSockets          How many simultaneous HTTP requests can we process make?
                       (default: 5 [node <= v0.10.x] / Infinity [node >= v0.11.x])
 --bulk-mode           The mode can be index, delete or update.
                       'index': Add or replace documents on the destination index.
                       'delete': Delete documents on destination index.
                       'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
                       (default: index)
 --bulk-use-output-index-name
                       Force use of destination index name (the actual output URL) as destination
                       while bulk writing to ES. Allows leveraging Bulk API copying data inside
                       the same elasticsearch instance. (default: false)
 --timeout             Integer containing the number of milliseconds to wait for a request to
                       respond before aborting the request. Passed directly to the request
                       library. If used in bulk writing, it will result in the entire batch not
                       being written. Mostly used when you don't care too much if you lose some
                       data when importing but rather have speed.
 --skip                Integer containing the number of rows you wish to skip ahead from the
                       input transport. When importing a large index, things can go wrong, be it
                       connectivity, crashes, someone forgetting to `screen`, etc. This allows
                       you to start the dump again from the last known line written (as logged
                       by the `offset` in the output). Please be advised that since no sorting
                       is specified when the dump is initially created, there's no real way to
                       guarantee that the skipped rows have already been written/parsed. This is
                       more of an option for when you want to get most data as possible in the
                       index without concern for losing some rows in the process, similar to the
                       `timeout` option.
 --inputTransport      Provide a custom js file to us as the input transport
 --outputTransport     Provide a custom js file to us as the output transport
 --toLog               When using a custom outputTransport, should log lines be appended to the
                       output stream? (default: true, except for `$`)
 --help                This page

 Examples:

 # Copy an index from production to staging with mappings:
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=http://staging.es.com:9200/my_index \
   --type=mapping
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=http://staging.es.com:9200/my_index \
   --type=data

 # Backup index data to a file:
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=/data/my_index_mapping.json \
   --type=mapping
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=/data/my_index.json \
   --type=data

 # Backup and index to a gzip using stdout:
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=$ \
   | gzip > /data/my_index.json.gz

 # Backup ALL indices, then use Bulk API to populate another ES cluster:
 elasticdump \
   --all=true \
   --input=http://production-a.es.com:9200/ \
   --output=/data/production.json
 elasticdump \
   --bulk=true \
   --input=/data/production.json \
   --output=http://production-b.es.com:9200/

 # Backup the results of a query to a file
 elasticdump \
   --input=http://production.es.com:9200/my_index \
   --output=query.json \
   --searchBody '{"query":{"term":{"username": "admin"}}}'

 ------------------------------------------------------------------------------
 Learn more @ https://github.com/taskrabbit/elasticsearch-dump

Use ElasticDump

1) yum install epel-release

2) yum install nodejs

3) yum install npm

4) npm install elasticdump

5) cd node_modules/elasticdump/bin

6)

 ./elasticdump \
   --input=http://192.168.1.1:9200/original \
   --output=http://192.168.1.2:9200/newCopy \
   --type=data

You can use the snapshot/restore feature available in Elasticsearch. Once you have set up a filesystem-based snapshot repository, you can move it between clusters and restore it on a different cluster.
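A minimal sketch of that flow with curl, assuming both clusters can reach a shared repository path /mnt/es_backups (e.g. an NFS mount); the repository and snapshot names and the serverA/serverB hosts are placeholders:

 # on the source cluster: register a filesystem snapshot repository
 curl -XPUT 'http://serverA:9200/_snapshot/my_backup' -d '{
   "type": "fs",
   "settings": { "location": "/mnt/es_backups" }
 }'
 # snapshot all indices and wait until it completes
 curl -XPUT 'http://serverA:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'
 # on the destination cluster: register the same repository, then restore
 curl -XPUT 'http://serverB:9200/_snapshot/my_backup' -d '{
   "type": "fs",
   "settings": { "location": "/mnt/es_backups" }
 }'
 curl -XPOST 'http://serverB:9200/_snapshot/my_backup/snapshot_1/_restore'

Snapshots are incremental, so repeating the process for a follow-up sync only copies new segments.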

If you can add the second server to the cluster, you can do this:

  1. Add server B to the server A cluster
  2. Increase the number of replicas for the indices
  3. ES will automatically copy the indices over to server B
  4. Shut down server A
  5. Decrease the number of replicas for the indices

This will only work if the number of replicas equals the number of nodes.
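Steps 2 and 5 are a one-line settings change each; a sketch, with my_index as a placeholder (with one extra node, one replica per shard gives server B a full copy):

 # step 2: add a replica of every shard, which ES allocates to server B
 curl -XPUT 'http://serverA:9200/my_index/_settings' -d '{ "index": { "number_of_replicas": 1 } }'
 # step 5: once server A is shut down, drop the extra replicas again
 curl -XPUT 'http://serverB:9200/my_index/_settings' -d '{ "index": { "number_of_replicas": 0 } }'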

I tried moving data from ELK 2.4.3 to ELK 5.1.1 on Ubuntu.

The following are the steps:

$ sudo apt-get update

$ sudo apt-get install -y python-software-properties python g++ make

$ sudo add-apt-repository ppa:chris-lea/node.js

$ sudo apt-get update

$ sudo apt-get install npm

$ sudo apt-get install nodejs

$ npm install colors

$ npm install nomnom

$ npm install elasticdump

In the home directory, change into the module: $ cd node_modules/elasticdump/

Then execute the command below.

If you need basic http authentication, you can use it like this:

--input=http://name:password@localhost:9200/my_index

Copy an index from production:

$ ./bin/elasticdump --input="http://Source:9200/Sourceindex" --output="http://username:password@Destination:9200/Destination_index" --type=data

In case anyone runs into the same problem: when trying to dump from elasticsearch < 2.0 to > 2.0, you need to run:

 elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=analyzer
 elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=mapping
 elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=data --transform "delete doc._source['_id']"
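Afterwards, a quick sanity check is to compare document counts on both sides (same placeholder variables as above):

 curl "http://localhost:9200/$SRC_IND/_count?pretty"
 curl "http://$TARGET_IP:9200/$TGT_IND/_count?pretty"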