Using snapshot/restore
You can use snapshot/restore for this task as long as you have a shared file system or a single-node cluster. The shared FS must meet the following criterion: to register a shared file system repository, the same shared filesystem has to be mounted to the same location on all master and data nodes.
So it's not a problem if you have a single-node cluster: just take a snapshot and copy it over to the other machine.
It might be a challenging task, though, if you have many nodes running. In that case you can use one of the supported repository plugins for S3, HDFS and other cloud storage.
The advantage of this approach is that the data and the indices are snapshotted entirely.
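The whole workflow is just three calls to the cluster. Here's a sketch using curl; the repository name, mount point, and `localhost:9200` endpoint are placeholders you'd adjust for your setup, and the shared FS path must also be listed under `path.repo` in `elasticsearch.yml` on every node:

```shell
# 1. Register the shared file-system repository (path is an example)
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# 2. Snapshot all indices; wait_for_completion blocks until it finishes
curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

# 3. On the target cluster (with the same repository registered), restore
curl -X POST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"
```

For a single-node cluster, step 3 happens after you copy the repository directory to the other machine and register it there.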
Using _reindex API
It might be easier to use the _reindex API to transfer data from one ES cluster to another. There is a special Reindex from Remote mode that supports exactly this use case.
What reindex actually does is run a scroll on the source index and a lot of bulk inserts into the target index (which can be remote).
There are a couple of issues you should take care of:
- setting up the target index (no mappings, no settings will be set by reindex)
- if some fields on the source index are excluded from _source, then their contents won't be copied to the target index
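A minimal sketch of both steps is below; the host and index names are placeholders, and the remote host must additionally be whitelisted on the target cluster via `reindex.remote.whitelist` in `elasticsearch.yml`:

```shell
# 1. Create the target index yourself -- reindex copies documents only,
#    not mappings or settings (example mapping shown)
curl -X PUT "localhost:9200/dest_index" \
  -H 'Content-Type: application/json' -d'
{
  "mappings": { "properties": { "title": { "type": "text" } } }
}'

# 2. Pull the documents from the remote (source) cluster
curl -X POST "localhost:9200/_reindex" \
  -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "http://old-cluster:9200" },
    "index": "source_index"
  },
  "dest": { "index": "dest_index" }
}'
```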
Summing up
For snapshot/restore
Pros:
- all data and the indices are saved/restored as they are
- 2 calls to the ES API are needed
Cons:
- if the cluster has more than 1 node, you need to set up a shared FS or use some cloud storage
For _reindex
Pros:
- Works for clusters of any size
- Data is copied directly (no intermediate storage required)
- 1 call to the ES API is needed
Cons:
- Data excluded from _source will be lost
Here's also a similar SO question from some three years ago.
Hope that helps!