
I'm testing the transfer of 10 TB of files from one server to another on AWS.

I have mounted the file system:

lsblk
file -s /dev/sdb
mkfs -t xfs /dev/sdb
mkdir /data
mount /dev/sdb /data
cd /data

Created a 10 TB file of data using the following command:

dd if=/dev/nvme1n1 of=test10t.img bs=1 count=0 seek=10T 

(Is this correct, or are there other good options for getting sample files of 10 TB in size?)
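For reference, a sketch of two ways to create a test file that actually consumes 10 TB of disk space rather than a sparse placeholder (the path /data/test10t.img is just the example name from above; adjust as needed):

# Write real (zeroed) data so the blocks are actually allocated; this takes a long time for 10 TB
dd if=/dev/zero of=/data/test10t.img bs=1M count=10485760 status=progress

# Or pre-allocate the blocks without writing them (much faster; df should show the space as used on xfs)
fallocate -l 10T /data/test10t.img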

ls -laSh shows the file as 10 TB. However, df -h shows:

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        464M     0  464M   0% /dev
tmpfs           472M     0  472M   0% /dev/shm
tmpfs           472M  596K  472M   1% /run
tmpfs           472M     0  472M   0% /sys/fs/cgroup
/dev/nvme0n1p1  8.0G  8.0G   16K 100% /
tmpfs            95M     0   95M   0% /run/user/1000
/dev/nvme1n1     11T   79G   11T   1% /data
tmpfs            95M     0   95M   0% /run/user/0

The 10 TB is not being utilized fully. Can anyone explain this, please?
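As a quick check (a sketch, assuming the file lives at /data/test10t.img), you can compare the apparent size with the blocks actually allocated on disk to see whether the file is sparse:

du -h --apparent-size /data/test10t.img   # the size ls reports (10T)
du -h /data/test10t.img                   # the space actually allocated on disk
stat -c 'size=%s bytes, allocated blocks=%b' /data/test10t.img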

Also, I'm doing an SCP from one server to the other using the command:

scp -i <private-key-file-for-second-server> <10TB file> ec2-user@<ip-address of other server>:~ 

It shows the file transferring, but the transfer hasn't completed yet because of the large file size, and it slows down once it crosses 100 GB. I'm transferring from server1 to server2. On server1's CLI, it shows the transfer in progress at around 25 GB/hour on average. Can anyone guide me and correct me if this is the right way to complete my requirement?
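For reference, a sketch of a resumable alternative to plain scp (an assumption, not something from the question: it requires rsync on both servers); rsync over SSH can resume an interrupted copy and handles sparse files efficiently:

# --partial keeps a partially transferred file so a rerun resumes instead of restarting
# --sparse recreates holes on the destination instead of sending 10 TB of zeros
rsync -av --partial --progress --sparse -e "ssh -i /path/to/key.pem" \
    /data/test10t.img ec2-user@<ip-address of other server>:/data/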

  • dd doesn't download anything. You've shown us that you created a sparse file that's 10TB in size but uses very much less storage. Commented Dec 28, 2022 at 14:01
  • Could you share the exact command that downloads 10 TB of dummy data? I have tried fallocate, but got some errors. Commented Dec 28, 2022 at 14:05
  • fallocate doesn't download anything either. Your dd and any use of fallocate will create a sparse file that looks like it's 10TB but takes up very little storage. Do you understand what "sparse" means? Commented Dec 28, 2022 at 14:41
  • You seem to be trying to use scp to copy (to download) the data file. But your description of even that is suspect. Please edit your question to explain which parts you have actually succeeded with, and which part(s) you are having problems with. Since you are dealing with two servers, please make it quite clear which set of instructions applies to which server (source or destination) Commented Dec 28, 2022 at 14:42
  • Yes, I understand now that it just creates a file which can accommodate 10 TB. If I transfer this file to the other server, will the whole file get transferred? If my requirement is to test a 10 TB data transfer from server 1 to server 2, shouldn't I fill data of that size inside the file? Please correct me if I'm wrong. Commented Dec 28, 2022 at 14:49

1 Answer


Since you're using scp, maybe the cause of the problem is that the remote host isn't seeing activity on the connection and is timing out.

Confirm the following entries in sshd_config on the remote host:

ClientAliveInterval 250
ClientAliveCountMax 16
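If you can't edit sshd_config on the remote host, a sketch of the equivalent client-side keepalives (an assumption, not part of the original answer) passed straight to scp:

# Send an application-level keepalive every 60 s so an idle-looking connection isn't dropped
scp -o ServerAliveInterval=60 -o ServerAliveCountMax=16 \
    -i <private-key-file> <10TB file> ec2-user@<ip-address of other server>:~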

Another thing you could do is split the 10 TB file up, for example with the split command, and then scp each piece.
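A sketch of that approach, assuming GNU split is available and both servers have enough free space; the 100 GB chunk size and names are placeholders:

# Split the image into 100 GB chunks: chunk_aa, chunk_ab, ...
split -b 100G /data/test10t.img /data/chunk_

# Copy each chunk, then reassemble on the destination
for f in /data/chunk_*; do
    scp -i <private-key-file> "$f" ec2-user@<ip-address of other server>:/data/
done

# On the destination server:
cat /data/chunk_* > /data/test10t.img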
