If you can install rmt on the system with the tape drive, you can have tar access the drive over the network. By default tar will use the rsh protocol to run rmt on the tape server, but if you have GNU tar, you can give it the --rsh-command='ssh tapeserver /usr/sbin/rmt' option.
If you have LTO tapes, a blocking factor of 20 may be too small to keep the tape streaming; 126 is what we used with LTO4. But I think some rmt implementations restrict you to 20-block transfer sizes, so you may want to look at @schily's implementation of rmt.
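Putting those pieces together, a full invocation might look like the sketch below. The host name `tapeserver`, the device path `/dev/nst0`, and the directory being dumped are placeholders for your own setup, not details from the original question:

```shell
# GNU tar creating an archive on a remote tape drive via ssh + rmt.
# -b 126 means 126 x 512-byte blocks (63 KiB) per record, which helps
# keep an LTO drive streaming; many rmt setups default to 20.
tar -c -b 126 \
    --rsh-command='ssh tapeserver /usr/sbin/rmt' \
    -f tapeserver:/dev/nst0 \
    /home
```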
In a comment you asked:

"rmt is a good choice, but how do I divide its time? I have about twenty servers that need backups. How can I queue them to use rmt?"
If the backup commands for each server can be packaged into shell scripts, a flexible batch queuing system could guarantee they run sequentially, but I don't know of one offhand, and I realize you don't want a lot of complexity here.
As a start, you could try something like this, on a system that can ssh to all the servers:
#!/bin/sh
lock=/var/run/doalldumps.lock
status=/var/run/doalldumps.status
for s in $(cat ~/servers)
do
    (
        flock -e 9
        echo started $s at $(date) > $status
        ssh $s -n command-to-do-backups
        echo finished $s at $(date) > $status
    ) 9> $lock
done
Alternatively, a simple way to serialize access to the tape drive is to use flock to lock a file on the server with the tape drive. You could use this in the tar --rsh-command option:
tar ... --rsh-command='ssh tapeserver flock -e /var/run/tape.lock /usr/sbin/rmt'
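If you want to see the serialization in action before wiring it into your backups, here is a small local demonstration of the same flock pattern; the file names under /tmp are placeholders I chose, not part of the setup above:

```shell
# Three background jobs all contend for the same lock file; flock -e
# ensures only one holds it at a time, so each job's start/end lines
# land in the output file as an uninterrupted pair.
lock=/tmp/tape.lock.$$
out=/tmp/tape.out.$$
: > "$out"
for i in 1 2 3
do
    (
        flock -e 9
        echo "start $i" >> "$out"
        sleep 1
        echo "end $i" >> "$out"
    ) 9> "$lock" &
done
wait
cat "$out"
```

Without the flock call, the sleep would almost guarantee the start/end lines of different jobs interleave; with it, the jobs run strictly one after another, in whatever order they win the lock.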