
I'm trying to back up my project, databases, and nginx environment. To do that, I make backups on my main server and place them in /home/backup/. Everything works on the main server.

Then, from my second server, I'm creating a cron job to fetch those files via SCP.

Here is my command:

0 13 * * * sudo sshpass -p MyPassword sudo scp -P 40511 -r [email protected]:/home/backup /home 

I'm using port 40511 for SSH. The command works when launched manually, but not from cron.

MyPassword contains an "!". I tried with and without double quotes.

What am I doing wrong?

  • Whose (as in: which user's?) crontab is that? Also, I'm not a fan at all of using a password in a crontab; that's a very bad idea (you should probably set up a public/private key pair for the computer). The sudo some_program_that_launches_another_program sudo … makes no sense, either. Commented Jan 20 at 12:58
  • The user isn't root, but it's the same user I use when launching it locally. Should I use a separator, like @ or &&? Commented Jan 20 at 13:18
  • Since the command part of a crontab line is, by default, interpreted by /bin/sh, which has a simpler syntax than /bin/bash, I recommend making the command a call to a bash script (executable, starting with #!/bin/bash) that sets up the environment and then calls the desired program, doing all the necessary setup and logging; see the sketch after these comments. sudo seems unnecessary. Commented Jan 20 at 13:44
  • @P.Jerome that's not a separator! No, you shouldn't. If you need sudo to run this command in the first place, then it belongs in root's crontab, not yours. The second sudo is, in either case, completely useless, as you'd already be root (either by running as root through sudo or by being root to begin with). Also, do you really mean to do a full, non-incremental backup using something as slow as scp every time? This sounds like a job for rsync, so you're not copying files that haven't changed. (Or, really, a backup tool like restic, which would come with better approaches than this.) Commented Jan 20 at 13:53
  • You'd at least want to tell scp to keep modification times, permissions, and other file attributes, using the -p flag (especially on the server from which the data originates). Really, all of this says "please don't do it using scp, do it using something that is meant for backups". That'd be much easier, and more fail-safe! Commented Jan 20 at 13:55
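
As an aside, the wrapper-script approach suggested in the comments could look something like this minimal sketch (the script name, log path, and the use of scp here are assumptions for illustration, and it presumes key-based authentication is already in place, as the answer below recommends):

    #!/bin/bash
    # /usr/local/bin/fetch-backup.sh -- hypothetical cron wrapper.
    # Give the job a predictable environment instead of relying on cron's defaults.
    export PATH=/usr/local/bin:/usr/bin:/bin

    # Send all output to a log file so failed cron runs can be diagnosed.
    exec >>/var/log/fetch-backup.log 2>&1

    echo "$(date -Is) starting backup fetch"
    # -p keeps modification times and permissions, as suggested in the comments.
    scp -P 40511 -p -r [email protected]:/home/backup /home
    echo "$(date -Is) finished, exit code $?"

The crontab line then shrinks to 0 13 * * * /usr/local/bin/fetch-backup.sh.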

1 Answer


Admittedly, this is a bit of speculation, because I can't know how all of your SSH and askpass configuration is set up, but the following is the answer to easily 90% of the crontab-related questions here:

When you run something from crontab, the environment is different from when you run it in your session. That's an architectural limitation of cron: it doesn't start an interactive login shell, it simply runs the specified command as your user, and that is not the same thing.
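
If you want to see that difference for yourself, one common trick is to dump cron's environment to a file and compare it with your interactive shell's (the file path here is just an example):

    # Temporary crontab entry: dump cron's environment once a minute.
    * * * * * env > /tmp/cron-env.txt

    # Then, from your interactive (bash) shell, compare the two:
    diff <(sort /tmp/cron-env.txt) <(env | sort)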

Paired with the fact that you're doing two user changes (through your double sudo), my guess is that scp/ssh look in the wrong configuration files, or can't read them. It doesn't really matter, though: you really want to do this without the sshpass and without the sudo:

  1. This is clearly a system job – the fact that you need to become root through sudo shows that beyond doubt. So, instead of your user's crontab, this should go into root's crontab (see the first sketch at the end of this answer). That completely removes the need for sudo.
  2. Since you seem to control the backup server, you very clearly should not be using password authentication: the password, being part of the command line, can be read by any process (including rogue PHP scripts etc.) through simple things like ps -AF while the backup job runs. I'd consider that password compromised now and strongly suggest you change it immediately – especially since you just told us one character of it, which makes it easy for any interested party to be sure they found the right one. Also, make sure your passwords are securely generated and not just something like "usernamePassword!" or some other easy-to-guess combination. (Guided) brute-forcing of passwords is still a thing.
    Passwordless public-key authentication for user root is really easy to set up (sudo -i -H ssh-keygen, simply press Enter to accept the defaults and don't specify a passphrase; then sudo -H -i ssh-copy-id [email protected], enter the password once, done). You want to do this as root, since you want these keys to be available (only!) to the root user, not to your normal local user (which has no business being able to log in without knowing the password). No excuses there.
  3. Since that reduces the amount of surprise in the environment you get, I'd personally recommend (that's a preference, so heed it or not) that you avoid crontabs altogether and go for the easier-to-set-up systemd timers (see the second sketch at the end of this answer). That has a lot of advantages: the environment is then more like an interactive session, you get better logging, and you gain the ability to say something like "OK, fetch the backup every workday at 13:00:00 ± a randomized 10 minutes, and on every boot just before nginx runs". If you're interested in getting that to work and this instruction isn't helpful (and you can't find an existing answer here), do open a new question post!
  4. Your scp seems to be, all in all, a bad solution: it copies the whole backup whether any files changed or not, it resets the modification times (which, by the way, your nginx delivers to clients, meaning you're invalidating their caches every day and putting unnecessary load on your server), and it isn't careful with file attributes. At the very least, you'd use rsync instead of scp (see the third sketch at the end of this answer). More realistically, though:

Aside from changing your auth method (which you really must do here) and changing the user that downloads the backup, you should probably think of this in a less "bespoke scripts that work (or don't) for you today" way, to be honest.

I've written down what I would do instead in your situation; see my answer here:
How do I automatically do daily backups and on every shutdown, and restore them elsewhere daily and on boot?
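
To make items 1 and 2 concrete, here's a minimal sketch of the passwordless, root-owned version (host, port, and paths are taken from your command; the -p port flag for ssh-copy-id is my addition, since you're on a non-standard port):

    # One-time key setup, run on the second (downloading) server:
    sudo -i -H ssh-keygen                                # accept defaults, empty passphrase
    sudo -i -H ssh-copy-id -p 40511 [email protected]

    # Then add the job to root's crontab (sudo crontab -e) -- no sudo, no sshpass:
    0 13 * * * scp -P 40511 -p -r [email protected]:/home/backup /home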
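
For item 3, a rough sketch of what a systemd timer pair could look like (the unit names and the fetch script are hypothetical; adjust to taste):

    # /etc/systemd/system/fetch-backup.service
    [Unit]
    Description=Fetch backups from the main server

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/fetch-backup.sh

    # /etc/systemd/system/fetch-backup.timer
    [Unit]
    Description=Daily backup fetch

    [Timer]
    OnCalendar=*-*-* 13:00:00
    RandomizedDelaySec=10min
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with sudo systemctl daemon-reload && sudo systemctl enable --now fetch-backup.timer; systemctl list-timers will then show you when it fires next.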
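
And for item 4, the rsync equivalent of your scp line, roughly (-a preserves modification times, permissions, and more; the trailing slash on the source means "the contents of /home/backup"; --delete, if you were to add it, would mirror deletions too, so leave it out if old backups should stay on the destination):

    0 13 * * * rsync -a -e "ssh -p 40511" [email protected]:/home/backup/ /home/backup/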

  • Splitting off the "what you should do" part, since this has gotten too long. Commented Jan 20 at 15:43
  • Thanks Marcus. Trust me, I read everything. First of all, I'm now using public-key auth and have disabled password authentication. I don't know how rsync works. I'm storing my backups (nginx / database / cron list) under a Ymd folder, so that I have one backup for each day, but those folders don't remain on the remote server. Will rsync delete previous backups too, or will it only fetch new ones? I'm using a Raspberry Pi with an SSD to store my backups; I'm not trying to take a snapshot of the whole server. Is rsync still better than scp in my case? I have online snapshots, but the hoster keeps them on the same host. Commented Jan 21 at 17:12
  • Super cool that you've switched to public-key auth! That makes sooo many things easier! rsync works, pretty much, like scp, but checks whether files are already on the target in the same version as on the source, and if so, doesn't copy them. You can instruct rsync to remove files that aren't present locally, too. Commented Jan 21 at 17:14
  • Switched from scp -P 40511 -r [email protected]:/home/backup /home to rsync -e "ssh -p 40511" [email protected]:/home/backup/* /home/backup and everything works! Thank you! Commented Jan 21 at 17:41
