
I'm trying to follow these guides and answers, and anything else I can find, to make this work:

  • SSH from macOS to Ubuntu 18.04
  • Forward my local gpg-agent so I can run gpg --decrypt on the remote machine.

I have already --export'ed and --import'ed my public key to the remote machine. The remote gpg reports the agent socket at /run/user/1001/gnupg/S.gpg-agent and the extra socket at /run/user/1001/gnupg/S.gpg-agent.extra.

However, running

```shell
ssh -v -R /run/user/1001/gnupg/S.gpg-agent:/Users/rasmus/.gnupg/S.gpg-agent.extra -l rasmus <remote-host>
```

warns:

```
Warning: remote port forwarding failed for listen path /run/user/1001/gnupg/S.gpg-agent
```

This is presumably because systemd already owns the remote socket:

```
$ sudo journalctl -xe
…
Mar 11 15:06:21 pact-cube sshd[4972]: error: bind: Address already in use
Mar 11 15:06:21 pact-cube sshd[4972]: error: unix_listener: cannot bind to path: /run/user/1001/gnupg/S.gpg-agent
```

What must I do to forward the gpg agent from macOS to Ubuntu 18.04? Both machines have the required GPG and SSH versions.

  • I would start by asking what port numbers are being used, and I would test with different ones that I'm sure no other process is listening on. If this doesn't work, please share more details about your networking, and also check all aspects of privileges and security, e.g. that the process owner has access to the required files and forwarded ports, and that others are prevented from reading the files where you keep your secrets. Commented Jun 5, 2021 at 9:13
  • There are no TCP port numbers involved @JoseManuelGomezAlvarez, this question is about forwarding UNIX-domain sockets. Commented Jul 23, 2022 at 20:51

2 Answers


The easiest way to get gpg forwarding working is to first tell systemd to stop managing those sockets. This can be done for the current user, without sudo permissions, by executing the following on the remote side:

```shell
systemctl --user disable gpg-agent.socket
systemctl --user disable gpg-agent-extra.socket
systemctl --user stop gpg-agent.socket
systemctl --user stop gpg-agent-extra.socket
```

With that problematic component out of the way, the next step is to determine where the sockets need to be placed. This is easily done by running gpgconf --list-dirs. (Note that those socket dirs are hardcoded, as can be seen in homedir.c. They cannot be configured without patching gnupg and recompiling.)
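For reference, the relevant lines of gpgconf --list-dirs look like the two shown below, and the sed filters used later simply strip the field name to leave the path. A self-contained sketch using sample output (the uid 1001 paths are taken from the question; yours will differ):

```shell
# Sample of the two relevant lines from `gpgconf --list-dirs`;
# the sed filters extract the socket paths by stripping the field names.
sample='agent-socket:/run/user/1001/gnupg/S.gpg-agent
agent-extra-socket:/run/user/1001/gnupg/S.gpg-agent.extra'
printf '%s\n' "$sample" | sed -n 's/^agent-socket://p'
# → /run/user/1001/gnupg/S.gpg-agent
printf '%s\n' "$sample" | sed -n 's/^agent-extra-socket://p'
# → /run/user/1001/gnupg/S.gpg-agent.extra
```

Anchoring the patterns with ^ keeps the first filter from ever touching the agent-extra-socket or agent-ssh-socket lines.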

All that is required is that the client's extra socket gets forwarded to the server, and that it gets left alone by misbehaving third parties. An ssh config block can be generated by running the following (update $remote first):

```shell
remote='host.example.com'
local_sock=$(gpgconf --list-dirs | sed -n 's/^agent-extra-socket://p')
remote_sock=$(ssh "$remote" gpgconf --list-dirs | sed -n 's/^agent-socket://p')
printf 'Host %s\n  RemoteForward %s %s\n' "$remote" "$remote_sock" "$local_sock"
```

This should typically result in something similar to the following (the numerical UIDs will differ, for sure):

```
Host host.example.com
  RemoteForward /run/user/1/gnupg/S.gpg-agent /run/user/2/gnupg/S.gpg-agent.extra
```
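Once the generated block looks right, it can be appended to the client-side ssh configuration. A sketch that writes to a demo file with the placeholder paths from above (in practice the target would be ~/.ssh/config and the paths would be the ones gpgconf reported):

```shell
# Append a Host/RemoteForward block to an ssh client config file.
# Demo file and placeholder paths; substitute ~/.ssh/config and real paths.
cfg=./ssh_config_demo
printf 'Host %s\n  RemoteForward %s %s\n' \
  'host.example.com' \
  '/run/user/1/gnupg/S.gpg-agent' \
  '/run/user/2/gnupg/S.gpg-agent.extra' >> "$cfg"
cat "$cfg"
```

Using >> rather than > avoids clobbering any existing configuration in the file.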

Even with systemd evicted, there's another uninvited guest who tends to show up to these parties: a gpg-agent on the remote system. As the only agent we want is the local one, any remotely running agent should be killed to stop it from blocking the socket forwarding:

```shell
ssh "$remote" pkill gpg-agent
```
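If in doubt whether an agent is actually running before (or after) the pkill, pgrep will show it. This check is not part of the answer above, just a small sketch; run it locally, or via ssh on the remote:

```shell
# List running gpg-agent processes, or print a note when none are found.
pgrep gpg-agent || echo 'no gpg-agent running'
```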

That last step will likely be required after each attempt to use gpg without agent forwarding active, unless gpg's configuration is updated to exit with an error message when the socket is unavailable, rather than attempting to launch a silly remote agent which will never be trusted with any keys. That can be done with:

```shell
ssh "$remote" "echo 'no-autostart' >> .gnupg/gpg.conf"
```
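The append above adds a duplicate line on every run. A slightly more careful variant only adds the option when it is missing; sketched here against a local demo file (on the remote, the target would be ~/.gnupg/gpg.conf):

```shell
# Idempotently append no-autostart: only add it when not already present.
conf=./gpg.conf.demo
touch "$conf"
grep -qx 'no-autostart' "$conf" || echo 'no-autostart' >> "$conf"
grep -qx 'no-autostart' "$conf" || echo 'no-autostart' >> "$conf"  # second run is a no-op
grep -c 'no-autostart' "$conf"  # prints 1
```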

With the above in place, ssh -v should report: remote forward success for....

  • I spent hours and hours trying to make gpg over ssh work, and this really helped: I had gpg-agent starting anyway, and no-autostart is all I needed. Lots of online documentation seems outdated on this, since gpg-agent is now mandatory. However, I have to forward the regular socket; the extra one doesn't work when I try to decrypt on the server... Commented Feb 24, 2024 at 13:21

Delete the stale socket file:

```shell
ssh mylinuxserver 'rm /run/user/1001/gnupg/S.gpg-agent'
```

and then connect:

```shell
ssh -vvv mylinuxserver
```

This assumes you have set up the RemoteForward in your ~/.ssh/config. It would look something like this:

```
# File: ~/.ssh/config
[...]
Host mylinuxserver
  HostName mylinuxserver.example.com
  #RemoteForward <socket_on_remote_box> <extra_socket_on_local_box>
  RemoteForward /run/user/1001/gnupg/S.gpg-agent /Users/rasmus/.gnupg/S.gpg-agent.extra
[...]
```

In the verbose output, you should see something like:

```
debug1: remote forward success for: listen /run/user/1001/gnupg/S.gpg-agent:-2, \
  connect /Users/rasmus/.gnupg/S.gpg-agent.extra:-2
```

Using gpg-agent on the remote machine should then work.

To avoid needing to delete the stale socket file each time, add StreamLocalBindUnlink yes to the server's /etc/ssh/sshd_config, as suggested here: https://wiki.gnupg.org/AgentForwarding
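The server-side fragment would look like the following (a sketch; after editing, sshd must be restarted, e.g. with sudo systemctl restart sshd, for it to take effect):

```
# /etc/ssh/sshd_config (remote machine)
# Let sshd remove an existing socket file before binding a forwarded one.
StreamLocalBindUnlink yes
```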

A note to anyone trying to get this to work on Fedora Linux: on my Fedora Workstation I needed to enable two sockets to get socket activation to work:

```shell
systemctl --user enable gpg-agent.socket
systemctl --user enable gpg-agent-extra.socket
```

The gpg-agent.service then reported that it will be triggered on-demand by those two sockets:

```
$ systemctl --user status gpg-agent.service
○ gpg-agent.service - GnuPG cryptographic agent and passphrase cache
     Loaded: loaded (/usr/lib/systemd/user/gpg-agent.service; static)
     Active: inactive (dead)
TriggeredBy: ● gpg-agent-extra.socket
             ● gpg-agent.socket
       Docs: man:gpg-agent(1)
```
