
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted".

The NFS server has the following share.

/mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check) 

The share seems to be active for both clients.

# exportfs -s
/mnt/share_dir  192.168.7.101(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
/mnt/share_dir  192.168.7.11(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)

The client 192.168.7.101 can see the share.

$ sudo showmount -e 192.168.7.10
Export list for 192.168.7.10:
/mnt/share_dir 192.168.7.101

192.168.7.101's mount destination:

# ls -lah /mnt/share_dir/
total 8.0K
drwxr-xr-x 2 me   me   4.0K Aug 28 19:21 .
drwxr-xr-x 3 root root 4.0K Aug 28 19:21 ..

When I try to mount the share, the client says "operation not permitted" with either the nfs or nfs4 filesystem type.

$ sudo mount -vvv -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
mount.nfs: timeout set for Sun Aug 28 21:56:03 2022
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.7.10,clientaddr=192.168.7.101'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.7.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.7.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.7.10 prog 100005 vers 3 prot UDP port 46169
mount.nfs: mount(2): Operation not permitted
mount.nfs: Operation not permitted

I've tried adding fsid=0 and insecure to the export options, but that didn't help.

rpcinfo output from the client's side:

# rpcinfo -p 192.168.7.10
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  59675  mountd
    100005    1   tcp  37269  mountd
    100005    2   udp  41354  mountd
    100005    2   tcp  38377  mountd
    100005    3   udp  46169  mountd
    100005    3   tcp  39211  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  46745  nlockmgr
    100021    3   udp  46745  nlockmgr
    100021    4   udp  46745  nlockmgr
    100021    1   tcp  42571  nlockmgr
    100021    3   tcp  42571  nlockmgr
    100021    4   tcp  42571  nlockmgr

Using another client, 192.168.7.11, I was able to mount that share with no issues.

I cannot see any issue or misconfiguration, and I could not find a fix anywhere. There's no firewall in the way, and both server and client are running Debian 11.
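In case it's useful, running the mount under strace (just a sanity check; the exact arguments in the output will differ, and I've abridged it here) shows that it's the mount(2) syscall itself being rejected by the kernel:

$ sudo strace -f -e trace=mount mount -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
mount("192.168.7.10:/mnt/share_dir", "/mnt/share_dir", "nfs", 0, ...) = -1 EPERM (Operation not permitted)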

Any idea of what's going on?

  • From your showmount /mnt/backup/backup1/Videos 192.168.7.101 ... to me /mnt/backup/backup1/Videos and 192.168.7.10:/mnt/share_dir don't look all that similar ... ? Commented Aug 29, 2022 at 3:05
  • Oh sorry. It is the same. I changed the names for ease of reading... and typing. Commented Aug 30, 2022 at 11:03
  • 1. mount -r ...? i.e. try mounting read-only since that's how it's exported. 2. You've shown us the client error messages; what does the server tell you? Commented Aug 30, 2022 at 20:20
  • @roaima, using the -r option outputs the same error. I could not find a specific NFS log file. I could only find some syslog messages where systemd starts the NFS daemon. Something like: kernel: [ 38.121183] FS-Cache: Loaded followed by kernel: [ 38.135725] FS-Cache: Netfs 'nfs' registered for caching Commented Aug 31, 2022 at 1:53

3 Answers


I found the issue.

Basically, I had created the client as an unprivileged Debian container in Proxmox, and unprivileged containers are not allowed to perform NFS mounts, which is exactly why mount(2) returns "Operation not permitted". Until now, I was unaware of that restriction when using Proxmox containers.
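(Side note for anyone else debugging this: a quick way to check whether you're inside an unprivileged container is the UID map. In an unprivileged container the IDs are shifted, typically something like the output below, whereas a privileged container or the host shows 0 0 4294967295.)

$ cat /proc/self/uid_map
         0     100000      65536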

To be able to access the NFS share from within that container, I followed some suggestions from the Proxmox forum.

First, I mounted the NFS share on the Proxmox host (no issues there). Then, in Proxmox, I created a "bind mount" to expose that NFS mount to my container:

# pct set 903 -mp0 /mnt/host_dir,mp=/mnt/guest_dir 
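For completeness, the host-side part was roughly the following; /mnt/host_dir is just the placeholder name from the pct command above:

# on the Proxmox host
mkdir -p /mnt/host_dir
mount -t nfs 192.168.7.10:/mnt/share_dir /mnt/host_dir

# optionally persist it across host reboots
echo '192.168.7.10:/mnt/share_dir /mnt/host_dir nfs ro,defaults 0 0' >> /etc/fstab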

I'm not sure this is the best approach, but now I can access that NFS share from within the container.

Another possibility is to recreate the container as privileged, with the NFS feature enabled.

  • One point to keep in mind: this command should be applied while the container with ID 903 is off; otherwise it will fail. Commented Mar 24, 2024 at 14:13

I use RHEL 7.9, and for what it's worth, I am disappointed with NFS 22 years into the 21st century...

My experience is that if you edit /etc/nfs.conf or /etc/sysconfig/nfs, the mount often falls back to version 3. For me (again, on RHEL 7.9; I can't speak for other distributions), to get NFS v4.1 working I must not change anything in either of those two files, and even then only v4.1 works; I have never been able to get NFS v4.2 to work, even though it is listed in /etc/nfs.conf.

So, make sure that all the details in /etc/nfs.conf and /etc/sysconfig/nfs match between your NFS server and client, things like the mountd and statd port numbers. By default it should all happen over port 2049 for NFSv4. If there are port-number discrepancies between the NFS server and client, that will prevent the mount from happening.
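As an illustration only (these port numbers are commonly cited RHEL examples, not required values), pinning the daemons to fixed ports so both machines agree would look something like this on RHEL 7:

# /etc/sysconfig/nfs -- same values on server and client
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769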

For reference, here is the bare minimum needed:

On the NFS server, put this in /etc/exports:

/bkup *(rw,no_root_squash)

then run exportfs -av, followed by exportfs -s to validate.

On the NFS client, a simple mount should then work (change the IP address to match your NFS server, and the folder name accordingly):

mount 192.168.1.1:/bkup /bkup

Also do a service firewalld stop along with a setenforce 0 to turn off the firewall and SELinux, respectively. I don't think SELinux typically prevents NFS, but take one step at a time to get the mount working. What I've mentioned here has always gotten NFS to at least work; hope that helps.

  • Both systems are using Debian 11. Also, there's no SELinux and no FirewallD. I've checked the /etc/default/nfs-common file, and both server and client are identical, with no options set. On the server side, /etc/default/nfs-kernel-server is pretty much default. Commented Aug 31, 2022 at 1:45
  • There's no /etc/nfs.conf or /etc/sysconfig/nfs either. Commented Aug 31, 2022 at 2:47
  • So it seems Debian implements NFS in a different manner than what I know from using RHEL/CentOS. Not sure how else I could help, other than to suggest looking for a Debian-specific question/answer site. Commented Sep 1, 2022 at 13:21

What I found to be the simplest answer is mentioned briefly at the end of the first answer as a possibility:

"Another possibility is to recreate the container with privilege and NFS enabled."

I used the RockyLinux 9 template; I'm sure it works on Ubuntu as well. When you create the LXC container, on the first tab uncheck the "Unprivileged container" option, which will also clear the "Nesting" checkbox below it. Finish creating your LXC, but don't boot it yet.

[Screenshot: LXC creation wizard, first tab, with "Unprivileged container" unchecked]

Then, under the LXC's "Options" tab, edit the "Features" option and check both "NFS" and "Nesting".
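If you prefer the command line, the same toggles can be set with pct; the container ID 903 below is just an example, and you may want to double-check the syntax against pct(1) on your Proxmox version:

pct set 903 --features nesting=1,mount=nfs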

[Screenshot: LXC Options tab, "Features" dialog with "NFS" and "Nesting" checked]

Lastly, don't forget to install the nfs-common package inside the container (see the distribution-specific names below). And that's it.
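(The package name depends on the distribution running inside the container: nfs-common on Debian/Ubuntu, nfs-utils on Rocky/RHEL.)

# Debian/Ubuntu
apt install nfs-common

# RockyLinux / RHEL
dnf install nfs-utils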

