
I've been having trouble with autofs for two days. I have a Solaris 11 server, where I share the folder /export/home with the following command:

share -o rw -d "Freigabe von /export/home" /export/home

My client is Fedora 17. On Fedora I created the folder /ahome, where all home folders should be mounted by autofs, and gave /ahome permissions 777.

After that I configured /etc/auto.master and added

/ahome auto.homes 

Then I created the file /etc/auto.homes

read1    192.168.0.3:/export/home/read1
read2    192.168.0.3:/export/home/read2
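
(Assuming all home directories sit directly under /export/home, a single wildcard entry would also cover them, but I list the two users explicitly:)

*    192.168.0.3:/export/home/&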

After that I restarted autofs with

systemctl restart autofs.service 

Both users (read1 and read2) exist on both systems with the same UID and GID. But when I cd to /ahome/read1 and run ls on Fedora, I get the following error:

ls: cannot open directory .: Permission denied 

In /var/log/messages I see:

Mar 30 23:43:34 fe-19 pulseaudio[1474]: [alsa-sink] alsa-sink.c: We were woken up with POLLOUT set -- however a subsequent snd_pcm_avail() returned 0 or another value < min_avail.
Mar 30 23:43:49 fe-19 dbus-daemon[582]: ** Message: No devices in use, exit
Mar 30 23:45:31 fe-19 systemd[1]: Cannot add dependency job for unit mdmonitor-takeover.service, ignoring: Unit mdmonitor-takeover.service failed to load: No such file or directory. See system logs and 'systemctl status mdmonitor-takeover.service' for details.
Mar 30 23:45:32 fe-19 automount[1100]: umount_autofs_indirect: ask umount returned busy /ahome
Mar 30 23:47:49 fe-19 dbus-daemon[582]: (packagekitd:1508): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
Mar 30 23:55:03 fe-19 systemd[1]: Cannot add dependency job for unit mdmonitor-takeover.service, ignoring: Unit mdmonitor-takeover.service failed to load: No such file or directory. See system logs and 'systemctl status mdmonitor-takeover.service' for details.
Mar 30 23:55:03 fe-19 automount[1933]: umount_autofs_indirect: ask umount returned busy /ahome

Please, can anybody help me? I'm starting to hate autofs.

  • First of all: are you able to mount via NFS at all? I see the share -o command, but not whether you tried to mount it manually. Commented Mar 31, 2013 at 7:20
  • Yes, you are right. I also have no permissions when I mount the folder manually with the mount command. Commented Mar 31, 2013 at 8:25
  • This might be a bit off, but I remember having issues when mounting NFS v4 between Linux and Solaris. I had to force the Solaris server to NFS v3. Maybe you can try that; a sketch of both ways to pin v3 follows these comments. Commented Apr 1, 2013 at 11:02
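
A rough sketch of both ways to pin NFS to version 3 (server-side via sharectl, or client-side via the autofs map options):

    # On the Solaris server: cap the served NFS version at 3
    sharectl set -p server_versmax=3 nfs

    # Or on the Fedora client, per entry in /etc/auto.homes
    read1    -fstype=nfs,vers=3    192.168.0.3:/export/home/read1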

2 Answers


Solaris basically assumes that both the client and the server have the same UIDs/GIDs for every user. What's probably happening is that your 'read1' and 'read2' users don't exist on the Solaris server, so the NFS requests are being made as the NFS anon user. There are two ways to fix it.
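
A quick way to confirm what the server currently has, before picking one, is to list the export with numeric IDs and see which UIDs own the home directories:

    ls -ln /export/home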

First, find the numerical UIDs of the read1 and read2 users on the Linux host. For example, if read1 were UID 101 and read2 were UID 102, you could:

  1. You can either chown the /export/home/read1 and /export/home/read2 directories to their respective UIDs on the Solaris server:

    chown -R 101 /export/home/read1

    chown -R 102 /export/home/read2

  2. Or you can set the NFS anon user to the matching UID for each user and share each directory individually:

    share -o rw -o anon=101 -d "Freigabe von /export/home" /export/home/read1

    share -o rw -o anon=102 -d "Freigabe von /export/home" /export/home/read2

However, if you're using ZFS on Solaris 11, which you probably are, you can share these directly in ZFS:

  1. Single share for everyone

    zfs set share=name=homedirs,path=/export/home,prot=nfs,sec=sys,rw rpool/export/home

  2. Individual shares

    zfs set share=name=read1-homedir,path=/export/home/read1,prot=nfs,sec=sys,rw rpool/export/home/read1

    zfs set share=name=read2-homedir,path=/export/home/read2,prot=nfs,sec=sys,rw rpool/export/home/read2

Doing it this way saves the NFS shares in the metadata of the zpool, and ZFS will share them again any time that pool is mounted. That's perhaps not that useful on the rpool, but if you have pools made from external disks it can be handy, especially if you ever need to move the disks to a new host.
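
To confirm what got stored on the dataset, querying the share property should echo the definition back (using the dataset names from above):

    zfs get share rpool/export/home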

  • "Solaris basically assumes that both the client and the server have the same UIDs/GIDs for every user." You should replace Solaris by nfsv3 in this sentence as this is a NFS v3 and older requirement. NFS v4 user mapping is done differently. You also wrote read1 and read2 users probably do not exist on the Solaris side. Technically, this is not required. Commented May 2, 2013 at 6:22
  • No, the users don't need to exist on the Solaris server. If you're using NFSv3, though, you will need to chown the home directories to the right UID and GID if you want the remote users to be able to read and write to them. If you're using NFSv4, you can accomplish similar results by setting ZFS ACLs for the read1 and read2 users (chmod A+user:$UID:full_set:fd:allow /export/home/$dir). I was unclear in my answer. Commented May 2, 2013 at 13:38

Assuming NFS v3 is used, /export/home/read1 must be owned by read1 and /export/home/read2 by read2.

If you use NFSv4, extra configuration is required for proper mapping between user ids.
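
A minimal sketch of that extra configuration, assuming both machines sit in a (hypothetical) NFSv4 domain called example.com:

    # On the Fedora client, in /etc/idmapd.conf
    [General]
    Domain = example.com

    # On the Solaris 11 server
    sharectl set -p nfsmapid_domain=example.com nfs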
