I am running an OpenVPN gateway to an AWS VPC. I can normally mount an AWS FSx for Lustre filesystem on my bare-metal CentOS 7 machine with a command like this:
```
sudo mount -t lustre -o noatime,flock 10.1.1.90@tcp:/fsx /fsx
```

However, if I try to do the same thing in a Vagrant CentOS 7 box on the same network, I hit this apparently networking-related error:
```
[vagrant@localhost ~]$ sudo mount -t lustre -o noatime,flock 10.1.1.90@tcp:/fsx /fsx
mount.lustre: mount 10.1.1.90@tcp:/fsx at /fsx failed: Input/output error
Is the MGS running?
```

The Vagrant box has no problem mounting NFS shares from the same AWS subnet, so this is a mystery to me. Getting it working in the Vagrant image matters even though bare metal works, because we use the Vagrant environment for testing.
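Since the error message is ambiguous, I have been trying to isolate which layer fails. Here is the checklist of connectivity probes I run at each layer, generated from the mount target; the use of port 988 assumes LNet's default TCP port, and `lctl ping` assumes the lustre-client tools are installed:

```shell
# Build the layer-by-layer connectivity checks from the mount target.
# 988 is LNet's default TCP port (adjust if your deployment differs).
target="10.1.1.90@tcp:/fsx"
nid="${target%%:*}"      # LNet NID:   10.1.1.90@tcp
server="${nid%%@*}"      # Server IP:  10.1.1.90

echo "ICMP:  ping -c 1 $server"
echo "TCP:   nc -zv -w 5 $server 988"
echo "LNet:  sudo lctl ping $nid"
```

On my bare-metal machine all three checks succeed; inside the Vagrant guest I can compare where they start failing.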
Here is an example Vagrantfile with which I can reproduce the problem:
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vagrant.plugins = ['vagrant-vbguest', 'vagrant-disksize', 'vagrant-reload']

  config.vm.provider "virtualbox" do |v|
    v.gui = true
    v.memory = 2048
    v.cpus = 2
  end

  config.disksize.size = "65000MB"
  config.vm.network "public_network", use_dhcp_assigned_default_route: true

  config.vm.provision "shell", inline: "sudo yum update -y"
  config.vm.provision "shell", inline: "sudo yum install wget -y"
  config.vm.provision "shell", inline: "sudo wget https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc -O /tmp/fsx-rpm-public-key.asc"
  config.vm.provision "shell", inline: "sudo rpm --import /tmp/fsx-rpm-public-key.asc"
  config.vm.provision "shell", inline: "sudo wget https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/fsx-lustre-client.repo -O /etc/yum.repos.d/aws-fsx.repo"
  config.vm.provision "shell", inline: "sudo yum install -y kmod-lustre-client lustre-client"
  config.vm.provision :reload
end
```
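For completeness, one variant of the networking stanza I could test with is a bridged interface with a static address, in case DHCP assignment inside the guest matters; the interface name and IP below are placeholders, not values from my setup:

```ruby
# Hypothetical variant: pin the guest to a fixed, bridged address so
# the server side always sees a stable, directly reachable client IP.
# "eth0" and "192.168.1.50" are placeholders for my environment.
config.vm.network "public_network",
  ip: "192.168.1.50",
  bridge: "eth0"
```

I would welcome any pointers on whether the VirtualBox networking mode is the relevant difference here.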