7 Posts
April 18th, 2011 12:00
Failures with: server_export -option nfsv4only
My attempts to limit an NFS export to nfsv4-only fail on the Celerra simulator (NAS 6.0.36). This worked okay for me with NAS 5.6.43.
I have changed netd from hivers=3 to hivers=4 and rebooted as documented. The command still fails, claiming NFS was started with V3 in spite of the change to .../netd.
I even rebooted the whole appliance out of desperation.
My complete sequence of command-line configuration steps is shown below. Please advise.
server_param server_2 -facility nfsv4 -modify domain -value eng.e-dialog.com
cat /nas/server/slot_2/netd
routed
nfs start hivers=4
statd
lockd
pax
rquotad action=start
ndmp port=10000
xattrp
snmpd
kerbinit
server_cpu server_2 -reboot now
/nas/sbin/uc_config -convert start 8859-1.txt -mover server_2
/nas/sbin/uc_config -convert start big5.txt -mover server_2
/nas/sbin/uc_config -convert start 8859-1.txt -mover server_2
nas_volume -name stv1 -create -Stripe 1024000 d7,d8,d9,d10
nas_volume -name stv2 -create -Stripe 1024000 d7,d8,d9,d10
nas_volume -name stv2 -create -Stripe 1024000 d11,d12,d13,d14
nas_volume -name mtv1 -create -Meta stv1
nas_volume -name mtv2 -create -Meta stv2
nas_fs -name queue -create mtv1
nas_fs -name mm3 -create mtv2
server_mountpoint server_2 -create /queue
server_mountpoint server_2 -create /mm3
server_mount server_2 -option accesspolity=MIXED,nooplock,rwlock,nfsv4delegation=NONE queue /queue
server_mount server_2 -option accesspolity=MIXED,nooplock,rwlock,nfsv4delegation=NONE mm3 /mm3
server_export server_2 -Protocol nfs -option access=10.0.13.0/255.255.252.0,root=10.0.13.0/255.255.252.0,nfsv4only -comment 'NFS v4 only export for queue' /queue
server_2 :
Error 22: server_2 : Invalid argument
Export Error: Nfs has been started with HI_VERS 3. Cannot export as nfsv4 only.
bergec
275 Posts
April 23rd, 2011 03:00
If this is DART 6.0, then the file to edit for NFSv4 is different; look at the documentation (I think it's "config").
Claude
wblair1
7 Posts
April 19th, 2011 07:00
I already did that, as shown in the first few lines of my posting. And, yes, I did triple-check that I am referencing the correct server_2 and slot_2.
I even verified that the inodes for the netd file were the same for server_2 and slot_2. I did edit what I believe is the correct file and rebooted the correct Data Mover. I also rebooted the whole appliance just in case....
gbarretoxx1
366 Posts
April 19th, 2011 07:00
Hi,
Ensure that the /nas/server/slot_x/config file, where x is the slot number of the Data Mover, has hivers set to 4. The nfs entry in the /nas/server/slot_x/config file should appear similar to:
nfs config hivers=4
Then, reboot the Data Mover.
Gustavo Barreto.
gbarretoxx1
366 Posts
April 19th, 2011 10:00
It would be /nas/server/slot_2/config
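For example, assuming server_2 in slot 2 as in your listing, a quick check would be something like:
cat /nas/server/slot_2/config
and confirming that the nfs line reads:
nfs config hivers=4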
wblair1
7 Posts
April 19th, 2011 10:00
PLEASE NOTE:
There was a typo with the accesspolicy option in the original server_mount command:
server_mount server_2 -option accesspolity=MIXED,nooplock,rwlock,nfsv4delegation=NONE queue /queue
I have done a permanent unmount of "/queue" and repeated the commands with the correct syntax.
The nfsv4only export option continues to fail with the same error in my original post:
server_export server_2 -Protocol nfs -option access=10.0.13.0/255.255.252.0,root=10.0.13.0/255.255.252.0,nfsv4only -comment 'NFS v4 only export for queue' /queue
server_2 :
Error 22: server_2 : Invalid argument
Export Error: Nfs has been started with HI_VERS 3. Cannot export as nfsv4 only.
gbarretoxx1
366 Posts
April 19th, 2011 10:00
Did you check the config file instead of the netd file?
Thanks,
wblair1
7 Posts
April 19th, 2011 12:00
Content of /nas/server/slot_2/netd
routed
nfs start hivers=4
statd
lockd
pax
rquotad action=start
ndmp port=10000
xattrp
snmpd
kerbinit
Content of /nas/server/slot_2/config
nfs config
wblair1
7 Posts
April 22nd, 2011 16:00
I also find that after editing /nas/server/slot_2/netd, setting hivers=4, and then rebooting the Data Mover, NFSv4 is not running.
Here are some specifics showing the release, the content of my netd file, the reboot operation, and the failed NFS status after the reboot.
Suggestions???
[nasadmin@Celerra-v6 ~]$ cat /etc/emc-release
EMC Celerra Control Station Linux release 3.0 (NAS 6.0.36)
[nasadmin@Celerra-v6 ~]$ cat /nas/server/slot_2/netd
routed
nfs start hivers=4
statd
lockd
pax
rquotad action=start
ndmp port=10000
xattrp
snmpd
kerbinit
[nasadmin@Celerra-v6 ~]$ server_cpu server_2 -reboot -monitor now
server_2 : reboot in progress 5.0.0.0.0.0.0.3.3.3.3.3.3.3.4.done
[nasadmin@Celerra-v6 ~]$ server_nfs server_2 -v4
server_2 :
NFSv4 is not enabled
Restart system and use nfs option hivers=4
[nasadmin@Celerra-v6 ~]$ server_nfs server_2 -v4 -service -start
server_2 :
Error 4020: server_2 : failed to complete command
jukokkon
51 Posts
April 23rd, 2011 12:00
The file to edit is
/nas/server/slot_2/config
Put hivers=4 there, and NFSv4 will be started.
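For example, reusing the commands already shown earlier in this thread (assuming server_2 in slot 2), the sequence would presumably be: put hivers=4 on the nfs config line in /nas/server/slot_2/config, then reboot and verify,
server_cpu server_2 -reboot -monitor now
server_nfs server_2 -v4
and then retry the nfsv4only export from the first post.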
--
Jussi
wblair1
7 Posts
April 25th, 2011 08:00
Thanks.
I was using the V6 documentation, and the "Specify NFS v4 access" section of the NFS manual (page 41) said to edit the start command in the netd file.
I misread the reference on page 28 that said "the /nas/server/slot_x/conf file". I didn't take it literally and missed it, since the section that explicitly talked about "nfsv4only" said to edit the start command in netd.
It was a little obscure because the "hivers=4" I put into /nas/server/slot_2/netd was not rejected and did not report any warnings on the console.
I never knew NFSv4 had failed to start until long after I started trying to figure out what the nfsv4only error really meant.
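In other words, the v5-style entry I had in /nas/server/slot_2/netd,
nfs start hivers=4
is apparently ignored on 6.0, and the line that actually matters is in /nas/server/slot_2/config:
nfs config hivers=4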
BTW:
I was very unhappy that the system let me put an invalid parameter into netd and then quietly ignore it. Just for fun, I put a bogus parameter into /nas/server/slot_2/netd and the reboot went into limbo..... I never got an error, and I never got the "reboot in progress" message...
This was ignored: nfs start hivers=4
This caused limbo: nfs start thisIsStupid
Yes, this was ultimately user error, but I stared at it too long and missed it. Also, the CD I downloaded did not have release notes. Where was this drastic change between the v5 and v6 configuration documented?
Again, thanks.