NS40G file systems exist but won't mount
May 27th, 2015 07:00
I inherited this system as part of a re-org of several large departments.
The warranty had lapsed, and most of the shares had already been moved over to a new NAS.
It's an NS40G with a CX3-80 back end.
The problem is that there is an archive folder which was not moved, and all of its files must be kept for 50 years.
So I need to get that data over to the other NAS.
The server can see and ping the Celerra, but no mount points are available.
The Celerra GUI reports the file system ARCHIVE as present with status OK.
However, when I look at the Mount tab for Archive, I see the following:
"File system properties cannot be modified because the file system is temporarily unmounted"
server_df server_2 results in Error 2: server_2 : No such file or directory
server_mount server_2 results as follows:
server_2 :
root_fs_2 on / uxfs,perm,rw,
root_fs_common on /.etc_common uxfs,perm,rw,
root_fs_vdm_vdm01 on /root_vdm_1/.etc uxfs,perm,rw,
imagecache01 on /imagecache01 uxfs,perm,rw,
archive on /archive uxfs,perm,rw,
server_mountpoint server_2 -list results as follows
/.etc_common
/root_vdm_1
/root_vdm_1/.etc
/imagecache01
/archive
server_mountpoint server_2 -exist /archive results as follows
server_2 : /archive : exists
server_umount server_2 -perm /archive results in the following
server_2 :
Error 10281: server_2 archive : unable to unmount. Cannot determine file system status from the server
server_df server_2 /archive results in the following
Error 2: server_2 : No such file or directory
failed to complete command
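The core symptom above — file systems that server_mount lists as mounted but server_df cannot see — can be spotted mechanically by comparing the two outputs. A minimal sketch, assuming both outputs have been captured to text files (the file names, sample data, and parsing are my own, not Celerra tooling):

```shell
# Flag file systems that appear in captured server_mount output but are
# missing from captured server_df output. Sample data mirrors this thread.
cat > server_mount.out <<'EOF'
imagecache01 on /imagecache01 uxfs,perm,rw,
archive on /archive uxfs,perm,rw,
EOF
cat > server_df.out <<'EOF'
imagecache01
EOF
# First word of each server_mount line is the file system name.
while read -r fs _rest; do
    grep -qx "$fs" server_df.out || echo "suspect: $fs"
done < server_mount.out
```

With this sample data, only archive is flagged, matching the behavior described above.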
dynamox
May 27th, 2015 07:00
Hopefully no one reclaimed the back-end storage on the CLARiiON. Can you compare the output of nas_disk -l with what you see in the storage group on the CLARiiON?
LivingforFriday
May 27th, 2015 08:00
Yes, that was the first thing we checked: all LUNs in the storage group correspond to the LUNs seen by the Celerra ...
26 drives in the storage group, 26 drives seen by nas_disk -l
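That comparison can also be scripted once both lists are dumped to files, one identifier per line; a sketch under that assumption (the file names and sample data here are illustrative, not output from nas_disk or the CLARiiON):

```shell
# Diff the Celerra-visible disk list against the storage-group LUN list.
# In practice, populate these files from `nas_disk -l` and the CLARiiON
# storage group view; the sample data below is illustrative only.
printf '%s\n' 0 1 2 > celerra_disks.txt
printf '%s\n' 0 1 2 > sg_luns.txt
sort celerra_disks.txt > c.sorted
sort sg_luns.txt > s.sorted
if diff c.sorted s.sorted > /dev/null; then
    echo "LUN lists match"
else
    echo "LUN lists differ:"
    diff c.sorted s.sorted
fi
```

A sorted diff catches not just a count mismatch but a swapped or missing LUN, which a simple 26-vs-26 count would hide.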
LivingforFriday
May 27th, 2015 08:00
More info ... when I do a nas_fs -list I get:
id inuse type acl volume name server
2 y 1 0 12 root_fs_2 1
However, when I try nas_fs -info -size root_fs_2,
it spits out Error 2205 : root_fs_2 : requires root command ... even though I am logged in as root.
I believe this is the cause of my issues, as root_fs_2, which should mount to /, is currently unmounted.
I cannot mount it, nor unmount it.
dynamox
May 27th, 2015 13:00
Does the output from getreason look OK?
/nasmcd/getreason
LivingforFriday
May 29th, 2015 07:00
yep ... all is well ...
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
dynamox
May 29th, 2015 13:00
Try:
/nas/sbin/rootnas_fs -info -size root_fs_2
LivingforFriday
June 1st, 2015 12:00
Seems all good ...
[root@xxxxxxxxxxxx]# /nas/sbin/rootnas_fs -info -size root_fs_2
id = 2
name = root_fs_2
acl = 0
in_use = True
type = uxfs
worm = off
volume = root_volume_2
pool =
rw_servers= server_2
ro_servers=
rw_vdms =
ro_vdms =
auto_ext = no,virtual_provision=no
deduplication = unavailable
size = total = 256 (sizes in MB) ( blockcount = 524288 )
stor_devs = APM000807xxxxx-000A,APM000807xxxxx-000B
disks = root_disk,root_ldisk
disk=root_disk stor_dev=APM000807xxxxx-000A addr=c0t0l0 server=server_2
disk=root_disk stor_dev=APM000807xxxxx-000A addr=c48t0l0 server=server_2
disk=root_disk stor_dev=APM000807xxxxx-000A addr=c16t0l0 server=server_2
disk=root_disk stor_dev=APM000807xxxxx-000A addr=c32t0l0 server=server_2
disk=root_ldisk stor_dev=APM000807xxxxx-000B addr=c0t0l1 server=server_2
disk=root_ldisk stor_dev=APM000807xxxxx-000B addr=c48t0l1 server=server_2
disk=root_ldisk stor_dev=APM000807xxxxx-000B addr=c16t0l1 server=server_2
disk=root_ldisk stor_dev=APM000807xxxxx-000B addr=c32t0l1 server=server_2
dynamox
June 1st, 2015 19:00
So what happens if you try to mount root_fs_2 on /?
LivingforFriday
June 2nd, 2015 06:00
I would imagine that would cause an error, since / is already mounted while the Celerra OS is operational.
So I made a directory called /nas/temp and tried to mount the device there.
mount root_fs_2 /nas/temp results in:
mount: special device root_fs_2 does not exist
However, when I try server_mount server_2 /nas/temp, the result is:
Error 4109: server_2 root_fs_2 : is mounted on /
LivingforFriday
June 2nd, 2015 07:00
I've been digging around and found the following file in /nasmcd: ".nas_service_stop"
In /nasmcd/.emc_login I found the following statement:
# NAS Service Stop File. If this file exists, NAS service is still stopping.
# Non Existance of this file does NOT mean NAS services are running. This file
# is typically located in MCDHOME (/nasmcd) . This knowledge is necessary in the
# scripts using this definition
NAS_STOP_FILE=".nas_service_stop"
If I read this correctly, any script can check for this file and stop its own execution.
Thus, if the server_mount script checks for its existence, it will not mount anything, since it figures the server is shutting down.
Comments?
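If that reading is right, the guard the quoted comment describes would look something like this; a minimal sketch of the pattern only — MCDHOME and the stop-file name come from the quoted file, while the function name and messages are my own, not EMC's actual scripts:

```shell
# Sketch of the stop-file guard described by the comment in .emc_login.
# MCDHOME defaults to /nasmcd per the quoted text; all else is assumed.
MCDHOME=${MCDHOME:-/nasmcd}
NAS_STOP_FILE=".nas_service_stop"

nas_stopping() {
    # true (exit 0) when the stop file exists under MCDHOME
    [ -f "$MCDHOME/$NAS_STOP_FILE" ]
}

if nas_stopping; then
    echo "NAS service is stopping; refusing to mount." >&2
else
    echo "stop file absent: this alone does NOT prove services are running"
fi
```

Note the second branch matches the quoted warning: the file's absence does not guarantee that NAS services are actually up.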
LivingforFriday
June 4th, 2015 10:00
No change ... the stop file was a red herring.
It behaves as if a file system were missing, but all the file systems are there.
I can't unmount them to perform an fsck.
Waiting for a time-and-materials contract so we can get support involved.