DT-IT
May 6th, 2012 08:00
NS-350, How to delete volumes and create new NFS shares
Hello all,
I'm working with an NS-350 Celerra storage system.
This is what I want to do:
- Set up one EMC storage system that:
- serves NFS / iSCSI targets for KVM (kernel virtualization) hosts running Debian 6 (see the sketch just below)
- Use the second EMC storage system for spare parts.
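For the KVM hosts, the end state I'm after is roughly this kind of NFS export on the Celerra and mount on Debian (the file system, interface, and client names here are placeholders, not configured yet):
server_export server_2 -Protocol nfs -option root=kvm01,access=kvm01 /kvm_fs
mount -t nfs <datamover_ip>:/kvm_fs /var/lib/libvirt/images   (on the Debian 6 KVM host)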
This is what I have:
- 2x Control Stations
- 2x disk shelves
- 2x empty disk shelves
- 4x Data Movers
- 2x battery backup units
- All the licenses
- No EMC service contract
This is where I am:
- I can log in via HTTPS and SSH
- Deleted all old file systems
- Deleted all replication sessions
- Deleted all NFS, CIFS, and iSCSI configuration
Where I'm stuck:
- I can't delete all the old volumes and start fresh (see the teardown sketch below).
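From the documentation, my understanding of the usual teardown order is as follows (all names below are placeholders, not objects on this system):
server_umount server_2 -perm /some_fs   (unmount any remaining file system)
nas_fs -delete some_fs   (delete the file system)
nas_volume -delete some_volume   (delete the volume that sat under it)
nas_disk -delete d_name -perm   (finally remove the disk mark)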
My email is tom@dt-it.nl and my phone is +31 (0)88 0119500.
DT-IT
May 6th, 2012 08:00
I want to start completely fresh with a new RAID configuration, but I can't delete volumes or create file systems. I think I forgot to delete something.
Here is the list from the Control Station. What can I safely delete?
CLI:
[nasadmin@cs_emc02 nasadmin]$ nas_volume -list
id inuse type acl name cltype clid
1 y 4 0 root_disk 0 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,52
2 y 4 0 root_ldisk 0 35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51
3 y 4 0 d3 1 77
4 y 4 0 d4 1 78
5 y 4 0 d5 1 79
6 y 4 0 d6 1 80
7 n 1 0 root_dos 0
8 n 1 0 root_layout 0
9 y 1 0 root_slice_1 1 10
10 y 3 0 root_volume_1 2 1
11 y 1 0 root_slice_2 1 12
12 y 3 0 root_volume_2 2 2
13 y 1 0 root_slice_3 1 14
14 y 3 0 root_volume_3 2 3
15 y 1 0 root_slice_4 1 16
16 y 3 0 root_volume_4 2 4
17 y 1 0 root_slice_5 1 18
18 y 3 0 root_volume_5 2 5
19 y 1 0 root_slice_6 1 20
20 y 3 0 root_volume_6 2 6
21 y 1 0 root_slice_7 1 22
22 y 3 0 root_volume_7 2 7
23 y 1 0 root_slice_8 1 24
24 y 3 0 root_volume_8 2 8
25 y 1 0 root_slice_9 1 26
26 y 3 0 root_volume_9 2 9
27 y 1 0 root_slice_10 1 28
28 y 3 0 root_volume_10 2 10
29 y 1 0 root_slice_11 1 30
30 y 3 0 root_volume_11 2 11
31 y 1 0 root_slice_12 1 32
32 y 3 0 root_volume_12 2 12
33 y 1 0 root_slice_13 1 34
34 y 3 0 root_volume_13 2 13
35 y 1 0 root_slice_14 1 36
36 y 3 0 root_volume_14 2 14
37 y 1 0 root_slice_15 1 38
38 y 3 0 root_volume_15 2 15
39 y 1 0 root_slice_16 1 40
40 y 3 0 root_volume_16 2 16
41 n 1 0 root_rdf_channel 0
42 n 1 0 root_log_2 0
43 n 1 0 root_log_3 0
44 n 1 0 root_log_4 0
45 n 1 0 root_log_5 0
46 n 1 0 root_log_6 0
47 n 1 0 root_log_7 0
48 n 1 0 root_log_8 0
49 n 1 0 root_log_9 0
50 n 1 0 root_log_10 0
51 n 1 0 root_log_11 0
52 n 1 0 root_log_12 0
53 n 1 0 root_log_13 0
54 n 1 0 root_log_14 0
55 n 1 0 root_log_15 0
56 n 1 0 root_log_16 0
57 y 1 0 root_ufslog_1 1 73
58 y 1 0 root_ufslog_2 1 73
59 y 1 0 root_ufslog_3 1 73
60 y 1 0 root_ufslog_4 1 73
61 y 1 0 root_ufslog_5 1 73
62 y 1 0 root_ufslog_6 1 73
63 y 1 0 root_ufslog_7 1 73
64 y 1 0 root_ufslog_8 1 73
65 y 1 0 root_ufslog_9 1 73
66 y 1 0 root_ufslog_10 1 73
67 y 1 0 root_ufslog_11 1 73
68 y 1 0 root_ufslog_12 1 73
69 y 1 0 root_ufslog_13 1 73
70 y 1 0 root_ufslog_14 1 73
71 y 1 0 root_ufslog_15 1 73
72 y 1 0 root_ufslog_16 1 73
73 y 3 0 root_ufslog 2 17
74 y 1 0 root_ldisk_reserve 0 53,54,55,56
75 y 1 0 root_disk_reserve 1 76
76 y 3 0 root_panic_reserve 2 18
77 y 3 0 md3 2 19
78 y 3 0 md4 2 20
79 y 3 0 md5 2 21
80 y 3 0 md6 2 22
81 n 1 0 root_panic_2 0
82 y 1 0 root_s54_2 1 12
83 n 1 0 root_panic_3 0
84 y 1 0 root_s56_3 1 14
85 n 4 0 d7 0
86 y 4 0 d8 0 136
87 n 4 0 d9 0
88 n 4 0 d10 0
312 y 1 0 TEST 0
313 y 5 0 root_avm_vol_group_1 0
This is the error I get when I try to create a new file system from the web UI:
Create file system TEST. System error message (Error #5005): "failed to complete command"
Create file system TEST. System was unable to create a file system with the given parameters.
CLI:
server_log server_2
2012-11-08 05:04:41: CHAMIIENCMON: 4: encmon: Power Supply B AC power restored.
2012-11-08 05:04:41: CHAMIIENCMON: 4:15: encmon: Power Supply B OK.
2012-11-08 05:04:41: CHAMIIENCMON: 4:29: encmon: Enclosure OK.
2012-11-08 05:32:51: STORAGE: 3: Basic Volume 86 not created, Device c16t1l1 not found.
2012-11-08 05:32:51: ADMIN: 3: Command failed: volume disk 86 c16t1l1 disk_id=8 size=521539
Logical Volume 315 not found.
2012-11-08 05:32:51: ADMIN: 3: Command failed: volume delete 315
Logical Volume 314 not found.
2012-11-08 05:32:51: ADMIN: 3: Command failed: volume delete 314
Logical Volume 312 not found.
2012-11-08 05:32:51: ADMIN: 3: Command failed: volume delete 312
Logical Volume 86 not found.
2012-11-08 05:32:51: ADMIN: 3: Command failed: volume delete 86
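The "Device c16t1l1 not found" line makes me think the Data Mover's saved device table no longer matches the backend. As far as I understand it, a rescan would look like this (a sketch, not yet tried here):
server_devconfig server_2 -probe -scsi -all    (show what the Data Mover currently sees)
server_devconfig server_2 -create -scsi -all   (rescan and save the SCSI device table)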
dynamox
May 6th, 2012 08:00
Why do you want to delete the volumes (disks)? Are you going to change the RAID group configuration?
dynamox
May 6th, 2012 13:00
Can you post the output of nas_disk -l and nas_fs -l?
DT-IT
May 7th, 2012 01:00
CLI:
[nasadmin@cs_emc02 nasadmin]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CK200065000641-0000 CLSTD root_disk 1,2
2 y 11263 CK200065000641-0001 CLSTD root_ldisk 1,2
3 y 2047 CK200065000641-0002 CLSTD d3 1,2
4 y 2047 CK200065000641-0003 CLSTD d4 1,2
5 y 2047 CK200065000641-0004 CLSTD d5 1,2
6 y 2047 CK200065000641-0005 CLSTD d6 1,2
7 n 521539 CK200065000641-0010 CLSTD d7 1,2
8 y 521539 CK200065000641-0011 CLSTD d8 1,2
9 n 1099246 CK200065000641-0012 CLSTD d9 1,2
10 n 1099246 CK200065000641-0013 CLSTD d10 1,2
[nasadmin@cs_emc02 nasadmin]$ nas_fs -l
id inuse type acl volume name server
1 n 1 0 10 root_fs_1
2 y 1 0 12 root_fs_2 1
3 y 1 0 14 root_fs_3 2
4 n 1 0 16 root_fs_4
5 n 1 0 18 root_fs_5
6 n 1 0 20 root_fs_6
7 n 1 0 22 root_fs_7
8 n 1 0 24 root_fs_8
9 n 1 0 26 root_fs_9
10 n 1 0 28 root_fs_10
11 n 1 0 30 root_fs_11
12 n 1 0 32 root_fs_12
13 n 1 0 34 root_fs_13
14 n 1 0 36 root_fs_14
15 n 1 0 38 root_fs_15
16 y 1 0 40 root_fs_common 2,1
17 n 5 0 73 root_fs_ufslog
18 n 5 0 76 root_panic_reserve
19 n 5 0 77 root_fs_d3
20 n 5 0 78 root_fs_d4
21 n 5 0 79 root_fs_d5
22 n 5 0 80 root_fs_d6
SAMEERK1
May 7th, 2012 04:00
Hi,
Were any LUNs deleted from the backend or unmasked from the Celerra?
You can try to delete the volume:
nas_volume -delete TEST
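To check what the volume is before removing it, something like this should show whether it is still in use (just a read-only query):
nas_volume -info TEST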
Let me know if that helps.
Sameer
DT-IT
May 7th, 2012 04:00
I don't know, but there are 14 disks in the shelf.
Is there a way to discover this?
I can provide a TeamViewer session (version 7).
SAMEERK1
May 7th, 2012 04:00
For the above error:
# nas_storage -modify id=x -security
Enter the Global CLARiiON account information
Username: nasadmin
Password: ******** Retype your response to validate
Password: ********
Setting security information for APM000xxxxxxxx
done
# nas_storage -check -all
Discovering storage (may take several minutes)
done
(You may need to run nas_storage -sync id=x if the Flare code has been changed on the Backend)
Run the nas_storage commands above and let me know about any errors.
To check how many LUNs are masked to the Celerra (replace <SP_A_IP> with the IP address of storage processor A):
$ /nas/sbin/navicli -h <SP_A_IP> storagegroup -list
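If navicli cannot reach the SP at all, a quick connectivity check is (again with the SP IP as a placeholder):
/nas/sbin/navicli -h <SP_A_IP> getagent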
DT-IT
May 7th, 2012 04:00
[nasadmin@cs_emc02 nasadmin]$ nas_storage -modify id=x -security
Error 2206: Root uid required : Permission denied
[nasadmin@cs_emc02 nasadmin]$ su root
Password:
[root@cs_emc02 nasadmin]# nas_storage -modify id=x -security
Enter the Global CLARiiON account information
Username: nasadmin
Password: ***** Retype your response to validate
Password: *****
Error 2211: id=x : invalid id specified
SAMEERK1
May 7th, 2012 04:00
Were any LUNs deleted from the backend or unmasked from the Celerra?
Any errors from this command?
nas_storage -check -all
DT-IT
May 7th, 2012 04:00
[root@cs_emc02 nasadmin]# nas_storage -l
id acl name serial_number
1 0 CK200065000641 CK200065000641
[root@cs_emc02 nasadmin]# nas_storage -modify id=1 -security
Enter the Global CLARiiON account information
Username: nasadmin
Password: ***** Retype your response to validate
Password: *****
Setting security information for CK200065000641
Error 3502: CK200065000641: Storage API code=3057: SYMAPI_C_INVALID_IP_ADDRESS
The ip address provided is not valid
SAMEERK1
May 7th, 2012 04:00
You still haven't answered a question from my previous posts:
Please let me know whether any LUN that was presented to the Celerra was unmasked or deleted from the backend.
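Also, regarding the invalid IP error: the Control Station may have stale addresses recorded for the array. Something like this should show what it currently has on record (a read-only sketch):
nas_storage -info id=1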
DT-IT
May 7th, 2012 04:00
Error 3501: Storage API code=3593: SYMAPI_C_CLARIION_LOAD_ERROR
An error occurred while data was being loaded from a Clariion
SAMEERK1
May 7th, 2012 04:00
Open a service request or a chat session.
DT-IT
May 7th, 2012 04:00
CLI:
[nasadmin@cs_emc02 nasadmin]$ nas_volume -delete TEST
id = 312
name = TEST
acl = 0
in_use = False
type = slice
slice_name = TEST
slice_of = d8
offset(MB) = 0
size (MB) = 2500
disks = d8
That works, but now I'm deleting the fresh volume I just created, called TEST.
I want to create file systems, but I get an error when I do so.
This is what I get when I try to create a file system:
Create file system TEST. System error message (Error #5005): "failed to complete command"
Create file system TEST. System was unable to create a file system with the given parameters.
CLI: server_log server_2 shows the same errors as in my first post.
afp92Tq1w012558
May 7th, 2012 04:00
Hi,
Please replace the x in "id=x" with the ID of the CLARiiON.
You can get the ID from the command:
nas_storage -l
example:
[nasadmin@ce18cs0 ~]$ nas_storage -l
id acl name serial_number
1 0 APM00073700706 APM00073700706
Here the ID is 1, so the command will be:
nas_storage -modify id=1 -security
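Once nas_storage -check -all completes without errors, the file system can also be created from the CLI with AVM. A rough sketch (the pool name is only an example; check nas_pool -list for the real ones):
nas_fs -name TEST -create size=10G pool=clar_r5_performance
server_mountpoint server_2 -create /TEST
server_mount server_2 TEST /TEST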
Thanks
Vanitha