September 17th, 2013 19:00
EqualLogic PS6100XV Dedicated Management Network
Hi Experts,
I'm posting here because I couldn't get any constructive suggestions from the Dell support engineer.
My environment description:
======================
Devices and connectivity:
- PowerConnect 5548 as the public switch, with 3 VLANs on the switch (one for iDRAC and storage management, one for public communication, one for the failover cluster heartbeat).
- PowerConnect 6224 as the storage switch (a single VLAN for the storage network).
- EqualLogic PS6100XV: ETH0-ETH3 are connected to the PC6224, and ETH4 (the dedicated management interface) is connected to the iDRAC VLAN on the PC5548.
- A couple of PowerEdge servers connected to the public and storage networks.

The problem:
- Pinging the iDRAC IPs of the PowerEdge servers works.
- Pinging ETH4 on the EqualLogic works, and pinging the dedicated management IP on the EqualLogic works.
- But once I access the EqualLogic's web interface/CLI, the connection is lost and the pings time out.

Any tips for troubleshooting?
Thanks in advance!
DELL-Joe S (7 Technologist, 729 Posts)
September 18th, 2013 07:00
Using a serial connection to the array (connect to the active controller), try a ping and traceroute from the management port to the system you are trying to manage the array from. You may need to check each hop, too.
From the array, ping/traceroute from the specific management interface:
GroupName> ping "-I mgt_IP dest_IP"
(Note: the management IP is the one on the highest eth4 port; the quote marks are required; and the -I is a capital I.)
For traceroute, use the following:
GroupName> support traceroute "-s mgt_IP dest_IP"
(Note: the management IP is the one on the highest eth4 port; the quote marks are required; and note the -s.)
Also, if possible, try a different system to see if the problem follows. You can also try plugging directly into the management VLAN switch to see if you have some kind of routing issue.
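For comparison, the same source-selection tests can be run from a Linux admin host toward the array's management IP; standard Linux ping and traceroute take similar flags. A minimal sketch (the addresses are illustrative, not from this thread):

```shell
# Ping using a specific source address, so the reply path matches
# the interface under test (iputils ping, -I selects source):
ping -c 4 -I 192.168.253.50 192.168.253.10

# Traceroute using a specific source address (-s selects source):
traceroute -s 192.168.253.50 192.168.253.10
```

If these work from one host but not another, that points at a routing or VLAN issue on the failing path rather than at the array.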
-joe
DELL-Joe S
September 18th, 2013 07:00
You can also try different cables and/or ports between the management switch and the array to see if the problem follows the port or cable. You can also fail the array over to the secondary controller to see if the problem follows the controller (if it does, this could indicate a cable/port/LAG issue, a switch routing issue, or a controller issue).
-joe
FSGuest (7 Posts)
September 21st, 2013 21:00
Thanks Joe,
I haven't been able to try your suggestions yet because the server room is far away from me. In the meantime, let me give you more information about the issue.
Thanks in advance!
DELL-Joe S
September 23rd, 2013 08:00
Is there a reason why you didn't just put the management network interface (eth4) and the management IP on the same 172. network that your management VLAN is on? The requirement is that the management ports and the management IP on the array be on the same subnet as the dedicated management network.
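As an aside, that "same subnet" requirement is easy to check mechanically. A minimal Python sketch using the standard library's ipaddress module (the addresses are illustrative, not the poster's actual ones):

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """Return True if two host IPs fall inside the same network
    for the given prefix length."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Example: an eth4 port IP and a dedicated management IP (illustrative):
print(same_subnet("192.168.253.10", "192.168.253.20", 24))  # → True, same /24
print(same_subnet("192.168.253.10", "172.16.0.5", 24))      # → False, different networks
```

If the check returns False for the eth4 IP and the group management IP, the array's dedicated management configuration does not meet the requirement above.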
-joe
FSGuest
September 24th, 2013 20:00
Thanks Joe,
Our definitions are:
172.16.0.0/16 is the public VLAN
192.168.253.0/24 is the management VLAN (including iDRAC, the array's dedicated management IP, and the eth4 IP)
192.168.250.0/24 is the storage VLAN
As for why I didn't put the management interface and management IP on the 172.16.0.0/16 network, the only reason is that they are for management purposes.
By the way, I don't know the best practices for a dedicated management network, since everything was implemented by a Dell onsite engineer.
Would you please point me to the right place for dedicated management network best practices?
DELL-Joe S
September 25th, 2013 06:00
Provided the array's management interface is physically connected to your dedicated management VLAN on the switch (192.168.253.x/24, and it sounds like it is), then you have a correct physical configuration. For some reason I thought the 172.x.x.x was your management subnet (but you clarified the VLAN layout in your post).
So the issue is that traffic is not passing reliably between the management VLAN and the public VLAN. As you stated, when you connected a temporary system directly to the management VLAN, you didn't have any issues with the GUI/CLI connection (bypassing the public VLAN).
A few things to check:
Please note that the PC55xx is no longer on the compatibility list.
This is the recommended configuration for the 5548 (adjust your ports to match your VLANs):
1. Turn off flow control on all 5548 switches. This is a global switch setting:
console(config)# interface range gigabitethernet 1/0/1-48
console(config-if-range)# no flowcontrol
console(config)# interface range tengigabitethernet 1/0/1-2
console(config-if-range)# no flowcontrol
NOTE: All switch ports must have flow control disabled to allow the switch to manage its resources appropriately.
2. If changing flow control does not work, change the connections as indicated:
a. Stacked solution:
i. Move all active array port connections to switch ports 25-48 of each switch.
ii. Move all inactive array port connections to ports 1-24.
iii. Move all host port connections to switch ports 25-48 where possible on each switch.
b. LAGged solution:
i. Ensure each LAG defined is using 10Gb ports ONLY.
ii. Move all active array port connections to switch ports 1-24 of each switch.
iii. Move all inactive array port connections to ports 25-48.
iv. Move all host port connections to switch ports 1-24 where possible on each switch.
3. If changing the interface connection locations does not work, turn off jumbo frames on all 5548 switches. This is a global switch setting:
Console# configure
Console(config)# no port jumbo-frames
NOTE: A switch restart is required for changes to jumbo frames to take effect.
There isn't a specific best-practices document for configuring the management network; there is just what is in the Group Administration Guide and the CLI Guide (both are on the firmware download page), plus the Configuration Guide (the same link I provided for the compatibility matrix).
-joe
FSGuest
November 9th, 2013 18:00
Hi Joe,
It's been a long time, but the issue is still there.
I checked my system against your recommended configuration:
1. Flow control: how do I check the current setting? By the way, the Dell engineer suggested that the setting should be on, though I don't know what flow control means.
2. The jumbo frames setting is on (because the switch is connected to a storage device).
3. There is no stacked or LAGged solution.
Thanks very much.
DELL-Joe S
November 11th, 2013 13:00
What about the ping and traceroute tests I suggested, from the array back to the system you are trying to manage the group from? Were they successful 100% of the time? If not, I would suggest configuring the management interface (eth4) and the management group IP on the same network as your management subnet. You may also have to add the management VLAN to your uplink port on the switch (if you haven't done so yet).
-joe
FSGuest
November 13th, 2013 18:00
Hi Joe,
I disabled the dedicated management network last week; now I manage the storage device via the group IP.
By the way, before disabling the dedicated management network, the average ping time from the public network (172.16.0.0/16) to the management network (192.168.253.0/24) was almost 5ms, which I don't think is normal. Any tips for this issue?
Pinging 192.168.253.1 with 32 bytes of data:
Reply from 192.168.253.1: bytes=32 time=5ms TTL=63
Reply from 192.168.253.1: bytes=32 time=4ms TTL=63
Reply from 192.168.253.1: bytes=32 time=11ms TTL=63
Reply from 192.168.253.1: bytes=32 time=7ms TTL=63
Reply from 192.168.253.1: bytes=32 time=6ms TTL=63
Reply from 192.168.253.1: bytes=32 time=6ms TTL=63
Reply from 192.168.253.1: bytes=32 time=4ms TTL=63
Reply from 192.168.253.1: bytes=32 time=8ms TTL=63
Reply from 192.168.253.1: bytes=32 time=6ms TTL=63
Reply from 192.168.253.1: bytes=32 time=3ms TTL=63
Reply from 192.168.253.1: bytes=32 time=4ms TTL=63
Reply from 192.168.253.1: bytes=32 time=5ms TTL=63
Reply from 192.168.253.1: bytes=32 time=4ms TTL=63
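As a side note, min/avg/max round-trip times can be pulled out of this kind of ping capture with a short script instead of reading the replies by eye. A minimal Python sketch, using a small made-up sample in Windows ping's English output format (not the thread's full data):

```python
import re

def rtt_stats(ping_output: str):
    """Extract per-reply RTTs in ms from Windows-style ping output
    and return (min, avg, max)."""
    times = [int(m) for m in re.findall(r"time[=<](\d+)ms", ping_output)]
    return min(times), sum(times) / len(times), max(times)

# Illustrative sample, same line format as a Windows ping capture:
sample = """\
Reply from 192.168.253.1: bytes=32 time=5ms TTL=63
Reply from 192.168.253.1: bytes=32 time=4ms TTL=63
Reply from 192.168.253.1: bytes=32 time=11ms TTL=63
Reply from 192.168.253.1: bytes=32 time=7ms TTL=63
"""
lo, avg, hi = rtt_stats(sample)
print(lo, avg, hi)  # → 4 6.75 11
```

For hosts on adjacent VLANs behind a single router hop (TTL=63 suggests one hop here), consistently multi-millisecond averages on an otherwise idle LAN would be worth investigating.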