Dee-Tay
3 Posts
0
June 24th, 2016 07:00
New install PS6510e - no communication on iSCSI network
Getting desperate here now.
Purchased a refurbished PS6510e from a third-party reseller - I have been flatly refused access to Dell's support site on that basis.
Got it connected to two R720 servers via teamed 10Gb SFP+ links through an 8024F switch to the SAN, with redundant 10Gb connections on the dual controllers. Everything is configured on a private network for iSCSI traffic only, with private addresses for each of the servers, the switch, the SAN's individual NICs and the SAN group IP.
The corporate / management network for server / SAN / switch management is configured and works perfectly.
The two servers (2012 R2) can communicate with each other over the 20Gb links, but there is no communication with the SAN at all, either via ping or via the iSCSI initiator. The servers can ping each other and the switch, but not the SAN. From the CLI on the SAN I can't ping anything - not even its own individual IP addresses. The SAN web interface reports which switch ports (15 + 16) it is plugged into, so it knows the switch exists and is seemingly connecting to it happily.
The switch is configured according to the guidelines in the Dell white papers: jumbo frames, LAGs. I have swapped the 10Gb connections between the servers and the SAN, and whichever combination I use, the servers talk and the SAN doesn't.
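For what it's worth, below is a minimal sketch of the sanity check I'd run from each host - the addresses are placeholders rather than my real ones - a plain ping plus a don't-fragment ping at 8972 bytes (which, with ICMP and IP headers, exercises the full 9000-byte jumbo MTU):

```python
# Reachability + jumbo-frame check from a Windows host.
# All addresses below are placeholders - substitute the real iSCSI IPs.
import subprocess

ISCSI_ENDPOINTS = {
    "switch":       "10.10.10.1",   # hypothetical 8024F address on the iSCSI subnet
    "san-eth0":     "10.10.10.11",  # hypothetical SAN member NIC IPs
    "san-eth1":     "10.10.10.12",
    "san-group":    "10.10.10.10",  # hypothetical group IP
    "other-server": "10.10.10.21",
}

def ping(addr, payload=32):
    """Windows ping with Don't Fragment set; 8972-byte payload tests a 9000-byte MTU path."""
    result = subprocess.run(
        ["ping", "-n", "2", "-f", "-l", str(payload), addr],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for name, addr in ISCSI_ENDPOINTS.items():
    basic = ping(addr)                                    # plain reachability
    jumbo = ping(addr, payload=8972) if basic else False  # full jumbo-frame path
    print(f"{name:13} {addr:15} reachable={basic}  jumbo-ok={jumbo}")
```

If the plain pings succeed but the 8972-byte ones fail, the usual culprit is an MTU mismatch somewhere along the path.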
The web interface and SAN HQ both report everything to be hunky dory. The only error is about free space, as we have provisioned volumes to take up all of it.
Any bright ideas as to what has gone fundamentally wrong are most welcome.



Dee-Tay
3 Posts
0
June 27th, 2016 03:00
Hi Don,
Thanks for the useful info.
To answer your questions:
Yes, in the GUI the ports show as online and connected at 10Gbps.
The switch has the latest firmware and I have followed the white paper to disable DCB.
Update - LACP had been configured on the SAN ports; now that it has been disabled, one of my two test servers is able to communicate with the PS. I am currently rebuilding the other server in order to eliminate any of the tinkering that went on while trying to get it to work. It's hopefully now a case of getting the security settings right to get the server talking iSCSI to the PS.
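Before fighting with the initiator settings any further, here is a minimal sketch of the check I'm using to confirm the PS portal actually answers on the standard iSCSI port, TCP 3260, from each host (the group IP below is a placeholder):

```python
# Quick check that the PS group's iSCSI portal answers on TCP 3260 from this host.
import socket

GROUP_IP = "10.10.10.10"   # placeholder - substitute the real group IP
ISCSI_PORT = 3260          # standard iSCSI portal port

try:
    with socket.create_connection((GROUP_IP, ISCSI_PORT), timeout=5):
        print("iSCSI portal reachable - initiator login / CHAP settings are the next step")
except OSError as exc:
    print(f"No TCP connection to {GROUP_IP}:{ISCSI_PORT} - {exc}")
```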
Derek.
Dee-Tay
3 Posts
0
June 27th, 2016 09:00
Hi Don,
Are you able to comment on MPIO vs. teaming on the NICs in the hosts connected to the PS? I have just run the Failover Cluster Manager validation and it gives warnings about only using a single network card on each subnet and suggests teaming them - which is how I originally had it before breaking the team to install MPIO.
In hope,
Derek.
Origin3k
4 Operator
2.3K Posts
1
June 27th, 2016 09:00
Derek,
In EQL land, all iSCSI ports have to be in the same subnet - 192.168.178.0/24, for example. As long as you have two dedicated NIC ports for iSCSI in the host, you will be fine. I only support one Hyper-V* cluster with CSVs in the Cluster Manager, but as long as you leave all bonding/teaming software out of it and use the HIT kit, it is easy and simple to set up.
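If you want to sanity-check your own layout, a minimal sketch of that rule - the subnet is the example above and the host addresses are made up, not anyone's real configuration:

```python
# Every host iSCSI NIC and the group IP must sit in the same subnet.
import ipaddress

ISCSI_SUBNET = ipaddress.ip_network("192.168.178.0/24")   # example subnet from above

addresses = {
    "host1-nic1": "192.168.178.21",   # made-up addresses for illustration
    "host1-nic2": "192.168.178.22",
    "host2-nic1": "192.168.178.31",
    "host2-nic2": "192.168.178.32",
    "group-ip":   "192.168.178.10",
}

for name, ip in addresses.items():
    ok = ipaddress.ip_address(ip) in ISCSI_SUBNET
    print(f"{name:10} {ip:16} {'OK' if ok else 'WRONG SUBNET'}")
```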
Regards,
Joerg
* 99% of all my other customers use vSphere.