
June 17th, 2010 07:00

how to snoop datamover virtual nics

We have NICs cge0, cge1, and cge2 trunked (trk0_p) on our NS40G, and we have 8 IP addresses mapped to fsn0. We need to monitor traffic to two of these addresses (10.1.14.70 and 10.1.14.71). Is there any server_* command to snoop or tcpdump these IP addresses? We don't see anything helpful in /nas/bin. Or perhaps our inexperience with snoop and tcpdump is showing ... can we use snoop from another Unix system to watch traffic on 10.1.14.70? Thanks.

20.4K Posts

June 17th, 2010 08:00

You can specify the device name and the host the connection is coming from, but not necessarily which IP it's connecting to; you can probably filter that out afterwards in Wireshark.
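
For the two addresses in the question, a Wireshark display filter along these lines (a sketch using standard Wireshark filter syntax, applied after the capture file is loaded) should narrow the trace down to just that traffic:

    ip.addr == 10.1.14.70 || ip.addr == 10.1.14.71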


ID: emc49693
Domain: EMC1
Solution Class: 3.X Compatibility

Goal       How to use server_tcpdump to capture network traffic on the Data Mover

Goal       How to setup and use server_tcpdump 

Goal       How do I use TCPDUMP on Data Movers

Fact       Product: Celerra File Server (CFS)

Fact       Product: Celerra

Fact       server_tcpdump

Fact       EMC SW: NAS Code 4.0.x.x

Fact       EMC SW: NAS Code 4.1.x.x

Fact       EMC SW: NAS Code 4.2.x.x

Fact       EMC SW: NAS Code 5.0.x.x

Fact       EMC SW: NAS Code 5.4.x.x

Fact       EMC SW: NAS Code 5.3.x.x

Fact       EMC SW: NAS Code 5.2.x.x

Fact       EMC SW: NAS Code 5.1.x.x

Fact       network trace

Symptom    Networking issues are suspected in the performance or availability of Celerra exports

Symptom    Network issue on the data mover

Symptom    Performance problem

Fix        To capture external network traffic from the Data Mover, do the following:

1. As the root user, issue the following command to create a link for the server_tcpdump command:
# ln -s /nas/bin/server_mgr /nas/sbin/server_tcpdump

2. The syntax for the command is as follows:
/nas/sbin/server_tcpdump
usage: server_tcpdump { movername | ALL } 
          -start device -w outfile
                 { -host host_ip }  { -s snaplen }
        | -stop device
        | -display
        
The "device" is the interface on the data mover you wish to capture the traffic from.
The "outfile" is the name of the file the data captured will be written to, it must be a file on a filesystem mounted on the data mover the capture is run from. So either a file must be created using a customer filesystem or a temporary filesystem can be created to hold the capture file.
The "host" can be specified by IP address only, name resolution will not be used.
The "snaplen" is the amount of packet payload data in (decimal) bytes to capture, (used to limit the amount of data captured, the default capture size is 96 bytes).

Example:
/nas/sbin/server_tcpdump server_2 -start trk1 -w /la2dm2_1/tcpdump.log
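
For the scenario in the original question (watching traffic to one of the addresses on the trunked device), a sketch following the same syntax might look like the line below. The trk0_p device comes from the question; the /captures path is an assumption and must be a filesystem mounted on the Data Mover:

/nas/sbin/server_tcpdump server_2 -start trk0_p -w /captures/tcpdump_70.log -host 10.1.14.70 -s 400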

3. The progress of the capture can be monitored by using the following command:

           /nas/sbin/server_tcpdump server_x -display

4. The capture can be stopped by using the following command:

          /nas/sbin/server_tcpdump server_x -stop trk1

5. The Linux Control Station can be used to display the capture file, or it can be viewed in more detail with Ethereal, which is available on the Celerra Service Package CD. To view the capture file using the Control Station, issue the following command as root:

         /usr/sbin/tcpdump -r /nas/rootfs/slot_2/la2dm2_1/tcpdump.log | more
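
If the capture was not already restricted with -host when it was started, the read on the Control Station can be narrowed with a standard tcpdump filter expression. A sketch, reusing the capture path above and the two addresses from the original question:

         /usr/sbin/tcpdump -r /nas/rootfs/slot_2/la2dm2_1/tcpdump.log host 10.1.14.70 or host 10.1.14.71 | more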

6. If no further tcpdump data collection will be done at this site, it is advisable to remove the symbolic link that was created in step 1 above.
# rm /nas/sbin/server_tcpdump



Note       The capture file should be submitted to Global Network Services personnel and also provided to Engineering if the case is being escalated. Submit the file as a compressed binary image using the EMC FTP site.

Reference article "Using EMC FTP site for data transfer between EMC and Customer"



Note      

Some caveats about using tcpdump on Celerra:

- Exercise caution about the amount of data being captured and the available space in the filesystem where the capture file is being stored. Closely monitor this process, especially if the filesystem being used is a production filesystem (see the space-check example after this list).

- server_tcpdump supports running captures on multiple interfaces at the same time. You must start them separately and they must be saved to different capture files.

- It is possible to unmount a filesystem to which a capture is writing. If this happens, the capture is put in an error state (visible with server_tcpdump -display) and must be cleaned up manually.
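
A simple way to check free space before and during a capture is the standard server_df command on the Control Station; this assumes server_2 is the Data Mover running the capture:

         /nas/bin/server_df server_2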



Note       When using this utility for Windows troubleshooting, it is recommended to use the -s 400 option so that complete SMB header information is captured. If a client can be identified, use the -host option with the client IP address (for example, -host 168.158.xx.xx).

46 Posts

June 17th, 2010 10:00

thank you Dynamox!

117 Posts

June 17th, 2010 15:00

Yes, thanks Dynamox =)

I just want to reiterate what the article says about being careful not to fill up the filesystems.  As a high-speed file server, Celerra can generate a LOT of traffic, and the capture file may grow much, much faster than you expect it to.  Try not to write to the Data Mover's root filesystem (don't run, say, "server_tcpdump server_2 -w /file.pcap"), as it's relatively small, and if you fill it up accidentally you could impact production.  Instead, a good practice is to create a temporary filesystem, mount it on the DM you'll be capturing on, and write to that.
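
A rough sketch of that approach from the Control Station is below.  The filesystem name, size, pool, and mount point are all assumptions here (check nas_pool -list for the pools actually available on your system), and trk0_p is the trunked device from the original question:

nas_fs -name capturefs -create size=10G pool=clar_r5_performance
server_mountpoint server_2 -create /capturefs
server_mount server_2 capturefs /capturefs
/nas/sbin/server_tcpdump server_2 -start trk0_p -w /capturefs/trace.pcap

Once the capture is stopped and the file copied off, the temporary filesystem can be unmounted and deleted again.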

To avoid huge captures, too, you can use the "-max" option.  That will limit the size of the capture file, but be aware that Celerra will write up to TWO files of this size.  Celerra will start writing to a capture file, and when it reaches the maximum size, it begins writing to a second file.  When the second one fills, it overwrites the first again, and so on.  Of course, if the capture "wraps" in this way, you might not capture the event you're looking for, or not capture ALL of it.  But that's always a standard concern with network traces.
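
If you do want to cap the file size, a sketch might look like the line below.  I'm assuming here that -max takes a size limit as its argument; the option isn't shown in the usage output quoted above, so check server_tcpdump's usage on your code level before relying on the exact form or units:

/nas/sbin/server_tcpdump server_2 -start trk0_p -w /capturefs/trace.pcap -max 512000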
