
March 19th, 2012 10:00

Sharedup error - The local device name is already in use

Last weekend we planned to migrate two CIFS servers from an older NAS array to newer Celerra storage. After migrating the data with emcopy, I got a sharedup error for just one of the servers (for all of its source shares):

Error message: "Error=0x00000055: Unable to add the share on target server NetShareAdd: The local device name is already in use."

I am using an Admin user (for both source and destination) and had no issues migrating the other CIFS servers' shares until now. I couldn't find any forum posts or KB articles for this error. I might be overlooking something fundamental or obvious to the NAS experts here, but I have to find the cause quickly.
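For context, the procedure was emcopy for the data followed by sharedup for the share definitions. A minimal sketch of that sequence from a Windows host (all server and share names are placeholders, both tools take more switches than shown, and only the /R switch discussed later in this thread is assumed):

REM Pass 1: copy the file data, subdirectories and security with emcopy
emcopy \\OLD_NAS_SRV\share1 \\NEW_CELERRA_SRV\share1 /o /s

REM Pass 2: replicate the share definitions with sharedup
REM (/R, discussed later in this thread, is used when the shares
REM already exist on the target; other switches omitted)
sharedup \\OLD_NAS_SRV \\NEW_CELERRA_SRV /R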

I've attached a snippet of the log file.

Thanks in Advance



March 28th, 2012 03:00

Is the vdm.cfg in /root_vdm_1/.etc identical to the one in /nas/server/vdm/vdm_1/vdm.cfg?
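One way to run that comparison from the Control Station, as a sketch: it assumes the data mover's mounted filesystems are visible under /nas/quota/slot_<N> (the usual Celerra convention, with slot 2 corresponding to server_2 as elsewhere in this thread).

# Compare the VDM's live config file with the Control Station copy;
# any difference points at a corrupted or stale vdm.cfg
diff /nas/quota/slot_2/root_vdm_1/.etc/vdm.cfg /nas/server/vdm/vdm_1/vdm.cfg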


March 19th, 2012 12:00

Could it be that the share name already exists on the target, or that you are trying to create a "home" share while the homedir feature is already activated on the target?
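To check the homedir side of that from the Control Station, a sketch (server_cifs with no options dumps the CIFS configuration of a data mover or VDM; the exact wording of the home-directory line varies by DART version, and VDM01 is the VDM name that appears later in this thread):

# Look for the home-directory status in the CIFS configuration
server_cifs VDM01 | grep -i home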

Claude


March 21st, 2012 02:00

Thanks bergec for your comment, but unfortunately the possibilities you pointed out are not the cause of the problem.

First, no shares have been migrated to the destination NAS at all (only the data has been).

Second, if the share name already existed, the error would instead be "Error=0x00000846: Unable to add the share on target server; NetShareAdd: The sharename is already in use on this server" (I confirmed this by running sharedup against a different CIFS server without the /R option).

Third, if I retry the new CIFS server with the /R option, I still get the same error: the local device name is already in use.

Finally, the Homedir feature is not activated on the target.


March 21st, 2012 14:00

Have you opened a case with Tech Support?

Claude

March 22nd, 2012 23:00

RM-Hsp,

I believe that "global shares" are the culprit here. What leads me to believe this is that you are consolidating multiple CIFS servers onto the VNX. That in itself is not a problem; however, can you confirm the following:

1) Do the CIFS servers on the VNX reside on the same data mover (physical or virtual)?

2) Referencing the original CIFS servers on the older NAS, are there share names that exist on both (though of course not on the same CIFS server)? For instance, using the examples in your output, does the "cifs01$" share exist on both of the original CIFS servers?

If you confirm the above, we can discuss possible solutions.
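One quick way to check point 2 from any Windows host, regardless of what the source NAS is (a sketch; both server names are placeholders for the two original CIFS servers):

REM List each server's visible shares, then compare the names
net view \\OLDCIFS1
net view \\OLDCIFS2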

March 23rd, 2012 00:00

Let me expand a bit on what I'm thinking.  On the source NAS (is this also a Celerra?), does the following scenario apply:

1) You have two separate CIFS servers, each sharing out its own set of files and folders, but they happen to have shares with the same name?

2) On the destination Celerra, both of your CIFS servers are on the same data mover (physical or virtual), and the shares are being created as "global shares"?

3) By chance is the following data mover parameter set to "1"?

Facility: cifs

Name: srvmgr.globalShares

On the other hand, do you actually have "global shares" on the source? For a Celerra, in the GUI under the share properties, a global share is one where none of the listed CIFS servers is checked. That makes the share available via every CIFS server residing on the data mover (physical or virtual), so accessing the share via either CIFS server would get you to the same filesystem contents. If so, then we may need to review your migration strategy.
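For point 3, the parameter can be read from the Control Station with the standard server_param syntax (server_2 as used elsewhere in this thread; -info prints the parameter's default and current values):

# Show the global-shares setting on the physical data mover
server_param server_2 -facility cifs -info srvmgr.globalShares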


March 26th, 2012 05:00

Chris,

I went through your response; here is the update:

1) You have two separate CIFS servers, each sharing out its own set of files and folders, but they happen to have shares with the same name?

-- There are a few shares from the two CIFS servers with the same name, but they're not in the same filesystem. Will this be a problem?

2) On the destination Celerra, both of your CIFS servers are on the same data mover (physical or virtual), and the shares are being created as "global shares"?

--- No

3) By chance is the following data mover parameter set to "1"?

--- No, srvmgr.globalShares is set to 0 on both NAS systems.


March 26th, 2012 05:00

Bergec,

I've now opened a case with support.

March 26th, 2012 07:00

-- There are a few shares from the two CIFS servers with the same name, but they're not in the same filesystem. Will this be a problem?

Thank you for reviewing. No, this point (by itself) is not necessarily a concern; it was instead leading into the thought of global shares, which it looks as if we've now ruled out.

Please keep us informed of what support identifies as the culprit.


March 28th, 2012 00:00

I am still waiting for a conclusive answer from support. Below is a snippet of the server log taken during a sharedup run:

2012-03-27 23:44:46: SMB: 3:[VDM01] Unable to determine free space on disk (9=StaleHandle)

2012-03-27 23:44:46: SMB: 3:[VDM01] ShareDB: Cannot insert CIFS_server-share1, DB quota exceeded

2012-03-27 23:44:46: SMB: 6:[VDM01] Share error: Share cannot be created

2012-03-27 23:44:46: SMB: 3:[VDM01] Unable to determine free space on disk (9=StaleHandle)

2012-03-27 23:44:46: SMB: 3:[VDM01] ShareDB: Cannot insert CIFS_server-share2, DB quota exceeded

2012-03-27 23:44:46: SMB: 6:[VDM01] Share error: Share cannot be created

Any suggestions as to the cause?


March 28th, 2012 01:00

Could it be that the root FS of the VDM is full?

Do a server_df and check whether any FS is close to 100%.

Claude


March 28th, 2012 02:00

Claude,

I thought exactly the same, but no filesystem is anywhere near 100%. I went through Primus solution emc96066; the FS sizes are below:


[nasadmin@CS0-NS120 ~]$ server_df server_2 root_fs_2
server_2 :
Filesystem          kbytes         used        avail capacity Mounted on
root_fs_2           258128         9328       248800    4%    /

[nasadmin@CS0-NS120 ~]$ server_df server_2 root_fs_vdm_VDM01
server_2 :
Filesystem          kbytes         used        avail capacity Mounted on
root_fs_vdm_VDM01   114592         9792       104800    9%    /root_vdm_1/.etc


[nasadmin@CS0-NS120 ~]$ server_df server_2 -inode root_fs_2
server_2 :
Filesystem          inodes         used        avail capacity Mounted on
root_fs_2            31486          225        31261    1%    /

[nasadmin@CS0-NS120 ~]$ server_df server_2 -inode root_fs_vdm_VDM01
server_2 :
Filesystem          inodes         used        avail capacity Mounted on
root_fs_vdm_VDM01   130942          348       130594    0%    /root_vdm_1/.etc


April 9th, 2012 04:00

It turned out that the new VDM's config file was corrupted; support fixed it.

Thanks everyone for the help.


April 10th, 2012 01:00

Thanks for the feedback
