XAM/.Net Memory usage when creating many streams

September 16th, 2009 18:00

I've written a performance test harness to insert, retrieve and delete items using the XAM API with .NET. This is being compared against carrying out similar actions on a SQL database (i.e. storing data as the image datatype in SQL Server). The harness cycles through a range of file sizes, carrying out each action a specified number of times.

I'm finding that each time a stream is created, memory usage increases by approximately the size of that stream. All stream creation and usage is bounded by a using statement (i.e. the stream is being disposed), yet memory usage grows constantly. I added a manual call to the garbage collector at each file-size change, but it didn't appear to make much difference. I also find that once memory grows to a certain size, the process maxes out the CPU and has to be killed, though this doesn't occur every time. Are the streams being disposed of correctly in the XAM API?
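For reference, the pattern in the harness looks roughly like the sketch below. The IXSystem/IXSet/IXStream interfaces are stand-ins for the wrapper's actual types, and the method and field names are illustrative only, not the wrapper's real API:

```csharp
// A minimal sketch of the harness loop, with stand-in interfaces for the
// XAM .NET wrapper types (the real wrapper's names and signatures differ).
using System;

interface IXStream : IDisposable
{
    void Write(byte[] buffer, int offset, int count);
}

interface IXSet : IDisposable
{
    // Field name and MIME type below are illustrative only.
    IXStream CreateXStream(string fieldName, string mimeType);
}

interface IXSystem : IDisposable
{
    IXSet CreateXSet();
}

static class InsertLoop
{
    public static void InsertMany(IXSystem xsystem, byte[] payload, int count)
    {
        for (int i = 0; i < count; i++)
        {
            // Both objects are disposed deterministically here; if native
            // memory still grows, the leak is below the using blocks.
            using (IXSet xset = xsystem.CreateXSet())
            using (IXStream stream = xset.CreateXStream("com.example.blob", "application/octet-stream"))
            {
                stream.Write(payload, 0, payload.Length);
            }
        }

        // Forcing a full collection between file-size runs; in practice this
        // made little difference, which points at unmanaged allocations.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
    }
}
```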

Also, what other tweaks would you suggest to improve read/write performance? I have set the buffer size and the embedded data threshold on the XSystem. Is there a buffer size beyond which the performance improvement stops?
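For completeness, the tuning I'm applying amounts to something like the following sketch. The option keys here are guesses modelled on the Centera SDK option names (FP_OPTION_BUFFERSIZE, FP_OPTION_EMBEDDED_DATA_THRESHOLD); the actual keys and setter exposed through the XSystem may well differ:

```csharp
// Hypothetical sketch of the tuning calls; the option keys and the SetInt
// setter are assumptions, not the wrapper's confirmed API.
interface ITunableXSystem
{
    void SetInt(string optionKey, int value);
}

static class Tuning
{
    public static void Apply(ITunableXSystem xsystem)
    {
        xsystem.SetInt("buffersize", 1000 * 1024);              // transfer buffer (1000K)
        xsystem.SetInt("embedded-data-threshold", 100 * 1024);  // embed blobs below this size in the CDF
    }
}
```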

I have just run this again, and this time received the error:

Unhandled Exception: XAM Error <1002>: "xam/out of memory" [XSystem_OpenXSet]

Edit: I should point out that this is running on a Windows server, and while the Peak Mem Usage stays within reasonable limits, the VM Size continues to grow massively: after inserting, retrieving and deleting 100 x 10K and 100 x 40K items, the VM Size is at ~477MB.

Edit: An additional question: where is file duplication detection set? I've seen in the Centera SDK API reference that there used to be a value FP_OPTION_ENABLE_DUPLICATE_DETECTION, which is now deprecated, and that duplicate detection is always enabled. Does this mean that after 100 inserts of the same file in my tests, only one file will actually be stored, with all the other CDFs pointing to the same blob, and therefore that this is not a true test?
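If that is the case, one way to keep the test honest would be to make each payload unique before inserting it, along these lines (assumes the test file is at least 16 bytes):

```csharp
// Sketch: make each payload unique so that content addressing cannot
// collapse repeated inserts of the same file into a single stored blob.
using System;

static class UniquePayload
{
    public static byte[] Make(byte[] template)
    {
        var payload = (byte[])template.Clone();
        // Overwrite the first 16 bytes with a fresh GUID; the content hash,
        // and therefore the stored blob, then differs for every insert.
        Buffer.BlockCopy(Guid.NewGuid().ToByteArray(), 0, payload, 0, 16);
        return payload;
    }
}
```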

16 Posts

September 17th, 2009 00:00

It would appear that the issue raised in this thread is related to the question asked in the other thread I've just created here:

https://community.emc.com/thread/4715?tstart=0

I increased the buffer to 10MB, and the performance test harness consumed memory at a much higher rate; I killed it before it brought down the box. I then set the buffer size back to 1000K, and memory usage went back to the previous behaviour. What is causing this? Is there a bug in the .NET API?

2 Intern • 417 Posts

September 17th, 2009 01:00

I do not believe the two issues mentioned are related - please see the other thread for my reply on it.

It may be a bug in the wrapper - have you tried debugging it?

Remember, the XAM .NET wrapper is open-source community software - as such, if you think you have found a bug you could (indeed SHOULD) attempt to debug it and find a fix, which can then be incorporated into the next release of the package.

16 Posts

September 17th, 2009 18:00

I believe that there is a bug within the wrapper where XamObjects aren't disposed of correctly, leading to constantly increasing memory usage. I have raised an issue on the CodePlex site:

http://xamsdk.codeplex.com/WorkItem/View.aspx?WorkItemId=5587
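For anyone hitting the same thing, the fix amounts to giving the wrapper objects the standard .NET Dispose pattern around their native handles. A minimal sketch, assuming each XamObject wraps a native XAM handle released through a P/Invoke call (the class, field and native function names here are stand-ins, not the wrapper's actual ones):

```csharp
// Sketch of the standard Dispose pattern the wrapper objects need.
// "XamObject", "xam" and "XamFree" are hypothetical stand-ins.
using System;
using System.Runtime.InteropServices;

public class XamObject : IDisposable
{
    private IntPtr handle;   // native XAM handle
    private bool disposed;

    // Hypothetical P/Invoke standing in for whatever native free the wrapper uses.
    [DllImport("xam")]
    private static extern int XamFree(IntPtr handle);

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // finalizer no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (handle != IntPtr.Zero)
        {
            XamFree(handle);         // release the native allocation
            handle = IntPtr.Zero;
        }
        disposed = true;
    }

    // Safety net: frees the native handle even if Dispose is never called.
    ~XamObject()
    {
        Dispose(false);
    }
}
```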

Are you able to address the other questions raised in the first post?

16 Posts

September 30th, 2009 23:00

Note that this issue is different from the one this thread was initially created for, but it is so similar that I thought I should continue here.

I have modified the XAM wrapper as described in the previous post, and this has decreased memory usage dramatically; however, as part of load testing I am still experiencing "Out of memory" errors and general memory consumption issues when inserting many objects. It appears that this problem lies further down the stack than the XAM wrapper, though.

I have observed the following behaviour when using the XAM SDK in two different scenarios: once when running load testing with LoadRunner, with the XAM API being called by a web front end to insert files of differing sizes continually, and again when using a console application inserting files continually.

I have been monitoring the w3wp process (i.e. the IIS/ASP.NET worker process) and the console application using Performance Monitor, looking at the counters "Private Bytes" (unmanaged, or native, memory usage) and "# Bytes in all Heaps" (i.e. all memory consumed by the .NET part of the process). In both cases, when a file is stored to the Centera, Private Bytes increases and then decreases, but it does not decrease by the same amount as it increased, and therefore it grows over time.

Given that it is the Private Bytes counter that is increasing and not # Bytes in all Heaps, this suggests that the issue lies within the C portion of the API.
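For anyone wanting to reproduce the measurement, the sampling can be automated with System.Diagnostics.PerformanceCounter; a minimal sketch (pass the process instance name, e.g. "w3wp"):

```csharp
// Samples the same two counters described above for a named process instance.
using System;
using System.Diagnostics;
using System.Threading;

static class MemoryMonitor
{
    public static void Watch(string instanceName)
    {
        using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", instanceName))
        using (var managedHeaps = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instanceName))
        {
            while (true)
            {
                float priv = privateBytes.NextValue();
                float heaps = managedHeaps.NextValue();
                // Private Bytes minus # Bytes in all Heaps approximates the
                // unmanaged (native) portion of the process's memory.
                Console.WriteLine("private={0:N0}  managed={1:N0}  native~={2:N0}",
                                  priv, heaps, priv - heaps);
                Thread.Sleep(5000);
            }
        }
    }
}
```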

I have taken memory dumps of both processes once memory had increased substantially, and have run diagnostics on each dump. In both cases, the diagnostics showed that the heap associated with "fpos32!fp_newTmpfile+366324" was the largest by both committed and reserved memory. It would appear that some memory is being allocated as part of storing the file to the Centera and then never released.

The issue manifests as an "Out of Memory" exception in the load test after approximately half an hour, by which point approximately 25,000 inserts have taken place of files ranging from 40k to 1MB in size.

Is this a known issue? Does this theory sound correct? Is there any way to resolve this problem?

Note that this problem appears to be most prevalent with files larger than 100k. I am working with test files of 10k, 40k, 100k, 400k, 600k, 800k and 1MB, and am finding that with 10k and 100k files memory consumption increases to a point and then plateaus, but with anything larger memory usage increases continually.
