BAKMAR
April 18th, 2012 09:00
SAVVOL Question
I understand that each FS has its own SavVol and that it resides outside the FS you created. Questions:
1. Where outside the FS does that SavVol reside?
2. My SavVol is 20GB, and I am replicating from a VNXe to the Celerra. So far I have transferred 180GB, which EMC tells me is in the SavVol for that FS. Since the SavVol is 20GB and I transferred 180GB, that does not make any sense whatsoever. Where is that 180GB residing, and how can I report on it, either in the Celerra GUI or the CLI?
AdamFox
April 18th, 2012 12:00
Keep in mind that the SavVol is not a full copy of the filesystem, and you wouldn't want it to be, since that would be a very inefficient use of space. The SavVol only contains blocks that have been overwritten since the snapshot was taken. So the SavVol starts at 20GB and will only grow if needed. The fact that it hasn't grown means that the filesystem hasn't changed enough to need it.
If you are replicating, you are replicating all of the data as it looked at the time of the snapshot upon which the replication is based. If this is the initial replication, that will be the amount of used data in the filesystem at the time of the snapshot. That data will most likely be a combination of the active filesystem and some of the data in the SavVol. If a block being replicated hasn't changed since the time of the snap, Replicator will simply read the block from the actual filesystem itself. If the block did change since the snap was taken, Replicator will read the block from the SavVol, as the contents of that block there represent the data at the time of the snapshot.

So the fact that you have sent more data than the current capacity of the SavVol is OK, and is actually a very normal event. On the initial replication, all of the data must be sent. On subsequent updates, only the 8KB blocks that have changed since the last replication session will be moved, so unless your data is changing constantly, the incremental moves are typically much smaller than the initial move. Of course, this depends not only on the rate of data change but also on how much time passes between replication sessions.
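If you want to watch this from the CLI, a sketch like the following should be close. fs_ckpt and nas_fs are standard Celerra commands, but "myfs" here is a placeholder name and the exact output fields vary by DART release:

# List the checkpoints (snapshots) taken against the filesystem
fs_ckpt myfs -list

# Show total/used space for the production filesystem; run it against a
# checkpoint name instead to see how much of its SavVol is in use
nas_fs -size myfs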
I hope this helps clarify the situation. Feel free to post a follow-up.
BAKMAR
April 18th, 2012 12:00
OK, basically we are seeding this (brand new) data from one site to another. This is the FIRST time, and we will continue to replicate the data to the target after the seeding is complete. Still, my questions 1 and 2 were not answered. You answered as if the FS was already there and had been replicating, but it is still replicating and has some time to go before it's done. It does not make any sense: I am seeding new data to a new target filesystem, the SavVol is 20GB, and the seeding data is 800GB. How is that 800GB going to squeeze into that 20GB SavVol? The target FS shows no space consumed; EMC told me that when the seeding finishes I will see the data. So where is the 800GB being written to? And where does this SavVol reside on the Celerra? Thank you.
BAKMAR
April 18th, 2012 12:00
Oh, BTW, the SavVol usage has increased: the more data that gets replicated, the more of the 20GB SavVol it uses. It's using 33% of the SavVol and has only copied 1/4 of the total data.
AdamFox
April 18th, 2012 12:00
Not all of the replicated data goes into the SavVol.
Let's start with the size of the filesystem being replicated. How big is it and how much space is used in it?
BAKMAR
April 18th, 2012 13:00
OK, where is it then on the Celerra? Easy question. The FS size will be 800GB. I can see that 183GB has been transferred to the target.
AdamFox
April 18th, 2012 13:00
OK. On the Celerra (target) side it should be written to the main filesystem. Keep in mind that for an initial transfer you may not see correct values for the filesystem space (at least the used space) until it's complete, as the filesystem really isn't consistent until the initial transfer is done.
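When you want to sanity-check the target, something like this from the target Control Station should do it ("server_2" and "target_fs" are placeholders for your Data Mover and filesystem names):

# Space usage as the Data Mover sees it; this will read as empty
# until the initial copy finishes
server_df server_2 target_fs

# Size and used figures recorded on the Control Station
nas_fs -size target_fs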
BAKMAR
April 18th, 2012 13:00
And when I look at the 1.6TB FS, it's empty and shows nothing being written to it.
BAKMAR
April 18th, 2012 13:00
The FS I created is 1.6TB.
BAKMAR
April 19th, 2012 07:00
Great answer, makes sense. So since it's in the economy pool, is there any way (CLI) to see that 180GB? The reason I have asked these questions is that management is not convinced the data is there; if there is a way to show them the 180GB of space consumed (not files), that would make them happy.
AdamFox
April 19th, 2012 07:00
When you set up a replication session, you define either a target filesystem or a target pool in which Replicator will create the filesystem. If you run server_df on the target system, the filesystem will look empty until the transfer is complete, but the data is being written to that filesystem, and therefore into whatever pool contains it.
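To confirm where the data is landing, a couple of quick checks from the target CLI (the filesystem and pool names here are placeholders, and I'd treat this as a sketch since output details vary a bit by release):

# Show which pool the target filesystem was created in
nas_fs -info target_fs

# Show used/available space for the pool as a whole
nas_pool -size economy_pool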
BAKMAR
April 19th, 2012 07:00
It belongs to the economy pool.
AdamFox
April 19th, 2012 07:00
Then the data is there. It's just not readable until the initial transfer is done, because it is inconsistent during the initial transfer. When the transfer finishes, the data will appear all at once in server_df and Unisphere. On subsequent transfers you will see similar behavior: you won't see the updates until the transfer is finished, and then you will see them all at once. However, you will still be able to read the filesystem as it looked at the last update, so it remains available.

This is for the same reason. Since we are transferring filesystem blocks and not files (which is a good thing, as it's usually much more efficient), the data and metadata of the filesystem are not synced up until all of the blocks are transferred, so if we let you access it mid-transfer, the filesystem would appear to be corrupt. Also, by not exposing it until the transfer is complete, we can gracefully handle situations where the transfer does not complete for whatever reason (e.g. the network connection isn't available). We never want to expose an incomplete transfer to the user, because the filesystem calls would fail and bad things would happen.
So you always see the target filesystem as it existed at the last completed transfer, until the current/next transfer is complete. In the case of an initial transfer, that means the filesystem appears empty.
BAKMAR
April 19th, 2012 07:00
OK, that makes sense, but where is the 180GB that has transferred already? The economy pool? Some other pool?
BAKMAR
April 19th, 2012 07:00
And I did define an FS for it.
AdamFox
April 19th, 2012 12:00
I don't know of a good way to see that while the replication is in progress, other than looking at the status of the replication, either through Properties in Unisphere or nas_replicate on the CLI. That will show you how much you have transferred. But I don't know of a place where you can watch the space being consumed, because the two places you would normally look are the filesystem level, which won't show it for the reasons above, and the pool level, which will just show the entire space for the filesystem allocated from it, because the space is allocated from the pool when the filesystem is created (or when you created it).
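For the CLI route, something along these lines should work (the session name is a placeholder, and the exact progress fields depend on your DART release):

# List all replication sessions and their current states
nas_replicate -list

# Show details for one session, including how much data the
# current transfer has moved so far
nas_replicate -info my_repl_session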
I'm afraid you may need to trust the software on this one.