
September 10th, 2016 19:00

Can a volume be mapped to multiple SDCs?

Hi

Can a ScaleIO volume be mapped to multiple SDCs?

We want apps on different servers to reference the same data.

The apps are the Glance services on the two controller nodes of OpenStack.

We want the two Glance instances to see the same image repository through SDCs consistently.

Many thanks,

Seiji

68 Posts

September 11th, 2016 08:00

Hello Seiji,

With ScaleIO you can map a volume to multiple SDCs. If you are using the GUI, the operation is very simple: go to Frontend -> Volumes, right-click on your volume, and select "Map Volumes". Then, in the left pane, flag all the SDCs you want to make the volume available to and click "Map Volumes" to complete the operation.

Now the volume is available to the selected SDCs.
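
If you prefer the CLI, the equivalent scli commands look roughly like the sketch below (volume name and SDC IPs are placeholders; as far as I recall, mapping a volume that is already mapped to another SDC requires the --allow_multi_map flag, but check scli --help on your version to confirm):

# map the volume to the first SDC (names and IPs are placeholders)
scli --map_volume_to_sdc --volume_name vol01 --sdc_ip 192.168.1.11
# mapping the same volume to a second SDC needs --allow_multi_map
scli --map_volume_to_sdc --volume_name vol01 --sdc_ip 192.168.1.12 --allow_multi_map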

BEWARE: the volume mapping is made at the block level. You can't concurrently access a "normal" filesystem from both nodes. What I'm trying to explain is that if you create a partition and put an EXT4 filesystem on it, you can't mount it from both nodes concurrently. EXT4 is not a clustered filesystem; if you mount it concurrently from multiple nodes, the filesystem will be corrupted.

1) If the SDCs are Windows, you can create a CSV (Cluster Shared Volume) from the cluster manager and then format that CSV using NTFS. The CSV will coordinate the operations on NTFS from the various nodes.

2) If the SDCs are Linux, there are several alternatives; for example, you can use GFS2 or OCFS2 as a clustered filesystem.

For comparison, Hyper-V uses CSV for shared virtual machine storage, and VMware uses VMFS (Virtual Machine File System), which is a proprietary clustered filesystem.

Davide

14 Posts

September 13th, 2016 21:00

Hi Davide,

Thank you for the reply and the clear explanation.

We'll try GFS2 or OCFS2 and post an update if we make good progress.

Seiji

110 Posts

September 16th, 2016 13:00

Alternatively, with the Mitaka release, you can deploy with volume-backed Glance. Link below.

BTW, if you want to attach a single volume to multiple instances, ScaleIO supports this, but OpenStack doesn't yet have support for multi-attach.


OpenStack Docs: Volume-backed image
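
For reference, the Glance side of that setup would look roughly like the snippet below in glance-api.conf (a minimal sketch, assuming the Mitaka-era glance_store options; verify the option names against the docs linked above):

[glance_store]
# store images as Cinder volumes instead of files on local disk
stores = cinder
default_store = cinder
# service catalog lookup used to locate the Cinder endpoint
cinder_catalog_info = volumev2::publicURL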

Jason


14 Posts

September 20th, 2016 18:00

Hi Jason,

So you mean that with the Mitaka release, by choosing Cinder as the Glance store backend, we need neither a clustered filesystem nor to mount a ScaleIO volume on the controller nodes.

That isn't an option for us this time, since we are using the Liberty release.

But thank you for sharing the information. We'll try it with Mitaka or a later release next time.

Seiji

14 Posts

September 28th, 2016 03:00

Hi,

We managed to have multiple OpenStack controller nodes mount a single ScaleIO volume using the following:

GFS2

dlm

corosync

Corosync forms a cluster among the 5 nodes.
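
For anyone following along, a minimal corosync.conf for this kind of setup might look like the sketch below (the cluster name, node IPs, and node count are placeholders; note that cluster_name must match the prefix you later pass to mkfs.gfs2 -t):

totem {
    version: 2
    # must match the ClusterName prefix given to mkfs.gfs2 -t
    cluster_name: glance_cluster
    transport: udpu
}
nodelist {
    node {
        ring0_addr: 192.168.1.11
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.12
        nodeid: 2
    }
    # one node stanza per controller, 5 in our case
}
quorum {
    provider: corosync_votequorum
}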

Set "enable_fencing=0" in /etc/dlm/dlm.conf.

mkfs.gfs2 -p lock_dlm -t :fs_name -j 5 /dev/disk/by-id/emc....

(The -t argument takes the form ClusterName:FSName, where ClusterName must match the cluster name in corosync.conf; -j 5 creates one journal per node.)

modprobe gfs2

mount /dev/disk/by-id/emc.... /var/lib/glance/images

*We don't use Pacemaker to mount the device since the controller nodes are in an active/active cluster.
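
If you want the mount to persist across reboots without Pacemaker, a hypothetical fstab entry could look like this (the device path is a placeholder for the truncated by-id path above; _netdev delays the mount until networking is up, and corosync/dlm still have to be running before GFS2 can mount):

# device path is a placeholder; cluster stack must be up first
/dev/disk/by-id/emc-vol-XXXX  /var/lib/glance/images  gfs2  noatime,_netdev  0 0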

Now the clustered Glance services can use the volume as an image store.

Thank you very much for the help.

Seiji

110 Posts

September 28th, 2016 18:00

That's also a great solution. Glad you were able to set it up.

68 Posts

September 28th, 2016 19:00

Hello Seiji,

thanks for your detailed updates. I worked on several projects in the past with GFS2 + a distributed lock manager and never had problems with it. I'm happy to hear that this solution also works in your specific scenario.

Thanks,

Davide
