
August 17th, 2015 01:00

How to approach the KVM-Based VDI shared-storage scenario with ScaleIO?

Hi All,

I'm a beginner with ScaleIO.

I tried to map a virtual volume to multiple hosts as shared storage for KVM-based VDI, and executed the scli commands below:

[MDM]

> scli --login --username admin --password mypasswd

> scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb 200 --volume_name vol_1

> scli --map_volume_to_sdc --volume_name vol_1 --sdc_ip 192.168.100.10 --allow_multi_map

> scli --map_volume_to_sdc --volume_name vol_1 --sdc_ip 192.168.100.20 --allow_multi_map

[SDC-192.168.100.10: vol_1=>/dev/scinia]

> mkfs.ext4 /dev/scinia

> mkdir /mnt/vm-pool01

> mount /dev/scinia /mnt/vm-pool01

[SDC-192.168.100.20: vol_1=>/dev/scinia]

> mkdir /mnt/vm-pool01

> mount /dev/scinia /mnt/vm-pool01

However, the contents (files) under /mnt/vm-pool01 do not stay in sync between the hosts when files are modified on one side! Files created in /mnt/vm-pool01 on host 192.168.100.10 are not listed in /mnt/vm-pool01 on 192.168.100.20.

I am confused about the appropriate usage of "--allow_multi_map", and I have no idea how to achieve my main objective of shared storage.

All suggestions and guidance are welcome.

Many thanks,

Beck


August 17th, 2015 06:00

Thanks Beck :)

The allow_multi_map option enables you to present the same volume to multiple hosts, but the hosts themselves still need to understand that the volume is shared and to use some mechanism that preserves file system integrity in this scenario. A regular file system like ext4 assumes it is the only writer, so two hosts mounting it at once will not see each other's changes and can corrupt it.

For example, VMware has its VMFS, and Microsoft has its cluster services, or more recently CSV (Cluster Shared Volumes).

You should check how to enable similar capabilities in KVM and your Linux distribution, for example a cluster file system such as GFS2 or OCFS2.
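
For reference, here is a minimal sketch of what that could look like with GFS2 on the two SDC hosts from your example. It assumes the hosts already form a working Pacemaker/Corosync cluster with the DLM lock manager running (the cluster name vdi-cluster is illustrative); without that cluster layer the mount will fail:

[SDC-192.168.100.10: vol_1=>/dev/scinia]

> mkfs.gfs2 -p lock_dlm -t vdi-cluster:vmpool -j 2 /dev/scinia   # -j 2: one journal per host

[both SDC hosts]

> mkdir /mnt/vm-pool01

> mount -t gfs2 /dev/scinia /mnt/vm-pool01

Because GFS2 coordinates every write through the cluster's lock manager, both hosts can then safely share /mnt/vm-pool01.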

August 20th, 2015 03:00

Dear Rafa,

Thank you very much!!

I have tried using NFS to share a ScaleIO virtual volume in my KVM testbed, set up as below:


[MDM]

> scli --login --username admin --password mypasswd

> scli --add_volume --protection_domain_name default --storage_pool_name default --size_gb 320 --volume_name vol_1

> scli --map_volume_to_sdc --volume_name vol_1 --sdc_ip 192.168.100.100

<NFS server>

[SDC-192.168.100.100: vol_1=>/dev/scinia]

> fdisk /dev/scinia   # create a single partition => /dev/scinia1

> mkfs.ext4 /dev/scinia1

> mkdir /mnt/nfs-pub

> mount /dev/scinia1 /mnt/nfs-pub

> echo "/mnt/nfs-pub 192.168.100.0/24(rw,sync,no_root_squash)" >> /etc/exports   # adjust the subnet and options to your environment

> systemctl start nfs

> exportfs

/mnt/nfs-pub

[192.168.100.10]

> mkdir /mnt/vm-pool01

> mount -t nfs 192.168.100.100:/mnt/nfs-pub /mnt/vm-pool01

[192.168.100.20]

> mkdir /mnt/vm-pool01

> mount -t nfs 192.168.100.100:/mnt/nfs-pub /mnt/vm-pool01

Next, I will test the I/O performance of the VMs.
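
If it helps, one simple way to get a baseline before running VMs is a small random-I/O job with fio against the shared mount (a sketch only; the job parameters are illustrative, so adjust them to match your VDI workload):

> fio --name=vdi-baseline --directory=/mnt/vm-pool01 --rw=randrw --bs=4k --size=1g --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based --group_reporting

Comparing the same job on an NFS client and directly on the NFS server's local mount would show how much overhead the NFS layer adds on top of ScaleIO.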

Thanks again, and

any comments and suggestions are appreciated.

Best regards,

Beck

August 25th, 2015 03:00

Dear Victor,

Thank you for your instruction and advice.

I tried the NFS-sharing scheme just because it is simple and convenient, but I worry about the I/O performance in a VDI scenario.

I will evaluate some cluster file systems (GFS2/OCFS2) as candidates for the shared space, but their performance tuning is still a big problem for me!
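
One small first step that is commonly recommended for GFS2 is mounting with atime updates disabled, since atime otherwise turns every read into a write and generates extra cluster lock traffic (a sketch; measure the effect in your own environment):

> mount -t gfs2 -o noatime,nodiratime /dev/scinia /mnt/vm-pool01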

I hope I can share my ScaleIO performance test results here for further discussion later. ^_^

Many thanks,

Beck
