
October 29, 2015

How to measure the performance of striped block storage volumes

Building on the performance specifications of its block and file storage offerings, SoftLayer provides a wide range of volume size and performance combinations for your storage needs. But what if your storage performance or size requirements are much more specific than what is currently offered?

In this post, I’ll show you how to configure and validate a sample RAID 0 configuration by:

  1. Using LVM on CentOS to create a RAID 0 array across three volumes
  2. Using FIO to apply an I/O load to the array
  3. Measuring the throughput of the array

Without going into the potential drawbacks of RAID 0, we should be able to observe up to three times the throughput and size of any single volume. For example, if we needed a 60GB volume at 240 IOPS, we could stripe three 20GB volumes provisioned at 4 IOPS/GB (80 IOPS each). You can also extrapolate from this example to fit a range of performance and reliability requirements.

To start, we will provision 3x 20GB Endurance volumes at 4 IOPS/GB and make them accessible to our CentOS VM, but stop short of creating a file system; i.e., you should stop once you are able to list all three volumes with:

# fdisk -l | grep /dev/mapper
Disk /dev/mapper/3600a09803830344f785d46426c37364a: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/3600a09803830344f785d46426c373648: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/3600a09803830344f785d46426c373649: 21.5 GB, 21474836480 bytes, 41943040 sectors
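
If fewer than three volumes appear, rescanning the storage sessions usually picks up newly attached LUNs. As a sketch, assuming the Endurance volumes are attached over iSCSI with multipathing (the typical setup):

# iscsiadm -m session --rescan
# multipath -ll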

Then proceed to create the three-stripe volume with the following commands:

# pvcreate /dev/mapper/3600a09803830344f785d46426c37364a /dev/mapper/3600a09803830344f785d46426c373648 /dev/mapper/3600a09803830344f785d46426c373649
 
# vgcreate new_vol_group /dev/mapper/3600a09803830344f785d46426c37364a /dev/mapper/3600a09803830344f785d46426c373648 /dev/mapper/3600a09803830344f785d46426c373649
 
# lvcreate -i3 -I16 -l100%FREE -nstriped_logical_volume new_vol_group

This creates a logical volume with three stripes (-i), a stripe size (-I) of 16KB, and a volume size (-l) of 100 percent of the free space, i.e., the full 60GB.
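
As a quick sanity check before creating the file system, you can ask LVM to report the stripe layout; the stripe count should read 3 and the stripe size 16.00k:

# lvs -o +stripes,stripesize new_vol_group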

You can now create the file system on the new logical volume, create a mount point, and mount the volume:

# mkfs.ext3 /dev/new_vol_group/striped_logical_volume
# mkdir -p /mnt
# mount /dev/mapper/new_vol_group-striped_logical_volume /mnt
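
To confirm the file system spans all three volumes, check the reported capacity, which should be roughly 60GB minus file system overhead:

# df -h /mnt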

Now download, build, and run FIO:

# yum install -y gcc libaio-devel
# cd /tmp
# wget http://freecode.com/urls/3aa21b8c106cab742bf1f20d60629e3f
# tar -xvf 3aa21b8c106cab742bf1f20d60629e3f
# cd fio-2.1.10/
# make
# make install
# cd /mnt
# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=16k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=50

This executes the benchmark with 16KB blocks (--bs=16k) and a random access pattern (--readwrite=randrw) at 50 percent reads and 50 percent writes (--rwmixread=50). It keeps 64 I/Os in flight (--iodepth=64) until the 1GB test file (--size=1G) has been fully processed. Because we changed into /mnt before running the test, the file is created on the striped volume.

Here is a snippet of output once completed:

read : io=51712KB, bw=1955.8KB/s, iops=122, runt= 26441msec
write: io=50688KB, bw=1917.3KB/s, iops=119, runt= 26441msec

This shows a measured throughput of 122 read + 119 write ≈ 240 IOPS, which matches what we provisioned: 3 x 20GB x 4 IOPS/GB = 3 x 80 IOPS = 240 IOPS.

Here is a table showing how the results would differ if we tuned the load with varying block sizes (--bs):

As you can see from the results, you may not observe the expected 3x throughput (IOPS) in every case, so please be mindful of your logical volume configuration (stripe size) versus your load profile (--bs). Please refer to our FAQ for further details on other possible limits.
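
If you'd like to reproduce such a comparison yourself, one approach is to rerun the benchmark in a loop over block sizes. This is a minimal sketch; the block-size list is illustrative, and each run writes its own test file under /mnt:

# cd /mnt
# for bs in 4k 16k 64k 256k; do fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test_$bs --filename=test_$bs --bs=$bs --iodepth=64 --size=1G --readwrite=randrw --rwmixread=50; done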

-Nam
