Monday, July 27, 2009

ZFS Tip: Multiple vdevs in a pool

Today we will look at spanning a pool over multiple disks (or for demo purposes: multiple 64MB files).

The basic building block of a ZFS pool is called a "vdev" (a.k.a. "virtual device").
A vdev can be one of:

• a single "block device" or a "regular file" (this is what we have used so far)
• a set of mirrored "block devices" and/or "regular files"
• a "raidz" group of "block devices" and/or "regular files" (raidz is an improved version of raid5)

A pool can contain multiple vdevs.

• The total size of the pool will be equal to the sum of the sizes of all its vdevs, minus overhead.
• Vdevs do not need to be the same size.

Let's jump to it and create a pool with two vdevs… where each vdev is a simple 64MB file. In this case our pool size will be 128MB minus overhead. We will leave mirroring and raidz for another day.


Please try this on an unused Solaris 10 box:

Create two 64MB temp files (if you don't have space in /tmp, you can place the files elsewhere… or even use real disk partitions).

# mkfile 64M /tmp/file1
# mkfile 64M /tmp/file2

Create a ZFS pool called "ttt" with two vdevs. The only difference from yesterday's syntax is that we are specifying two 64MB files instead of one.

# zpool create ttt /tmp/file1 /tmp/file2
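
We can verify that the pool really does contain two separate vdevs with zpool status (your output should look roughly like this):

# zpool status ttt
  pool: ttt
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        ttt             ONLINE       0     0     0
          /tmp/file1    ONLINE       0     0     0
          /tmp/file2    ONLINE       0     0     0

errors: No known data errors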


And create an extra file system called ttt/qqq, using the default mount point of /ttt/qqq.

# zfs create ttt/qqq
# df -h | egrep 'ttt|Filesystem' # sorry for the inconsistency: yesterday I used "df -k"; today I switched to "df -h"

Filesystem            Size  Used  Available  Capacity  Mounted on
ttt                    87M   25K        87M        1%  /ttt
ttt/qqq                87M   24K        87M        1%  /ttt/qqq

We now have 87MB of usable space; this is a bit more than double what we had with only one vdev, so it seems the ratio of overhead to usable space improves as we add vdevs.
But again, the overhead is only this noticeable because we are dealing with tiny (64MB) vdevs.
Okay, let's fill up /ttt/qqq with a bunch of zeros. This will take a minute or two to run and will finish with an error once the pool is full.

# dd if=/dev/zero of=/ttt/qqq/large_file_full_of_zeros
write: No space left on device
177154+0 records in
177154+0 records out

We are not using quotas, so ttt/qqq was free to consume all available space. i.e. both /ttt and /ttt/qqq are now full file systems even though /ttt itself is virtually empty. (A quick quota sketch follows the df output below.)

# df -h | egrep 'ttt|Filesystem'

Filesystem            Size  Used  Available  Capacity  Mounted on
ttt                    87M   25K         0K      100%  /ttt
ttt/qqq                87M   87M         0K      100%  /ttt/qqq
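
(As an aside: if we had put a quota on ttt/qqq before running dd, it could not have hogged the whole pool. Something along these lines, with an arbitrary 50MB limit, would have done it; quotas are a topic for another day.)

# zfs set quota=50m ttt/qqq
# zfs get quota ttt/qqq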


Now let's create a third, larger (109MB) temp file.

# mkfile 109M /tmp/file3

Let's add it to the pool:

# zpool add ttt /tmp/file3
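
If you run zpool status ttt again, /tmp/file3 should show up as a third top-level vdev alongside /tmp/file1 and /tmp/file2:

# zpool status ttt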

If we had been using Veritas or SVM, we would have had a three-step process: adding the disk, resizing the volume, and growing the file systems.

With ZFS, as soon as disk space is added to the pool, the space becomes available to all the file systems in the pool.

So after adding a 109MB vdev to our pool, both /ttt and /ttt/qqq instantly show 104MB of available space. Very cool.

# df -h | egrep 'ttt|Filesystem'

Filesystem            Size  Used  Available  Capacity  Mounted on
ttt                   191M   25K       104M        1%  /ttt
ttt/qqq               191M   87M       104M       46%  /ttt/qqq

Notice that when talking about pools and vdevs today, I did not mention the words "striping" (raid-0) or "concatenation"… terms that we are used to seeing in the SVM and Veritas worlds.

ZFS pools don't use structured stripes or concatenations. Instead, a pool will dynamically attempt to balance the data over all its vdevs.

If we started modifying data in our ttt pool, new writes would be biased toward the emptier vdevs, so over time the data would end up spread fairly evenly over the entire pool.

i.e. No hot spots!
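
(If you want to watch the balancing for yourself, zpool iostat -v breaks capacity and I/O activity down per vdev; run it while the pool is busy and compare the vdev rows.)

# zpool iostat -v ttt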

Time for cleanup.

# zpool destroy ttt
# rm /tmp/file[1-3]

Since we used the default mount points today, the directories "/ttt" and "/ttt/qqq" have been removed for us, so there is no more cleanup to do.
