For those who asked, I will convert these tips to HTML and post them on termite. If I can get this done today, I will provide a URL tomorrow.
For those who did not try last week's exercises, I am afraid you will not be eligible for certificates, plaques, trophies or awards.
But the good news is that it is not too late to catch up. If you cut and paste, each exercise should take roughly two minutes.
Today we will look at some "status" or "informational" commands that give us more information about our ZFS pools. First we need a pool and some file systems to work with. As usual, we can build the pool on top of files instead of real disks.
Try this on a server near you:
# mkfile 119M /tmp/file1
# mkfile 119M /tmp/file2
# zpool create ttt /tmp/file1 /tmp/file2
# zfs create -o mountpoint=/apps_test ttt/apps
# zfs create -o mountpoint=/work_test ttt/work
And let's put some data in one of our file systems.
# mkfile 50M /apps_test/50_meg_of_zeros
First list all the ZFS pools on the system:
# zpool list
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
tools   4.97G   121M  4.85G    2%  ONLINE  -
ttt      228M  50.2M   178M   22%  ONLINE  -
Notice that my system has two pools; the "tools" pool was created by Jumpstart.
If we only want information for the "ttt" pool we can type:
# zpool list ttt
NAME     SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
ttt      228M  50.2M   178M   22%  ONLINE  -
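As an aside, if you want to use that output in a script, awk makes short work of pulling out a single column. A hedged sketch, working from the captured line above rather than a live pool (the field position is an assumption based on the column order shown):

```shell
# Parse the HEALTH column (6th field) out of a captured "zpool list" line.
# On a live system you could pipe "zpool list ttt" straight into awk instead.
line='ttt 228M 50.2M 178M 22% ONLINE -'
health=$(echo "$line" | awk '{print $6}')
echo "pool health: $health"   # → pool health: ONLINE
```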
The next command lists all the vdevs in the pool; our pool currently has two vdevs (each vdev consists of a 119MB file).
# zpool status ttt
  pool: ttt
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        ttt           ONLINE       0     0     0
          /tmp/file1  ONLINE       0     0     0
          /tmp/file2  ONLINE       0     0     0

errors: No known data errors
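If you want a script to act on that state, here is a hedged sketch that branches on the "state:" line captured above (on a live box you could feed in the real thing with something like zpool status ttt):

```shell
# Extract the state from a captured "zpool status" state line and branch on it.
state_line=' state: ONLINE'
state=$(echo "$state_line" | awk '{print $2}')
if [ "$state" = "ONLINE" ]; then
    echo "ttt is healthy"      # → this branch runs for our pool
else
    echo "ttt needs attention"
fi
```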
If you have a "tools" pool on your system you can run "zpool status tools" and see how a mirrored vdev is displayed. I promise I will dig into mirroring soon… but not today.
If we want to see how much data is in each vdev we can use another command:
# zpool iostat -v ttt
                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
------------   -----  -----  -----  -----  -----  -----
ttt            50.2M   178M      0      1     15  42.1K
  /tmp/file1   24.1M  89.9M      0      0      6  20.2K
  /tmp/file2   26.1M  87.9M      0      0      8  21.8K
------------   -----  -----  -----  -----  -----  -----
Notice that our 50MB file has been spread evenly over the two vdevs.
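A quick arithmetic check backs this up: the per-vdev "used" figures should sum to the pool-wide figure.

```shell
# 24.1M on file1 plus 26.1M on file2 should equal the 50.2M pool total.
awk 'BEGIN { printf "%.1f\n", 24.1 + 26.1 }'   # → 50.2
```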
We can also supply an interval, in seconds, to display statistics repeatedly (similar to iostat(1M)). An optional count after the interval stops the output after that many reports.
# zpool iostat -v ttt 5 # display statistics every 5 seconds until interrupted
…
We can use "zpool iostat" to see how new writes are balanced over all vdevs in the pool.
Let's first add a third vdev to the pool.
# mkfile 119M /tmp/file3
# zpool add ttt /tmp/file3
# zpool iostat -v ttt
                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
------------   -----  -----  -----  -----  -----  -----
ttt            50.3M   292M      0      0      1  3.26K
  /tmp/file1   24.1M  89.9M      0      0      0  1.51K
  /tmp/file2   26.1M  87.9M      0      0      0  1.62K
  /tmp/file3      8K   114M      0     18      0  80.9K
------------   -----  -----  -----  -----  -----  -----
Now we have an empty vdev. Notice that the existing data has not been redistributed.
But if we start writing new data, it will be distributed over all vdevs (unless one or more vdevs are full).
# mkfile 50M /apps_test/50_meg_of_zeros_2
# zpool iostat -v ttt
                 capacity     operations    bandwidth
pool            used  avail   read  write   read  write
------------   -----  -----  -----  -----  -----  -----
ttt             100M   242M      0      0      1  6.06K
  /tmp/file1   39.5M  74.5M      0      0      0  2.37K
  /tmp/file2   41.6M  72.4M      0      0      0  2.48K
  /tmp/file3   19.2M  94.8M      0      6      0   183K
------------   -----  -----  -----  -----  -----  -----
Let's close off with a self-explanatory command:
# zpool history ttt
History for 'ttt':
2008-02-12.11:27:02 zpool create ttt /tmp/file1 /tmp/file2
2008-02-12.11:29:29 zfs create -o mountpoint=/apps_test ttt/apps
2008-02-12.11:29:32 zfs create -o mountpoint=/work_test ttt/work
2008-02-12.16:31:00 zpool add ttt /tmp/file3
Now if you have a "tools" pool on your system, and you want to see how Jumpstart set it up, try running "zpool history tools".
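One handy trick with that output: strip the timestamps and you have a ready-made recipe for rebuilding the pool elsewhere. A hedged sketch, using one captured history line rather than a live pool:

```shell
# Strip the leading timestamp from a captured "zpool history" line,
# leaving just the command that was run.
line='2008-02-12.11:27:02 zpool create ttt /tmp/file1 /tmp/file2'
cmd=${line#* }    # delete everything up to and including the first space
echo "$cmd"       # → zpool create ttt /tmp/file1 /tmp/file2
```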
Clean up time already:
# zpool destroy ttt
# rmdir /apps_test
# rmdir /work_test
# rm /tmp/file*
Thursday, July 30, 2009
ZFS Tip: "zpool list", "zpool status", "zpool iostat" & "zpool history"