Friday, July 24, 2009

ZFS boot/root - bring on the clones

Today's ZFS tip is dedicated to anybody who has experienced corruption as a result of loading Solaris 10 patches.

Using ZFS cloning, it is possible to create bootable clones of / and /var.

A clone takes a few seconds to create, but could save hours or days if a patch installation does not go as planned.

Patching can be done on the clone or the original.
If the clone is corrupted, the rollback path is to simply boot the original.
If the original is corrupted, the rollback path is to simply boot the clone.


Let's have a look at where / and /var file systems are normally mounted:

# df -k / /var
Filesystem           1024-blocks     Used  Available  Capacity  Mounted on
rpool/ROOT/blue         32772096  1902300   12809372       13%  /
rpool/ROOT/blue/var     32772096   354808   12809372        3%  /var

The two datasets which house the / and /var file systems form an entity that is referred to as a "boot environment" (a.k.a. BE).


Note: other miscellaneous file systems in the root pool are not considered to be part of the boot environment.

e.g. /home is not part of the boot environment.

Using the power of ZFS copy-on-write cloning, we can clone the boot environment in a matter of seconds.

A cloned boot environment appears as a complete bootable and modifiable copy of our original operating system.

Sun's ZFS documentation assumes everybody will want to use "live upgrade" to clone the boot environment.

The advantage of live upgrade is that the clone can be done with two commands.

The disadvantage of live upgrade is that it may be slightly buggy.

I have elected to provide you with a procedure that does not use live upgrade.


Clones are built from snapshots. Snapshots require an arbitrary "snapname"; we will use today's date for the snapname.

# SNAPNAME=`date +%Y%m%d`


Create snapshots of / and /var. Thanks to the -r (recursive) flag, both snapshots are created atomically with a single command. The process should take about half a second.

# zfs snapshot -r rpool/ROOT/blue@$SNAPNAME


Optionally view the snapshots.

# zfs list -t snapshot
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/blue@20090723         0      -  1.81G  -
rpool/ROOT/blue/var@20090723     0      -   346M  -


Now let's create a new boot environment named "red" by creating clones from the snapshots.

We need to clone each of the two datasets separately; plan for about half a second per dataset.

# zfs clone rpool/ROOT/blue@$SNAPNAME rpool/ROOT/red
# zfs clone rpool/ROOT/blue/var@$SNAPNAME rpool/ROOT/red/var


By default, the mountpoints for both clones will be set to "legacy". We need to change the mountpoints to "/" and "/var", but we also want to disable automatic mounting so we don't end up with multiple datasets trying to use the same mountpoints; remember, the original datasets are still mounted at "/" and "/var". Setting canmount=noauto first keeps the clones from being mounted when we assign the mountpoints.

# zfs set canmount=noauto rpool/ROOT/red
# zfs set canmount=noauto rpool/ROOT/red/var
# zfs set mountpoint=/ rpool/ROOT/red
# zfs set mountpoint=/var rpool/ROOT/red/var
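The snapshot, clone, and property steps above can be rolled into one helper. This is just a sketch: it only prints the zfs commands so you can review them (pipe the output to sh to execute), and the pool/BE names are the ones used in this post — substitute your own.

```shell
#!/bin/sh
# be_clone_cmds: print the commands needed to clone one boot environment
# to another.  Usage: be_clone_cmds <pool> <old-be> <new-be> <snapname>
be_clone_cmds() {
    pool=$1 old=$2 new=$3 snap=$4
    # One recursive snapshot covers / and /var atomically.
    echo "zfs snapshot -r $pool/ROOT/$old@$snap"
    # Clone each dataset of the boot environment separately.
    echo "zfs clone $pool/ROOT/$old@$snap $pool/ROOT/$new"
    echo "zfs clone $pool/ROOT/$old/var@$snap $pool/ROOT/$new/var"
    # Disable automounting before assigning the shared mountpoints.
    echo "zfs set canmount=noauto $pool/ROOT/$new"
    echo "zfs set canmount=noauto $pool/ROOT/$new/var"
    echo "zfs set mountpoint=/ $pool/ROOT/$new"
    echo "zfs set mountpoint=/var $pool/ROOT/$new/var"
}

# Preview the commands for cloning "blue" to "red":
be_clone_cmds rpool blue red `date +%Y%m%d`
```

Printing rather than executing keeps the sketch safe to run anywhere; review the seven lines it emits before feeding them to the shell.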


If you are working on a SPARC system, add two lines to the menu.lst file. (You can use vi instead of echo if you prefer.)

# echo "title red" >> /rpool/boot/menu.lst
# echo "bootfs rpool/ROOT/red" >> /rpool/boot/menu.lst
# more /rpool/boot/menu.lst
title blue
bootfs rpool/ROOT/blue
title red
bootfs rpool/ROOT/red
The menu.lst file is consulted at boot time to present the list of available boot environments.
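If you rerun the echo commands above, you end up with duplicate entries in menu.lst. A small guard avoids that; this is a hypothetical helper of my own (the function name is mine, the file path matches the post), not part of Solaris.

```shell
#!/bin/sh
# add_menu_entry: append a title/bootfs pair to a SPARC menu.lst only if
# an entry for that boot environment is not already present.
# Usage: add_menu_entry <menu.lst path> <be-name>
add_menu_entry() {
    menu=$1 be=$2
    if grep "^bootfs rpool/ROOT/$be\$" "$menu" >/dev/null 2>&1; then
        echo "entry for $be already in $menu"
    else
        echo "title $be"             >> "$menu"
        echo "bootfs rpool/ROOT/$be" >> "$menu"
    fi
}

# e.g.  add_menu_entry /rpool/boot/menu.lst red
```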


Optionally view the list of file systems in rpool.

# zfs list -t filesystem -r -o name,type,mountpoint,mounted,canmount,origin rpool
NAME                 TYPE        MOUNTPOINT    MOUNTED  CANMOUNT  ORIGIN
rpool                filesystem  /rpool        yes      on        -
rpool/ROOT           filesystem  legacy        no       on        -
rpool/ROOT/blue      filesystem  /             yes      noauto    -
rpool/ROOT/blue/var  filesystem  /var          yes      noauto    -
rpool/ROOT/red       filesystem  /             no       noauto    rpool/ROOT/blue@20090723
rpool/ROOT/red/var   filesystem  /var          no       noauto    rpool/ROOT/blue/var@20090723
rpool/home           filesystem  /home         yes      on        -
rpool/marimba        filesystem  /opt/Marimba  yes      on        -
rpool/openv          filesystem  /usr/openv    yes      on        -

For the most part, the clones appear as standard datasets, but the "origin" property shows us that they are cloned from a pair of snapshots.

Notice that we did not clone the datasets associated with /home, /opt/Marimba, /rpool, /rpool/ROOT, or /usr/openv.

These file systems are not part of any boot environment, but since the "canmount" property for each of these datasets is set to "on", they will be mounted automatically regardless of which boot environment we boot from.


Our cloned boot environment is now in place and is fully bootable.
The system is still configured to mount / and /var from the original boot environment (blue) on reboot.

The default boot environment can be changed with the following command:

# zpool set bootfs=rpool/ROOT/red rpool

On the next reboot the server will mount / and /var from the cloned boot environment (red).

The default can easily be changed back to blue if required:

# zpool set bootfs=rpool/ROOT/blue rpool
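A typo in the bootfs value can leave the pool pointing at a dataset that does not exist. The sketch below checks for the target first; the function name is mine, and the ZFS_LIST/ZPOOL variables exist only so the commands can be stubbed out for a dry run — by default it calls the real zfs and zpool.

```shell
#!/bin/sh
# set_default_be: point the pool's bootfs property at a boot environment,
# refusing if the target dataset does not exist.
# Usage: set_default_be <pool> <be-name>
ZFS_LIST=${ZFS_LIST:-"zfs list"}   # override for testing/preview
ZPOOL=${ZPOOL:-zpool}              # override for testing/preview

set_default_be() {
    pool=$1 be=$2
    if $ZFS_LIST "$pool/ROOT/$be" >/dev/null 2>&1; then
        $ZPOOL set bootfs="$pool/ROOT/$be" "$pool"
    else
        echo "no such boot environment: $pool/ROOT/$be" >&2
        return 1
    fi
}

# e.g.  set_default_be rpool red
```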

###################################################################################
At this point, it is possible to boot either environment.
Any changes to any files in / or /var in either environment will not be reflected in the other environment.
Any changes to any files in /home, /rpool, etc. will show up in both environments because there is only one copy of these file systems.
###################################################################################


That’s all for now.


