Sunday, June 28, 2009

Partition/slice & create file systems

Steps necessary to partition/slice & create file systems:
1. unmount existing file systems
-umount /data2 /data3

2. confirm fdisk partitions via 'format' utility
-format - select disk - select fdisk

3. use partition - modify to create slices on desired drives
DISK1
-slice 0 - /dev/dsk/c0t1d0s0
DISK2
-slice 0 - /dev/dsk/c0t2d0s0

4. Create a file system on each new slice using 'newfs', e.g. 'newfs /dev/rdsk/c0t1d0s0'

5. Use 'fsck /dev/rdsk/c0t1d0s0' to verify the consistency of the file system

6. Mount file systems at various mount points
mount /dev/dsk/c0t1d0s0 /data2 && mount /dev/dsk/c0t2d0s0 /data3

7. Create entries in the Virtual File System Table (/etc/vfstab) file
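The vfstab entries for the two file systems above might look like the following (assuming UFS, fsck pass 2):

```shell
#device to mount    device to fsck      mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t1d0s0   /dev/rdsk/c0t1d0s0  /data2       ufs      2          yes            -
/dev/dsk/c0t2d0s0   /dev/rdsk/c0t2d0s0  /data3       ufs      2          yes            -
```

With these entries in place, 'mount /data2' and 'mount /data3' work without specifying the device, and both file systems mount automatically at boot.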


###How to determine file system associated with device###
1. fstyp /dev/dsk/c0t0d0s0 - returns file system type
2. grep mount point from /etc/vfstab - returns matching line
grep /var /etc/vfstab
3. cat /etc/mnttab - displays currently mounted file systems

###Temporary File System (TEMPFS) Implementation###
TEMPFS (tmpfs) provides fast, in-memory (RAM-based) storage and can boost application performance

Steps:
1. Determine available memory and the amount you can spare for TEMPFS
-prtconf
- allocate 100MB
2. Execute mount command:

mkdir /tempdata && chmod 777 /tempdata && mount -F tmpfs -osize=100m swap /tempdata

Note: TEMPFS data does NOT persist/survive across reboots; data is lost when either of the following occurs:
1. TEMPFS mount point is unmounted: i.e. umount /tempdata
2. System reboot

Modify /etc/vfstab to include the TEMPFS mount point so it is remounted at boot:

swap - /tempdata tmpfs - yes size=100m

(The last field holds mount options; size=100m caps the tmpfs at the 100MB allocated above.)

###Swap File/Partition Creation###
swap -l - lists swap devices
swap -s - displays a swap usage summary

mkfile size location_of_file - to create swap file
mkfile 512m /data2/swap2

swap -a /data2/swap2 - activates swap file

To remove swap file:
swap -d /data2/swap2 - removes the swap area from kernel use. Does NOT remove the file
rm -rf /data2/swap2

###Swap Partition Creation###
format - select disk - partition - select slice/modify
swap -a /dev/dsk/c0t2d0s1

Modify /etc/vfstab
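A vfstab entry for the new swap slice might look like the following (swap devices use '-' for the fsck and mount-point fields and 'no' for mount-at-boot; they are activated by swapadd at startup):

```shell
#device to mount    device to fsck  mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c0t2d0s1   -               -            swap     -          no             -
```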


###Volume Management###
Solaris Volume Management permits the creation of 5 object types:
1. Volumes - RAID 0 (concatenation or stripe), RAID 1 (mirroring), RAID 5 (striping with parity)
2. Soft partitions - permits the creation of very large storage devices
3. Hot spare pools - provision spare storage for automatic use when a RAID-1/5 component fails
i.e. MIRROR
-DISK1
-DISK2
-DISK3 - spare

4. State database replica - MUST be created prior to volumes
- Contains configuration & status of ALL managed objects (volumes/hot spare pools/Soft partitions/etc.)

5. Disk sets - used when clustering Solaris in failover mode

Note: Volume Management facilitates the creation of virtual disks
Note: Virtual disks are accessible via: /dev/md/dsk & /dev/md/rdsk
Rules regarding Volumes:
1. State database replicas are required
2. Volumes can be created using dedicated slices
3. Volumes can be created on slices with state database replicas
4. Volumes created by Volume Manager CANNOT be managed using 'format'; they can, however, be managed using CLI tools (metadb, metainit, etc.) and the GUI tool (SMC)
5. You may use tools such as 'mkfs', 'newfs', 'growfs'
6. You may grow volumes using 'growfs'


###State Database Replicas###
Note: At least 3 replicas are required for a consistent, functional, multi-user Solaris system.

3 - yields at least 2 replicas in the event of a failure
Note: if replicas are on same slice or media and are lost, then Volume Management will fail, causing loss of data.
Note: place replicas on as many distinct controllers/disks as possible

Note: Max of 50 replicas per disk set

Note: Volume Management relies upon the Majority Consensus Algorithm (MCA) to determine the consistency of the volume information

3 replicas: half = 1.5, rounded down = 1; 1 + 1 = 2 replicas required for consensus (MCA = half + 1)

Note: try to create an even number of replicas
4 replicas: half = 2; 2 + 1 = 3 replicas required for consensus
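The MCA arithmetic above can be sketched in shell (the `mca` function name is just for illustration):

```shell
# MCA: replicas required for consensus = floor(n/2) + 1
mca() { echo $(( $1 / 2 + 1 )); }
mca 3   # 2 of 3 required
mca 4   # 3 of 4 required
mca 5   # 3 of 5 required
```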

State database replica is approximately 4MB by default - for local storage

Rules regarding storage location of state database replicas:
1. dedicated partition/slice - c0t1d0s3
2. local partition that is to be used in a volume(RAID 0/1/5)
3. UFS logging devices
4. '/', '/usr', 'swap', and other UFS partitions CANNOT be used to store state database replicas

###Configure slices to accommodate State Database Replicas###
c0t1d0s0 -
c0t2d0s0 -
RAID 0 (STRIPE) - 60GB
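The initial replicas can be created with 'metadb' along the lines of the sketch below, assuming dedicated slices c0t1d0s3 and c0t2d0s3 (as suggested in the storage-location rules above):

```shell
# -a adds replicas, -f forces the very first creation,
# -c 2 places two replica copies on each slice (4 total, an even count)
metadb -a -f -c 2 c0t1d0s3 c0t2d0s3

# Verify replica status
metadb -i
```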

###Create RAID 0 (STRIPE) - NOT REDUNDANT###
c0t1d0s0 -
c0t2d0s0 -
RAID 0 (STRIPE) - 60GB - /dev/md/dsk/d0
Note: Volumes can be created using slices from a single or multiple disks
Note: State database replicas serve for ALL volumes managed by Volume Manager

Note: RAID 0 Concatenation - exhausts DISK1 before writing to DISK2
Note: RAID 0 Stripe - distributes data evenly across members
Note: Use the same size slices when using RAID0 with Striping


Note: after defining volume, create file system
newfs /dev/md/rdsk/d0
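The full RAID-0 stripe creation can be sketched as follows, using the two slices listed above (the interlace value and mount point are illustrative):

```shell
# d0 = 1 stripe of 2 slices; -i sets the interlace (stripe unit) size
metainit d0 1 2 c0t1d0s0 c0t2d0s0 -i 32k

# Create the file system on the new volume, then mount it
newfs /dev/md/rdsk/d0
mkdir -p /data2
mount /dev/md/dsk/d0 /data2
```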

###Suggested layout for creating volumes using volume manager###
SERVER
-DISK0 - SYSTEM DISK

VOLUME-MANAGED SECONDARY DISKS
-DISK1 - SECONDARY DISK
-DISK2 - SECONDARY DISK

###RAID-1 Configuration###
Note: RAID-1 relies upon submirrors or existing RAID-0 volumes
c0t1d0s0 - /dev/md/dsk/d0 (sub-mirror)
c0t2d0s0 - /dev/md/dsk/d1 (sub-mirror)
/dev/md/dsk/d2 (mirror)

d0 - source sub-mirror
d1 - destination sub-mirror

Create file system on mirrored volume '/dev/md/dsk/d2'
newfs /dev/md/rdsk/d2
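The mirror layout above can be built with the following sequence (a sketch using the d0/d1/d2 names from this section):

```shell
# Create the two sub-mirrors as one-slice concatenations
metainit d0 1 1 c0t1d0s0
metainit d1 1 1 c0t2d0s0

# Create the mirror d2 with d0 as the initial sub-mirror
metainit d2 -m d0

# Attach the second sub-mirror; a resync from d0 to d1 starts automatically
metattach d2 d1

# Create the file system on the mirror
newfs /dev/md/rdsk/d2
```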

###RAID-5 Configuration###
Steps:
1. Ensure that 3 components(slices/disks) are available for configuration
2. Ensure that components are identical in size

Slices for RAID-5
c0t1d0s0 - 10GB
c0t2d0s0 - 10GB
c0t3d0s0 - 10GB

/dev/md/dsk/d0 = RAID-5 = 20GB

Note: You may concatenate additional components to a RAID-5 volume; they will not store parity information, but their data will still be protected.
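A RAID-5 volume over three 10GB slices (slice names assumed, one slice per disk) can be sketched as:

```shell
# -r builds a RAID-5 volume; initialization zeroes the components first
metainit d0 -r c0t1d0s0 c0t2d0s0 c0t3d0s0

# Watch initialization progress before creating the file system
metastat d0

newfs /dev/md/rdsk/d0
```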


###Using growfs to extend volumes###
growfs extends mounted or unmounted UFS file systems on volumes

Steps to grow a mounted/unmounted file system:
1. Find free slice(s) to add as component(s) to volume using SMC or metattach CLI
2. Add component slice - wait for initialization(concatenation) to complete
3. execute 'growfs -M /d0 /dev/md/rdsk/d0'

Note: Once you've extended a volume, you CANNOT decrease it in size.
Note: Concatenation of RAID-1/5 volumes yields an untrue RAID-1/5 volume.
SLICE1
SLICE2
SLICE3
SLICE4 - Concatenated - NOT a true RAID-1/5 member (no parity is stored)

Note: When extending RAID-1 volumes, extend each sub-mirror first, and then Solaris will automatically extend the RAID-1 volume. Then run 'growfs.'
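Extending a RAID-1 volume might look like the sketch below (the added slices c0t1d0s1/c0t2d0s1 and the mount point /data2 are hypothetical):

```shell
# Extend each sub-mirror with a new slice
metattach d0 c0t1d0s1
metattach d1 c0t2d0s1

# The mirror d2 grows automatically once both sub-mirrors are extended;
# then grow the mounted UFS file system to fill the larger volume
growfs -M /data2 /dev/md/rdsk/d2
```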


###Soft Partitions###
1. Provides an abstracted, extensible partition object
2. Permits virtually unlimited segmentation of a disk
c0t1d0 - s0-s9 (s0-s7 usable, except s2, which maps the whole disk)

3. Permits creation of partitions on top of 1 or more slices

Steps:
1. Clean up partitions on existing disks: c0t1d0 & c0t2d0
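Creating a soft partition can be sketched as follows (the volume name d10 and 2GB size are illustrative):

```shell
# -p creates a 2GB soft partition on slice c0t1d0s0
metainit d10 -p c0t1d0s0 2g

# Soft partitions take file systems like any other volume
newfs /dev/md/rdsk/d10
```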









