MR_zfs


zpool create tank mirror c1t1d0s0 c1t0d0s0 cache c1t2d0s0   -- cache devices: 10/09
zpool iostat -v pool
10/08 (u6) -- ZFS boot
zfs send / zfs rollback -f
zfs send -I pool/fs@snapA pool/fs@snapB > /snap/fscombo   -- combined incremental stream
zfs send -I pool/fs@snap1 ... > /snap/fsclonesnap-I
zfs receive -F pool/clone
wait / continue / panic -- failmode settings for device failure
zpool create -o failmode=continue users mirror cXtYdZ cXtYdZ
zpool history
zpool history -l poolname
zfs upgrade
zpool set delegation=off poolname -- zfs allow / zfs unallow
zpool set autoreplace=on --> automatic replacement without running zpool replace (USB devices are reconfigured automatically; otherwise cfgadm -c configure)
zfs snapshot -r users/home@today
zfs rename -r users/home@today @yesterday
compression: lzjb is the normal setting, gzip is available as well -- zfs set compression=gzip
zfs set copies=1|2|3
zfs set shareiscsi=on
iscsitadm list target
raidz2 -- 11/06
zfs promote
zpool clear --> to clear the fault errors
fsstat zfs
ZFS web console: https://hostname:6789/zfs -- /usr/sbin/smcwebserver start, /usr/sbin/smcwebserver enable
6/06 minimum -- zfs set sharenfs=on pool/fs
new ACL -- NFSv4 style; old ACL --> POSIX-draft
EFI label -- slice 8 -> 8 MB
mirror cXtYdZ cXtYdZ mirror cXtYdZ cXtYdZ   -- striped mirrors

RAID 5/4/6/RDP -- RAID-5 write hole --> if only part of a RAID-5 stripe is written and power is lost before all blocks have made it to disk, the parity remains out of date and is therefore useless.
RAID-Z -- variable-width RAID stripe, only full-stripe writes; the metadata has enough information about the underlying data redundancy. First software solution to the write-hole issue.
2 disks minimum for raidz, 3 disks for raidz2
hybrid pool -- unified storage
ZFS root -- SMI label

zpool attach
zpool create data cXtYdZ log cXtYdZ
no mirror or raidz for cache devices
zpool create -n mypool mirror cXtYdZ cXtYdZ   -- dry run
zpool create -m /export/home home c1t0d0   -- sets the pool's default mount point
zpool add pool mirror ...
zpool add pool log mirror cXtYdZ cXtYdZ
zpool add pool cache cXtYdZ
zpool attach --> changing to a different (mirrored) configuration
zpool create a mirror ... / zpool attach a ... / zpool detach a ...
zpool online / zpool offline pool
zpool clear pool
zpool replace pool ...   -- whole pool, no redundancy
zpool create z mirror ... spare ...
zpool add pool spare ...
zpool remove pool c2t3d0   -- only for spares and cache devices
zpool status -x
zpool replace pool ...
altroot -- alternate root (second mount point); cachefile -- cache file in a different location (zpool import -c)
failmode: wait --> stop all I/O till the device has been restored; continue --> allow reads from healthy devices but stop writes; panic --> crash dump
zpool export pool / zpool import / zpool import pool
zpool import -d /file
zpool destroy pool
zpool import -D --> list destroyed pools; zpool import -D pool --> recover one
zpool upgrade -a

768 MB -- for ZFS root
zpool attach rpool ...   -- mirroring the root pool
lucreate -n zfs1009BE
boot -L   -- to see the available boot environments
boot -Z rpool/ROOT/newbe
for zones --> set zonepath=/zonepool/pool1

increase swap:
zfs create -V 3g -b 8k rpool/swap
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=2g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

install boot blocks:
SPARC: installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86:   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
zfs inherit -r mountpoint rpool/ROOT/s10u6
zfs set mountpoint=/ rpool/ROOT/s10u6

root disk replacement -- the replacement disk should carry an SMI label
zpool replace / zpool attach ... once resilvering completes, install the boot blocks
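As a worked example of the root-pool notes above, here is a minimal sketch of mirroring an existing SPARC root pool and installing the boot block afterwards. The pool name rpool and the disk names c1t0d0s0 (current disk) and c1t1d0s0 (new disk) are assumptions, and the new disk is assumed to already carry an SMI label; on x86 the installgrub command shown above would replace installboot.

# zpool attach rpool c1t0d0s0 c1t1d0s0    -- attach the second disk; resilvering starts
# zpool status rpool                      -- wait until resilvering has completed
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0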
root pool snapshots:
zfs create rpool/snapspace --> on any (remote) system, then share it
zfs snapshot -r rpool@0810
zfs send -Rv rpool@0810 > /net/host/pool/snap

restoration:
boot cdrom -s
mount -F nfs host:/rpool/snapspace /mnt
recreate the root pool: zpool create -f -o failmode=... rpool <disk>
restore: cat /mnt/rpool.0810 | zfs receive -Fdu rpool
set the boot fs: zpool set bootfs=rpool/ROOT/zfs1009BE rpool
installboot / installgrub, then reboot

roll back using a local snapshot:
boot -F failsafe
select the rpool option
zfs rollback rpool@0810   -- old snapshot
zfs rollback rpool/ROOT@0810
zfs rollback rpool/ROOT/zfs1009BE@0810
init 6

zfs rename old new
zfs set mountpoint=legacy -- manage through vfstab & mount/umount
vfstab entry: pool/fs  -  /mnt  zfs  -  yes  -
zfs mount / zfs mount -a / zfs unmount
zfs share -a / zfs unshare   or { legacy -- /etc/dfs/dfstab }
zfs set userquota@student=10g pool/fs   -- student = user name
zfs set groupquota@group=... pool/fs
zfs userspace / zfs groupspace pool/fs
quota -v
zfs set reservation=10g --> the space will be mandatorily available for the user

snapshots:
zfs snapshot pool/fs@date
zfs snapshot -r pool/fs@now
zfs destroy <snapshot>
zfs list -t snapshot
zfs list -o space

roll back:
zfs rollback pool/home/fs@abc

clone:
zfs snapshot pool/fs@today
zfs clone pool/fs@today pool/abc/clone

restore from a clone:
zfs snapshot pool/fs@today
zfs clone pool/fs@today pool/sbc/clone
zfs promote pool/sbc/clone

send/receive:
zfs send pool/fs-snap@today | zfs receive pool1/abc
zfs send pool/fs@today | ssh host2 zfs receive pool/abc
zfs send -i pool/fs@wed pool/fs@fri | zfs receive pool2/new   --> incremental
zfs send pool/fs@today | gzip > a.zip
zfs receive pool/fs@ys < /bkup/fs@all-I
zfs destroy pool/fs@week1 and the rest
restore: zfs recv -d -F pool/fs

add a dataset to a zone:
zonecfg> add dataset
zonecfg> set name=pool/abc
zonecfg> end
it will show inside the zone

adding a volume:
zonecfg> add device
zonecfg> set match=/dev/zvol/dsk/pool/vol
zonecfg> end
maintain the volume in a separate slice

zpool import -R /mnt poolname --> importing with an alternate root
fsck -- repair and validation
zpool scrub pool --> explicit checking
zpool status -v pool
zpool scrub -s pool --> stop the scrub

ISCSI:
# zfs create -V 500m mypool/test          -- size, poolname/filesystem
# iscsitadm list target
# zfs set shareiscsi=on mypool/test
# iscsitadm list target
Then install the iSCSI initiator on the other server and configure iSCSI there.

How to upgrade the zpool version on the server?
bash-3.00# zfs upgrade
This system is currently running ZFS filesystem version 3.
All filesystems are formatted with the current version.

If you want to configure ZFS on a lower version of Solaris 10 (06/06), you need to install the packages below.
ZFS package names: SUNWzfsr, SUNWzfsu

ZFS NFS:
# zpool create mypool c1t0d0 c1t1d0
# zfs create mypool/test
# zfs set sharenfs=on mypool/test
or
# sharemgr add-share -s mypool/test
# sharemgr show -vp zfs
zfs nfs=()
    zfs/mypool/test
        mypool/test
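To round off the NFS notes above, a short end-to-end sketch of sharing a filesystem and mounting it from a client. The pool mypool, the filesystem mypool/test, the server name host1 and the client name host2 are assumptions, and the filesystem is assumed to keep its default mountpoint /mypool/test.

# zpool create mypool c1t0d0 c1t1d0       -- two-disk striped pool, no redundancy
# zfs create mypool/test
# zfs set sharenfs=on mypool/test
# share                                   -- verify the filesystem is exported
host2# mount -F nfs host1:/mypool/test /mnt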