Creating new ZFS filesystems

Moderator: cah

cah
General of the Army / Fleet Admiral / General of the Air Force
Posts: 1342
Joined: Sun Aug 17, 2008 5:05 am

Creating new ZFS filesystems

Post by cah »

Code: Select all

mkdir /apps

zfs create rpool/apps
zfs set mountpoint=/apps rpool/apps
zfs set quota=10G rpool/apps

zfs get all rpool/apps
NOTICE: At least one pool must exist before any filesystems can be created. In this case, rpool was created during OS installation. The zpool command is used to create new pools from devices.
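For reference, a pool itself would be created with the zpool command; the pool names and disk devices below are only examples, not from this system:

Code: Select all

zpool create datapool c1t0d0
zpool create mirrorpool mirror c1t0d0 c1t1d0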
CAH, The Great

Change ZFS quota

Post by cah »

Increase the quota from 10 GB to 20 GB on the /export/home file system:

Code: Select all

zfs set quota=20G rpool/export/home
After the quota change, list all properties of the file system:

Code: Select all

zfs get all rpool/export/home
NAME               PROPERTY         VALUE                  SOURCE
rpool/export/home  type             filesystem             -
rpool/export/home  creation         Wed May  6 10:23 2009  -
rpool/export/home  used             5.27G                  -
rpool/export/home  available        14.7G                  -
rpool/export/home  referenced       5.27G                  -
rpool/export/home  compressratio    1.00x                  -
rpool/export/home  mounted          yes                    -
rpool/export/home  quota            20G                    local
rpool/export/home  reservation      none                   default
rpool/export/home  recordsize       128K                   default
rpool/export/home  mountpoint       /export/home           inherited from rpool/export
rpool/export/home  sharenfs         off                    default
rpool/export/home  checksum         on                     default
rpool/export/home  compression      off                    default
rpool/export/home  atime            on                     default
rpool/export/home  devices          on                     default
rpool/export/home  exec             on                     default
rpool/export/home  setuid           on                     default
rpool/export/home  readonly         off                    default
rpool/export/home  zoned            off                    default
rpool/export/home  snapdir          hidden                 default
rpool/export/home  aclmode          groupmask              default
rpool/export/home  aclinherit       restricted             default
rpool/export/home  canmount         on                     default
rpool/export/home  shareiscsi       off                    default
rpool/export/home  xattr            on                     default
rpool/export/home  copies           1                      default
rpool/export/home  version          3                      -
rpool/export/home  utf8only         off                    -
rpool/export/home  normalization    none                   -
rpool/export/home  casesensitivity  sensitive              -
rpool/export/home  vscan            off                    default
rpool/export/home  nbmand           off                    default
rpool/export/home  sharesmb         off                    default
rpool/export/home  refquota         none                   default
rpool/export/home  refreservation   none                   default
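Listing all properties is verbose. To check just the quota, zfs get accepts a single property name, and the -H -o value flags suppress the header and print only the value (handy in scripts):

Code: Select all

zfs get quota rpool/export/home
zfs get -H -o value quota rpool/export/home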

Creating new ZFS filesystems in non-global zones

Post by cah »

From the global zone (orazone01), create a mount point for the non-global zone oradev05:

Code: Select all

cd /zonepool/zones/oradev05/root
mkdir oradev05_vol
zfs create oracledatapool/oradev05_vol
zfs set mountpoint=/zonepool/zones/oradev05/root/oradev05_vol oracledatapool/oradev05_vol
zfs set quota=300G oracledatapool/oradev05_vol
zfs get all oracledatapool/oradev05_vol

NAME                         PROPERTY              VALUE                                       SOURCE
oracledatapool/oradev05_vol  type                  filesystem                                  -
oracledatapool/oradev05_vol  creation              Wed May  4 11:16 2011                       -
oracledatapool/oradev05_vol  used                  21K                                         -
oracledatapool/oradev05_vol  available             300G                                        -
oracledatapool/oradev05_vol  referenced            21K                                         -
oracledatapool/oradev05_vol  compressratio         1.00x                                       -
oracledatapool/oradev05_vol  mounted               yes                                         -
oracledatapool/oradev05_vol  quota                 300G                                        local
oracledatapool/oradev05_vol  reservation           none                                        default
oracledatapool/oradev05_vol  recordsize            128K                                        default
oracledatapool/oradev05_vol  mountpoint            /zonepool/zones/oradev05/root/oradev05_vol  local
oracledatapool/oradev05_vol  sharenfs              off                                         default
oracledatapool/oradev05_vol  checksum              on                                          default
oracledatapool/oradev05_vol  compression           off                                         default
oracledatapool/oradev05_vol  atime                 on                                          default
oracledatapool/oradev05_vol  devices               on                                          default
oracledatapool/oradev05_vol  exec                  on                                          default
oracledatapool/oradev05_vol  setuid                on                                          default
oracledatapool/oradev05_vol  readonly              off                                         default
oracledatapool/oradev05_vol  zoned                 off                                         default
oracledatapool/oradev05_vol  snapdir               hidden                                      default
oracledatapool/oradev05_vol  aclmode               groupmask                                   default
oracledatapool/oradev05_vol  aclinherit            restricted                                  default
oracledatapool/oradev05_vol  canmount              on                                          default
oracledatapool/oradev05_vol  shareiscsi            off                                         default
oracledatapool/oradev05_vol  xattr                 on                                          default
oracledatapool/oradev05_vol  copies                1                                           default
oracledatapool/oradev05_vol  version               4                                           -
oracledatapool/oradev05_vol  utf8only              off                                         -
oracledatapool/oradev05_vol  normalization         none                                        -
oracledatapool/oradev05_vol  casesensitivity       sensitive                                   -
oracledatapool/oradev05_vol  vscan                 off                                         default
oracledatapool/oradev05_vol  nbmand                off                                         default
oracledatapool/oradev05_vol  sharesmb              off                                         default
oracledatapool/oradev05_vol  refquota              none                                        default
oracledatapool/oradev05_vol  refreservation        none                                        default
oracledatapool/oradev05_vol  primarycache          all                                         default
oracledatapool/oradev05_vol  secondarycache        all                                         default
oracledatapool/oradev05_vol  usedbysnapshots       0                                           -
oracledatapool/oradev05_vol  usedbydataset         21K                                         -
oracledatapool/oradev05_vol  usedbychildren        0                                           -
oracledatapool/oradev05_vol  usedbyrefreservation  0                                           -
oracledatapool/oradev05_vol  logbias               latency                                     default
The quota has been configured on this newly created file system.
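As a side note, an alternative to setting the mountpoint under the zone root from the global zone is to delegate the dataset to the zone with zonecfg, after which the zone administrator manages it directly; the zone and dataset names below follow the example above:

Code: Select all

zonecfg -z oradev05
zonecfg:oradev05> add dataset
zonecfg:oradev05:dataset> set name=oracledatapool/oradev05_vol
zonecfg:oradev05:dataset> end
zonecfg:oradev05> commit
zonecfg:oradev05> exit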

I then logged onto oradev05 to see whether the file system needed to be mounted.
On the non-global zone (oradev05):

Code: Select all

oradev05:/export/home/hsiaoc1% df -h
Filesystem             size   used  avail capacity  Mounted on
/                      274G    38G   235G    14%    /
/dev                   274G    38G   235G    14%    /dev
/lib                   266G   6.5G   260G     3%    /lib
/opt/sfw               266G   6.5G   260G     3%    /opt/sfw
/platform              266G   6.5G   260G     3%    /platform
/sbin                  266G   6.5G   260G     3%    /sbin
/usr                   266G   6.5G   260G     3%    /usr
proc                     0K     0K     0K     0%    /proc
ctfs                     0K     0K     0K     0%    /system/contract
mnttab                   0K     0K     0K     0%    /etc/mnttab
objfs                    0K     0K     0K     0%    /system/object
swap                    53G   352K    53G     1%    /etc/svc/volatile
/platform/sun4v/lib/libc_psr/libc_psr_hwcap2.so.1
                       266G   6.5G   260G     3%    /platform/sun4v/lib/libc_psr.so.1
/platform/sun4v/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                       266G   6.5G   260G     3%    /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                    53G     0K    53G     0%    /tmp
swap                    53G    48K    53G     1%    /var/run
netapp01:/vol/orainstall/orainstall
                        50G    45G   4.9G    91%    /netapp01_orainstall
oracledatapool/oradev05_vol
                       300G    21K   300G     1%    /oradev05_vol
The new file system is already mounted, even though /etc/vfstab does NOT have an entry for it, so it must be mounted automatically.
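This is expected: ZFS mounts datasets automatically according to their mountpoint property, which is why no vfstab entry exists. If vfstab-style management were preferred, the dataset could be switched to legacy mounting (a sketch only; the mount point directory must already exist):

Code: Select all

zfs set mountpoint=legacy oracledatapool/oradev05_vol
mount -F zfs oracledatapool/oradev05_vol /oradev05_vol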

If an unmount is needed, run the following command in the global zone:

Code: Select all

zfs unmount oracledatapool/oradev05_vol
If the device is busy, the unmount can be forced:

Code: Select all

zfs unmount -f oracledatapool/oradev05_vol
The legacy umount command works the same way on ZFS mount points:

Code: Select all

umount oracledatapool/oradev05_vol
Conversely, the manual mount command is:

Code: Select all

zfs mount oracledatapool/oradev05_vol
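To mount every ZFS file system that has a non-legacy mountpoint in one step:

Code: Select all

zfs mount -a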