Introduction: 

In my previous article we saw how to create and configure Oracle Grid Infrastructure and ASM in a Solaris 11 non-global zone. In this article we will see how to configure and manage additional devices for the Oracle ASM instance. Global and non-global zones were also discussed in detail in my previous articles.

In current business environments data grows rapidly, and managing that growth requires sufficient storage. In this part of the article we will walk through adding additional devices to a non-global zone and creating ASM disk groups in 12c Grid Infrastructure within that zone.

I will be using the same environment as in my previous article. For a better understanding of this environment, please refer to that article: http://www.toadworld.com/platforms/oracle/w/wiki/11365.installing-and-configuring-oracle-12c-grid-infrasturucture-and-asm-in-solaris-11-non-global-zones

Demonstration:

In this demonstration we will see how to configure individual disk devices/LUNs or ZFS volumes in a non-global zone and how to create ASM disk groups on these configured devices.


1 - Let's connect to the existing ASM instance and check the configured devices:

grid12c@dbnode1:~$ export ORACLE_SID=+ASM
grid12c@dbnode1:~$ export ORACLE_HOME=/u01/grid12c/product/12.1.0/grid
grid12c@dbnode1:~$ export PATH=$ORACLE_HOME/bin:$PATH
grid12c@dbnode1:~$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 2 19:30:18 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option


SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
+ASM


SQL> select name, path from v$asm_disk;

NAME                               PATH
------------------------------ -------------------------
GRID_0000                      /dev/rdsk/c2t2d0s5
GRID_0001                      /dev/rdsk/c2t3d0s5

SQL>



The existing configured devices are "/dev/rdsk/c2t2d0s5" and "/dev/rdsk/c2t3d0s5". In this demonstration we will add the new devices "/dev/rdsk/c2t4d0s5" and "/dev/rdsk/c2t5d0s5" to the non-global zone.

Note: Before provisioning the new devices to the non-global zone, we should format the disks and configure slices of the required capacity.
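The slice layout can be confirmed from the global zone with prtvtoc before handing the device to the zone (a quick check only; the slices themselves are created interactively with the format utility, and the device shown below is just one of the disks used in this demonstration):

root@soltest:~# prtvtoc /dev/rdsk/c2t4d0s2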

2 - Add new devices to the non-global zone "dbnode1":

root@soltest:~# zonecfg -z dbnode1
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/rdsk/c2t4d0s5
zonecfg:dbnode1:device> end
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/dsk/c2t4d0s5
zonecfg:dbnode1:device> end
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/rdsk/c2t5d0s5
zonecfg:dbnode1:device> end
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/dsk/c2t5d0s5
zonecfg:dbnode1:device> end
zonecfg:dbnode1> commit
zonecfg:dbnode1> exit

Now the device matches are recorded in the zone configuration file (each match pattern as its own device resource), but the devices will not be visible inside the NGZ until the changed configuration is applied.
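Before applying, we can confirm what has been recorded in the zone configuration (output omitted; it lists one device resource per match pattern):

root@soltest:~# zonecfg -z dbnode1 info device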

3 - Check the list of existing devices in the NGZ:

root@dbnode1:/dev/rdsk# ls -l
total 0
crw-rw---- 1 grid12c dba 229, 197 Sep 2 19:06 c2t2d0s5
crw-rw---- 1 grid12c dba 229, 261 Sep 2 19:06 c2t3d0s5
root@dbnode1:/dev/rdsk#

4 - Apply the configuration changes to the NGZ dynamically (without a reboot):

root@soltest:~# zoneadm -z dbnode1 apply
zone 'dbnode1': Checking: Adding device match=/dev/rdsk/c2t4d0s5
zone 'dbnode1': Checking: Adding device match=/dev/rdsk/c2t5d0s5
zone 'dbnode1': Checking: Adding device match=/dev/dsk/c2t4d0s5
zone 'dbnode1': Checking: Adding device match=/dev/dsk/c2t5d0s5
zone 'dbnode1': Applying the changes
root@soltest:~#

After applying these changes the devices will be visible to NGZ.

5 - Check the devices in the non-global zone from the OS prompt:

root@dbnode1:/dev/rdsk# ls -l
total 0
crw-rw---- 1 grid12c dba 229, 197 Sep 2 19:06 c2t2d0s5
crw-rw---- 1 grid12c dba 229, 261 Sep 2 19:06 c2t3d0s5
crw-r----- 1 root sys 229, 325 Sep 2 23:32 c2t4d0s5
crw-r----- 1 root sys 229, 389 Sep 2 23:32 c2t5d0s5
root@dbnode1:/dev/rdsk#

To use these devices in an ASM disk group, their ownership must be changed to the owner and group of the Grid Infrastructure software. As we can see, the newly provisioned devices are currently owned by "root" with group "sys".

Instead of applying the changes dynamically with "zoneadm apply" as in step 4, the non-global zone could also be rebooted to make the newly added devices visible.
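If a reboot is preferred over the dynamic apply, the following would do it (note that rebooting the zone interrupts everything running inside it, including the ASM instance):

root@soltest:~# zoneadm -z dbnode1 reboot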

6 - Changing the ownership/permission of newly provisioned devices:

root@dbnode1:/dev/rdsk# chmod 660 c2t4d0s5 c2t5d0s5
root@dbnode1:/dev/rdsk# chown grid12c:dba c2t4d0s5 c2t5d0s5
root@dbnode1:/dev/rdsk# ls -l
total 0
crw-rw---- 1 grid12c dba 229, 197 Sep 2 19:06 c2t2d0s5
crw-rw---- 1 grid12c dba 229, 261 Sep 2 19:06 c2t3d0s5
crw-rw---- 1 grid12c dba 229, 325 Sep 2 23:33 c2t4d0s5
crw-rw---- 1 grid12c dba 229, 389 Sep 2 23:33 c2t5d0s5
root@dbnode1:/dev/rdsk#

7 - Check the devices from the ASM Instance:

SQL> select name, path from v$asm_disk;

NAME                             PATH
------------------------------ -------------------------
                                /dev/rdsk/c2t4d0s5
                                /dev/rdsk/c2t5d0s5
GRID_0000                       /dev/rdsk/c2t2d0s5
GRID_0001                       /dev/rdsk/c2t3d0s5

SQL>

The newly configured devices "c2t4d0s5" and "c2t5d0s5" do not belong to any disk group yet, hence their NAME column values are empty.

Now that these devices are configured properly and visible to the ASM instance, we are ready to create a new disk group on them, or add them to an existing disk group, depending on the storage requirements.

We can create the disk group from the command line or using "asmca". In this demonstration we will use asmca to create the new disk group "DATA".
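For reference, a minimal SQL*Plus sketch of the command-line alternative (the redundancy level is an assumption; use either the CREATE statement for a new group or the ALTER statement to grow an existing one, not both):

SQL> create diskgroup DATA external redundancy
  2  disk '/dev/rdsk/c2t4d0s5', '/dev/rdsk/c2t5d0s5';

SQL> alter diskgroup GRID add disk '/dev/rdsk/c2t4d0s5';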

8 - Use asmca to create the disk group on the newly provisioned devices:

If asmca is used, a GUI (X display) must be available inside the NGZ.
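A minimal sketch of launching asmca with a remote X display (the display address below is only a placeholder; SSH X forwarding or a VNC session into the zone works equally well):

grid12c@dbnode1:~$ export DISPLAY=<workstation-ip>:0.0
grid12c@dbnode1:~$ asmca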

- Create the ASM disk group as per the requirement. In this demonstration the new disk group "DATA" is created on the newly provisioned devices.

- Disk group "DATA" has been created successfully on the newly provisioned devices in the NGZ.

9 - Check the devices again from ASM Instance:

grid12c@dbnode1:~$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 2 23:42:18 2015

Copyright (c) 1982, 2013, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> column path format a25
SQL> select name, path from v$asm_disk;

NAME                               PATH
------------------------------ -------------------------
GRID_0000                       /dev/rdsk/c2t2d0s5
GRID_0001                       /dev/rdsk/c2t3d0s5
DATA_0000                       /dev/rdsk/c2t4d0s5
DATA_0001                       /dev/rdsk/c2t5d0s5

SQL>

After creation of the disk group, the devices are associated with their respective ASM disk names.

Using ZFS volumes in non-global zone for ASM Devices:

Before we begin with the demonstration, let's understand what ZFS volumes are.

A ZFS volume is a dataset that represents an individual block device. ZFS volume devices are located under "/dev/zvol/{dsk,rdsk}/<pool>".

The provisioning process is the same; the only difference is that we first create the ZFS volumes in one of the existing pools and then configure them inside the non-global zone.

1 - Check the status of the existing pools:

We must make sure that there are no data errors on the existing ZFS pools and there is enough space available in the target ZFS pool on which we are going to create ASM devices.

root@soltest:~# zpool status
  pool: dbzone
 state: ONLINE
  scan: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        dbzone    ONLINE       0     0     0
          c2t1d0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c2t0d0s1  ONLINE       0     0     0

errors: No known data errors
root@soltest:~# df -h /dbzone
Filesystem             Size   Used  Available  Capacity  Mounted on
dbzone                  49G    32K        43G        1%  /dbzone
root@soltest:~#

2 - Create ZFS volumes for ASM:

These volumes must be created in the global zone.

root@soltest:~# zfs create -V 2g dbzone/dbnode1/asmvol1
root@soltest:~# zfs create -V 2g dbzone/dbnode1/asmvol2
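The new volumes can be verified from the global zone before provisioning them to the zone (command only; it lists the volumes and their sizes):

root@soltest:~# zfs list -t volume -r dbzone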

3 - Provision ZFS volumes to NGZ:

root@soltest:~# zonecfg -z dbnode1
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/zvol/rdsk/dbzone/dbnode1/asmvol1
zonecfg:dbnode1:device> end
zonecfg:dbnode1> add device
zonecfg:dbnode1:device> set match=/dev/zvol/rdsk/dbzone/dbnode1/asmvol2
zonecfg:dbnode1:device> end
zonecfg:dbnode1> commit
zonecfg:dbnode1> exit

4 - Check the devices in the global and non-global zone before applying the configuration:

soladmin@soltest:/dev/zvol/dsk/dbzone/dbnode1$ pwd
/dev/zvol/dsk/dbzone/dbnode1
soladmin@soltest:/dev/zvol/dsk/dbzone/dbnode1$ ls -lrt
total 0
lrwxrwxrwx 1 root root 0 Sep 21 17:37 asmvol1 -> ../../../../..//devices/pseudo/zfs@0:3
drwxr-xr-x 2 root sys 0 Sep 21 17:37 rpool
lrwxrwxrwx 1 root root 0 Sep 21 17:37 asmvol2 -> ../../../../..//devices/pseudo/zfs@0:4
soladmin@soltest:/dev/zvol/dsk/dbzone/dbnode1$

grid12c@dbnode1:/dev/zvol/dsk$ ls -l
total 3
drwxr-xr-x 5 root sys 5 Sep 21 14:46 rpool
grid12c@dbnode1:/dev/zvol/dsk$

5 - Apply the new configuration dynamically to the NGZ:

root@soltest:~# zoneadm -z dbnode1 apply
zone 'dbnode1': Checking: Adding device match=/dev/zvol/rdsk/dbzone/dbnode1/asmvol1
zone 'dbnode1': Checking: Adding device match=/dev/zvol/rdsk/dbzone/dbnode1/asmvol2
zone 'dbnode1': Applying the changes
root@soltest:~#

6 -  Check the devices after dynamic reconfiguration in NGZ:

Once the dynamic reconfiguration is completed, the ZFS volume block devices will be visible inside the non-global zone:

grid12c@dbnode1:/dev/zvol/dsk/dbzone/dbnode1$ ls -lrt
total 0
brw------- 1 root sys 303, 4 Sep 21 14:49 asmvol2
brw------- 1 root sys 303, 3 Sep 21 14:49 asmvol1
grid12c@dbnode1:/dev/zvol/dsk/dbzone/dbnode1$ pwd
/dev/zvol/dsk/dbzone/dbnode1
grid12c@dbnode1:/dev/zvol/dsk/dbzone/dbnode1$

- Check ASM devices before changing the permissions:

SQL> column path format a25
SQL> select name, path from v$asm_disk;

NAME                               PATH
------------------------------ -------------------------
GRID_0000                        /dev/rdsk/c2t2d0s5
GRID_0001                        /dev/rdsk/c2t3d0s5
DATA_0000                        /dev/rdsk/c2t4d0s5
DATA_0001                        /dev/rdsk/c2t5d0s5

SQL>

 7 - Change the permissions of the ZFS volumes:

Similar to the physical devices we must change the ownership and permissions of ZFS volumes.

root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1# pwd
/dev/zvol/dsk/dbzone/dbnode1

root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1# chmod 660 /dev/zvol/rdsk/dbzone/dbnode1/asmvol*
root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1# chown grid12c:dba /dev/zvol/rdsk/dbzone/dbnode1/asmvol*
root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1#


root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1# ls -l
total 0
brw-rw---- 1 grid12c dba 303, 3 Sep 21 14:49 asmvol1
brw-rw---- 1 grid12c dba 303, 4 Sep 21 14:49 asmvol2
root@dbnode1:/dev/zvol/dsk/dbzone/dbnode1#

8 - Use the dd command to verify read/write access to the volumes:

ASM must be able to read from and write to the 'zvol' volumes (as the Grid Infrastructure user). You can use the 'dd' command to check whether this succeeds:

grid12c@dbnode1:~$ dd if=/dev/zvol/rdsk/dbzone/dbnode1/asmvol2 of=/dev/zvol/rdsk/dbzone/dbnode1/asmvol2 bs=4096 count=1
1+0 records in
1+0 records out
grid12c@dbnode1:~$ dd if=/dev/zvol/rdsk/dbzone/dbnode1/asmvol1 of=/dev/zvol/rdsk/dbzone/dbnode1/asmvol1 bs=4096 count=1
1+0 records in
1+0 records out
grid12c@dbnode1:~$

9 - Add the new disk discovery path to the ASM initialization parameter asm_diskstring:

It is not mandatory to modify this parameter manually; we can configure the discovery path through asmca, which will update the parameter in the initialization parameter file.


SQL> alter system set asm_diskstring='/dev/rdsk/*','/dev/zvol/dsk/dbzone/dbnode1/*' scope=both;

System altered.

SQL> show parameter asm_diskstring

NAME                                   TYPE                VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                         string     /dev/rdsk/*, /dev/zvol/dsk/dbzone/dbnode1/*
SQL>
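With the discovery string in place, a quick sanity check is to confirm that the new volumes are discovered as candidate disks (a sketch using standard V$ASM_DISK columns; output omitted):

SQL> select path, header_status from v$asm_disk where header_status = 'CANDIDATE';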

10 - Create the ASM disk group on the ZFS volumes:

We will use asmca to create the new ASM disk group on the ZFS volumes.

If the new discovery path was not configured earlier, it can be added at this stage in asmca.

The "DATA2" disk group is created successfully on the ZFS volumes.

ASM Alert log:

Mon Sep 21 15:44:03 2015
SUCCESS: CREATE DISKGROUP DATA2 NORMAL REDUNDANCY DISK '/dev/zvol/rdsk/dbzone/dbnode1/asmvol1' SIZE 2047M ,
'/dev/zvol/rdsk/dbzone/dbnode1/asmvol2' SIZE 2047M ATTRIBUTE 'compatible.asm'='12.1.0.0.0','au_size'='1M' /* ASMCA */
Mon Sep 21 15:44:03 2015
NOTE: diskgroup resource ora.DATA2.dg is online

11 - Check the devices from ASM Instance:

After the disk group is created check the list of ASM devices from ASM Instance.

SQL> column name format a12
SQL> column PATH format a40
SQL> select name, path from v$asm_disk;

NAME                   PATH
------------ ----------------------------------------
GRID_0000           /dev/rdsk/c2t2d0s5
GRID_0001           /dev/rdsk/c2t3d0s5
DATA_0000           /dev/rdsk/c2t4d0s5
DATA_0001           /dev/rdsk/c2t5d0s5
DATA2_0000          /dev/zvol/rdsk/dbzone/dbnode1/asmvol1
DATA2_0001          /dev/zvol/rdsk/dbzone/dbnode1/asmvol2

6 rows selected.

SQL>

12 - Create directories inside the newly created disk group for verification:

To make sure the disk group was created correctly, we will create a couple of directories in it using asmcmd.

ASMCMD> ls
DATA/
DATA2/
GRID/
ASMCMD> cd DATA2
ASMCMD> ls
ASMCMD> mkdir test1
ASMCMD> mkdir test2

ASMCMD> ls
test1/
test2/
ASMCMD>
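As an additional check, asmcmd can report the state, redundancy and free space of the mounted disk groups (command only; the figures will vary):

ASMCMD> lsdg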

Conclusion:

In this article we have seen how to configure additional devices in a Solaris 11 non-global zone for use by an ASM instance. Any number of physical devices/LUNs and ZFS volumes can be added to a non-global zone. The process of configuring these devices is simple, but the steps must be followed in sequence to avoid issues while provisioning the devices into the non-global zone.

We have also seen how to provision ZFS volumes for use by an ASM instance. It must be made clear, however, that Oracle ASM already includes its own database file system and volume manager, and Oracle does not support third-party volume managers underneath an ASM instance. Even though ZFS is now owned by Oracle, it is still not recommended for use under ASM: any other volume manager brings its own I/O layer on top of ASM's own, which can have disadvantages and may degrade I/O performance for the ASM disk groups.