Solaris is one of the most widely used operating environments for Oracle databases, and proper operating system configuration is a key factor in database and application performance. Even a database tuned for optimal performance will not perform as expected if the operating system it runs on is configured poorly. For good database performance, every layer matters: the operating system, the database, the network, and virtualization.
Starting with Sun Solaris 10, the ZFS file system was introduced; ZFS is a very capable file system with many features. In Solaris 10, using ZFS for the root file system was optional, but starting with Oracle Solaris 11, ZFS is the default root file system. ZFS performance is very good compared to the traditional UFS file system.
The Oracle Solaris ZFS file system uses an Adaptive Replacement Cache (the ZFS ARC), carved out of main memory, to speed up I/O operations. By default, ZFS may use all available free memory on the system to cache I/O performed against ZFS file systems. Oracle, however, allows us to limit this by setting a specific value, so that ZFS uses only the specified amount of system memory for its caching operations.
In this article we will explore how to configure the ZFS ARC appropriately for better system performance.
Oracle Solaris 11.2 - X86
Oracle Database 12c
ZFS file system
>> Check the memory utilization of the OS before starting an Oracle instance:
>> Check the current memory utilization with Database up and running
Here, note the values of "ZFS File Data" and Free (freelist). ZFS File Data is the ZFS cache data that is held in main memory, and "freelist" is the amount of free memory currently available on the system.
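These figures come from `echo ::memstat | mdb -k`, which reports them in pages rather than bytes. A minimal sketch of the page-to-MB arithmetic, assuming the 4 KB x86 page size (the page counts below are illustrative, not from the system in this article):

```shell
#!/bin/sh
# Sketch: convert ::memstat page counts to MB, assuming a 4 KB page size (x86).
# On SPARC the base page size is 8 KB; the `pagesize` command reports the real value.
PAGESIZE=4096
pages_to_mb() {
  # $1 = number of pages, as reported by ::memstat
  echo $(( $1 * PAGESIZE / 1024 / 1024 ))
}
# Illustrative page counts (placeholders, not this system's real values):
pages_to_mb 1048576   # 1048576 pages * 4 KB = 4096 MB
pages_to_mb 262144    # 262144 pages * 4 KB = 1024 MB
```

The same conversion applies to the "ZFS File Data" and "Freelist" rows alike.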
>> Now we will perform some I/O operations so that we can see the utilization of ZFS file data.
First I/O operation:
root@soltest:/u01/oradb/oracle/oradata# ls
TESTDB     TESTDB1
root@soltest:/u01/oradb/oracle/oradata# du -sh *
3K      TESTDB
3.7G    TESTDB1
root@soltest:/u01/oradb/oracle/oradata# cp -r TESTDB1 TESTDB1.BKP &
3346
root@soltest:/u01/oradb/oracle/oradata#
- We have copied 3.7 GB of data onto a ZFS file system.
>> Now check the ZFS memory utilization:
ZFS is now utilizing almost all of the system memory. At this stage some systems start facing problems: ZFS will not release memory pages for the database or application to use, which results in bad performance at the application/database layer.
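A quick way to see how much memory the ARC is using versus its ceiling is to inspect the `arcstats` counters. The sketch below converts the byte counters to MB; on a live Solaris host the here-document would be replaced by the real command `kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max` (the sample values here are taken from the kstat output shown later in this article):

```shell
#!/bin/sh
# Sketch: report the current ARC size and its ceiling (c_max) in MB.
# On a live system, feed arc_mb from:
#   kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
arc_mb() {
  awk '{ printf "%s %.0f MB\n", $1, $2 / (1024 * 1024) }'
}
# Sample counters (from this article's kstat dump):
cat <<'EOF' | arc_mb
zfs:0:arcstats:size 79207344
zfs:0:arcstats:c_max 7498948608
EOF
```

`size` is what the ARC currently holds; `c_max` is the most it is allowed to grow to.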
The graph below displays the memory utilization before and after the I/O operation on the ZFS file system:
- Free memory decreases continuously after the ZFS I/O operations start.
>> Shutdown the Database and check the memory utilization
SQL*Plus: Release 12.1.0.1.0 Production on Thu Mar 31 20:00:40 2016
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> show sga
Total System Global Area 1970864128 bytes
Fixed Size                  2362648 bytes
Variable Size             469762792 bytes
Database Buffers         1493172224 bytes
Redo Buffers                5566464 bytes
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
- Shutting down the instance freed the memory that was allocated to the SGA.
If we start the database now, it will usually start without issue. But on certain systems I have seen it fail to start due to insufficient memory, because ZFS does not free its cache memory.
>> Error during startup of database due to Insufficient free memory:
oracle@soltest:/oradb01/oracle/oradata$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 24 14:39:25 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount
ORA-27102: out of memory
SVR4 Error: 12: Not enough space
Additional information: 1671
Additional information: 16106127360
Additional information: 4815060992
SQL>
On this system the total memory is 32 GB, of which 26 GB is utilized by ZFS file data, and ZFS is not freeing that memory even though the database is requesting it.
If the database is up and any application process requests memory from the system, that process cannot obtain it because of this problem, which eventually results in bad application and database performance.
To overcome this issue, we would need to reboot the server to free up the memory.
>> Check the ZFS ARC statistics:
root@soltest:~# kstat -p -m zfs -n arcstats
zfs:0:arcstats:buf_size 856576
zfs:0:arcstats:c 79317672
zfs:0:arcstats:c_max 7498948608
zfs:0:arcstats:c_min 67108864
zfs:0:arcstats:class misc
zfs:0:arcstats:crtime 10.045481101
zfs:0:arcstats:data_size 70853312
zfs:0:arcstats:deleted 247266
zfs:0:arcstats:demand_data_hits 4891628
zfs:0:arcstats:demand_data_misses 18829
zfs:0:arcstats:demand_metadata_hits 1050892
zfs:0:arcstats:demand_metadata_misses 10239
zfs:0:arcstats:evict_l2_cached 0
zfs:0:arcstats:evict_l2_eligible 27127086592
zfs:0:arcstats:evict_l2_ineligible 2900732416
zfs:0:arcstats:evict_mfu 11182559232
zfs:0:arcstats:evict_mru 18845259776
zfs:0:arcstats:hash_chain_max 6
zfs:0:arcstats:hash_chains 5221
zfs:0:arcstats:hash_collisions 95890
zfs:0:arcstats:hash_elements 37288
zfs:0:arcstats:hash_elements_max 122845
zfs:0:arcstats:hits 6058515
zfs:0:arcstats:l2_abort_lowmem 0
zfs:0:arcstats:l2_cksum_bad 0
zfs:0:arcstats:l2_evict_lock_retry 0
zfs:0:arcstats:l2_evict_reading 0
zfs:0:arcstats:l2_feeds 0
zfs:0:arcstats:l2_hdr_size 0
zfs:0:arcstats:l2_hits 0
zfs:0:arcstats:l2_io_error 0
zfs:0:arcstats:l2_misses 29068
zfs:0:arcstats:l2_read_bytes 0
zfs:0:arcstats:l2_rw_clash 0
zfs:0:arcstats:l2_size 0
zfs:0:arcstats:l2_write_bytes 0
zfs:0:arcstats:l2_writes_done 0
zfs:0:arcstats:l2_writes_error 0
zfs:0:arcstats:l2_writes_hdr_miss 0
zfs:0:arcstats:l2_writes_sent 0
zfs:0:arcstats:memory_throttle_count 0
zfs:0:arcstats:meta_limit 0
zfs:0:arcstats:meta_max 48748056
zfs:0:arcstats:meta_used 8354032
zfs:0:arcstats:mfu_ghost_hits 19244
zfs:0:arcstats:mfu_hits 5468733
zfs:0:arcstats:misses 261357
zfs:0:arcstats:mru_ghost_hits 15595
zfs:0:arcstats:mru_hits 312947
zfs:0:arcstats:mutex_miss 851
zfs:0:arcstats:other_size 7497456
zfs:0:arcstats:p 35364864
zfs:0:arcstats:prefetch_data_hits 70836
zfs:0:arcstats:prefetch_data_misses 214018
zfs:0:arcstats:prefetch_metadata_hits 45159
zfs:0:arcstats:prefetch_metadata_misses 18271
zfs:0:arcstats:size 79207344
zfs:0:arcstats:snaptime 27590.580059777
root@soltest:~#
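Besides the size counters, the `hits` and `misses` counters tell us how effective the cache actually is. A small sketch, using the counter values from the kstat output above, of the usual hit-ratio arithmetic:

```shell
#!/bin/sh
# Sketch: compute the ARC hit ratio (percentage of lookups served from cache)
# from the hits/misses counters reported by kstat arcstats.
hit_ratio() {
  # $1 = hits, $2 = misses; prints the percentage with one decimal place
  awk -v h="$1" -v m="$2" 'BEGIN { printf "%.1f\n", 100 * h / (h + m) }'
}
# Values from the kstat output above: hits=6058515, misses=261357
hit_ratio 6058515 261357   # -> 95.9
```

A high hit ratio on a cache this large shows why ZFS is so reluctant to give the memory back.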
To overcome this issue, it is highly recommended to cap the amount of memory the ZFS ARC may use for caching operations.
To limit the ZFS ARC memory size, there are two options available:
1 - Configure "zfs_arc_max" value in /etc/system
2 - Use script "set_user_reserve.sh"
Option-1 is supported up to Oracle Solaris 11.1; starting from Oracle Solaris 11.2, Oracle recommends using the "set_user_reserve.sh" script. Conversely, option-2 cannot be used on pre-Solaris 11.2 operating environments.
- Option-1 requires a system reboot for the parameter to take effect.
- Option-2 does not require a reboot, but we need to update the "/etc/system" file for the setting to persist across reboots.
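In outline, the two approaches look like this (the 3.2 GB cap and the 20% reserve are example values, taken from later in this article):

```shell
# Option 1 (Solaris 11.1 and earlier): cap the ARC directly in /etc/system,
# then reboot. Example caps the ARC at 3.2 GB:
#
#   set zfs:zfs_arc_max = 3435973837
#
# Option 2 (Solaris 11.2 and later): reserve a percentage of memory for
# applications with the MOS script; takes effect immediately, no reboot:
#
#   ./set_user_reserve.sh -fp 20
#
# then add the matching line to /etc/system so the setting survives a reboot:
#
#   set user_reserve_hint_pct=20
```

Note the different direction of the two knobs: option-1 caps what ZFS may take, while option-2 reserves what applications must keep.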
The "set_user_reserve.sh" script can be downloaded from MOS tech note Doc ID 1663862.1.
root@soltest:~/scripts# ./set_user_reserve.sh -fp 20
Adjusting user_reserve_hint_pct from 0 to 20
Adjustment of user_reserve_hint_pct to 20 successful.
Make the setting persistent across reboot by adding to /etc/system
#
# Tuning based on MOS note 1663862.1, script version 1.0
# added Friday, April 1, 2016 03:00:55 AM AST by system administrator : <me>
set user_reserve_hint_pct=20
- A value of 20 reserves 20% of system memory for applications and the database. The good thing is that this value can be changed at any time as desired.
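To see what a given percentage means in absolute terms, the arithmetic is straightforward. A small sketch, using the 32 GB system from this article:

```shell
#!/bin/sh
# Sketch: how much memory user_reserve_hint_pct reserves for applications.
reserved_mb() {
  # $1 = total system memory in MB, $2 = user_reserve_hint_pct value
  echo $(( $1 * $2 / 100 ))
}
reserved_mb 32768 20   # 20% of 32 GB reserved for applications (in MB)
reserved_mb 32768 80   # 80% of 32 GB reserved for applications (in MB)
```

With pct=20, roughly 6.4 GB is guaranteed to applications and ZFS may cache into the rest; with pct=80, ZFS is squeezed down to at most ~20% of memory.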
>> Check the memory utilization:
>> Now change the user_reserve_hint_pct value to 80 and observe that the change takes effect dynamically:
root@soltest:~/scripts# ./set_user_reserve.sh -fp 80
Adjusting user_reserve_hint_pct from 20 to 80
Friday, April 1, 2016 01:12:57 PM AST : waiting for current value : 43 to grow to target : 45
Friday, April 1, 2016 01:13:03 PM AST : waiting for current value : 47 to grow to target : 50
Friday, April 1, 2016 01:13:16 PM AST : waiting for current value : 50 to grow to target : 55
Friday, April 1, 2016 01:13:35 PM AST : waiting for current value : 57 to grow to target : 60
Friday, April 1, 2016 01:13:52 PM AST : waiting for current value : 60 to grow to target : 65
Adjustment of user_reserve_hint_pct to 80 successful.
Make the setting persistent across reboot by adding to /etc/system
#
# Tuning based on MOS note 1663862.1, script version 1.0
# added Friday, April 1, 2016 01:14:12 PM AST by system administrator : <me>
set user_reserve_hint_pct=80
>> Now start an I/O operation and check the ZFS file data utilization:
root@soltest:/u01/oradb/oracle/oradata# cp -r TESTDB2 TESTDB2.BKP &
1828
root@soltest:/u01/oradb/oracle/oradata# jobs
+  Running                 cp -r TESTDB2 TESTDB2.BKP &
root@soltest:/u01/oradb/oracle/oradata#
Even though we are running a copy job for a file of around 4 GB, ZFS file data utilization stays at about 480 MB. Before this parameter was set, ZFS was utilizing all available memory.
On pre-11.2 Oracle Solaris releases, the only option to limit the ZFS ARC size is to set the "zfs_arc_max" parameter to the required value and reboot the server for the parameter to take effect.
oracle@soltest:/oradb01/oracle/oradata$ cat /etc/system
####ZFS ARCH 3.2 GB######
set zfs:zfs_arc_max = 3435973837
oracle@soltest:/oradb01/oracle/oradata$
This parameter limits the ARC size to at most 3.2 GB. On Solaris 11.2 and Solaris 11.3, configuring this parameter in the "/etc/system" file still works, but the setting is deprecated, so it is recommended to use option-2 as listed above.
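The byte value in the "/etc/system" example above is just the target cap in GB multiplied out. A small sketch of the conversion:

```shell
#!/bin/sh
# Sketch: derive a zfs_arc_max value in bytes from a target cap in GB.
# 3.2 GB * 1024^3 yields the 3435973837 used in the /etc/system example above.
gb_to_bytes() {
  awk -v gb="$1" 'BEGIN { printf "%.0f\n", gb * 1024 * 1024 * 1024 }'
}
gb_to_bytes 3.2   # -> 3435973837
```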
This parameter tells the system how much server memory ZFS should use for caching, so it is highly recommended to set it appropriately before deploying any application or database on an Oracle Solaris operating environment. On Solaris 11.2, no reboot is required to reserve memory away from the ZFS ARC. By setting this parameter, administrators can meet future application and database memory requirements.
This was something new for me, and I hope it will be a nice resource for DBAs who manage databases on Solaris and want to understand these features. :)