ZFS Memory Tuning for Oracle Databases & Application on Oracle Solaris 11

Oracle Community



Solaris is one of the most widely used operating environments for Oracle Databases. Proper operating system configuration is a key factor in database and application performance. Even a database tuned for optimal performance will underperform if the operating system it is deployed on is misconfigured. Good database performance therefore depends on every layer: operating system, database, network, and virtualization.

The ZFS file system was introduced in Sun Solaris 10; it is a very capable file system with many features. In Solaris 10, using ZFS for the "root" file system was optional, but starting with Oracle Solaris 11, ZFS is the default "root" file system. ZFS performance is very good compared to the traditional UFS file system.

The Oracle Solaris ZFS file system uses an Adaptive Replacement Cache (the ZFS ARC), carved out of main memory, to speed up I/O operations. By default, ZFS may use all available free memory on the system for caching I/O against ZFS file systems. Oracle, however, lets us cap this behavior so that ZFS uses only a specified amount of system memory for its caching operations.

In this article we will explore how to configure the ZFS ARC appropriately for better system performance.

Environment Details:

Oracle Solaris 11.2 - X86

Oracle Database

ZFS file system


>> Check the memory utilization of the OS before starting an Oracle instance:

>> Check the current memory utilization with the database up and running:

Note the values of "ZFS File Data" and "Free (freelist)" here. "ZFS File Data" is the ZFS cache data held in main memory, and "freelist" is the amount of free memory currently available on the system.
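On Solaris, the per-category memory breakdown referred to above comes from the `::memstat` dcmd of the kernel debugger (`echo ::memstat | mdb -k`), which reports each category in pages. As a sketch of how those page counts map to megabytes (the page count below is a made-up illustration, not a reading from this system):

```shell
# Solaris-only command that produces the "ZFS File Data" / "Free (freelist)"
# breakdown (shown here as a comment):
#   echo ::memstat | mdb -k
# ::memstat reports pages; with a 4 KB page size (Solaris x86, see pagesize(1)),
# pages convert to MB like this:
pages=1048576        # hypothetical "ZFS File Data" page count, for illustration
pagesize=4096        # bytes per page on Solaris x86
echo "ZFS File Data: $(( pages * pagesize / 1024 / 1024 )) MB"   # 4096 MB
```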

>> Now we will perform some I/O operations so that we can observe the growth of ZFS file data.

First I/O operation:

root@soltest:/u01/oradb/oracle/oradata# ls
root@soltest:/u01/oradb/oracle/oradata# du -sh *
root@soltest:/u01/oradb/oracle/oradata# cp -r TESTDB1 TESTDB1.BKP &
[1] 3346

- We have copied 3.7 GB of data on a ZFS file system.

>> Now check the ZFS memory utilization:

All of the memory is now consumed by the ZFS file system. At this stage some systems start facing problems: ZFS does not release memory pages for the database or application to use, which results in poor performance at the application/database layer.

The graph below displays the memory utilization before and after the I/O operation on the ZFS file system:


- The free memory graph decreases continuously after the ZFS I/O operations start.

>> Shut down the database and check the memory utilization:

SQL*Plus: Release Production on Thu Mar 31 20:00:40 2016

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> show sga

Total System Global Area 1970864128 bytes
Fixed Size 2362648 bytes
Variable Size 469762792 bytes
Database Buffers 1493172224 bytes
Redo Buffers 5566464 bytes
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

- The memory that was allocated to the SGA has now been freed.

If we start the database here, it will start without issue. But on some systems I have seen the instance fail to start due to insufficient memory, because ZFS is not freeing its cache.

>> Error during database startup due to insufficient free memory:

oracle@soltest:/oradb01/oracle/oradata$ sqlplus / as sysdba

SQL*Plus: Release Production on Sun Jan 24 14:39:25 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORA-27102: out of memory
SVR4 Error: 12: Not enough space
Additional information: 1671
Additional information: 16106127360
Additional information: 4815060992

On this system the total memory is 32 GB, of which 26 GB is consumed by ZFS file data, and ZFS is not releasing that memory when the database requests it.
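A common DBA reading of the "Additional information" lines in this ORA-27102 output (hedged, as the exact meaning varies by platform and error site) is that the larger value is the shared-memory size the instance requested and the smaller one is what the OS could actually supply, both in bytes. Converting them with shell arithmetic:

```shell
# Hedged interpretation of the ORA-27102 "Additional information" values:
# the requested shared-memory size vs. what was available, in bytes.
requested=16106127360
available=4815060992
echo "requested: $(( requested / 1024 / 1024 / 1024 )) GiB"   # 15 GiB
echo "available: $(( available / 1024 / 1024 / 1024 )) GiB"   # 4 GiB (truncated, ~4.5 GiB)
```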

Likewise, if the database is up and an application process requests memory from the system, the request cannot be satisfied because of this problem, eventually resulting in poor application and database performance.

Without further tuning, the only way to overcome this situation is to reboot the server to free up the memory.

>> Check the ZFS ARC statistics:

root@soltest:~# kstat -p -m zfs -n arcstats
zfs:0:arcstats:buf_size 856576
zfs:0:arcstats:c 79317672
zfs:0:arcstats:c_max 7498948608
zfs:0:arcstats:c_min 67108864
zfs:0:arcstats:class misc
zfs:0:arcstats:crtime 10.045481101
zfs:0:arcstats:data_size 70853312
zfs:0:arcstats:deleted 247266
zfs:0:arcstats:demand_data_hits 4891628
zfs:0:arcstats:demand_data_misses 18829
zfs:0:arcstats:demand_metadata_hits 1050892
zfs:0:arcstats:demand_metadata_misses 10239
zfs:0:arcstats:evict_l2_cached 0
zfs:0:arcstats:evict_l2_eligible 27127086592
zfs:0:arcstats:evict_l2_ineligible 2900732416
zfs:0:arcstats:evict_mfu 11182559232
zfs:0:arcstats:evict_mru 18845259776
zfs:0:arcstats:hash_chain_max 6
zfs:0:arcstats:hash_chains 5221
zfs:0:arcstats:hash_collisions 95890
zfs:0:arcstats:hash_elements 37288
zfs:0:arcstats:hash_elements_max 122845
zfs:0:arcstats:hits 6058515
zfs:0:arcstats:l2_abort_lowmem 0
zfs:0:arcstats:l2_cksum_bad 0
zfs:0:arcstats:l2_evict_lock_retry 0
zfs:0:arcstats:l2_evict_reading 0
zfs:0:arcstats:l2_feeds 0
zfs:0:arcstats:l2_hdr_size 0
zfs:0:arcstats:l2_hits 0
zfs:0:arcstats:l2_io_error 0
zfs:0:arcstats:l2_misses 29068
zfs:0:arcstats:l2_read_bytes 0
zfs:0:arcstats:l2_rw_clash 0
zfs:0:arcstats:l2_size 0
zfs:0:arcstats:l2_write_bytes 0
zfs:0:arcstats:l2_writes_done 0
zfs:0:arcstats:l2_writes_error 0
zfs:0:arcstats:l2_writes_hdr_miss 0
zfs:0:arcstats:l2_writes_sent 0
zfs:0:arcstats:memory_throttle_count 0
zfs:0:arcstats:meta_limit 0
zfs:0:arcstats:meta_max 48748056
zfs:0:arcstats:meta_used 8354032
zfs:0:arcstats:mfu_ghost_hits 19244
zfs:0:arcstats:mfu_hits 5468733
zfs:0:arcstats:misses 261357
zfs:0:arcstats:mru_ghost_hits 15595
zfs:0:arcstats:mru_hits 312947
zfs:0:arcstats:mutex_miss 851
zfs:0:arcstats:other_size 7497456
zfs:0:arcstats:p 35364864
zfs:0:arcstats:prefetch_data_hits 70836
zfs:0:arcstats:prefetch_data_misses 214018
zfs:0:arcstats:prefetch_metadata_hits 45159
zfs:0:arcstats:prefetch_metadata_misses 18271
zfs:0:arcstats:size 79207344
zfs:0:arcstats:snaptime 27590.580059777
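The `hits` and `misses` counters in this arcstats output are enough to compute the ARC hit ratio, a quick indicator of how much the cache is actually helping. A minimal sketch using the values shown above (on a live system, each counter can be read individually with `kstat -p zfs:0:arcstats:hits` and `kstat -p zfs:0:arcstats:misses`):

```shell
# ARC hit ratio from the arcstats counters shown above:
hits=6058515
misses=261357
echo "ARC hit ratio: $(( hits * 100 / (hits + misses) ))%"   # 95%
```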

To avoid this situation, it is highly recommended to cap the ZFS ARC at a specific amount of memory that it may use for caching operations.

Memory configuration for ZFS ARC:

To limit the ZFS ARC memory size there are two options available:

1 - Configure "zfs_arc_max" value in /etc/system

2 - Use script "set_user_reserve.sh" 

Option 1 is supported up to Oracle Solaris 11.1; from Oracle Solaris 11.2 onward, Oracle recommends using the "set_user_reserve.sh" script. Option 2, however, cannot be used on operating environments older than Solaris 11.2.

- Option 1 needs a reboot of the system for the parameter to take effect.

- Option 2 does not need a reboot, but we must add the setting to "/etc/system" for it to persist across reboots.

The "set_user_reserve.sh" script can be downloaded from MOS tech note Doc ID 1663862.1.

root@soltest:~/scripts# ./set_user_reserve.sh -fp 20
Adjusting user_reserve_hint_pct from 0 to 20
Adjustment of user_reserve_hint_pct to 20 successful.
Make the setting persistent across reboot by adding to /etc/system

# Tuning based on MOS note 1663862.1, script version 1.0
# added Friday, April 1, 2016 03:00:55 AM AST by system administrator : <me>
set user_reserve_hint_pct=20


- A value of 20 reserves 20% of system memory for applications and the database; the remainder stays available to the ZFS ARC. Conveniently, this value can be changed at any time as desired.
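As a quick sketch of what a given user_reserve_hint_pct value means in absolute terms on the 32 GB system used in this article (the arithmetic is portable; the percentage semantics are as described above):

```shell
# How much memory user_reserve_hint_pct=20 asks the kernel to keep
# for applications on a 32 GB system; the rest remains ARC-eligible.
total_mb=32768     # 32 GB system from this article
pct=20
echo "reserved for apps: $(( total_mb * pct / 100 )) MB"   # 6553 MB (~6.4 GB)
```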

>> Check the memory utilization:

>> Now change user_reserve_hint_pct to 80; we can see the change is applied dynamically:

root@soltest:~/scripts# ./set_user_reserve.sh -fp 80
Adjusting user_reserve_hint_pct from 20 to 80
Friday, April 1, 2016 01:12:57 PM AST : waiting for current value : 43 to grow to target : 45
Friday, April 1, 2016 01:13:03 PM AST : waiting for current value : 47 to grow to target : 50
Friday, April 1, 2016 01:13:16 PM AST : waiting for current value : 50 to grow to target : 55
Friday, April 1, 2016 01:13:35 PM AST : waiting for current value : 57 to grow to target : 60
Friday, April 1, 2016 01:13:52 PM AST : waiting for current value : 60 to grow to target : 65
Adjustment of user_reserve_hint_pct to 80 successful.
Make the setting persistent across reboot by adding to /etc/system

# Tuning based on MOS note 1663862.1, script version 1.0
# added Friday, April 1, 2016 01:14:12 PM AST by system administrator : <me>
set user_reserve_hint_pct=80

>> Now start an I/O operation and check the ZFS file data utilization:

root@soltest:/u01/oradb/oracle/oradata# cp -r TESTDB2 TESTDB2.BKP &
[1] 1828
root@soltest:/u01/oradb/oracle/oradata# jobs
[1]+ Running cp -r TESTDB2 TESTDB2.BKP &

Although we are copying around 4 GB of files, ZFS file data utilization stays at about 480 MB. Before this parameter was set, ZFS was consuming all available memory.
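The ~480 MB figure can be confirmed while the copy runs by polling the ARC size counter. The Solaris-only polling command is shown as a comment; the byte-to-MB conversion below uses an illustrative reading chosen to match the value observed above, not an actual capture:

```shell
# Poll the ARC size during the copy (Solaris-only, shown as a comment):
#   while true; do kstat -p zfs:0:arcstats:size; sleep 5; done
size=503316480       # illustrative arcstats:size reading, in bytes
echo "ARC size: $(( size / 1024 / 1024 )) MB"   # 480 MB
```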

On releases before Oracle Solaris 11.2, the only option for limiting the ZFS ARC size is to set the "zfs_arc_max" parameter to the required value and reboot the server for it to take effect:

cat /etc/system
#### ZFS ARC 3.2 GB ####
set zfs:zfs_arc_max = 3435973837

This parameter limits the ARC size to 3.2 GB. On Solaris 11.2 and Solaris 11.3 this setting in "/etc/system" still works, but it is deprecated, so option 2 above is the recommended approach.
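As a sanity check, zfs_arc_max is specified in bytes, so the value from the /etc/system entry above should convert back to roughly 3.2 GB:

```shell
# zfs_arc_max is in bytes; convert the /etc/system value back to MB.
arc_max=3435973837
echo "$(( arc_max / 1024 / 1024 )) MB"   # 3276 MB, i.e. ~3.2 GB
```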


This parameter tells the system how much server memory ZFS may use for caching, so it is highly recommended to set it appropriately before deploying any application or database on an Oracle Solaris operating environment. On Solaris 11.2, no reboot is required to limit the ZFS ARC. With this parameter set, administrators can meet future application and database memory requirements.


