When SYSAUX started to fill up AGAIN during my leave of absence, I’d kind of had it… I was already toying with the idea of creating some procedure to offload all the stats into another database, and it seemed Oracle had had the same thought…
Talk about “great minds…”.. Nah, let’s not go down THAT road..
I’m not going to preach about the great advantages of AWR Warehousing, but still:
- Long-term stats will help predict growth.
- They show performance over the same period a year back!
- It keeps SYSAUX lean in the source databases.
Anyway..My setup is:
OEM: 12CR5 including agents.
OEM Repository: 12c.
Sources: 11.2.0.4 (single instance and RAC).
An Oracle 12c database is able to accommodate the AWR Warehouse out of the box; however, I didn’t want to mess up the repository (besides the licensing issue), and wanted a dedicated database specifically for the warehousing.
This database is version 11.2.0.4, and should be able to fulfill this purpose after applying a patch.
So, in short:
- Apply the patch to the to-be AWR repository database:
- 11.2.0.4.0 (Linux x86-64) with required Patch 18547891.
- Configure the AWR Warehouse database.
- Set up the sources to use this database for AWR stats.
Download the patch 18547891 from Oracle Support.
Don’t let the description “LOADING A DUMP FILE INTO AWR FAILS WITH [END_TIME IS GREATER THAN SYSDATE]” fool you, nor the file size (569.5 KB).
Place this file in a directory of choice (/oracle/patch), and unzip it..
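For reference, the unzip step might look like this; the zip file name below is illustrative (it follows Oracle’s p&lt;patch&gt;_&lt;release&gt;_&lt;platform&gt;.zip convention, so take the exact name of the file you actually downloaded):

```shell
cd /oracle/patch
# example file name only -- use the name of the zip you downloaded
unzip p18547891_112040_Linux-x86-64.zip
cd 18547891
```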
Adjust the environment settings to be able to execute the patch:
$ export PATH=$PATH:/oracle/188.8.131.52/base/db/dbhome_1/OPatch
$ opatch lsinventory
Oracle Interim Patch Installer version 184.108.40.206.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /oracle/220.127.116.11/base/db/dbhome_1
Central Inventory : /oracle/oraInventory
   from           : /oracle/18.104.22.168/base/db/dbhome_1/oraInst.loc
OPatch version    : 22.214.171.124.4
OUI version       : 126.96.36.199.0
Log file location : /oracle/188.8.131.52/base/db/dbhome_1/cfgtoollogs/opatch/opatch2015-09-18_11-11-41AM_1.log

Lsinventory Output file location : /oracle/184.108.40.206/base/db/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-09-18_11-11-41AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                    220.127.116.11.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

Rac system comprising of multiple nodes
  Local node  = node01
  Remote node = node02
--------------------------------------------------------------------------------
OPatch succeeded.
$
Check whether any previously applied patches conflict with this specific patch:
$ cd /oracle/patch/18547891
$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 18.104.22.168.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /oracle/22.214.171.124/base/db/dbhome_1
Central Inventory : /oracle/oraInventory
   from           : /oracle/126.96.36.199/base/db/dbhome_1/oraInst.loc
OPatch version    : 188.8.131.52.4
OUI version       : 184.108.40.206.0
Log file location : /oracle/220.127.116.11/base/db/dbhome_1/cfgtoollogs/opatch/opatch2015-09-18_11-13-56AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
$
Ok, so far, so good.
Now…the sh*tty part…the system has to be down to apply this patch..
Ensure that you shut down all the services running from the Oracle home.
– For a Non-RAC environment, shut down all the services running from the Oracle home.
– For a RAC environment, shut down all the services (database, ASM, listeners,
nodeapps, and CRS daemons) running from the Oracle home of the node you want
to patch. After you patch this node, start the services on this node.
Repeat this process for each of the other nodes of the Oracle RAC system.
OPatch is used on only one node at a time.
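Following the README instructions above, the per-node shutdown on a clusterware-managed system might look roughly like this; the database, instance, and node names are placeholders for your own:

```shell
# stop the database instance running from this home on the node to patch
srvctl stop instance -d MYDB -i MYDB1
# stop the listener on this node
srvctl stop listener -n node01
# stop the remaining CRS daemons on this node (run as root)
crsctl stop crs
```

After patching a node, start everything again in reverse order before moving on to the next node.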
When everything is down, apply the patch:
$ cd /oracle/patch/18547891
$ opatch apply
Oracle Interim Patch Installer version 18.104.22.168.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /oracle/22.214.171.124/base/db/dbhome_1
Central Inventory : /oracle/oraInventory
   from           : /oracle/126.96.36.199/base/db/dbhome_1/oraInst.loc
OPatch version    : 188.8.131.52.4
OUI version       : 184.108.40.206.0
Log file location : /oracle/220.127.116.11/base/db/dbhome_1/cfgtoollogs/opatch/18547891_Sep_18_2015_11_54_24/apply2015-09-18_11-54-24AM_1.log

Applying interim patch '18547891' to OH '/oracle/18.104.22.168/base/db/dbhome_1'
Verifying environment and performing prerequisite checks...
All checks passed.

This node is part of an Oracle Real Application Cluster.
Remote nodes: 'NODE02'
Local node: 'NODE01'
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/oracle/22.214.171.124/base/db/dbhome_1')

Is the local system ready for patching? [y|n]
Y
Backing up files...

Patching component oracle.rdbms, 126.96.36.199.0...
Patching component oracle.rdbms.dbscripts, 188.8.131.52.0...

Verifying the update...
The local system has been patched. You can restart Oracle instances on it.

Patching in rolling mode.
The node 'NODE02' will be patched next.
Please shutdown Oracle instances running out of this ORACLE_HOME on 'NODE02'.
(Oracle Home = '/oracle/184.108.40.206/base/db/dbhome_1')

Is the node ready for patching? [y|n]
Y
User Responded with: Y
Updating nodes 'NODE02'
Apply-related files are:
  FP = "/oracle/220.127.116.11/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_files.txt"
  DP = "/oracle/18.104.22.168/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_dirs.txt"
  MP = "/oracle/22.214.171.124/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/make_cmds.txt"
  RC = "/oracle/126.96.36.199/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/remote_cmds.txt"

Instantiating the file "/oracle/188.8.131.52/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_files.txt.instantiated" by replacing $ORACLE_HOME in "/oracle/184.108.40.206/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/oracle/220.127.116.11/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_dirs.txt.instantiated" by replacing $ORACLE_HOME in "/oracle/18.104.22.168/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
Instantiating the file "/oracle/22.214.171.124/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/make_cmds.txt.instantiated" by replacing $ORACLE_HOME in "/oracle/126.96.36.199/base/db/dbhome_1/.patch_storage/18547891_Jul_28_2014_11_53_46/rac/make_cmds.txt" with actual path.
Running command on remote node 'NODE02':
  cd /oracle/188.8.131.52/base/db/dbhome_1/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle/184.108.40.206/base/db/dbhome_1 || echo REMOTE_MAKE_FAILED::>&2
The node 'NODE02' has been patched. You can restart Oracle instances on it.

There were relinks on remote nodes. Remember to check the binary size and timestamp on the nodes 'NODE02'.
The following make commands were invoked on remote nodes:
  'cd /oracle/220.127.116.11/base/db/dbhome_1/rdbms/lib; /usr/bin/make -f ins_rdbms.mk ioracle ORACLE_HOME=/oracle/18.104.22.168/base/db/dbhome_1'

Patch 18547891 successfully applied.
Log file location: /oracle/22.214.171.124/base/db/dbhome_1/cfgtoollogs/opatch/18547891_Sep_18_2015_11_54_24/apply2015-09-18_11-54-24AM_1.log

OPatch succeeded.
Nice! Patch applied. Now to the configuration part.
First we need to set up the AWR Warehouse database, then add the sources..
Fire up a browser and click your way to a source database you want to ship AWR logs from..
Note: the screenshot shows “pending”; this is due to testing. In a fresh situation it will say “not configured”. Also, a job is shown running the configuration; this too will not appear in a fresh state.
The following pop-up may appear as a warning:
When clicking on the job to monitor success:
When done, return to the AWR Warehouse setup.
Press ADD again.
Apparently the preferred credential of SYSTEM (read: the normal db access credential) is being used, and in our case this SYSTEM user has no access to “sys.dbms_swrf_internal”.
$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 126.96.36.199.0 Production on Mon Sep 21 13:34:22 2015
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 188.8.131.52.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> grant execute on sys.dbms_swrf_internal to system;

Grant succeeded.

SQL>
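To double-check that the grant actually landed, a quick (illustrative) query against the data dictionary:

```sql
-- verify SYSTEM now has EXECUTE on the internal AWR package
select grantee, privilege
from   dba_tab_privs
where  table_name = 'DBMS_SWRF_INTERNAL'
  and  grantee    = 'SYSTEM';
```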
Ok, good. Now press add again…
Now a job will be scheduled in the background to upload all the AWR data, throttled when there is a lot of it, so as not to impact performance too much.
Before we can do more, we need to add access to these reports.
Press the “privileges” button..
As a check in the ACTIVE JOBS (oem scheduler) a job should appear:
Select the line containing the database instance, press Actions, and
In most of the online resources everything went flawlessly… But either they have a better setup (read: a sandbox/test environment) or they don’t tell everything..
At my site, however, I ran into some real-world issues, so let’s continue with some troubleshooting..
One database ran its job, but no “recent” upload occurred.. In the picture this is the database on the second row..
On the 8th row we see another issue: the database shows 0 days of snapshots, whereas the count is 16.. It is not always clear which databases are in trouble, but judging from the “newest” column, the last database has issues too.. In short: four issues need to be solved.
1st issue: no upload on the second database
Zooming in on the job, we see this:
SQL> BEGIN dbsnmp.mgmt_caw_extract.run_job_now; END;

ERROR at line 1:
ORA-20137: NO NEW SNAPSHOTS TO EXTRACT
ORA-06512: at "DBSNMP.MGMT_CAW_EXTRACT", line 528
ORA-06512: at line 1
First check: job_queue_processes should be > 0
SQL> show parameter job_queue_processes

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     1000
SQL>
ok, we’re good there..
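For completeness: had it been 0, it could have been raised on the fly. The value below is just the 11.2 default, not a recommendation:

```sql
-- assumes an spfile; use scope=memory on a pfile-based instance
alter system set job_queue_processes = 1000 scope=both;
```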
Next check: the directory CAW_EXTRACT should exist and point to a directory in the 12c agent home.
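The directory object itself can be verified with a simple dictionary query (sketch):

```sql
-- the path should point into the 12c agent home
select directory_name, directory_path
from   dba_directories
where  directory_name = 'CAW_EXTRACT';
```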
SQL> select * from dbsnmp.caw_extract_properties;

PROPERTY_NAME             PROPERTY_VALUE
------------------------- ----------------------------------------
run_etl_now               FALSE
extract_schedule          FREQ=HOURLY;INTERVAL=1
flag                      0
target_name               XXXXXX
dump_dir_2                /oracle/oem12cR5Agent/agent_inst
dump_dir_1                /oracle/oem12cR5Agent/agent_inst
max_retry                 3
max_transfer_pending_mb   2048
max_snaps                 500
em_id                     2
target_type               rac_database
upload_interval           1
Looks good to me!
Next: what does the error message in the source db say?
SQL> select start_time, end_time, filename, err_msg from dbsnmp.caw_extract_metadata;

START_TIME  23-SEP-15 09.51.03.185384000 AM
END_TIME    23-SEP-15 09.51.06.204428000 AM
FILENAME    2_2057270E9D84370AE053341C030A8F0F_4120286578_25167_255.dmp
ERR_MSG     ORA-04031: unable to allocate 56 bytes of shared memory
            ("streams pool","unknown object","streams pool","fixed allocation callback")
Ah, this is an issue unrelated to AWR, but still something is wrong in the db. I had already run into this with other queries: the shared memory (streams pool) was depleted. I had already solved this by bouncing the db (quick fix).
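Before resorting to a bounce, the state of the streams pool can be eyeballed like this (sketch; on an ASMM/AMM-managed instance the pool is resized dynamically):

```sql
-- current size of the streams pool in the SGA
select pool, round(sum(bytes) / 1024 / 1024) as mb
from   v$sgastat
where  pool = 'streams pool'
group  by pool;
```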
So, I determined this was the cause of the issue, and just ran the upload again.
SQL> exec dbsnmp.mgmt_caw_extract.run_extract();

PL/SQL procedure successfully completed.

SQL>
The above procedure ran for a while, since a lot needed to be uploaded, I guess.. After completion of this procedure, no apparent results were visible in the GUI, so I ran the “give me the error” query again:
SQL> select end_time, filename, err_msg from dbsnmp.caw_extract_metadata;

END_TIME    23-SEP-15 11.09.12.986240000 AM
FILENAME    2_206736E41DB81CCCE053341C030A68F3_4120286578_25168_235.dmp
ERR_MSG
No errors. Good.
Next, I run the whole shebang again, by selecting “upload snapshots now” from this AWR Warehouse view.
One of the jobs in this multi-task process seemingly failed with this error:
SQL> BEGIN dbsnmp.mgmt_caw_extract.run_job_now; END;

ERROR at line 1:
ORA-20137: NO NEW SNAPSHOTS TO EXTRACT
ORA-06512: at "DBSNMP.MGMT_CAW_EXTRACT", line 528
ORA-06512: at line 1
However, I had checked (when you weren’t looking) the location on the source database where the files should be placed before transport, and the dmp file was there.. The AWR data was already extracted, so the error is basically correct.. I ran the extract manually as shown above. In this situation the tasks continued (albeit with some more errors regarding transport etc.) to the job “loadAWR”.
The log file of this job states (after a while):
conn_url: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=XXX)(PORT=1521)))(CONNECT_DATA=(SID=XXXX))), user: sys, role: sysdba
Connected to database
Loaded the snapshots in the AWR Warehouse
Disconnected from database
And the results showed up in the GUI after some refreshing..
Next issue: the database with 0 days of snapshots..
- Check for error: no errors
- Run extract: dbsnmp.mgmt_caw_extract.run_extract();
- Re-run “upload snapshots”
Uhm.. and the problem was solved… (Don’t know why exactly, though).