Oracle :: AWR Data Extract

First, a small note of thanks!

Before starting this post, I would like to convey my heartfelt thanks to Brian Diehl for sharing such a useful piece of information on the Oracle blog.

Introduction

The Oracle RDBMS has an excellent repository of performance data that is collected automatically. It’s called Automatic Workload Repository, or AWR for short.

  • The AWR is a snapshot-based collector of database information.
  • This includes not only the information from V$SYSSTAT, but also snapshots of other configuration information like database parameter values.
  • This information is invaluable in diagnosing issues at the database-level.
  • AWR information is typically exchanged via the AWR reports in HTML format (created with the script: awrrpt.sql).

While this can be useful, it would be better, and more flexible, to have the raw data used to create the HTML report.

This is possible using the AWR extract and import utility. The scripts are available in the same rdbms/admin directory under the Oracle Home.

The extraction script is awrextr.sql.

Run this script as a DBA user. First, you will see a list of available databases in the AWR repository. Unless you have imported previous AWR data from other databases, there should be only one DB ID listed.
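Behind the scenes, awrextr.sql reads the same dictionary views you can query yourself. As a quick sanity check before running the extract, here is a sketch using the standard DBA_HIST views (run as a DBA user):

```sql
-- List the database IDs known to the AWR repository
SELECT DISTINCT dbid, db_name
  FROM dba_hist_database_instance;

-- List the snapshots available for export
SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id;
```

If you see more than one DBID in the first query, AWR data from another database has been imported at some point.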

How does this work? 

Similar to the HTML report, the script will prompt for the number of days of snapshots to display. Choose a number of days that includes the data you want to export. A sample run is shown below:

Enter value for num_days: 1

Listing the last day's Completed Snapshots

DB Name        Snap Id    Snap Started
------------ --------- ------------------
ORCL               402 28 May 2013 09:06
                   403 28 May 2013 10:00
                   404 28 May 2013 11:00
                   405 28 May 2013 12:00
                   406 28 May 2013 13:00
                   407 28 May 2013 14:00
                   408 28 May 2013 15:00
                   409 28 May 2013 16:00
                   410 28 May 2013 17:00
                   411 28 May 2013 18:00

Enter the start and end snapshot dates when prompted.

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 403
Begin Snapshot Id specified: 403

Enter value for end_snap: 411
End   Snapshot Id specified: 411

Next, the script displays a list of available directories. The extraction process ultimately creates a dump file using Oracle Data Pump, and the resulting file will be written to the directory you choose here. There is no default; I usually choose DATA_PUMP_DIR, since it exists by default in most instances.
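If you are unsure which directories exist, or want a dedicated one for AWR dumps, the standard dictionary view and DDL can help. A small sketch, assuming a DBA account (the OS path below is an example and must already exist on the server):

```sql
-- Show the directory objects the script will list
SELECT directory_name, directory_path
  FROM dba_directories;

-- Optionally create a dedicated directory for AWR dumps
CREATE DIRECTORY awr_dump_dir AS '/u01/app/oracle/awr_dumps';
```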

Choose a Directory Name from the above list (case-sensitive).

Enter value for directory_name: DATA_PUMP_DIR

Using the dump directory: DATA_PUMP_DIR

Accept the default file name (format awrdat_<start_snap>_<end_snap>).

What happens next is important to understand.

  • A temporary schema is created and data from the core AWR tables is moved to copies in this schema.
  • This schema is exported using datapump.
  • The script will finish and give the location of the new dump file.
 
Using the dump file prefix: awrdat_403_411
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  The AWR extract dump file will be located
|  in the following directory/file:
|   /u01/app/oracle/admin/orcl/dpdump/
|   awrdat_403_411.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|  *** AWR Extract Started ...
|
|  This operation will take a few moments. The
|  progress of the AWR extract operation can be
|  monitored in the following directory/file:
|   /u01/app/oracle/admin/orcl/dpdump/
|   awrdat_403_411.log
|

End of AWR Extract

The resulting dump file can be imported into another Oracle database using awrload.sql.
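For scripting, the interactive prompts can be bypassed: awrextr.sql is essentially a wrapper around an internal package call. A hedged sketch follows; the DBMS_SWRF_INTERNAL.AWR_EXTRACT call and its parameter names are taken from what the script itself issues in recent releases, so check your version's copy of awrextr.sql before relying on it:

```sql
BEGIN
  dbms_swrf_internal.awr_extract(
    dmpfile => 'awrdat_403_411',  -- dump file name, without the .dmp suffix
    dmpdir  => 'DATA_PUMP_DIR',   -- directory object to write to
    bid     => 403,               -- begin snapshot id
    eid     => 411);              -- end snapshot id
END;
/
```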

More to come …

I’ll cover awrload.sql in another post.

Also, I will soon fire off another post on some of the interesting new features of Oracle 12c, released this week. Stay tuned!

Let me know what you think …

Posted in Computers, database, experience, Oracle, Oracle DBA, Oracle Performance Tuning | Tagged , , , , , | Leave a comment

Oracle CBO :: Different Predicates, Same Execution Plan

There are some rare and unfortunate situations, encountered on a few unlucky days of my career, whose investigation leads me to new theories and discoveries. The internet has become so information-rich these days that a single Google search opens up various solutions and possibilities for a database administrator. Such was my situation when, thoroughly frustrated, I found some PeopleSoft application queries using different predicates with the same execution plan, and it became necessary for me to find where these plans were out of order, or did not match each other.

One tool I use very frequently in my day-to-day SQL tuning work is the beautifully crafted SQLT tool, developed by Carlos Sierra, available for free on Metalink to any DBA with a valid registration. The MOS document ID for downloading SQLT, and for more information on it, is 215187.1. Below are two very useful scripts developed and shared by Carlos himself for tackling situations like the one above.

First, a couple of situations (given below) in which we can see such issues in the databases we support. There are surely more of them, which the folks at Oracle Support would be able to advise on.

  1. You have a SQL statement with multiple child cursors. They could all share the same Plan Hash Value (PHV), or there may be more than one. Performance-wise, even the same PHV can exhibit differences for multiple reasons, one of which is having the same PHV but not exactly the same filter or access predicates. If you have two or more predicates on the same plan operation, the difference could be the order of those predicates, and if their execution costs differ, one version could perform better than the other.
  2. You have two or more versions of a core query, where some of these versions have slightly different Predicates (typical case where the SQL is dynamically assembled by an application). This family of similar queries would have different SQL_IDs but they may or may not produce the same PHV. When they do, it is hard to find in which Plan Operations the different Predicates are used.

Now, here are the scripts (alongside expected output) that show the differences in execution plans of such queries. The first takes a SQL_ID as input and outputs a list of those predicates, for each PHV, where there is more than one version for any plan operation ID. The second takes a PHV as input and performs the same analysis, regardless of whether the PHV is associated with one or many SQL_IDs. Both are RAC-aware. Both are stand-alone and install nothing.

Script # 1 : Input is the SQL_ID

SPO difpred1.txt;

SET HEA ON LIN 80 NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;
-- shows which plan lines have different predicates for given sql_id
WITH d AS (
SELECT sql_id,
 plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE sql_id = '&&sql_id.'
 GROUP BY
 sql_id,
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.plan_hash_value,
 v.id,
 'access' type,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.plan_hash_value,
 v.id,
 'filter' type,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.sql_id = d.sql_id
 AND v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 3, 6, 4, 5;
SET HEA ON LIN 80 NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Sample Output of Script # 1:

old   8: WHERE sql_id = '&&sql_id.'
new   8: WHERE sql_id = 'cy9pxhp4y6u78'

PLAN_HASH_VALUE         ID TYPE      INST_ID CHILD_NUMBER
--------------- ---------- ------ ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
     3724264953          2 filter          1            0
("C2"='WHATEVER' AND TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a')

     3724264953          2 filter          1            1
(TRANSLATE(TO_CHAR(SIN("N4")),'1','a')='a' AND "C2"='WHATEVER')

That’s great! Now let’s look at the second script.

Script # 2 : Input is the Plan Hash Value

SPO difpred2.txt;

SET HEA ON LIN 80 NEWP 1 PAGES 100 LIN 300 TRIMS ON TI OFF TIMI OFF;
-- shows which plan lines have different predicates for given phv
WITH d AS (
SELECT plan_hash_value,
 id,
 COUNT(DISTINCT access_predicates) distinct_access_predicates,
 COUNT(DISTINCT filter_predicates) distinct_filter_predicates
 FROM gv$sql_plan_statistics_all
 WHERE plan_hash_value = &&plan_hash_value.
 GROUP BY
 plan_hash_value,
 id
HAVING MIN(NVL(access_predicates, 'X')) != MAX(NVL(access_predicates, 'X'))
 OR MIN(NVL(filter_predicates, 'X')) != MAX(NVL(filter_predicates, 'X'))
)
SELECT v.id,
 'access' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.access_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_access_predicates > 1
 UNION ALL
SELECT v.id,
 'filter' type,
 v.sql_id,
 v.inst_id,
 v.child_number,
 v.filter_predicates predicates
 FROM d,
 gv$sql_plan_statistics_all v
 WHERE v.plan_hash_value = d.plan_hash_value
 AND v.id = d.id
 AND d.distinct_filter_predicates > 1
 ORDER BY
 1, 2, 6, 3, 4, 5;
SET HEA ON LIN 80 NEWP 1 PAGES 14 LIN 80 TRIMS OFF TI OFF TIMI OFF;
SPO OFF;

Sample Output of Script # 2:

Enter value for plan_hash_value: 2339135578

old   7: WHERE plan_hash_value = &&plan_hash_value.
new   7: WHERE plan_hash_value = 2339135578

        ID TYPE   SQL_ID           INST_ID CHILD_NUMBER
---------- ------ ------------- ---------- ------------
PREDICATES
--------------------------------------------------------------------------------
         5 filter 72jhgzs8nb1p4          1            0
MAX("E2"."HIREDATE")=:B1

         5 filter 8a4x4867rajuv          1            0
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

         5 filter 8a4x4867rajuv          1            1
(MAX("E2"."HIREDATE")=:B1 OR :B2 IS NULL)

I certainly hope this helps if you ever face such a situation in your career. I plan to investigate the various causes of these occurrences further, and to dig out some typical packaged applications where they show up more often than elsewhere.

Important note: Please go through Carlos’ blog, which is full of such insights, more useful scripts, and plenty of information on the SQLT tool and performance tuning of Oracle databases in general.

Unreliable Oracle Metalink post !!

Usually, the knowledge documents I’ve encountered on Oracle Metalink are quite able to provide the right solution, in time, for Oracle database (and other product) issues. But here’s one exception: an unreliable Metalink article that makes the database inconsistent!

Recently I came across a great article on the Pythian blog by Christo Kutrovsky, about a Metalink document that shows how to retrieve deleted datafiles on UNIX/Linux, and about the issues with the steps given in that document, which eventually put the database in an inconsistent state! Below is his post, re-blogged, along with a test case he was kind enough to share.

With not much alteration from the original post, here is the complete investigation.

The note ID is: 444749.1 “Retrieve deleted files on Unix / Linux using File Descriptors”.

 “This document is being delivered to you via Oracle Support’s Rapid Visibility (RaV) process, and therefore has not been subject to an independent technical review.”

The procedure outlined in the note describes how to recover the deleted file and put it in the same location as the deleted file. The problem is that it doesn’t include offlining/onlining the file, so the database ends up with two distinct copies of the file:

The deleted file is opened by dbwriter and whatever other user processes (connections) that had to read from the file.

The new, recovered file is open by all new sessions, or existing sessions that did not need to read from the file.

The problem with this state is that all write operations go through the dbwriter, and thus end up in the deleted file. They also end up in the Oracle buffer cache, and are readable from all other sessions as long as they remain there. However, once they age out of the buffer cache, “new” sessions read from the recovered file and therefore do not see these changes.
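The behaviour underlying all of this is basic UNIX unlink semantics: removing a file’s name does not destroy its data while some process still holds an open descriptor on it. A minimal sketch you can run anywhere, with no Oracle involved (the temp file name is arbitrary):

```shell
# Create a file, open a descriptor on it, then remove the name --
# exactly what happened to lost.dbf while dbwriter had it open.
echo "hello" > /tmp/demo_unlink.txt
exec 3< /tmp/demo_unlink.txt   # hold an open descriptor (like dbwriter)
rm /tmp/demo_unlink.txt        # the name is gone...
cat <&3                        # ...but the data is still readable via fd 3
exec 3<&-                      # closing the last descriptor frees the data
```

This is why Session A (below) keeps reading the table after the `rm`, while a brand-new session cannot.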

Here’s the test case by Christo Kutrovsky with inline explanations:

/ra5a/orabkp/test/TEST/datafile> sqlplus test/test

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Nov 8 12:45:40 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL> select file_name from dba_data_files where tablespace_name='LOST';

FILE_NAME
---------------------------------------------------
/ra5a/orabkp/test/TEST/datafile/lost.dbf

That was the setup. We know which file to delete.

This is a freshly-started database. We will need multiple sessions to best show the effect. We will call the current session “SESSION A”.

Running this query opens the file lost.dbf in the current session, and loads the buffer cache:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL> !rm /ra5a/orabkp/test/TEST/datafile/lost.dbf

We can still query the table as it is in the Oracle buffer cache:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

We can still query the table after a buffer cache flush, as the current session has the file handle open (just as dbwriter does).

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL>

Now let’s start another session and see what happens. We will call this one “SESSION B”.

/app/oracle> sqlplus test/test

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Nov 8 12:50:17 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production

The file is currently removed, but we can still query the table from the Oracle buffer cache:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

However, flushing the cache, we can no longer query the table, as this session cannot find the file we deleted earlier:

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from lost_table;
select count(*) from lost_table
*
ERROR at line 1:
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/ra5a/orabkp/test/TEST/datafile/lost.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL>

Just for fun, we’re back in “SESSION A”, which still has the file handle open. We can read it and load the buffer cache:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL>

And now we can read it again from SESSION B (which cannot open the file):

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

Now we are going to implement the fix as described in Metalink note 444749.1:

/app/oracle> cat /proc/23937/fd/23 > /ra5a/orabkp/test/TEST/datafile/lost.dbf

/app/oracle> sqlplus test/test

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Nov 8 12:54:10 2007

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.3.0 - 64bit Production

Connecting to a brand new session, we can query the table (still from buffer cache):

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

At this point we should be okay, right? The table is readable from both new and old sessions, even after flushing the cache.

Wrong. The database is in an inconsistent state.

Here’s a test.

Back in SESSION A (with the open file)

SQL> insert into lost_table select * from lost_table where rownum <= 1;

1 row created.

SQL> commit;

Commit complete.

We inserted a row, now we count it and see it:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50071

In “SESSION B”, we can also see it (through the buffer cache).

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50071

However if we flush the cache:

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

…we see the old data, unchanged. That is because the dbwriter process wrote to the file that we had deleted. When we “restored” it, we restored a new “entity” of that file, i.e., we copied its current contents to a new file. The old file is still there, however, and that is where the changes went.

To further show how wrong this is: even though this session still has the correct file open, if we query the table we will read from the buffer cache, which was populated by the session that had the incorrect file open:

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50070

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from lost_table;

  COUNT(*)
----------
     50071

Flushing the cache reads the “correct” data again.

Even if you insert the data in “SESSION B” (the one with the “new” file), it will still write to the old file, as the dbwriter process is writing all changes.

These are very simple cases to illustrate the mess this will create. Now consider transactions, undo data for non-existing data, and partial results from new and old file. I won’t even mention direct path reads and writes.

The critical missing step is offlining the file, recovering it, and onlining it again. Offlining the file makes dbwriter (and all other processes) release their handles on it, so the deleted copy finally ceases to exist. Recovering the file then applies all the changes that were missed.
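In SQL terms, the missing steps look roughly like the following sketch. The datafile path is the one from this test case; on a real system, substitute your own path and make sure the archived logs needed for recovery are available (RECOVER as written here is the SQL*Plus command):

```sql
-- Release every process's handle on the deleted copy
ALTER DATABASE DATAFILE '/ra5a/orabkp/test/TEST/datafile/lost.dbf' OFFLINE;

-- Apply the redo generated while writes were going to the deleted copy
RECOVER DATAFILE '/ra5a/orabkp/test/TEST/datafile/lost.dbf';

-- Bring the now-consistent file back online
ALTER DATABASE DATAFILE '/ra5a/orabkp/test/TEST/datafile/lost.dbf' ONLINE;
```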

You can follow Christo’s other useful entries at: http://www.pythian.com/blog/author/kutrovsky


A cool piece of PL/SQL – Spelling out a number in Oracle !!

Unlike my usual posts, this one concerns an interesting beauty of code I learnt today while searching Google for a “quick” solution to an interesting Oracle issue at my workplace. It originated from a reader of asktom.oracle.com from India, and is definitely one of the coolest pieces of code I’ve encountered recently. Snippets like these keep me engaged in IT and make me wonder about the still-unearthed possibilities of software programming.

Starting from a simple requirement, Thomas Kyte, the Tom behind the AskTom website (which runs on Oracle APEX), delivers a quick and simple solution for spelling out a number, no matter how big it is! Yes, you heard correctly: he has actually spelled out a number. I curse myself for being a latecomer to a gem of a post like this one, but as they say, better late than never! You can follow Tom at tkyte.blogspot.com, as well as on the website above.

Here’s the code:

create or replace
function spell_number( p_number in number )
return varchar2
as
    type myArray is table of varchar2(255);
    l_str    myArray := myArray( '',
                           ' thousand ', ' million ',
                           ' billion ', ' trillion ',
                           ' quadrillion ', ' quintillion ',
                           ' sextillion ', ' septillion ',
                           ' octillion ', ' nonillion ',
                           ' decillion ', ' undecillion ',
                           ' duodecillion ' );
    l_num    varchar2(50) default trunc( p_number );
    l_return varchar2(4000);
begin
    for i in 1 .. l_str.count
    loop
        exit when l_num is null;
        -- spell the last three digits, unless they are all zero
        if ( substr(l_num, length(l_num)-2, 3) <> 0 )
        then
            l_return := to_char(
                            to_date(
                                substr(l_num, length(l_num)-2, 3),
                                'J' ),
                            'Jsp' ) || l_str(i) || l_return;
        end if;
        l_num := substr( l_num, 1, length(l_num)-3 );
    end loop;
    return l_return;
end;
/

Once run at the SQL*Plus prompt, it creates a function called “spell_number” which takes a number as input, however long it might be, and spells it out as output.
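The heart of the trick is a pair of date-format models: 'J' interprets a number as a Julian day, and 'Jsp' spells that Julian day number out in words. A minimal illustration:

```sql
-- 'J' converts 873 to a Julian date; 'Jsp' spells the day number back out
SELECT TO_CHAR( TO_DATE( 873, 'J' ), 'Jsp' ) AS spelled
  FROM dual;
-- SPELLED
-- Eight Hundred Seventy-Three
```

The function above simply applies this three digits at a time, appending the appropriate scale word (thousand, million, …) for each group.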

Here’s a small test run, and its results:


select
spell_number( 12345678901234567890123456789012345678 )
from dual;

SPELL_NUMBER(1234567890123456789012345678901234567
--------------------------------------------------
Twelve undecillion Three Hundred Forty-Five decill
ion Six Hundred Seventy-Eight nonillion Nine Hundr
ed One octillion Two Hundred Thirty-Four septillio
n Five Hundred Sixty-Seven sextillion Eight Hundre
d Ninety quintillion One Hundred Twenty-Three quad
rillion Four Hundred Fifty-Six trillion Seven Hund
red Eighty-Nine billion Twelve million Three Hundr
ed Forty-Five thousand Six Hundred Seventy-Eight

Phew! 
Everyone should be reminded, yet again, that developers rule! 

Note: The script above and its output are taken as-is from the AskTom website; the direct URL is:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1407603857650

I hope developers have enjoyed it. 

P.S.: There might be further scope for improvement, as per your requirements. Please feel free to leave your comments. 
