INSERT and Error Logging

I came across this Oracle feature recently while working on a script to insert and merge data from various tables into one, and I was looking for an effective way to record the errors raised by my INSERT statements. I had already implemented this with a separate procedure, which I could call at any time to log errors, as I needed them, into a separate table. However, the approach below looks much neater than the usual ones.

First, we need to create a table to hold the errors. This error log table is created with a name of its own, plus the name of the table whose errors it should capture. So I need to create my target table first, the one into which I will insert data.

Note: For obvious reasons, I am using a plain example, and not the code which I was working on.

Here’s the code used for creating the target table:

CREATE TABLE raises (emp_id NUMBER, sal NUMBER CONSTRAINT check_sal CHECK(sal > 8000));

Now I can create the table which would log the errors, using the DBMS_ERRLOG package:

EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('RAISES', 'ERRLOG');

As per the above statement, errors raised during INSERT statements on the 'raises' table can be recorded into the 'errlog' table, whenever the statement carries a LOG ERRORS clause. Below is the test run to see if it works:

Note: I am logged on to the ‘SCOTT’ account on my database.

SQL> INSERT INTO raises
  2  SELECT empno, sal*1.1 FROM emp
  3  WHERE comm > .2
  4  LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;

0 rows created.

0 rows were created because every row returned by the SELECT violated the CHECK constraint defined in the CREATE TABLE statement; instead of failing, the statement logged each rejected row. Let us now check the error log table.

SQL> SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;

ORA-02290: check constraint (SCOTT.CHECK_SAL) violated

ORA-02290: check constraint (SCOTT.CHECK_SAL) violated

ORA-02290: check constraint (SCOTT.CHECK_SAL) violated

As we see above, the rows from the EMP table for which errors were raised have been captured in the log table, along with the error message for each.

As per Oracle 10.2 documentation:

The DBMS_ERRLOG package provides a procedure that enables you to create an error logging table so that DML operations can continue after encountering errors rather than abort and roll back. This enables you to save time and system resources.

More details on the above procedure can be found here.

I have yet to try this with UPDATE statements, but as the documentation says, it should work with any DML statement.
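As a hedged sketch of what such an UPDATE could look like against the same tables (the tag 'upd_bad' is my own illustrative label, not from the test run above):

```sql
-- Hypothetical UPDATE reusing the same error logging clause; rows that
-- violate CHECK (sal > 8000) would be logged instead of aborting the statement.
UPDATE raises
   SET sal = sal * 0.5
 LOG ERRORS INTO errlog ('upd_bad') REJECT LIMIT UNLIMITED;
```

Rejected rows should then show up in errlog with ORA_ERR_TAG$ = 'upd_bad'.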


Posted in database, Oracle, Oracle DBA, programming | Leave a comment

IMP with RESUMABLE option

There are cases when we would need to use the RESUMABLE option with IMPORT.

Example: I faced one such case recently where my datafiles were not in AUTOEXTEND ON mode, deliberately, so as to give the database admins more control over space usage. This, however, creates many issues, especially with long-running queries using more and more TEMP tablespace, and with large imports of data into the schemas. In my case it was the latter: I was importing 1.5 GB of data into one table in the database.

The purpose of this post is to highlight the parameter in Oracle database which helps us control and make the most effective use of the RESUMABLE parameter in IMP.

The command I used in my case was something like: IMP file=*** log=*** tables=*** RESUMABLE=y

Question: How much time would the IMP process wait if I get any space related issues?

Answer: This is determined by the parameter “resumable_timeout” in Oracle. By default it is set to 0.

SQL> show parameter resumable

NAME                                 TYPE        VALUE
———————————— ———– ——————————
resumable_timeout                    integer     0

So, for my case, and since I am using an SPFILE for this database, I set it to 1 hour, i.e., 3600 seconds, using the steps below:

SQL> alter system set resumable_timeout = 3600 scope=both;

System altered.

SQL> show parameter resumable

NAME                                 TYPE        VALUE
———————————— ———– ——————————
resumable_timeout                    integer     3600

This tells Oracle, and in turn the IMP executable, to give me one hour to sort out any space-related issues. If the IMP process hits a space problem in a datafile, I get up to one hour to add more datafiles, resize the datafile where the table resides, and so on, before the operation aborts.
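While a session is suspended, it shows up in the DBA_RESUMABLE view; a minimal sketch of checking on it and fixing the space problem (the datafile path and size are illustrative, not from my actual system):

```sql
-- See which sessions are suspended and why
SELECT session_id, status, name, error_msg
  FROM dba_resumable
 WHERE status = 'SUSPENDED';

-- Fix the space problem within the resumable_timeout window;
-- path and size here are just examples
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf' RESIZE 4G;
```

Once the space is added, the suspended IMP resumes on its own.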

Hope this helps.

Posted in database, Oracle, Oracle DBA | Leave a comment

How to Find High Water Mark in Oracle

Here’s an interesting find on how to locate the high water mark (HWM) in Oracle. It worked fine on my laptop.

Use dbms_space.unused_space to find unused space, whose arguments are:
   segment_owner              IN  VARCHAR2, 
   segment_name               IN  VARCHAR2,
   segment_type               IN  VARCHAR2,
   total_blocks               OUT NUMBER,
   total_bytes                OUT NUMBER,
   unused_blocks              OUT NUMBER,
   unused_bytes               OUT NUMBER,
   last_used_extent_file_id   OUT NUMBER,
   last_used_extent_block_id  OUT NUMBER,
   last_used_block            OUT NUMBER, 
   partition_name             IN  VARCHAR2 DEFAULT NULL);

var tb number
var tby number
var ub number
var uby number
var l1 number
var l2 number
var l3 number
SQL> exec dbms_space.unused_space('R_STAGE1','EB_GL_TRANSACTIONS','TABLE',:tb,:tby,:ub,:uby,:l1,:l2,:l3)

PL/SQL procedure successfully completed.

SQL> print

(The bind variable values are printed here; in this run, both ub, the unused blocks, and uby, the unused bytes, came back as 0.)

So in this case, the HWM is in the last block of the segment.
To find where the HWM is located physically,
SQL> select header_file, header_block from dba_segments where segment_name = 'EB_GL_TRANSACTIONS';

HEADER_FILE HEADER_BLOCK
----------- ------------
         26         3205
SQL> alter system dump datafile 26 block 3205;

System altered.
In the dump file, we see
  Extent Control Header
  Extent Header:: spare1: 0      spare2: 0      #extents: 2690   #blocks: 1721598
                  last map  0x0ac0b405  #maps: 1      offset: 8224
      Highwater::  0x0981fe05  ext#: 2689   blk#: 640    ext size: 640
  #blocks in seg. hdr's freelists: 1406670
  #blocks below: 1721598
  mapblk  0x0ac0b405  offset: 1672
     Map Header:: next  0x0ac0b405  #extents: 1017 obj#: 38223  flag: 0x40000000
  Extent Map
   0x06800c86  length: 639
   0x08c00005  length: 640
   0x09000005  length: 640
   0x09400005  length: 640
   0x0a40b405  length: 640
   0x0a80b405  length: 640

  nfl = 1, nfb = 1 typ = 1 nxf = 0 ccnt = 344324
  SEG LST:: flg: USED   lhd: 0x0bc1016a ltl: 0x0981fe04
End dump data blocks tsn: 28 file#: 26 minblk 3205 maxblk 3205
The address of the HWM is 0x0981fe05, or 159514117 in decimal. Find the location:
SQL> select dbms_utility.data_block_address_file(159514117) file#, dbms_utility.data_block_address_block(159514117) block# from dual;

       FILE#       BLOCK#
------------ ------------
          38       130565
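The same decoding can be done by hand: in a data block address, the high-order bits are the relative file number and the low 22 bits are the block number. A quick sketch, cross-checking DBMS_UTILITY:

```sql
-- 159514117 = 0x0981FE05; file# = rdba >> 22, block# = low 22 bits
SELECT TRUNC(159514117 / POWER(2, 22)) AS file#,
       MOD(159514117, POWER(2, 22))    AS block#
  FROM dual;
-- FILE# = 38, BLOCK# = 130565, matching the DBMS_UTILITY result above
```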
If data is populated up to the highest block (no block at the top that had data completely deleted), we can confirm it this way:
SQL> select * from dba_extents where file_id = 38 and segment_name = 'EB_GL_TRANSACTIONS' order by extent_id;

OWNER     SEGMENT_NAME        SEGMENT_TYPE TABLESPACE_NAME EXTENT_ID FILE_ID BLOCK_ID    BYTES BLOCKS REL_FNO
--------- ------------------- ------------ --------------- --------- ------- -------- -------- ------ -------
R_STAGE1  EB_GL_TRANSACTIONS  TABLE        R_STAGE_DATA            4      38        5 10485760    640      38
...
R_STAGE1  EB_GL_TRANSACTIONS  TABLE        R_STAGE_DATA         2689      38   129925 10485760    640      38

(output trimmed to the first and last extents in this file)
Notice that the starting block of the last extent is 129925. It has 640 blocks, so the last block is
129925 + 640 – 1 = 130564. So block 130565, where the HWM points, is the first block beyond the last used block.

Posted in database, Oracle, Oracle DBA | Tagged , , , , , , | Leave a comment

Tuning Oracle with SQLT – Part # 2


Here’s the 2nd part of the series covering SQLT, an important tool for SQL and Oracle tuning for Oracle DBAs.

In the last post of the series, we covered how to install this simple tool. Today we will see how to run reports with it. There are various report options available in SQLT, but two of them are used widely and prove helpful in most situations.

Now that we have installed it, we go back to the run directory under the SQLT installation directory. On Linux/Windows we can use the cd command to move into the run directory. Here we find many scripts provided by this tool for our use.

Sample SQLT Report for an SQL Statement

What we are about to run is the SQLT Xtract tool, provided as part of SQLT. This tool requires the SQL_ID or HASH_VALUE of the SQL we want to investigate. To do so, we do the following:

  • Go to the  run directory of SQLT Installation directory using the cd command.
  • Log on to SQL*Plus: sqlplus / as sysdba
  • Now, we would run our query which we would investigate. Let’s take an example query to see the objects in the database:
    • select object_name from dba_objects; 
  • Now, we query the V$SQL dynamic view to find the SQL_ID.
    • select sql_id from v$sql where sql_text like 'select object_name from dba_objects%';
  • Once we have the SQL_ID in place, we would run the SQLT Xtract Report for this SQL using the below command:
    • @sqltxtract <SQL_ID>;
  • When we run the above script it asks for the SQLT username and password which we specified during the installation of the SQLT tool. You can quickly refer to the installation steps here.
  • Once we provide the same, it runs successfully, and generates a ZIP file as output with all the relevant files which we would check now!
  • Once the report is completed, it would show a message on the SQL*Plus prompt which says:
    • SQLTXTRACT completed
  • Now we exit from the SQL*Plus session.
  • Now
    • If we are on Windows, then we can open the ZIP file using any utility like WinZip, or whatever is available.
    • If we are on Linux/Solaris or any other flavors of UNIX, we can use WinSCP tool to transfer the ZIP file to our local desktop and unzip it.
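The steps above boil down to a SQL*Plus session roughly like this (the SQL_ID shown is a placeholder, not a value from my run):

```sql
-- From the run directory of the SQLT installation:
-- $ sqlplus / as sysdba
SQL> select object_name from dba_objects;

SQL> select sql_id from v$sql
  2  where sql_text like 'select object_name from dba_objects%';
-- suppose this returns SQL_ID 'fzm3t1g2q7n8x' (placeholder)

SQL> @sqltxtract fzm3t1g2q7n8x
-- enter the SQLTXPLAIN password when prompted; a ZIP file is produced
```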

That’s it! We have now run the SQLT Xtract report and can see what it delivers regarding this SQL.

Once we have unzipped it, open the file named sqlt_snnnnn_main.html. The "nnnnn" represents a number generated to make all SQLT reports unique on your machine.

The 2 most important SQLT Reports

Like the SQLT Xtract report, we have another useful report: the SQLT Xecute report. Let’s quickly glance through the differences between them before going any further.

SQLT XTRACT is the easiest report to create because it does not require the execution of the SQL at the time of the report generation. The report can be collected after the statement has been executed. SQLTXECUTE, on the other hand, executes the SQL statement and thus has better run-time information and access to the actual rows returned. SQLTXECUTE will get you more information, but it is not always possible to use this method, perhaps because you are in a production environment or perhaps the SQL statement is currently taking three days to run, which is why you are investigating this in the first place.

We will look at both the SQLTXECUTE and SQLTXTRACT reports (and other SQLT options too).

Sample SQLT XTRACT Output

A sample of the SQLTXTRACT report looks like the below:

SQLT XTRACT Output – Main Page – Top Section

What does each of these links mean, and what information can they provide for a better understanding of our SQL statement and its execution? We will see that in the next post. Stay tuned!


Posted in database, Oracle, Oracle DBA, Oracle Performance Tuning | Tagged , , , , , | Leave a comment

Small trick with Word 2010

Firstly, a big thanks to Suzanne, a Microsoft MVP, for providing this easy resolution to a small issue, which she had posted on the Microsoft Tech Forums. I will say more below about her work and the valuable information she shares with everyone for free; but first, let’s look at the problem I was facing and how I resolved it.

I write a lot. Whether it is technical documentation for my official deliveries, something for my friends and acquaintances, or a post for my blog, the first thing I do when creating a post like this one is open Microsoft Word (Press Windows Key + R –> Type ‘winword’ –> Press Enter!) on my workstation and start writing, with a proposed headline that may often change. While I do this from the office as well, I had been seeing a particular issue there for the past few days.

As part of regular software updates and improvement roll-outs, an update was recently pushed to all of us which changed the default look of every new document. I did not like the new look and feel, and I usually want more manual control over the various elements of my documents. So, as any of us would, I typed an informal search into Google, which directed me to a thread on the Tech Forums where I found this solution, along with a simple way to set up the default document template for Word on your desktop.

Steps to do so, are:

1. Open a new word document.  (Press Windows Key + R –> Type ‘winword’ –> Press Enter!)

2. Press Alt+F11 key combination –> This would open the Visual Basic editor.

3. Press Ctrl+G key combination –> This would open/activate the Immediate window.

4. The Immediate window, for those of you who aren’t aware of it, opens at the bottom of the screen; this is where we can type and execute immediate commands, hence the name!

5. Type this command in the immediate window, and press Enter: NormalTemplate.OpenAsDocument

6. A new word document opens, with the same format and look-&-feel, but it would open in the background, with focus on the Visual Basic Editor.

7. Close or Minimize the Visual Basic editor now.

8. Go to the new document, which is opened by the command in Step 5, which would say “Normal.dotm” at the top. This is the document name of the default template.

9. Now you make the necessary changes you want to make, like removing the headers and footers, setting up default font, setting default font size, setting page width, and so on.

10. Save (Ctrl+S) the “Normal.dotm” document, and exit from Word 2010.

11. Now, open a new Word document (Press Windows Key + R –> Type ‘winword’ –> Press Enter!) and you’d be able to see the new changes!

Now, back to Suzanne. This matters because her work unfolds into a valuable list of tips, tricks and tutorials for Word, which can prove very handy. She is an MVP – Most Valuable Professional – for Microsoft because of her long association with various products, mainly Microsoft Word. Her website can be seen here, and the list of tutorials she has provided can be browsed here. Lastly, here is a list of articles and questions to which she has actively contributed on the Microsoft Tech Forums over several years.

Posted in Computers, IT, Microsoft Word, technology | Tagged , , , , , , , | Leave a comment

Oracle :: Tuning with SQLT (Series) (Part # 1)

Looking for an elegant companion for SQL Tuning ? — The answer is: SQLT.

SQLT is a tool originally developed, and still regularly updated and maintained, by Carlos Sierra. You can visit his website for loads of information on this wonderfully crafted tool. It is one of the most essential tools a DBA can have for easily digging out lots of background and supporting information on a badly performing SQL query, on any version of the Oracle database the DBA is responsible for. The idea behind this article is to give you a basic knowledge of this tool and how to install it on our database(s). Following posts will cover more details on its usage, sample SQLT reports for you to learn from, some of the most important scripts that are part of this tool, the most important sections in the SQLT output and how to read and interpret them, and lots more.

I would also ask everyone reading this article to keep an eagle’s eye on Carlos’s website for (almost) daily updates on this tool, its usage, and its various other aspects, which will help you immensely in resolving SQL query tuning issues in your day-to-day job.

No — It’s not a tool which would list a number of steps to solve the problem !!!

In today’s rapidly changing world of technology, Oracle databases provide many features which are often missed by many DBAs. It seems many of us are not looking for a great assistant to work with when resolving critical issues on our production databases; rather, we are seeking some intelligent software system that will hover through our 10053 trace files, point out the bottleneck, and, most importantly, tell us what to “run” at the SQL*Plus prompt to get rid of it. May that not be the intention of the readers of this article!

SQLT provides top-quality information about all aspects of a poorly running SQL statement: statistics, histograms, tables, execution plans, system parameters, etc. It does not have a “Solution” section listing commands for you to run to resolve the issue. It does, however, provide enough information for us to find and fix the root cause of the problem.


It is quite a straightforward tool, and can be downloaded from Oracle Metalink. Those who have a valid CSI (Customer Support Identifier) can log in to Oracle Support and search for SQLT in the Knowledge area, or search for the Metalink note ID 215187.1. Scroll to the bottom of the note and choose the version of SQLT suitable for your environment; there are currently versions available from 9i through 11g.

Download the version specific to your database, and unzip it. When I installed it on Linux, I did so in $HOME of my oracle user, which created a folder called “sqlt” (note the lower case!) containing the installation, un-installation and other scripts.

To install the tool, simply run the “sqcreate.sql” script. During installation you will be prompted for some inputs. Below are the steps typically involved in the installation:

  • Connect as sys, e.g., sqlplus / as sysdba
  • Make sure your database is running
  • Run the sqcreate.sql script.
  • Select the default for the first option. 
  • Enter and confirm the password for SQLTXPLAIN (the owner of the SQLT packages).
  • Select the tablespace where the SQLTXPLAIN will keep its packages and data. 
  • Select the temporary tablespace for the SQLTXPLAIN user (in my case, TEMP).
  • Then enter the username of the user in the database who will use SQLT packages to fix tuning problems. Typically this is the schema that runs the problematic SQL.
  • Then enter “T”, “D” or “N”. This reflects your license level for the Tuning and Diagnostics packs. Most sites have both, so you would enter “T” (this is also the default).
    • If you have only the Diagnostics pack, enter “D”;
    • and if you have neither of these licenses, enter “N”.
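End to end, the installation looks roughly like this (directory names and prompt order can vary slightly between SQLT versions, so treat this as a sketch):

```sql
-- $ cd sqlt/install        -- inside the folder created by unzipping SQLT
-- $ sqlplus / as sysdba
SQL> @sqcreate.sql
-- You are then prompted for:
--   * the connect identifier (take the default for a local install)
--   * a password for the SQLTXPLAIN user
--   * permanent and temporary tablespaces for SQLTXPLAIN
--   * the application schema that runs the problematic SQL
--   * the pack license level: T, D or N
```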

The last message you see is “SQCREATE completed. Installation completed successfully.”

Well, it’s that simple! Once the installation is complete, and just to make sure the required scripts will run properly, we can manually log in to the database as the SQLTXPLAIN user with the password provided during the installation steps above. If there is any issue with the login, it should be resolved at this point!


Posted in Uncategorized | Tagged , , , , | 2 Comments

Oracle :: 12C … The new horizon!

Prologue …

Oracle 12c was officially released on 25th June 2013, and is now available for download at OTN. The documentation can be found here.

Note: You can download it entirely from here.

Here’s a brief of what I have learnt about it from various reviews, from the experiences of some of the best DBAs around, and also from those wonderful people at Oracle who blog regularly.

Note: I would be writing more on 12c as I slowly begin exploring various areas of it. By the time I complete reading the fine manual, I believe I would have more blog posts on 12c and its pros and cons.

Before I even start,

  • Oracle has provided us with its interactive Learning Library  for Oracle 12c, which is accessible here.
  • Tim Hall is already onto it, and he has written a handful of very informative and detailed articles on 12c installation and other areas of it, including the Enterprise Manager. You can find them here. For a quick-start installation procedure on Linux 5, look here.
  • Video: Tom Kyte speaks about 12c with Chet Justice a.k.a Oracle Nerd here.
    • Or, watch it below ..!


Getting in …

Features that caught my attention:

1. Optimizer Enhancements:

1.a. Adaptive Query Optimization:

By far the biggest change to the optimizer in Oracle Database 12c is Adaptive Query Optimization. Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics.

1.b. Adaptive Statistics:

The quality of the execution plans determined by the optimizer depends on the quality of the statistics available. However, some query predicates become too complex to rely on base table statistics alone and the optimizer can now augment these statistics with adaptive statistics.

In Oracle Database 12c dynamic sampling has been enhanced to become dynamic statistics. Dynamic statistics allow the optimizer to augment existing statistics to get more accurate cardinality estimates for not only single table accesses but also joins and group-by predicates.

The parameter to look for in this case is: OPTIMIZER_DYNAMIC_SAMPLING.
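A quick way to inspect and set it (the value 11 below is, per the 12c docs, the level that lets the optimizer use adaptive dynamic statistics automatically; verify the exact levels for your release):

```sql
SQL> show parameter optimizer_dynamic_sampling

-- Level 11 in 12c: the optimizer decides when and how much
-- dynamic sampling to perform, automatically
SQL> alter session set optimizer_dynamic_sampling = 11;
```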

1.c. Statistics Feedback, a.k.a. Cardinality Feedback:

Statistics feedback (formerly known as cardinality feedback) is one form of reoptimization that automatically improves plans for repeated queries that have cardinality misestimates. During the first execution of a SQL statement, the optimizer generates an execution plan and decides whether it should enable statistics feedback monitoring for the cursor. Statistics feedback is enabled in the following cases: tables with no statistics, multiple conjunctive or disjunctive filter predicates on a table, and predicates containing complex operators for which the optimizer cannot accurately compute cardinality estimates.

[ Something I was terribly missing in Oracle below a PeopleSoft 9 release! ]

1.d. New types of histograms –> Top-Frequency & Hybrid:

Histograms tell the optimizer about the distribution of data within a column. By default, the optimizer assumes a uniform distribution of rows across the distinct values in a column, and will calculate the cardinality for a query with an equality predicate by dividing the total number of rows in the table by the number of distinct values in the column used in the equality predicate. The presence of a histogram changes the formula used by the optimizer to determine the cardinality estimate, and allows it to generate a more accurate estimate. Prior to Oracle Database 12c, there were two types of histograms, frequency and height-balanced. Two additional types of histogram are now available, namely top-frequency and hybrid histograms.

1.e. Online Statistics Gathering:

When an index is created, Oracle automatically gathers optimizer statistics as part of the index creation, by piggybacking the statistics gathering on the full data scan necessary to build the index. The same technique is now applied to direct-path operations such as create table as select (CTAS) and insert as select (IAS). Piggybacking the statistics gathering on the data loading operation means no additional full data scan is required to have statistics available immediately after the data is loaded.
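This is easy to see for yourself; a minimal sketch (the table names are illustrative):

```sql
-- Direct-path CTAS: statistics are gathered during the load itself
CREATE TABLE emp_copy AS SELECT * FROM emp;

-- NUM_ROWS and LAST_ANALYZED should already be populated,
-- with no explicit DBMS_STATS call in between
SELECT num_rows, last_analyzed
  FROM user_tables
 WHERE table_name = 'EMP_COPY';
```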

1.f. New reporting subprograms in DBMS_STATS:

In Oracle Database 12c, new reporting subprograms have been added to the DBMS_STATS package to make it easier to monitor what statistics gathering activities are currently going on and what impact changes to the parameter settings of these operations will have.

These are the REPORT_GATHER_*_STATS family of functions (for example, REPORT_GATHER_TABLE_STATS and REPORT_GATHER_SCHEMA_STATS), along with REPORT_STATS_OPERATIONS and REPORT_SINGLE_STATS_OPERATION.
What about a complete PDF with more details on the above Optimiser enhancements and sample queries? Find it here … and here again!

2. Pluggable databases –> Now we’re talking CLOUD!! [ This is an optional, separately licensed feature!! ]

Consolidation is an important business strategy to reduce the costs of infrastructure and operational expenses. In many production database servers, a big portion of CPU cycles go unused. By consolidating many databases into fewer database servers, both the hardware and operational staff can be more effectively utilized.

Oracle’s new pluggable database feature reduces the risk of consolidation because the DBA can easily plug or unplug an existing database to or from a container database. There is no need to change any code in the application. When the user connects to a plugged database, the database environment looks exactly as if the user had connected to a traditional database.


a. Pluggable databases lower resource consumption.
b. Memory and processes are owned by the container database and shared by all pluggable databases, improving resource usage overall.
c. It is also easy to unplug a database and convert the pluggable database to a traditional database if required.
d. In addition, you can back up and recover pluggable databases independently of the container database; you can also perform a point-in-time recovery of a pluggable database.
e. Further, Resource Manager can be used to control resources consumed by a pluggable database.
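A hedged sketch of the plug/unplug mechanics (all names and paths here are illustrative placeholders; check the 12c syntax reference for your release):

```sql
-- Create a new pluggable database inside a container database
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdbadmin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;

-- Unplug it again; the XML manifest lets another CDB plug it back in
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/tmp/pdb1.xml';
```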
3. Concurrent execution of UNION branches [Okay, we are back to the optimizer, but this is interesting!!]

Traditionally, queries with union or union all branches execute one after another, meaning that one branch of the union or union all is executed, followed by the next branch, and so on. Version 12c introduces concurrent execution of union branches, meaning that one set of parallel servers will be executing one branch, a second set of parallel servers will be executing a different union all branch, and so on, all at the same time.

This concurrent execution feature will be very useful if the majority of the query execution time is spent outside of the database, such as when waiting for a SQL*Net message from a remote database link or for an SOA call response. The effective use of this new feature could reduce wait time dramatically, improving SQL elapsed time. (Incidentally, with Version 12c, SQL*Net packets can be compressed for database traffic, helping to reduce latency in a WAN environment.)
4. Restore a TABLE easily !! [Wait – there are some cons as well!]

Traditionally, restoring a single table is a cumbersome process involving a tablespace restore, exporting the restored table from the restored tablespace, and then importing to the production database. The new restore table command in Recovery Manager (RMAN) simplifies this task.

A command like the one below can be used in such cases (the table owner is shown as a placeholder, since the original listing is truncated):

RMAN> RECOVER TABLE <owner>.TEST_TABLE
UNTIL TIME "to_date('2013-06-26:14:45:00','YYYY-MM-DD:HH24:MI:SS')"
AUXILIARY DESTINATION '/tmp'
DATAPUMP DESTINATION '/tmp'
DUMP FILE 'tst_dump2.dmp';

From the log files, it seems it processes the above command in the below manner:

  1. Create an auxiliary instance and load it (restore and recover) with the most recent backup of the database from which we have “lost” TEST_TABLE.
  2. Restore and recover this auxiliary instance to the time specified in the UNTIL TIME clause.
  3. Open the auxiliary database once the restore completes. All files of this auxiliary database are kept in /tmp, as specified in the AUXILIARY DESTINATION option of the RMAN command above.
  4. Take a Data Pump export (EXPDP) of the lost table TEST_TABLE into /tmp, as specified in the DATAPUMP DESTINATION option, with a dump file name matching the DUMP FILE option, both given in the RMAN command above.
  5. Shut down the auxiliary instance.
  6. Perform an IMPDP of TEST_TABLE into our main instance.
  7. Finally, remove the auxiliary instance and all its files from /tmp.

Note: I am yet to go through the 12c documentation entirely, but this looks a bit time- and space-consuming; I will dig into it further. Still, it is quite an easy method of table restoration in 12c.

Note again: steps 1 and 2 restore and recover only the mandatory tablespaces (SYSTEM, UNDO, etc.) plus the tablespace from which TEST_TABLE was lost, not the entire database!
5. Parallel (Rolling) Upgrades:

In highly available environments, keeping the database down for an upgrade, even for a few hours, is cost-prohibitive. Rolling upgrades are not always possible for major database software upgrades, and tuning the upgrade process itself is always a risky adventure. Version 12c uses parallelism during the database upgrade to reduce upgrade-related downtime.

The meta-list of information!

Taken from Steve Karam, here is, as of now, one of the most comprehensive lists of what has already been discovered in Oracle 12c by the Oracle gurus!

Official Oracle Links

Oracle 12c Articles from the Community

Category | Article | Author
New Features | 12c New Features from Oracle Base | Tim Hall
New Features | Pluggable Databases — An extra charge option | Chris Kanaracus
Performance | Optimizer Details from Oracle Optimizer | Maria Colgan
Installation | Upgrade, Migrate, and Consolidate to Oracle 12c slide deck | Mike Dietrich
Installation | Oracle 12c Silent Installation | Daniel Westermann
Internals | 12c Parameters | Marco Gralike
New Features | DB Express, the 12c replacement for DB Console | Marco Gralike
New Features | Getting Started with Oracle 12c DB Express | Marco Gralike
Installation | OPatch in Oracle 12c | Bobby Curtis
Internals | Changing VARCHAR2 to 32k | Bobby Curtis
New Features | The death of Advanced Replication and Streams | Bobby Curtis
New Features | 12c licensing first look | Paul Bullen
New Features | New features in EXPDP | Osama Mustafa
Internals | Moving datafiles online | Osama Mustafa
Installation | Queryable Patch Inventory (QOPatch) | Osama Mustafa
New Features | Security Features of Oracle 12c | Osama Mustafa
New Features | Enable DB Express | Osama Mustafa
Internals | Generated as Identity Columns (Sequence Replacement) | Osama Mustafa
New Features | Restore a single table with RMAN | Raheel Syed
New Features | RMAN now supports SELECT statements | Wissem El Khlifi
Tools | More details on single table restores in 12c RMAN | Wissem El Khlifi
Tools | RMAN Multisection Image Copies | Wissem El Khlifi
Tools | New ASMCMD commands | Wissem El Khlifi
Internals | Invisible Columns | Wissem El Khlifi
Video | @oraclenerd interviews Tom Kyte about 12c at KScope13 | Chet Justice
Video | @oraclenerd interviews @leight0nn (Leighton L. Nelson) about his gutsy 0-day 12c demo | Chet Justice
New Features | Collaborate 2013 Oracle 12c Whitepapers | IOUG
New Features | 5 reasons to upgrade to Oracle 12c | Samer Forzley
New Features | Top 10 New Features | Paulo Portugal
Installation | Oracle 12c on Solaris | Mike Gerdts
New Features | Some goodies to start with | Aman Sharma
New Features | Capture privilege usage | Biju Thomas
New Features | In-Line PL/SQL functions in SQL queries | Lucas Jellema
Tools | Multitenant Options and Application Express | Jason Straub
Installation | Make your own VM training/lab environment for $900 | Steven Karam
New Features | How long can I remain on Oracle 11g? | Steven Karam
Installation | Upgrade Oracle Grid Infrastructure (GI) from to online | Ronald Rood
Installation | Exadata Patching in Oracle 12c | Robert Freeman
Tools | XMLDB is mandatory in Oracle 12c | Marco Gralike
Installation | Oracle 12c RAC on your laptop Step by Step | Yury Velikanov
New Features | SQL developer and Multitenant (Pluggable DB) | Jeff Smith
New Features | Oracle Database 12c New Features Part 1 | Syed Jaffer Hussain
More to come …

I will dig further in and see if I can find some more interesting features. This post will have siblings in the near future.

Posted in Computers, database, experience, Oracle, Oracle DBA, technology | Tagged , , , , , , , , | Leave a comment