Oracle Database Upgrade to 12c (In a Fail Safe Environment)

This is a step-by-step detailed procedure for upgrading an Oracle Database 11gR2 ( on Windows Server 2012 to the latest version 12c ( with the latest available Windows Bundle Patch. The database is configured with Fail Safe (Active/Passive) clustering to provide a degree of high availability. This is an actual live upgrade for one of my customers.

Although this is a long post, I hope it will be of good value for anyone doing the same task in the future.

Current Environment:



What is Fail Safe?

Oracle Fail Safe is a clustering solution from Oracle. It is an extra layer above “Microsoft Failover Cluster”.

Usually, this type of cluster consists of two servers with shared storage, and configured as Active/Passive cluster.

This means that one node owns the storage at any given point in time, and that node has the database up and running. Whenever there is an issue with the active node, the cluster fails the database (and the shared storage) over to the other node. End users see the cluster as one entity with one virtual IP; they don’t care which node is active, as they get the same service either way. There are more complex cluster configurations, but let us keep it simple. Fail Safe is similar to “RAC One Node”, but it is not based on Oracle Clusterware and, most importantly, it is free!

Upgrade Plan

  • Check certification/Compatibility between all components.
  • Install Oracle Software 12c ( along with the latest patches on both nodes.
  • Remove the 11gR2 Database from its FS group
  • Upgrade the Database to 12c
  • Verify the 12c Standby Database
  • Add the database back to its original FS Group
  • Finishing Tasks
  • Testing and Go-Live

We will proceed with the above plan step-by-step

Check certification/Compatibility between all components

I’m lucky, as the installed Fail Safe version is the latest one (4.1.1), so there is no need for any patching at the Fail Safe level.

Let us check all certification of the different components through Oracle Support

Fail Safe Certification with Windows

Fail Safe is certified with Windows 2012

Fail Safe Certification with Oracle 12c

Oracle FS is supported with the 12c (Non-CDB)

Oracle Fail Safe Release 4.1.1, is supported in a multitenant container database environment (FS 4.1 is not supported).


Non-CDB or CDB Configuration

In Database 12c, you can configure a normal database (Non-CDB, i.e. a non-Multi-Tenant database as in 11g) or you may configure a CDB (Multi-Tenant database). So, which configuration should we choose for our upgrade?


From the Oracle Upgrade Blog:

you will find the following statement:

“Beginning with Oracle Database a non-CDB architecture is deprecated”

But in the same blog in the comments:

I just can tell you now that this (Single/Multi-Tenant is a must) won’t happen in Oracle 12.2.

We officially say (and write it down) that all potential architectures (non-CDB == as you know it, CDB with either one PDB or many) will be available free of choice until at least Oracle Database 12.2.

So, we still have time to enjoy the Non-CDB configuration.

So, my final decision is to go with Non-CDB for the following reasons:

There are many features that are still not available with the CDB/PDB architecture like:

  • Database Change Notification
  • Continuous Query Notification (CQN)
  • Client Side Cache
  • Heat Map
  • Automatic Data Optimization
  • Oracle Streams
  • Flashback Pluggable Database (Flashback Database works but will flashback CDB$ROOT including all PDBs)
  • Data Recovery Advisor (DRA)
  • Flashback Transaction Backout


Officially, you can use Non-CDB with version and the coming 12.2 without any problem, so we have plenty of time to use Non-CDB

  • There is no critical need for my client to use the CDB/PDB architecture at the moment
  • We don’t have the luxury of time to retest the application again on a PDB


To reduce the downtime, the new 12c version can be installed on each node while the old 11gR2 database is still running in the Fail Safe group.

Backup the 11g database

Ensure a consistent state of the cluster

Unzip The Software on both nodes

On Node1:


Extract the 12102 software:

Also, extract the patch p22809813_121020_MSWIN-x86-64:


Repeat the extraction process on Node2

Ensure that you are logged in to Node 1 as the “Domain Administrator”, which is the user initially used to install the whole cluster.

Stop the currently running Oracle Services on Node 1 (just for safety!):


Note that the current:

11g ORACLE_HOME=C:\app\Administrator\product\11.2.0\dbhome_1

11g ORACLE_BASE= C:\app\Administrator


The current “Regedit” view is as follows:


Install Oracle Software 12c ( along with the latest patches on both nodes.

Install Oracle Database 12c on Node1

Start the OUI on node1:


It looks like it is mandatory to specify a non-administrator user in 12c!


Here I need to take a big pause regarding the above screen for choosing the Oracle Home owner!!!

As per 12c Oracle Documentation, you can choose an existing Domain or Local user or you can create a new local user as I selected above. This is clear in the following part of the Database Installation Guide:


So, there is no problem with choosing a local Windows user to be the software owner. Actually, I completed the installation and upgraded the database successfully. But at the last step of adding the database back to the Fail Safe cluster, I got an error. After searching, I found that the fix for this error is to use a Domain User as the software owner when using 12c with Fail Safe. This is not documented anywhere; I only found the following screenshot in one tutorial:


But this was shown as an example, not mentioned as a mandatory requirement.

So, to save your time, please don’t use a local user as the software owner. I had to create a domain user called “oinstall” and pass it in the above screen. But first I had to clean up my 12c installation on both nodes and re-install.

<<<< Don’t use a local user as software owner, this is a life saver! >>>>


Following is the Windows Services status after the installation:


Install Oracle® Database Windows Bundle Patch (22809813) version


As per the patch readme file:

You must use the OPatch utility version or later to apply this patch. Oracle recommends that you use the latest released OPatch 12.1, which is available for download from My Oracle Support patch 6880880 by selecting the release.

Install the latest Opatch Utility

The current installed opatch version available with 12102 is:


Although the currently installed opatch version is OK, I decided to download and install the latest opatch.

The latest version is as per:


Download the above opatch and move to both nodes:


To install the latest opatch, you just need to unzip it under the ORACLE_HOME after deleting the old OPatch directory. Now the installed opatch version is


Windows BP Installation (2 min):

Now follow the following steps to install the Windows BP:

Shut down all instances and listeners associated with the Oracle home that you are updating

  • We don’t have any right now, except the following service:


Explicitly stop the “Distributed Transaction Coordinator” service (which is not an Oracle service) if it is running.

  • net stop msdtc


Open CMD window

set ORACLE_HOME=D:\app\pisdb\product\12.1.0\dbhome_1

set PATH=%ORACLE_HOME%\perl\bin;%PATH%

D:\app\pisdb\product\12.1.0\dbhome_1\OPatch\opatch apply


But I got the following errors:

E:\DB12102\22809813>D:\app\pisdb\product\12.1.0\dbhome_1\OPatch\opatch apply

Exception in thread "main" java.lang.NoClassDefFoundError:




I found that there is an extra space in the above ORACLE_HOME setting. It should be:

set ORACLE_HOME=D:\app\pisdb\product\12.1.0\dbhome_1

instead of:

set ORACLE_HOME= D:\app\pisdb\product\12.1.0\dbhome_1

I corrected it and ran opatch again:


Bundle Patch was applied successfully.

Check the installed patches:

D:\app\pisdb\product\12.1.0\dbhome_1\OPatch\opatch lsinv


Install Oracle Database 12c on Node2


Repeat the same steps done above at Node1 on Node2

Although it should be safe to install 12c while the 11gR2 database is running on Node2, I decided to switch the database and cluster over to Node1 while installing on Node2.

Upgrade the Oracle Database to 12c

Run the Pre-Upgrade Information Tool (preupgrd.sql)

Copy preupgrd.sql and utluppkg.sql from the D:\app\oinstall\product\12.1.0\dbhome_1\rdbms\admin directory to C:\temp


set ORACLE_HOME=C:\app\Administrator\product\11.2.0\dbhome_1



cd c:\temp

sqlplus / as sysdba


A subdirectory or file C:\app\Administrator\product\11.2.0\dbhome_1\rdbms\log\ already exists.

Loading Pre-Upgrade Package…


Executing Pre-Upgrade Checks in PIS…


====>> ERRORS FOUND for PIS <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed prior to attempting your upgrade.

Failure to do so will result in a failed upgrade.


Check Summary: Check that recycle bin is empty prior to upgrade

Fixup Summary:

“The recycle bin will be purged.”

You MUST resolve the above error prior to upgrade


************************************************************
====>> PRE-UPGRADE RESULTS for PIS <<====


  1. Review results of the pre-upgrade checks:

  2. Execute in the SOURCE environment BEFORE upgrade:

  3. Execute in the NEW environment AFTER upgrade:



Pre-Upgrade Checks in PIS Completed.




SQL> @C:\app\Administrator\cfgtoollogs\PIS\preupgrade\preupgrade_fixups.sql

Pre-Upgrade Fixup Script Generated on 2016-05-23 11:52:15  Version: Build: 012

Beginning Pre-Upgrade Fixups…

Executing in container PIS


Check Tag:     EM_PRESENT

Check Summary: Check if Enterprise Manager is present

Fix Summary:   Execute emremove.sql prior to upgrade.


Fixup Returned Information:

WARNING: –> Enterprise Manager Database Control repository found in the database

In Oracle Database 12c, Database Control is removed during

the upgrade. To save time during the Upgrade, this action

can be done prior to upgrading using the following steps after

copying rdbms/admin/emremove.sql from the new Oracle home

– Stop EM Database Control:

$> emctl stop dbconsole

– Connect to the Database using the SYS account AS SYSDBA:




Without the set echo and serveroutput commands you will not

be able to follow the progress of the script.



Check Tag:     AMD_EXISTS

Check Summary: Check to see if AMD is present in the database

Fix Summary:   Manually execute ORACLE_HOME/oraolap/admin/catnoamd.sql script to remove OLAP.


Fixup Returned Information:

INFORMATION: –> OLAP Catalog(AMD) exists in database

Starting with Oracle Database 12c, OLAP Catalog component is desupported.

If you are not using the OLAP Catalog component and want

to remove it, then execute the

ORACLE_HOME/olap/admin/catnoamd.sql script before or

after the upgrade.




Check Summary: Check that recycle bin is empty prior to upgrade

Fix Summary:   The recycle bin will be purged.


Fixup Succeeded




Check Summary: Check that APEX will need to be upgraded.

Fix Summary:   Oracle Application Express can be manually upgraded prior to database upgrade.


Fixup Returned Information:

INFORMATION: –> Oracle Application Express (APEX) can be

manually upgraded prior to database upgrade

APEX is currently at version and will need to be  upgraded to APEX version 4.2.5 in the new release.

Note 1: To reduce database upgrade time, APEX can be manually upgraded outside of and prior to database upgrade.

Note 2: See MOS Note 1088970.1 for information on APEX installation upgrades.



[Pre-Upgrade Recommendations]



********* Dictionary Statistics *********


Please gather dictionary statistics 24 hours prior to upgrading the database.

To gather dictionary statistics execute the following command while connected as SYSDBA:

EXECUTE dbms_stats.gather_dictionary_stats;



************* Fixup Summary ***********

1 fixup routine was successful.

3 fixup routines returned INFORMATIONAL text that should be reviewed.

**************** Pre-Upgrade Fixup Script Complete *********************

So, based on the outcome of the preupgrd.sql script, I’m going to take the following actions:

  • Remove the EM:

Copy the D:\app\pisdb\product\12.1.0\dbhome_1\rdbms\admin\emremove.sql to c:\temp

emctl stop dbconsole

sqlplus / as sysdba
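The actual SQL*Plus session is not shown in the extracted post; based on the fixup output quoted earlier, the session would look like this (the c:\temp path is where emremove.sql was copied above):

```sql
-- Per the pre-upgrade fixup instructions: without echo/serveroutput
-- you cannot follow the progress of the script
SET ECHO ON
SET SERVEROUTPUT ON
@C:\temp\emremove.sql
```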





  • Remove the OLAP Catalog component as follows:


Where ORACLE_HOME here is referring to the 11g home
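The command itself is the one quoted in the pre-upgrade output; as a sketch, from SQL*Plus in the 11g home:

```sql
-- Remove the desupported OLAP Catalog (AMD) component,
-- as suggested by the pre-upgrade tool
@?/olap/admin/catnoamd.sql
```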


  • Purge Recyclebin → already done by the SQL script
  • Gather dictionary statistics:

EXECUTE dbms_stats.gather_dictionary_stats;

  • There are 181 invalid objects (non-sys objects)


select owner,object_type,count(*) from sys.registry$nonsys_inv_objs group by owner,object_type order by owner,object_type;

Compile invalid objects:
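The screenshot of this step is not shown; a typical way to do it is the standard utlrp.sql script (a sketch, run as SYSDBA):

```sql
-- Recompile all invalid objects in the database
@?/rdbms/admin/utlrp.sql

-- Then re-check how many objects remain invalid
select count(*) from dba_objects where status = 'INVALID';
```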


This reduced the number of invalid objects from 181 to 98



I will run the preupgrd.sql script again after the above fixes:


Now there are no errors

Remove the database from the Fail Safe Configuration

On Node1, from the “Fail Safe” Manager:



Now the group contains the following resources:

Now, the Oracle services on Node1 look as follows:


and the Oracle services on Node2 look as follows:



Upgrade the Database using the DBUA

Take another final offline backup for the database.

Start > Programs > Oracle-HOME_NAME > Configuration and Migration Tools >Database Upgrade Assistant.

{{Take care: Start the DBUA from the new 12c Oracle home, not the 11g one!!}}

I asked DBUA to create a new non-default listener, until we register the database with the FS listener.


M:\ is the shared storage


Actually, we are using a pfile with FS, not an spfile.

I clicked “Abort” in the above dialog.

I created spfile from pfile:
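The exact command is not shown in the extracted post; a sketch, assuming the pfile sits in the default location of the current home:

```sql
-- Create an spfile so DBUA can continue
create spfile from pfile;
```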


It looks like it continued successfully after that!




Upgrade Results

Database upgrade has been completed successfully, and the database is ready to use.

Source Database Oracle Home: c:\app\administrator\product\11.2.0\dbhome_1
Target Database Oracle Home: D:\app\pisdb\product\12.1.0\dbhome_1

Pre-Upgrade Checks

Database contains INVALID objects prior to upgrade. The list of invalid SYS/SYSTEM objects was written to registry$sys_inv_objs. The list of non-SYS/SYSTEM objects was written to registry$nonsys_inv_objs unless there were over 5000. Use utluiobj.sql after the upgrade to identify any new invalid objects due to the upgrade.
Oracle Application Express (APEX) can be manually upgraded prior to database upgrade.

Upgrade Details

The following summary lists the steps performed during the upgrade process. Log files for all the steps, as well as this summary, are available at “D:\app\pisdb\cfgtoollogs\dbua\PIS\upgrade1”.

Step Name Log File Name Status
Database Backup Backup.log Successful
Pre Upgrade PreUpgrade.log Successful
RDBMS Upgrade Oracle_Server.log Successful
Post Upgrade PostUpgrade.log Successful
Enterprise Manager Configuration emConfigUpgrade.log Successful
Timezone Upgrade UpgradeTimezone.log Successful
Generate Summary generateSummary.log Successful

Step Execution Information:

Database Backup
ORA-01565: error in identifying file '%ORACLE_HOME%\DATABASE\SPFILE%ORACLE_SID%.ORA'
ORA-27041: unable to open file
The Database can be restored by running the following script "M:\Oracle\PIS\FLASH_RECOVERY_AREA\PIS\PIS_restore.bat" from the backup location.
NOTE: This operation will remove all the upgrade changes made by the Database Upgrade Assistant.
Post Upgrade
A persistent initialization parameter file (spfile) has been created at the following location: D:\app\pisdb\product\12.1.0\dbhome_1\database\spfilePIS.ora.
Enterprise Manager Configuration
The Database will be managed by Enterprise Manager Database Express.
Timezone Upgrade
The time zone version and TIMESTAMP WITH TIME ZONE data of the Database has been successfully upgraded from version 14 to version 18.

Initialization Parameter changes

The following changes have been made in the initialization parameters:
Parameters Added:
Name Value
pga_aggregate_limit 2147483648
Parameters Updated:
Name Old Value New Value
diagnostic_dest C:\APP\ADMINISTRATOR D:\app\pisdb
Parameters Commented:
Name Value

Now the Oracle services on Node1 look like:

As a validation:

Post Upgrade Tasks

As post-upgrade steps:

  • Collect fixed objects statistics:


  • Change the database to run in archivelog mode (this is independent of the upgrade process):
  • Compile invalid objects again:
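The screenshots for these steps are not shown; a sketch of the standard commands for the three steps above, run as SYSDBA:

```sql
-- 1) Collect fixed objects statistics
EXECUTE dbms_stats.gather_fixed_objects_stats;

-- 2) Enable archivelog mode (requires a clean restart)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- 3) Recompile invalid objects
@?/rdbms/admin/utlrp.sql
```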


  • Testing the EM Database Express 12c connection:



Looks like it is working (from Internet Explorer, but it has some issues with Chrome)

Verify the 12c Standby Database

Now it is time to bring our database back under the cluster umbrella. The first step is to validate this database from the “Fail Safe Manager”.

Before doing that, and as I had a bad experience configuring a database with an spfile under Fail Safe, I’m going to create a pfile from the current spfile and put it on the shared storage.

Backup the old pfile/spfile(s)

sqlplus / as sysdba

create pfile='M:\Oracle\PIS\Database\initPIS.ora' from spfile;


Now run the “Fail Safe Manager” to verify the standalone database:


It failed as expected, as the database was started using a local spfile.

As another test case, I moved the spfile to the shared storage and also created a new pfile on the shared storage that contains the following line:


I stopped the database and started it again with the above pfile/spfile that reside on the shared storage.


Now I’ll try to validate the standalone database again:


The validation passed.

Add the database back into the group


I suspected that the above error was due to the spfile reference, but to isolate this point, I deleted the one-line init.ora file and used the original pfile; I will not use any spfile at this stage.

Restart the database with the pfile:



Server parameter files (spfile) are now required to be stored on a shared disk. The text parameter file (pfile) must contain a spfile reference to the server parameter file. The text parameter file may be stored on a local disk if there are node specific parameters that need to be set. In general, unless there is a need to use different parameters on different cluster nodes, Oracle suggests storing the text parameter file on a shared disk.
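Following that recommendation, the local text parameter file reduces to a single pointer at the spfile on the shared disk. A sketch (the spfile path here is hypothetical, derived from the shared-storage location used earlier in this post):

```sql
-- Entire content of the one-line text parameter file (init.ora)
SPFILE='M:\Oracle\PIS\Database\spfilePIS.ora'
```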

Revalidate the standalone database


Now try to add the database to the group, following the same above steps:
I got the following error:

0x523: The name provided is not a properly formed account name

As I mentioned earlier, this error was due to using a local Windows user as the Oracle software owner. After switching to a domain user, I was able to add the database successfully to the Fail Safe group.



References:

  • Fail Safe Tutorial
  • Database Upgrade Guide
  • Oracle® Fail Safe Concepts and Administration Guide, Release 4.1.1, for Microsoft Windows



Why Query is not using the Index?

This post answers the following specific question:

                Why is a “distinct query” not using the index (in some cases)?

Let us try the following test case:

Create a sample table “t_distinct”, add an index on the “ID” column, and collect statistics for the table and index:
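The setup screenshots are not shown in the extracted post; a minimal reconstruction of such a test case (table name and indexed column from the post; the index name, row count, and filler column are assumptions for illustration):

```sql
-- Hypothetical setup: 100,000 rows, ID has 10 distinct values and is nullable
create table t_distinct as
select mod(rownum, 10) as id, rpad('x', 100, 'x') as pad
from   dual
connect by level <= 100000;

create index t_distinct_idx on t_distinct(id);

exec dbms_stats.gather_table_stats(user, 'T_DISTINCT', cascade => true);
```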





Let us run the following basic query:
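The query itself is only visible in the missing screenshot; based on the discussion, it is a distinct query against the indexed column, e.g.:

```sql
select distinct id from t_distinct;
```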


It is clear that the optimizer selected a Full Table Scan to retrieve the results, although there is an index on the ID column.

If we repeat the same query with the “Index” hint:


Again, the optimizer is still using full table scan to access the table and ignored the Hint!

What is the reason behind the optimizer choice of Full Table Scan?

The reason is that a B*Tree index by nature doesn’t store entries whose key columns are all NULL, so the Oracle optimizer assumes that not all distinct values could be retrieved by scanning the index alone.

So, what if the “ID” column is NOT NULL, will the optimizer change its mind? Let us try:
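A sketch of that experiment (the index name is an assumption carried over from the setup sketch):

```sql
-- Make the column NOT NULL, then retry the hinted distinct query
alter table t_distinct modify (id not null);

select /*+ index(t t_distinct_idx) */ distinct id
from   t_distinct t;
```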



Now, the Oracle Optimizer is able to use the index.

If you omit the Index hint:


Note that “INDEX FAST FULL SCAN” uses multi-block reads instead of single-block sequential reads, which is much faster.

Note the big difference in the cost (and consistent gets) between the three cases:

Access Method Cost Consistent Gets


If you cannot add a not null constraint to the table, you may achieve the same result by adding a not null predicate to the query, as follows:

alter table t_distinct modify(id null);

Now, if you try to select from the table, you will get the old full table scan:


But, when you add a not null query predicate:
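Based on the surrounding text, the predicate variant would look like this:

```sql
-- The IS NOT NULL predicate lets the optimizer answer the query
-- from the index alone, even though the column is nullable
select distinct id
from   t_distinct
where  id is not null;
```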


Note that for all the following queries, the optimizer will choose FULL TABLE SCAN, although there is an index and the id column is not null:


This is OK, as the index doesn’t contain all the columns, and the query references more than 5% of the table rows, so a full table scan will be more efficient.



The beauty of Visualization

I’m going to just touch on this huge topic. I may post many other detailed posts about specific visualization techniques. This is just a brief introduction to get a feeling of how visualization can help a DBA in their daily activities.

Let us start by example:

Suppose you want to ask the database how many archivelogs were generated per hour, for each day across the previous month.

Fine, it is simple. Let us count the number of log switches in the alert.log!

Or let us write a couple of queries, each answering one part of our question, and then merge the results together.

But what if we could answer the above question with a simple heat map graph like the following one?


So, for each hour of the day, across the whole month, I can easily see how many archive logs were generated, with different colors based on the count.
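The raw data behind such a heat map can be produced with a query along these lines (a sketch against v$archived_log; the exact query used for the graph is not shown in the post):

```sql
-- Count of archived logs per day and hour for the last 30 days
select trunc(completion_time)           as day,
       to_char(completion_time, 'HH24') as hour,
       count(*)                         as archived_logs
from   v$archived_log
where  completion_time > sysdate - 30
group  by trunc(completion_time), to_char(completion_time, 'HH24')
order  by 1, 2;
```

Pivoting day against hour (in Excel, or with a PIVOT clause) then gives the grid that the heat map colors.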

{Most of the numbers are zeros, as I ran this report against my laptop’s database, which has no load; but if you run it against a production database, you will feel the difference!}

{Credit to this type of heat map graphs goes to David Mann}

Similarly, if we need to know how much archivelog volume (in GB) was generated:


What if you want to see the DB workload profile throughout the day and across one month? See the following nice high-level graph, from which you can easily pinpoint the peak times:


{The above graph was generated against a live database which has real performance issues}

What if you want to see some performance statistics about the current database performance?


There is a lot that we can say about how visualization can simplify our life.

We may need a separate post for each of the above graphs to explain how to generate it and the different tools we may use, so stay tuned for my coming posts!


The Power of the Oracle Trace Files

Although I had different topics planned for my first blog post, I just received this client issue this morning, and I decided to share it with you to explain how to take advantage of the powerful information contained in Oracle trace files.

What is the issue?

I received a call from one of my clients. He was complaining that the Oracle database is producing large amounts of trace files per second (he mentioned 500MB/sec!). This is very interesting.

The client DBA is manually deleting the trace files to free space, until reaching a solution.  He sent me the alert file and sample traces.

Database version is on Windows platform (Windows 2008).


Quick investigation:


Alert.log file has no major issues or errors.

The trace file has the following information:




It is clear that the above insert statement is not shareable and has 32767 child cursors.

The reason(s) for not being shared:



So, it is clear from the trace file that:

  • The problem is coming from the Enterprise Manager Control and is related to the Auditing.
  • There is one statement that is not shareable – due to the above two reasons – and that is executed frequently.


More Investigation:

I asked my client to run the following queries to explore the above suspicions further:

First Query:

select sql_id,version_count,substr(sql_text,1,30)

from v$sqlarea where version_count > 100

order by 2 desc;


————- ————- ——————————

4vs91dcv7u1p6         32769 insert into sys.aud$( sessioni

This is the same info that we got from the trace file.

Second Query:

SELECT 'UNBOUND_CURSOR:                 '||SUM(TO_NUMBER(DECODE(unbound_cursor,'Y',1,'N','0'))),
'SQL_TYPE_MISMATCH:              '||SUM(TO_NUMBER(DECODE(sql_type_mismatch,'Y',1,'N','0'))),
'OPTIMIZER_MISMATCH:             '||SUM(TO_NUMBER(DECODE(optimizer_mismatch,'Y',1,'N','0'))),
'OUTLINE_MISMATCH:               '||SUM(TO_NUMBER(DECODE(outline_mismatch,'Y',1,'N','0'))),
'STATS_ROW_MISMATCH:             '||SUM(TO_NUMBER(DECODE(stats_row_mismatch,'Y',1,'N','0'))),
'LITERAL_MISMATCH:               '||SUM(TO_NUMBER(DECODE(literal_mismatch,'Y',1,'N','0'))),
'FORCE_HARD_PARSE:               '||SUM(TO_NUMBER(DECODE(force_hard_parse,'Y',1,'N','0'))),
'EXPLAIN_PLAN_CURSOR:            '||SUM(TO_NUMBER(DECODE(explain_plan_cursor,'Y',1,'N','0'))),
'BUFFERED_DML_MISMATCH:          '||SUM(TO_NUMBER(DECODE(buffered_dml_mismatch,'Y',1,'N','0'))),
'PDML_ENV_MISMATCH:              '||SUM(TO_NUMBER(DECODE(pdml_env_mismatch,'Y',1,'N','0'))),
'INST_DRTLD_MISMATCH:            '||SUM(TO_NUMBER(DECODE(inst_drtld_mismatch,'Y',1,'N','0'))),
'SLAVE_QC_MISMATCH:              '||SUM(TO_NUMBER(DECODE(slave_qc_mismatch,'Y',1,'N','0'))),
'TYPECHECK_MISMATCH:             '||SUM(TO_NUMBER(DECODE(typecheck_mismatch,'Y',1,'N','0'))),
'AUTH_CHECK_MISMATCH:            '||SUM(TO_NUMBER(DECODE(auth_check_mismatch,'Y',1,'N','0'))),
'BIND_MISMATCH:                  '||SUM(TO_NUMBER(DECODE(bind_mismatch,'Y',1,'N','0'))),
'DESCRIBE_MISMATCH:              '||SUM(TO_NUMBER(DECODE(describe_mismatch,'Y',1,'N','0'))),
'LANGUAGE_MISMATCH:              '||SUM(TO_NUMBER(DECODE(language_mismatch,'Y',1,'N','0'))),
'TRANSLATION_MISMATCH:           '||SUM(TO_NUMBER(DECODE(translation_mismatch,'Y',1,'N','0'))),
'BIND_EQUIV_FAILURE:             '||SUM(TO_NUMBER(DECODE(bind_equiv_failure,'Y',1,'N','0'))),
'INSUFF_PRIVS:                   '||SUM(TO_NUMBER(DECODE(insuff_privs,'Y',1,'N','0'))),
'INSUFF_PRIVS_REM:               '||SUM(TO_NUMBER(DECODE(insuff_privs_rem,'Y',1,'N','0'))),
'REMOTE_TRANS_MISMATCH:          '||SUM(TO_NUMBER(DECODE(remote_trans_mismatch,'Y',1,'N','0'))),
'LOGMINER_SESSION_MISMATCH:      '||SUM(TO_NUMBER(DECODE(logminer_session_mismatch,'Y',1,'N','0'))),
'INCOMP_LTRL_MISMATCH:           '||SUM(TO_NUMBER(DECODE(incomp_ltrl_mismatch,'Y',1,'N','0'))),
'OVERLAP_TIME_MISMATCH:          '||SUM(TO_NUMBER(DECODE(overlap_time_mismatch,'Y',1,'N','0'))),
'EDITION_MISMATCH:               '||SUM(TO_NUMBER(DECODE(edition_mismatch,'Y',1,'N','0'))),
'MV_QUERY_GEN_MISMATCH:          '||SUM(TO_NUMBER(DECODE(mv_query_gen_mismatch,'Y',1,'N','0'))),
'USER_BIND_PEEK_MISMATCH:        '||SUM(TO_NUMBER(DECODE(user_bind_peek_mismatch,'Y',1,'N','0'))),
'TYPCHK_DEP_MISMATCH:            '||SUM(TO_NUMBER(DECODE(typchk_dep_mismatch,'Y',1,'N','0'))),
'NO_TRIGGER_MISMATCH:            '||SUM(TO_NUMBER(DECODE(no_trigger_mismatch,'Y',1,'N','0'))),
'FLASHBACK_CURSOR:               '||SUM(TO_NUMBER(DECODE(flashback_cursor,'Y',1,'N','0'))),
'ANYDATA_TRANSFORMATION:         '||SUM(TO_NUMBER(DECODE(anydata_transformation,'Y',1,'N','0'))),
-- NOTE: Next column only for
--       Please refer to
'INCOMPLETE_CURSOR:             '||SUM(TO_NUMBER(DECODE(incomplete_cursor,'Y',1,'N','0'))),
-- NOTE: Next column only for and above
--       Please refer to
'PDDL_ENV_MISMATCH:              '||SUM(TO_NUMBER(DECODE(pddl_env_mismatch,'Y',1,'N','0'))),
'TOP_LEVEL_RPI_CURSOR:           '||SUM(TO_NUMBER(DECODE(top_level_rpi_cursor,'Y',1,'N','0'))),
'DIFFERENT_LONG_LENGTH:          '||SUM(TO_NUMBER(DECODE(different_long_length,'Y',1,'N','0'))),
'LOGICAL_STANDBY_APPLY:          '||SUM(TO_NUMBER(DECODE(logical_standby_apply,'Y',1,'N','0'))),
'DIFF_CALL_DURN:                 '||SUM(TO_NUMBER(DECODE(diff_call_durn,'Y',1,'N','0'))),
'BIND_UACS_DIFF:                 '||SUM(TO_NUMBER(DECODE(bind_uacs_diff,'Y',1,'N','0'))),
'PLSQL_CMP_SWITCHS_DIFF:         '||SUM(TO_NUMBER(DECODE(plsql_cmp_switchs_diff,'Y',1,'N','0'))),
'CURSOR_PARTS_MISMATCH:          '||SUM(TO_NUMBER(DECODE(cursor_parts_mismatch,'Y',1,'N','0'))),
'STB_OBJECT_MISMATCH:            '||SUM(TO_NUMBER(DECODE(stb_object_mismatch,'Y',1,'N','0'))),
'CROSSEDITION_TRIGGER_MISMATCH : '||SUM(TO_NUMBER(DECODE(crossedition_trigger_mismatch,'Y',1,'N','0'))),
'PQ_SLAVE_MISMATCH:              '||SUM(TO_NUMBER(DECODE(pq_slave_mismatch,'Y',1,'N','0'))),
'TOP_LEVEL_DDL_MISMATCH:         '||SUM(TO_NUMBER(DECODE(top_level_ddl_mismatch,'Y',1,'N','0'))),
'MULTI_PX_MISMATCH:              '||SUM(TO_NUMBER(DECODE(multi_px_mismatch,'Y',1,'N','0'))),
'BIND_PEEKED_PQ_MISMATCH:        '||SUM(TO_NUMBER(DECODE(bind_peeked_pq_mismatch,'Y',1,'N','0'))),
'MV_REWRITE_MISMATCH:            '||SUM(TO_NUMBER(DECODE(mv_rewrite_mismatch,'Y',1,'N','0'))),
'ROLL_INVALID_MISMATCH:          '||SUM(TO_NUMBER(DECODE(roll_invalid_mismatch,'Y',1,'N','0'))),
'OPTIMIZER_MODE_MISMATCH:        '||SUM(TO_NUMBER(DECODE(optimizer_mode_mismatch,'Y',1,'N','0'))),
'PX_MISMATCH:                    '||SUM(TO_NUMBER(DECODE(px_mismatch,'Y',1,'N','0'))),
'MV_STALEOBJ_MISMATCH:           '||SUM(TO_NUMBER(DECODE(mv_staleobj_mismatch,'Y',1,'N','0'))),
'FLASHBACK_TABLE_MISMATCH:       '||SUM(TO_NUMBER(DECODE(flashback_table_mismatch,'Y',1,'N','0'))),
'LITREP_COMP_MISMATCH:           '||SUM(TO_NUMBER(DECODE(litrep_comp_mismatch,'Y',1,'N','0'))),
'PLSQL_DEBUG:                    '||SUM(TO_NUMBER(DECODE(plsql_debug,'Y',1,'N','0'))),
'LOAD_OPTIMIZER_STATS:           '||SUM(TO_NUMBER(DECODE(load_optimizer_stats,'Y',1,'N','0'))),
'ACL_MISMATCH:                   '||SUM(TO_NUMBER(DECODE(acl_mismatch,'Y',1,'N','0'))),
'FLASHBACK_ARCHIVE_MISMATCH:     '||SUM(TO_NUMBER(DECODE(flashback_archive_mismatch,'Y',1,'N','0'))),
'LOCK_USER_SCHEMA_FAILED:        '||SUM(TO_NUMBER(DECODE(lock_user_schema_failed,'Y',1,'N','0'))),
'REMOTE_MAPPING_MISMATCH:        '||SUM(TO_NUMBER(DECODE(remote_mapping_mismatch,'Y',1,'N','0'))),
'LOAD_RUNTIME_HEAP_FAILED:       '||SUM(TO_NUMBER(DECODE(load_runtime_heap_failed,'Y',1,'N','0'))),
'HASH_MATCH_FAILED:              '||SUM(TO_NUMBER(DECODE(hash_match_failed,'Y',1,'N','0'))),
'PURGED_CURSOR:                  '||SUM(TO_NUMBER(DECODE(purged_cursor,'Y',1,'N','0'))),
'BIND_LENGTH_UPGRADEABLE:        '||SUM(TO_NUMBER(DECODE(bind_length_upgradeable,'Y',1,'N','0')))
FROM   v$sql_shared_cursor
WHERE  address IN (SELECT address FROM v$sqlarea WHERE sql_id = '4vs91dcv7u1p6');


The above reasons also match the two reasons that we got from the trace file.

Third Query:

select * from v$sql_shared_cursor where address in (select address from v$sqlarea where sql_id = '4vs91dcv7u1p6');



This query also produced a number of records equal to the number of child cursors (32767 rows).

As a quick action, and to confirm my understanding, I asked my client to stop the EM DBConsole (actually they don’t use it!), and we were happy that no more trace files were produced!

Second Issue:

After fixing the above issue (with a workaround, not a final solution), the client told me that when he tries to log in to the database with a normal database user, he gets the following errors:


ORA-17059 is known to be related to memory corruption. I asked my client to either restart the database or flush the shared pool using:

connect / as sysdba

alter system flush shared_pool;

alter system flush shared_pool;

alter system flush shared_pool;


and this fixed the above login error.


Final Conclusion:

Although there are many bugs and related patches for fixing similar non-shareable cursor issues, like:

Bug 10636231 – High version count for INSERT .. RETURNING statements with reason INST_DRTLD_MISMATCH [ID 10636231.8]

My advice to my client was to leave this buggy Oracle release ( and to upgrade the database to the latest available patchset ( or to 12c (

My First blog post!

So, I decided to start blogging!

I’ll share with you some tips, tricks, and knowledge about Oracle products and solutions, like Database, RAC, DG, Exadata, Migrations, Upgrades, Performance Tuning, etc.

I may write about other subjects as well, but the blog will mainly be about Oracle.

I hope you spend a nice time on my blog.

Keep in touch.

Ahmed Abdel Fattah